2304.04606
Localise to segment: crop to improve organ at risk segmentation accuracy
Increased organ at risk segmentation accuracy is required to reduce cost and complications for patients receiving radiotherapy treatment. Some deep learning methods for the segmentation of organs at risk use a two stage process where a localisation network first crops an image to the relevant region and then a locally specialised network segments the cropped organ of interest. We investigate the accuracy improvements brought about by such a localisation stage by comparing to a single-stage baseline network trained on full resolution images. We find that localisation approaches can improve both training time and stability and a two stage process involving both a localisation and organ segmentation network provides a significant increase in segmentation accuracy for the spleen, pancreas and heart from the Medical Segmentation Decathlon dataset. We also observe increased benefits of localisation for smaller organs. Source code that recreates the main results is available at \href{https://github.com/Abe404/localise_to_segment}{this https URL}.
Abraham George Smith, Denis Kutnár, Ivan Richter Vogelius, Sune Darkner, Jens Petersen
2023-04-10T14:22:10Z
http://arxiv.org/abs/2304.04606v1
# Localise to segment: crop to improve organ at risk segmentation accuracy ###### Abstract Increased organ at risk segmentation accuracy is required to reduce cost and complications for patients receiving radiotherapy treatment. Some deep learning methods for the segmentation of organs at risk use a two stage process where a localisation network first crops an image to the relevant region and then a locally specialised network segments the cropped organ of interest. We investigate the accuracy improvements brought about by such a localisation stage by comparing to a single-stage baseline network trained on full resolution images. We find that localisation approaches can improve both training time and stability and a two stage process involving both a localisation and organ segmentation network provides a significant increase in segmentation accuracy for the spleen, pancreas and heart from the Medical Segmentation Decathlon dataset. We also observe increased benefits of localisation for smaller organs. Source code that recreates the main results is available at this https URL. \({}^{1}\)Department of Computer Science, University of Copenhagen \({}^{2}\)Department of Oncology, Rigshospitalet, University of Copenhagen \({}^{*}\)[email protected] ## Introduction More than 50% of cancer patients receive radiotherapy, which is associated with a range of dose-dependent side effects. Delineation of organs at risk on treatment planning scans is crucial to minimise complications [8, 19]. Manual delineation is possible and still widely used but, in comparison to automated methods, is time-consuming [32] and subject to large inter-observer variation [15]. Therefore, methods to improve the accuracy of automated methods are required. A review of auto-segmentation methods for radiotherapy is presented by Cardenas et al. [5], with deep learning methods and convolutional neural networks in particular representing the state-of-the-art. Organ localisation has been used for a variety of tasks in image analysis and can reportedly improve segmentation accuracy whilst reducing computational memory and processing time requirements [37]. Kutnar et al. [17] found a two-stage localisation approach to be effective for the segmentation of lacunes in brain MR images and Gros et al. [10] found spine centerline localisation to provide state-of-the-art spinal cord segmentation accuracy. Feng et al. [9] proposed a two stage approach using cropped 3D images, where a similar 3D U-Net was used for both the initial organ localisation and segmentation stages. They claim their approach is more data efficient due to the use of voxel labels in the training of the localisation network. The method proposed by Feng et al. [9] is appealing as it uses the same method (segmentation with 3D U-Net [6]) for both localisation and segmentation, which simplifies both concept and implementation. Although the method obtains competitive accuracy [39], an ablation analysis or baseline comparison method is lacking. Therefore, we conduct a more focused investigation to measure the accuracy gains brought about by such an approach to localisation. We hypothesise that localisation will improve organ at risk segmentation accuracy, demonstrated by a significant increase in dice. To the best of our knowledge, this hypothesis has not been tested in a focused investigation. 
## Method ### Dataset To evaluate the effect of localisation on a diverse array of organ at risk segmentation tasks, we used the spleen, pancreas, prostate, liver and heart (left atrium) datasets [25] from the Medical Segmentation Decathlon [2]. We used only the original training sets, as this portion of the data has corresponding labels available for download. To facilitate the training and evaluation of deep learning models for image segmentation, we split the downloaded images and labels into our own training, validation and test subsets with sizes of 60%, 20% and 20%, respectively (Table 1). This ratio between training, validation and test data was chosen as it is typical for deep learning model training. \begin{table} \begin{tabular}{l l l l} \hline \hline organ & training & validation & test \\ \hline spleen & 25 & 8 & 8 \\ pancreas & 169 & 56 & 56 \\ prostate & 19 & 7 & 6 \\ liver & 41 & 14 & 14 \\ heart & 12 & 4 & 4 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of images included in each of the training, validation and test datasets for each of the organs. ### Implementation We used PyTorch [22] (Version 1.13.1) and implemented a 3D U-Net [6] which is an encoder-decoder style semantic-segmentation architecture. For all experiments we used 64GB of RAM and two NVIDIA RTX 3090 GPUs. When performing semantic segmentation using convolutional neural networks, GPU memory is often a bottleneck. Due to this limitation, there is a trade-off between batch size, which is the number of images used in each training update, and patch size, which is the size of the images used during training. Larger input patches allow more context to be considered for each voxel or pixel classification decision and have been found to improve accuracy [13]. Therefore we used an input patch size of 64x256x256 for all experiments as this was the largest we could fit in GPU memory. However, as such large input patches take up more GPU memory, they force a reduction in batch size. Therefore we used a batch size of 2 for all experiments with one instance (input patch) on each GPU, utilising a data-parallel approach, meaning the training batch is split across the GPUs. Small batch sizes can be problematic for the commonly used batch normalisation method [14, 36]. Therefore we used group normalisation [36] after each layer as it performs well when small batch sizes are used and has been found to be effective for 3D medical image segmentation tasks [21]. We use a loss function which is a combination of dice [30] and cross-entropy as this has been found to be effective when dealing with class imbalanced datasets [27, 31]. Although in [9] the authors used cross-entropy with importance weights for their main experiments, they mentioned that they also found a combination of cross-entropy and dice loss to both stabilise and accelerate the training process. Another disadvantage of cross-entropy as opposed to dice loss is that organ-specific importance weights require manual tuning. We used zero padding in the convolution operations to allow our 3D network to produce an output segmentation with the same size as the input patch. For all experiments we used the Adam optimiser [16] with a learning rate of 0.0001. For each training run we initialise the weights using He [11] initialisation. We used check-pointing and early stopping [20] to mitigate over-fitting. Check-pointing involves saving the model weights to disk during the training run. 
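As a rough illustration of the loss described above, the following is a minimal PyTorch sketch of a combined soft-dice and cross-entropy objective for binary (organ vs background) 3D segmentation. The class name `CombinedLoss`, the equal weighting of the two terms and the two-channel output layout are our own illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedLoss(nn.Module):
    """Illustrative soft dice + cross-entropy loss (sketch, not the paper's code)."""
    def __init__(self, dice_weight=0.5, eps=1e-6):
        super().__init__()
        self.dice_weight = dice_weight
        self.eps = eps

    def forward(self, logits, target):
        # logits: (B, 2, D, H, W) raw network outputs
        # target: (B, D, H, W) integer class labels in {0, 1}
        ce = F.cross_entropy(logits, target)
        fg_prob = F.softmax(logits, dim=1)[:, 1]      # predicted foreground probability
        fg_true = target.float()
        intersection = (fg_prob * fg_true).sum()
        dice = (2 * intersection + self.eps) / (fg_prob.sum() + fg_true.sum() + self.eps)
        return self.dice_weight * (1 - dice) + (1 - self.dice_weight) * ce

# Usage sketch, assuming `model` is the 3D U-Net described in the text:
# criterion = CombinedLoss()
# optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
```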
We computed the dice on the validation set at the end of each epoch and only saved models which obtained a new highest dice. There are various ways to implement an early stopping procedure [23]. Our stopping criterion used the number of epochs since an improved dice had been found, a parameter commonly referred to as patience. We set patience to 20 for all experiments, thus each training run would stop after 20 epochs had passed since a new highest dice score on the validation set had been obtained. To mitigate the possibility that the results were due to chance, for each organ and method, training runs were repeated until 10 runs had converged, where convergence was defined as the model having at least 0.1 dice on the validation set after 20 epochs. The methods compared include a baseline full resolution segmentation approach using 3D patches, a two stage localisation approach and a method involving only organ segmentation, where the ground truth was used to localise (Figure 1). In the following sections we describe the three different approaches we experimented with to evaluate the benefits of localisation. Figure 1: Illustration showing the three different methods compared, including the baseline, two stage involving both a localisation network and organ segmentation network and the organ segmentation network that uses the ground truth to localise. ### Baseline - full resolution segmentation In order to evaluate the advantages of the two stage localisation process we trained a single stage baseline network. For each training instance we sample a patch with random location within the image and the corresponding location from the annotation. We enforced that at least 80% of the selected patches contained foreground annotation. Such biased instance selection is a relatively common practice as otherwise most patches would not contain foreground which can cause convergence problems. ### Localisation network In order to train the localisation network we created a low resolution version of the dataset by resizing the images and annotations down to half their width and height and a third of their depth. We then trained the network to predict the annotations which were also resized to match the reduced resolution images. We created these low res images using the resize function from scikit-image [33] (Version 0.17.2). ### Organ segmentation network To train the organ segmentation network, we first created a dataset of images and annotations which were cropped by taking the region of the image including the organ with 15 voxels padding on each side to include some background context. To ensure enough padding was included on each side of the organ, even if the organ was at the edge of an image, the images were zero padded by 15 voxels on each side before cropping to the organ. The organ segmentation network was trained independently using these cropped versions of the original images and ground truth annotations, without regard to the output of any particular localisation network. ### Ground truth localisation We also evaluated an approach using a localisation stage which utilises the ground truth labels. We do this to assess the advantages of localisation given an accurate bounding box. ### Two stage localisation & segmentation pipeline In order to segment the full resolution image with a preliminary localisation step we implemented a two stage process. We first computed a low res version of the image and then segmented it using the localisation network. 
We then identified the organ as the largest connected foreground region in the low res segmentation. We then segmented the corresponding region in the full resolution image with padding on each side of the organ as described in the above _Organ segmentation network_ section. To perform this two stage segmentation, we pairwise couple the localisation networks with the organ segmentation networks chronologically; thus, the i'th organ network trained is coupled with the i'th localisation network. ### Metrics During training we computed dice on both the training and validation data. For the final model that was automatically selected at the end of the training run, dice was computed on both the randomly selected validation and test sets using the full resolution segmentations and annotations. The two-sided t-test as implemented in SciPy [34] (Version 1.5.4) was used for testing for significant differences between the accuracy of the methods on both the validation and test datasets. We also recorded the time taken for each of the training runs to converge. ## Results ### Training For each organ, the baseline approach is substantially slower than the other methods and for all organs takes longer to converge than both the low res and organ networks combined (Table 2). Both the baseline method and low res networks had less stable performance during training compared to the organ network, with larger fluctuations in the dice (Figure 2). The organ segmentation network always converged (Figure 2(a)). With the baseline and low res networks, convergence is similarly likely, with 83% of the baseline training runs and 85% of the low res training runs converging. The varying rates of convergence (Figure 3) reflect the difference seen in training stability (Figure 2). \begin{table} \begin{tabular}{l l l l} \hline \hline & baseline & low res & organ \\ \hline spleen & 74.9 & 10.8 & 10.1 \\ pancreas & 477.6 & 65.1 & 91.2 \\ prostate & 6.4 & 2.0 & 2.2 \\ liver & 253.2 & 42.8 & 110.0 \\ heart & 22.2 & 3.7 & 2.5 \\ \hline \hline \end{tabular} \end{table} Table 2: Average training time (minutes) for each of the three types of networks for each of the organs. Figure 2: Validation and training dice are shown for each epoch for each of the 10 training runs for each organ and for each method, resulting in 150 training runs in total. Only training runs which successfully converged are shown. ## Validation For the validation sets, the dice was significantly higher for the two stage approach compared to the baseline method for the heart (\(p<0.001\)), spleen (\(p<0.001\)) and pancreas (\(p<0.001\)). For the liver, although the two stage approach appears to offer some improvement, the difference was not significant (\(p=0.11\)). For the prostate there was no significant difference (\(p=0.44\)). On the validation set, the difference between the ground truth localised method and baseline was significant for the liver (\(p<0.05\)) and highly significant for the heart, spleen, pancreas and prostate (\(p<0.001\)). For all organs except the liver (\(p=0.07\)), the benefits of ground truth localisation are significant compared to using the localisation network to provide the cropped region (\(p<0.001\)). 
\begin{table} \begin{tabular}{l l l l} \hline \hline & baseline & two stage & ground truth localised \\ \hline spleen & 0.6491 \(\pm\) 0.0997 & \(\mathbf{0.8142\pm 0.0221}\) & \(\mathbf{0.8619\pm 0.0092}\) \\ pancreas & 0.4372 \(\pm\) 0.0148 & \(\mathbf{0.6674\pm 0.0146}\) & \(\mathbf{0.7564\pm 0.0096}\) \\ prostate & 0.7744 \(\pm\) 0.0109 & 0.7699 \(\pm\) 0.0149 & \(\mathbf{0.8323\pm 0.0076}\) \\ liver & 0.6661 \(\pm\) 0.0188 & 0.7044 \(\pm\) 0.0694 & \(\mathbf{0.7807\pm 0.1061}\) \\ heart & 0.6547 \(\pm\) 0.0239 & \(\mathbf{0.7018\pm 0.0281}\) & \(\mathbf{0.7612\pm 0.0087}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Average dice on the validation set for the baseline network compared to the two stage approach with both predicted and ground truth localisation. Values which are significantly higher than the baseline are shown in bold. Figure 3: Convergence rate for (a) each method and organ, (b) as a function of dataset size and (c) as a function of class balance (average foreground percent). ## Test On the test set, the two stage dice score was higher than the baseline for the spleen (\(p<0.001\)), pancreas (\(p<0.001\)) and heart (\(p<0.05\)). For the liver the difference was not significant (\(p=0.8\)). The test set dice was significantly higher with the baseline approach compared to the two stage method for the prostate (\(p<0.001\)). When using the ground truth labels to localise, the increase in organ network dice compared to the baseline was highly significant for the heart, spleen and pancreas (\(p<0.001\)) and significant for the prostate and liver (\(p<0.05\)). The benefits of ground truth localisation were also highly significant compared to using the localisation network to provide the cropped region for the heart, spleen, pancreas and prostate (\(p<0.001\)) and significant for the liver (\(p<0.05\)). We found that organs that are smaller as a percentage of the scanned region tend to benefit more from localisation (Figure 4). \begin{table} \begin{tabular}{l l l l} \hline \hline & baseline & two stage & ground truth localised \\ \hline spleen & 0.4433 \(\pm\) 0.1162 & \(\mathbf{0.6503\pm 0.0538}\) & \(\mathbf{0.8255\pm 0.0086}\) \\ pancreas & 0.4366 \(\pm\) 0.0205 & \(\mathbf{0.6519\pm 0.0136}\) & \(\mathbf{0.7397\pm 0.0063}\) \\ prostate & 0.797 \(\pm\) 0.0342 & 0.7204 \(\pm\) 0.0244 & \(\mathbf{0.8361\pm 0.0145}\) \\ liver & 0.6039 \(\pm\) 0.0214 & 0.6096 \(\pm\) 0.0664 & \(\mathbf{0.7156\pm 0.1028}\) \\ heart & 0.5764 \(\pm\) 0.0311 & \(\mathbf{0.623\pm 0.0523}\) & \(\mathbf{0.7508\pm 0.0178}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Average dice on the test set for the baseline network compared to the two stage approach with both predicted and ground truth localisation. Values which are significantly higher than the baseline are shown in bold. ## Discussion & Conclusion Although the significant improvements in dice for the majority of datasets confirm our hypothesis that localisation improves organ at risk segmentation accuracy, the baseline performed better than expected in comparison to the two stage localisation approach, even out-performing the localisation approach on the prostate test set. The mean organ volume as a percentage of total image volume ranges from 0.2% for the pancreas to 2.7% for the prostate. This represents an extreme class imbalance, particularly for the pancreas, spleen and left atrium. 
Class imbalance is known to have detrimental effects on the performance of machine learning models [18] and convolutional neural networks in particular [4]. If not addressed, a class imbalance problem may lead to algorithms tending to predict only the majority class [7]. Gros et al. [10] argue that a two stage approach involving localisation is able to mitigate issues caused by class imbalanced data. The trend of an increased benefit of localisation for smaller organs (Figure 4) is expected, because for smaller organs the class balance issue becomes more severe and if the organ becomes large enough there will be negligible difference between the baseline and localisation approaches. One explanation for the good performance of the baseline could be the random selection of patches during the baseline training procedure. This random selection could have provided some augmentation benefits similar to random cropping. Figure 4: Benefit of localisation for each of the datasets for both predicted localisation region (o) and when the ground truth location is provided (x). The mean improvement in dice is calculated by subtracting the mean baseline dice from the mean localised dice. Foreground % is the percentage of the voxels in a scan that belong to the organ as opposed to the background, where background is considered as all voxels outside of that particular organ. When the organ segmentation network encountered unexpected anomalies it may have been less equipped to handle them. Xu et al. [38] trained an organ segmentation network using a region containing the organ of interest but with variation in the amount of padding around it. Varying the amount of context around each organ during training may be key to a two-stage localisation network that provides consistent advantages in accuracy compared to the single stage baseline method. The baseline and low res network training instability, including fluctuations in dice (Figure 2), is likely related to the challenges with class imbalance. Although the baseline network had biased instance selection to include foreground batches more frequently, its task was likely more complicated compared to cropped organ segmentation as the baseline network must learn to segment regions further away from the organ. For the baseline approach, the patches used in training will have also been less consistent, including varying amounts of the organ of interest or sometimes only background regions. The heart (left atrium) had the lowest rate of convergence on average, which is likely due to it having both a relatively small dataset (Figure 2(b)) and a large class imbalance (Figure 2(c)). Our condition for convergence was based on accuracy, which typically increases with training dataset size [12]. An exception is the pancreas, with the largest dataset, yet a convergence rate of only around 85% (Figure 2(b)), which may be due to the high class imbalance in this dataset (Figure 2(c)). Reduction in model training time is critical for both workflow optimisation and carbon footprint [1]. Slow training may also hinder novel interactive-machine-learning approaches that depend on model adaptation to support a feedback loop between annotator and model [29; 28; 35; 26]. We found that the baseline method had slower convergence and longer training time compared to the localisation approach, even when considering that the localisation approach involved training two networks (Table 2). 
The slow convergence of patch-based training in comparison to other approaches has been observed in previous work evaluating methods for brain tumour segmentation [3]. A potential drawback of the two stage localisation approach is the additional complexity of training two networks. A potential limitation of the network architecture used in this study is the use of zero-padding to ensure that the network input and output had consistent size. In some cases zero-padding has been found to increase errors on the edge of a patch by as much as 35% [13]. The consistent benefits of using ground truth to localise the region for the organ segmentation network (Table 4) motivate the use of a manual bounding box in cases where accuracy improvements are required for smaller organs, an approach that has been used in prior studies in interactive machine learning for organ at risk segmentation [29]. Manual localisation would also be feasible with interactive segmentation methods, such as the approach proposed by Rasmussen et al. [24] where organ extremities are input to guide the predicted contour. Our results show the advantages of both manual and automatic localisation for organ at risk segmentation in terms of training time, convergence rate and segmentation accuracy, especially for smaller organs where class imbalance causes challenges for conventional approaches to segmentation model training.
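To make the two stage pipeline described in the Method section concrete, the following is a minimal sketch of the inference step: localisation on a downsampled volume, selection of the largest connected foreground component, and a padded crop of the full resolution image passed to the organ network. The function name `two_stage_segment`, the use of `scipy.ndimage.label` and `skimage.transform.resize`, and the clamping of the padded box to the image bounds are our own illustrative assumptions and not the authors' released code (available at the linked repository).

```python
import numpy as np
from scipy import ndimage
from skimage.transform import resize

def two_stage_segment(image, localisation_net, organ_net, pad=15):
    """Illustrative two-stage inference sketch: low-res localisation, then a
    padded crop of the full-resolution image segmented by the organ network."""
    d, h, w = image.shape
    # 1. Localisation on a reduced-resolution copy (a third of the depth,
    #    half the height and width, mirroring the training-time resizing).
    low_res = resize(image, (d // 3, h // 2, w // 2), preserve_range=True)
    low_res_seg = localisation_net(low_res) > 0.5            # binary foreground mask

    # 2. Keep the largest connected foreground region as the organ estimate.
    labels, n = ndimage.label(low_res_seg)
    if n == 0:
        return np.zeros_like(image, dtype=bool)
    largest = np.argmax(ndimage.sum(low_res_seg, labels, range(1, n + 1))) + 1
    coords = np.argwhere(labels == largest)

    # 3. Map the bounding box back to full resolution and pad by 15 voxels,
    #    clamped to the image bounds (an assumption of this sketch).
    scale = np.array(image.shape) / np.array(low_res.shape)
    lo = np.maximum((coords.min(axis=0) * scale).astype(int) - pad, 0)
    hi = np.minimum(((coords.max(axis=0) + 1) * scale).astype(int) + pad, image.shape)

    # 4. Segment the cropped region and paste the result back.
    crop = image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    out = np.zeros_like(image, dtype=bool)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = organ_net(crop) > 0.5
    return out
```

Here `localisation_net` and `organ_net` stand for callables returning voxel-wise foreground probabilities; in the paper both roles are played by trained 3D U-Nets.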
2307.12252
Fractional Generalizations of the Compound Poisson Process
This paper introduces the Generalized Fractional Compound Poisson Process (GFCPP), which provides a unified fractional version of the compound Poisson process (CPP) that encompasses existing variations as special cases. We derive its distributional properties, generalized fractional differential equations, and martingale properties. We obtain results related to the governing differential equation for special cases of the jump distribution, including the exponential, Mittag-Leffler, Bernstein, discrete uniform, truncated geometric, and discrete logarithmic distributions. Several processes in the literature, such as the fractional Poisson process of order $k$, the P\'olya-Aeppli process of order $k$, and the fractional negative binomial process, become special cases of the GFCPP. Classifications based on arrivals, obtained by time-changing the compound Poisson process by the inverse tempered stable and the inverse of the inverse Gaussian subordinators, are studied. Finally, we present simulations of the sample paths of the above-mentioned processes.
Neha Gupta, Aditya Maheshwari
2023-07-23T07:54:33Z
http://arxiv.org/abs/2307.12252v1
# Fractional generalizations of the compound Poisson process ###### Abstract. This paper introduces the Generalized Fractional Compound Poisson Process (GFCPP), which provides a unified fractional version of the compound Poisson process (CPP) that encompasses existing variations as special cases. We derive its distributional properties, generalized fractional differential equations, and martingale properties. We obtain results related to the governing differential equation for special cases of the jump distribution, including the exponential, Mittag-Leffler, Bernstein, discrete uniform, truncated geometric, and discrete logarithmic distributions. Several processes in the literature, such as the fractional Poisson process of order \(k\), the Polya-Aeppli process of order \(k\), and the fractional negative binomial process, become special cases of the GFCPP. Classifications based on arrivals, obtained by time-changing the compound Poisson process by the inverse tempered stable and the inverse of the inverse Gaussian subordinators, are studied. Finally, we present simulations of the sample paths of the above-mentioned processes. Key words and phrases: Time-fractional Poisson process, Compound Poisson process. 2020 Mathematics Subject Classification: 60G22, 60G51, 60G55 ## 1. Introduction The compound Poisson process (CPP) is one of the most important stochastic models for count data with random jump events. It generalizes the classical unit jump size of the Poisson process to a random jump size distribution. This model is distinguished by its ability to effectively represent real-world scenarios where the size or impact of events varies, and it is particularly useful in situations where extreme events of random sizes play a crucial role. Therefore, it becomes a natural model for a wide range of applications in various sectors including insurance [10, 14], reliability [35], statistical physics [6], mining [18], evolutionary biology [13] and many more. Due to its wide applicability, it has always remained a centre of attraction for both theoretical and applied probabilists. In this context, the generalization of the CPP becomes an important problem to consider. Several attempts have been made by various authors to generalize the CPP, and here we mention important results in the literature from the fractional generalization point of view. The first fractional generalization of the CPP was proposed by [24, 22], and a semi-Markov extension of the CPP was discussed by [28]. Later, in [3, 2], the authors studied alternative forms of the compound fractional Poisson processes and derived several important results about their time-changed versions, governing differential equations and limiting behaviour of the processes. These fractional forms (see [4]) are defined by time-changing the CPP by an inverse stable subordinator and also by changing the jump size distribution. A multivariate extension of the fractional generalizations of the CPP is introduced in [5]. In [9], the Poisson-subordinated CPP is considered. More recently, some fractional versions of the CPP, obtained by changing the jump size and/or by time-changing the CPP, have been examined in [31, 11, 17, 16]. After examining the existing literature on this topic, we felt a need for a unified fractional form of the CPP; therefore, we introduce a fractional generalization of the CPP such that most of the studied CPPs become special cases of the proposed process. 
Our process is defined as \[Y_{f}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},\] where \(N_{f}(t)\stackrel{{ d}}{{=}}N(E_{f}(t))\) is the Poisson process time-changed with an independent inverse subordinator \(\{E_{f}(t)\}_{t\geq 0}\) (1) and the \(X_{i}\)'s, \(i=1,2,\ldots,\) are the iid jumps with common distribution \(F_{X}\). We call this process the generalized fractional CPP (GFCPP). Note that we have assumed the most general form of the count and jump size distributions; therefore, this formulation serves as an all-encompassing process for any type of fractional generalization of the CPP. We compute the distributional properties and the governing generalized fractional differential equation of the probability mass function (_pmf_) of the GFCPP. We also prove that the compensated GFCPP is a martingale with respect to its natural filtration. The special cases of the jump distribution \(X_{i},i=1,2,\ldots\), namely, exponential, Mittag-Leffler, Bernstein, discrete uniform, truncated geometric, and discrete logarithmic, are investigated. We obtain their Laplace transform (LT), governing fractional differential equations and time-changed representations. In particular, this approach generalizes the fractional Poisson process of order \(k\) ([30, 11]), the Polya-Aeppli process of order \(k\) (see [7]), and the fractional negative binomial process (see [34, 4]). The classification of the GFCPP based on arrivals, in particular via the tempered fractional Poisson process and the inverse of the inverse Gaussian time-change of the Poisson process, is worked out. We further discuss their special cases based on jump sizes and obtain the LT and governing fractional differential equations. Lastly, we present the simulations for the special cases of the aforementioned processes. The paper is organized as follows. In Section 2, we present some preliminary definitions and results that are required for the rest of the paper. The GFCPP is examined in detail in Section 3. The classifications of the special cases of the GFCPP based on jump sizes and arrivals are discussed in Sections 4 and 5, respectively. In Section 6, the simulations of the sample paths are presented. ## 2. Preliminaries In this section, we introduce some notations and results that will be used later. ### Levy subordinator and its inverse A Levy subordinator (hereafter referred to as the subordinator) \(\{D_{f}(t)\}_{t\geq 0}\) is a non-decreasing Levy process and its Laplace transform (LT) (see [1, Section 1.3.2]) has the form \[\mathbb{E}[e^{-sD_{f}(t)}]=e^{-tf(s)},\;\text{where}\;f(s)=bs+\int_{0}^{\infty}(1-e^{-sx})\nu(dx),\;b\geq 0,s>0,\] is the Bernstein function (see [29] for more details). Here \(b\) is the drift coefficient and \(\nu\) is a non-negative Levy measure on the positive half-line satisfying \[\int_{0}^{\infty}(x\wedge 1)\nu(dx)<\infty\text{ and }\nu([0,\infty))=\infty\] which ensures that the sample paths of \(\{D_{f}(t)\}_{t\geq 0}\) are almost surely \((a.s.)\) strictly increasing. Also, the first-exit time of \(\{D_{f}(t)\}_{t\geq 0}\) is defined as \[E_{f}(t)=\inf\{r\geq 0:D_{f}(r)>t\}, \tag{1}\] which is the right-continuous inverse of the subordinator \(\{D_{f}(t)\}_{t\geq 0}\). The process \(\{E_{f}(t)\}_{t\geq 0}\) is non-decreasing and its sample paths are continuous. We list some special cases of strictly increasing subordinators. The following subordinators, with Laplace exponent denoted by \(f(s)\), are very often used in the literature. 
\[f(s)=\begin{cases}s^{\alpha},\;0<\alpha<1,&\text{(stable subordinator)};\\ (s+\mu)^{\alpha}-\mu^{\alpha},\;\mu>0,\;0<\alpha<1,&\text{(tempered stable subordinator)};\\ \delta(\sqrt{2s+\gamma^{2}}-\gamma),\;\gamma>0,\;\delta>0,&\text{(inverse Gaussian subordinator)}.\end{cases} \tag{2}\] ### Compound Poisson Process A compound Poisson process is a continuous-time process with iid jumps \(X_{i},i=1,2,\ldots\). A compound Poisson process \(\{Y(t)\}_{t\geq 0}\) is given by \[Y(t)=\sum_{i=1}^{N(t)}X_{i}, \tag{3}\] where the \(X_{i}\)'s follow the distribution \(F_{X}\) and jumps arrive randomly according to an independent Poisson process \(N(t)\) with intensity rate \(\lambda>0\). The LT of the CPP \(\{Y(t)\}_{t\geq 0}\) is given by \[\mathbb{E}[e^{-sY(t)}]=e^{-\lambda t(1-\mathbb{E}[e^{-sX_{1}}])}. \tag{4}\] The _pmf_ \(P(n,t)=\mathbb{P}[Y(t)=n]\) of the CPP is given by \[P(n,t)=\sum_{m=1}^{\infty}F_{X}^{*m}(n)\frac{e^{-\lambda t}(\lambda t)^{m}}{m!},\] where \(F_{X}^{*m}\) is the \(m\)-fold convolution of the density of \(F_{X}\). ### Generalized fractional derivatives Let \(f\) be a Bernstein function with integral representation \[f(s)=\int_{0}^{\infty}(1-e^{-xs})\nu(dx),\;\;s>0.\] We will use the generalized Caputo-Djrbashian (C-D) derivative with respect to the Bernstein function \(f\), which is defined on the space of absolutely continuous functions as follows (see [32], Definition 2.4) \[\mathcal{D}_{t}^{f}u(t)=b\frac{d}{dt}u(t)+\int_{0}^{t}\frac{\partial}{\partial t}u(t-s)\bar{\nu}(s)ds, \tag{5}\] where \(\bar{\nu}(s)=a+\nu(s,\infty)\) is the tail of the Levy measure. The generalized Riemann-Liouville (R-L) derivative with respect to the Bernstein function \(f\) is defined as (see [32, Definition 2.1]) \[\mathbb{D}_{t}^{f}u(t)=b\frac{d}{dt}u(t)+\frac{d}{dt}\int_{0}^{t}u(t-s)\bar{\nu}(s)ds. \tag{6}\] The relation between \(\mathbb{D}_{t}^{f}\) and \(\mathcal{D}_{t}^{f}\) is given by (see [32], Proposition 2.7) \[\mathbb{D}_{t}^{f}u(t)=\mathcal{D}_{t}^{f}u(t)+\bar{\nu}(t)u(0). \tag{7}\] ### LRD for non-stationary process Let \(s>0\) be fixed and \(t>s\). Suppose a stochastic process \(\{X(t)\}_{t\geq 0}\) has the correlation function \(\operatorname{Corr}(X(s),X(t))\) that satisfies \[c_{1}(s)t^{-d}\leq\operatorname{Corr}(X(s),X(t))\leq c_{2}(s)t^{-d}, \tag{8}\] for large \(t\), \(d>0\), \(c_{1}(s)>0\) and \(c_{2}(s)>0\). In other words, \[\lim_{t\to\infty}\frac{\operatorname{Corr}(X(s),X(t))}{t^{-d}}=c(s), \tag{9}\] for some \(c(s)>0\) and \(d>0.\) We say \(\{X(t)\}_{t\geq 0}\) has the LRD property if \(d\in(0,1)\). ## 3. Generalized Fractional Compound Poisson Processes In this section, we introduce the generalized fractional compound Poisson process and study its properties. **Definition 3.1** (Generalized fractional compound Poisson process).: _Let \(N_{f}(t)\stackrel{{ d}}{{=}}N(E_{f}(t))\) be the Poisson process time-changed with the inverse subordinator \(\{E_{f}(t)\}_{t\geq 0}\) (see [23]) and let \(X_{i},\;i=1,2,\ldots,\) be the iid jumps with common distribution \(F_{X}\). The process defined by_ \[Y_{f}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},t\geq 0, \tag{10}\] _is called the generalized fractional compound Poisson process (GFCPP)._ Using (4), we obtain the LT of the _pmf_ of the GFCPP \(\{Y_{f}(t)\}_{t\geq 0}\). It is given by \[\mathbb{E}[e^{-sY_{f}(t)}]=\mathbb{E}\left[\mathbb{E}\left[e^{-s\sum_{i=1}^{N_{f}(t)}X_{i}}\,\Big{|}\,E_{f}(t)\right]\right]=\mathbb{E}[e^{-\lambda E_{f}(t)(1-\mathbb{E}[e^{-sX_{1}}])}]. 
\tag{11}\] It is clear from the above LT that we can express the GFCPP as a time-changed CPP \[Y_{f}(t)\stackrel{{ d}}{{=}}Y(E_{f}(t)),t\geq 0,\] where \(Y(t)=\sum_{i=1}^{N(t)}X_{i}\) denotes the CPP. Next, we present the governing generalized fractional differential equation of the _pmf_ of the GFCPP. **Theorem 3.1**.: _The pmf \(P_{f}(n,t)=\mathbb{P}[Y_{f}(t)=n]\) satisfies the following fractional differential equation_ \[\mathcal{D}_{t}^{f}P_{f}(n,t)=-\lambda P_{f}(n,t)+\lambda\int_{-\infty}^{\infty}P_{f}(n-x,t)F_{X}(x)dx. \tag{12}\] Proof.: Let \(h_{f}(y,t)\) be the probability density function (_pdf_) of the inverse subordinator \(\{E_{f}(t)\}_{t\geq 0}\) and \(P(n,t)\) be the _pmf_ of the CPP. Using a conditioning argument, we have that \[\mathbb{P}(Y_{f}(t)\in dz)=\int_{0}^{\infty}\mathbb{P}(Y(x)\in dz)h_{f}(x,t)dx.\] Taking the generalized Riemann-Liouville derivative given by (6) on both sides of the above equation, we get \[\mathbb{D}_{t}^{f}P_{f}(n,t) =\int_{0}^{\infty}P(n,x)\mathbb{D}_{t}^{f}h_{f}(x,t)dx\] \[=-\int_{0}^{\infty}P(n,x)\frac{\partial}{\partial x}h_{f}(x,t)dx \tag{13}\] \[=-P(n,x)h_{f}(x,t)|_{0}^{\infty}+\int_{0}^{\infty}\frac{\partial}{\partial x}P(n,x)h_{f}(x,t)dx.\] It is known that (see [25]) the distribution of the CPP satisfies the following partial differential equation \[\frac{\partial}{\partial x}P(n,x)=-\lambda P(n,x)+\lambda\int_{-\infty}^{+\infty}P(n-u,x)F_{X}(u)du. \tag{14}\] Substituting (14) in (13) and subsequently using the relation (7), we obtain the result mentioned in (12). This completes the proof. Further, we discuss some distributional properties of the GFCPP. **Theorem 3.2**.: _The mean, variance and covariance of the GFCPP are given by:_ 1. \(\mathbb{E}[Y_{f}(t)]=\lambda\mathbb{E}[E_{f}(t)]\mathbb{E}[X_{1}]\) 2. \(\mathrm{Var}[Y_{f}(t)]=\lambda\mathbb{E}[E_{f}(t)]\mathbb{E}[X_{1}^{2}]+\lambda^{2}(\mathbb{E}[X_{1}])^{2}\mathrm{Var}[E_{f}(t)]\) 3. 
\(\mathrm{Cov}[Y_{f}(t),Y_{f}(s)]=\lambda\mathbb{E}[X_{1}^{2}]\mathbb{E}[E_{f}(s)]+\lambda^{2}(\mathbb{E}[X_{1}])^{2}\mathrm{Cov}[E_{f}(t),E_{f}(s)]\)_._ Proof.: Using a conditioning argument and the independence of \(N_{f}(t)\) and the \(X_{i}\), we have that \[\mathbb{E}[Y_{f}(t)]=\mathbb{E}[N_{f}(t)]\mathbb{E}[X_{1}]=\lambda\mathbb{E}[E_{f}(t)]\mathbb{E}[X_{1}].\] The variance of \(\{Y_{f}(t)\}_{t\geq 0}\) can be written as (see [20]) \[\mathrm{Var}[Y_{f}(t)]=\mathrm{Var}[X_{1}]\mathbb{E}[N_{f}(t)]+(\mathbb{E}[X_{1}])^{2}\mathrm{Var}[N_{f}(t)].\] Next, we compute the \(\mathrm{Cov}[Y_{f}(t),Y_{f}(s)]\), \(s\leq t\), \[\mathrm{Cov}[Y_{f}(t),Y_{f}(s)] =\mathbb{E}\left[\sum_{k=1}^{\infty}X_{k}^{2}\mathbb{I}\{N_{f}(s)\geq k\}\right]+\mathbb{E}\left[\sum_{i\neq k}\sum X_{i}X_{k}\mathbb{I}\{N_{f}(s)\geq k,N_{f}(t)\geq i\}\right]\] \[\qquad\qquad\qquad-(\mathbb{E}[X_{1}])^{2}\mathbb{E}[N_{f}(s)]\mathbb{E}[N_{f}(t)]\] \[=\mathbb{E}[X_{1}^{2}]\sum_{k=1}^{\infty}\mathbb{P}(N_{f}(s)\geq k)\] \[\qquad\qquad\qquad+(\mathbb{E}[X_{1}])^{2}\left[\sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\mathbb{P}(N_{f}(s)\geq k,N_{f}(t)\geq i)-\sum_{k=1}^{\infty}\mathbb{P}(N_{f}(s)\geq k)\right]\] \[\qquad\qquad\qquad-(\mathbb{E}[X_{1}])^{2}\mathbb{E}[N_{f}(s)]\mathbb{E}[N_{f}(t)]\] \[=\mathbb{E}[X_{1}^{2}]\mathbb{E}[N_{f}(s)]+(\mathbb{E}[X_{1}])^{2}\mathbb{E}[N_{f}(s)N_{f}(t)]\] \[\qquad\qquad\qquad-(\mathbb{E}[X_{1}])^{2}\mathbb{E}[N_{f}(s)]-(\mathbb{E}[X_{1}])^{2}\mathbb{E}[N_{f}(s)]\mathbb{E}[N_{f}(t)]\] \[=\mathrm{Var}[X_{1}]\mathbb{E}[N_{f}(s)]+(\mathbb{E}[X_{1}])^{2}\mathrm{Cov}[N_{f}(t),N_{f}(s)].\] The expression for the covariance of the time-changed Poisson process, that is \(\mathrm{Cov}[N_{f}(t),N_{f}(s)]\), is derived in [20]. Substituting the same in the above equation, we get the desired result. Next, we prove the martingale property for the compensated GFCPP. Consider the compensated GFCPP defined by \[M_{f}(t):=Y_{f}(t)-\lambda E_{f}(t)\mathbb{E}[X_{1}],\ \ t\geq 0.\] **Theorem 3.3**.: _Let \(\mathbb{E}[X_{i}]<\infty,i=1,2,\ldots\). The compensated GFCPP \(\{M_{f}(t)\}_{t\geq 0}\) is a martingale with respect to the natural filtration \(\mathscr{F}_{t}=\sigma(N_{f}(s),s\leq t)\vee\sigma(E_{f}(s),s\leq t)\)._ Proof.: Note that (see [11]) the compensated time-changed Poisson process \(Q(t):=N_{f}(t)-\lambda E_{f}(t)\) is a martingale with respect to the filtration \(\mathscr{F}_{t}=\sigma(N_{f}(s),s\leq t)\vee\sigma(E_{f}(s),s\leq t)\). 
We have that \[\mathbb{E}[M_{f}(t)-M_{f}(s)|\mathscr{F}_{s}] =\mathbb{E}\left[\sum_{i=1}^{N_{f}(t)}X_{i}-\lambda\mathbb{E}[X_{1}]E_{f}(t)-\left(\sum_{i=1}^{N_{f}(s)}X_{i}-\lambda\mathbb{E}[X_{1}]E_{f}(s)\right)\bigg{|}\mathscr{F}_{s}\right]\] \[=\mathbb{E}\left[\sum_{i=N_{f}(s)+1}^{N_{f}(t)}X_{i}-\lambda\mathbb{E}[X_{1}]\big{(}E_{f}(t)-E_{f}(s)\big{)}\,\bigg{|}\,\mathscr{F}_{s}\right]\] \[=\mathbb{E}[X_{1}]\,\mathbb{E}\left[N_{f}(t)-N_{f}(s)-\lambda\big{(}E_{f}(t)-E_{f}(s)\big{)}\,\big{|}\,\mathscr{F}_{s}\right]=\mathbb{E}[X_{1}]\,\mathbb{E}[Q(t)-Q(s)|\mathscr{F}_{s}]=0,\] where the second-to-last equality uses the independence of the jumps \(X_{i}\) from \(\{N_{f}(t)\}_{t\geq 0}\) and \(\mathscr{F}_{s}\), and the last equality uses the martingale property of \(\{Q(t)\}_{t\geq 0}\). Hence \(\{M_{f}(t)\}_{t\geq 0}\) is a martingale with respect to \(\mathscr{F}_{t}\). ## 4. Generalized time-fractional compound Poisson process In the previous section, we discussed some important properties and results related to the GFCPP. Note that Definition 3.1 assumes a general distribution \(F_{X}\) for the jump size. In this section, we consider several special cases of the distribution of the jump sizes \(X_{i},i=1,2,\ldots,\) of the GFCPP and study their properties. 
We call this process the generalized time-fractional CPP (GTFCPP); it can also be expressed as the CPP time-changed with the inverse subordinator, \(\{Y(E_{f}(t))\}_{t\geq 0}\), where \(\{Y(t)\}_{t\geq 0}\) and \(\{E_{f}(t)\}_{t\geq 0}\) are independent and the CPP has a specific jump distribution. ### **GTFCPP with exponential jumps** In this subsection, we assume that the jumps \(X_{i},\ i=1,2,\ldots,\) are exponentially distributed with parameter \(\eta>0\) and denote the process by \[Y_{f}^{\eta}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},t\geq 0. \tag{15}\] Using (11), we obtain the LT of the density \(P_{f}^{\eta}(x,t)\), given by \[\mathcal{L}[P_{f}^{\eta}(x,t)]=\mathbb{E}[e^{-sY_{f}^{\eta}(t)}]=\mathbb{E}[e^{-\lambda E_{f}(t)\frac{s}{s+\eta}}]. \tag{16}\] Next, we derive the differential equation associated with the density \(P_{f}^{\eta}(x,t)\) of the GTFCPP with exponential jumps \(\{Y_{f}^{\eta}(t)\}_{t\geq 0}\). **Theorem 4.1**.: _The pdf \(P_{f}^{\eta}(x,t)\) of \(\{Y_{f}^{\eta}(t)\}_{t\geq 0}\) satisfies the following fractional differential equation_ \[\eta\mathcal{D}_{t}^{f}P_{f}^{\eta}(x,t)=-\left[\lambda+\mathcal{D}_{t}^{f}\right]\frac{\partial}{\partial x}P_{f}^{\eta}(x,t), \tag{17}\] _with the following conditions_ \[P_{f}^{\eta}(x,0)=0,\ \ \mathbb{P}(Y_{f}^{\eta}(t)>0)=1-\mathbb{E}[e^{-\lambda E_{f}(t)}].\] Proof.: Consider the subordinated form of the _pdf_ \(P_{f}^{\eta}(x,t)\), \[P_{f}^{\eta}(x,t)=\int_{0}^{\infty}P_{Y}(x,y)h_{f}(y,t)dy,\] where \(h_{f}(y,t)\) is the _pdf_ of the inverse subordinator \(\{E_{f}(t)\}_{t\geq 0}\). Taking the generalized Riemann-Liouville derivative given by (6), we get \[\mathbb{D}_{t}^{f}P_{f}^{\eta}(x,t) =\int_{0}^{\infty}P_{Y}(x,y)\mathbb{D}_{t}^{f}h_{f}(y,t)dy\] \[=-\int_{0}^{\infty}P_{Y}(x,y)\frac{\partial}{\partial y}h_{f}(y,t)dy \tag{18}\] \[=-P_{Y}(x,y)h_{f}(y,t)|_{0}^{\infty}+\int_{0}^{\infty}\frac{\partial}{\partial y}P_{Y}(x,y)h_{f}(y,t)dy.\] Note that (see [2]) the _pdf_ \(P_{Y}(x,t)\) of the CPP satisfies the following equation \[\eta\frac{\partial}{\partial t}P_{Y}(x,t)=-\left[\lambda+\frac{\partial}{\partial t}\right]\frac{\partial}{\partial x}P_{Y}(x,t).\] Substituting the above equation in (18) and using (7), we obtain the desired result. As a special case of Theorem 3.2, the mean and covariance of the GTFCPP with exponential jumps \(\{Y_{f}^{\eta}(t)\}_{t\geq 0}\) can be found as follows \[\mathbb{E}[Y_{f}^{\eta}(t)] =\frac{\lambda}{\eta}\mathbb{E}[E_{f}(t)];\] \[\mathrm{Var}[Y_{f}^{\eta}(t)] =\frac{2\lambda}{\eta^{2}}\mathbb{E}[E_{f}(t)]+\frac{\lambda^{2}}{\eta^{2}}\mathrm{Var}[E_{f}(t)];\] \[\mathrm{Cov}[Y_{f}^{\eta}(t),Y_{f}^{\eta}(s)] =\frac{\lambda}{\eta^{2}}\mathbb{E}[E_{f}(s)]+\frac{1}{\eta^{2}}\mathrm{Cov}[N_{f}(t),N_{f}(s)],\ \ s<t.\] ### GTFCPP with Mittag-Leffler jumps We now define the process \[Y_{f}^{\beta,\eta}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},t\geq 0,\] where the jump sizes \(X_{i},\ i=1,2,\dots,\) are Mittag-Leffler distributed random variables with parameters \(\beta\) and \(\eta\), having the _pdf_ \(q_{\beta,\eta}(x)=\eta x^{\beta-1}E_{\beta,\beta}(-\eta x^{\beta}),\ \ \beta\in(0,1],\ \eta>0.\) The LT of \(\{Y_{f}^{\beta,\eta}(t)\}_{t\geq 0}\) is given by \[\mathcal{L}[P_{f}^{\beta,\eta}(x,t)]=\mathbb{E}[e^{-\lambda E_{f}(t)\frac{s^{\beta}}{s^{\beta}+\eta}}]. \tag{19}\] We now derive a time-changed representation of the GTFCPP with Mittag-Leffler jumps \(\{Y_{f}^{\beta,\eta}(t)\}_{t\geq 0}\). 
**Theorem 4.2**.: _Consider a \(\beta\)-stable subordinator \(\{D_{\beta}(t)\}_{t\geq 0}\) time-changed with an independent GTFCPP with exponential jumps \(\{Y_{f}^{\eta}(t)\}_{t\geq 0}\). Then_ \[Y_{f}^{\beta,\eta}(t)\stackrel{{ d}}{{=}}D_{\beta}(Y_{f}^{\eta}(t)),\ \beta\in(0,1),\ \eta>0,t\geq 0.\] Proof.: The LT of the _pdf_ of \(\{D_{\beta}(Y_{f}^{\eta}(t))\}_{t\geq 0}\) is \[\mathbb{E}[e^{-sD_{\beta}(Y_{f}^{\eta}(t))}]=\mathbb{E}[\mathbb{E}[e^{-sD_{\beta}(Y_{f}^{\eta}(t))}|Y_{f}^{\eta}(t)]]=\mathbb{E}[e^{-s^{\beta}Y_{f}^{\eta}(t)}].\] Using (16), we get \[\mathbb{E}[e^{-sD_{\beta}(Y_{f}^{\eta}(t))}]=\mathbb{E}[e^{-\lambda E_{f}(t)\frac{s^{\beta}}{s^{\beta}+\eta}}].\] Comparing the above equation with the LT (19) of \(\{Y_{f}^{\beta,\eta}(t)\}_{t\geq 0}\), we get the desired result. Next, we present the fractional differential equation for the _pmf_ of the GTFCPP with Mittag-Leffler jumps \(\{Y_{f}^{\beta,\eta}(t)\}_{t\geq 0}\). **Theorem 4.3**.: _The pdf \(P_{f}^{\beta,\eta}(x,t)\) of \(\{Y_{f}^{\beta,\eta}(t)\}_{t\geq 0}\) satisfies the following fractional differential equation_ \[\eta\mathcal{D}_{t}^{f}P_{f}^{\beta,\eta}(x,t)=-\left[\lambda+\mathcal{D}_{t}^{f}\right]\mathcal{D}_{x}^{\beta}P_{f}^{\beta,\eta}(x,t),\] _where \(\mathcal{D}_{x}^{\beta}\) denotes the C-D fractional derivative (a special case of (5)) with the following conditions_ \[P_{f}^{\beta,\eta}(x,0)=0,\ \ \mathbb{P}[Y_{f}^{\beta,\eta}(t)>0]=1-\mathbb{E}[e^{-\lambda E_{f}(t)}].\] Proof.: Writing the _pdf_ \(P^{\beta,\eta}_{f}(x,t)\) using the conditional probability approach and then taking a generalized R-L fractional derivative, we have that \[\mathbb{D}^{f}_{t}P^{\beta,\eta}_{f}(x,t) =\int_{0}^{\infty}l_{\beta}(x,y)\mathbb{D}^{f}_{t}P^{\eta}_{f}(y,t)dy\] \[=-\frac{1}{\eta}\left[\lambda+\mathcal{D}^{f}_{t}\right]\int_{0}^{\infty}l_{\beta}(x,y)\frac{\partial}{\partial y}P^{\eta}_{f}(y,t)dy \tag{20}\] \[=-\frac{1}{\eta}\left[\lambda+\mathcal{D}^{f}_{t}\right]\left(-l_{\beta}(x,y)P^{\eta}_{f}(y,t)|_{0}^{\infty}+\int_{0}^{\infty}\frac{\partial}{\partial y}l_{\beta}(x,y)P^{\eta}_{f}(y,t)dy\right),\] where \(l_{\beta}(x,t)\) is the _pdf_ of the \(\beta\)-stable subordinator, which satisfies the equation \[\mathcal{D}^{\beta}_{x}l_{\beta}(x,t)=-\frac{\partial}{\partial t}l_{\beta}(x,t),\ \ l_{\beta}(x,0)=\delta(x). \tag{21}\] We obtain the desired result by substituting (21) and (7) in (20). ### GTFCPP with Bernstein jumps In this subsection, we assume that the jump sizes \(X_{i},\ i=1,2,\ldots,\) of the GFCPP are distributed as follows \[\mathbb{P}(X_{i}>t)=\mathbb{E}[e^{-\eta E_{g}(t)}]. \tag{22}\] Note that the above distribution also occurs as the distribution of the inter-arrivals between the consecutive jumps of the time-changed Poisson process \(\{N(E_{g}(t))\}_{t\geq 0}\), where \(\{E_{g}(t)\}_{t\geq 0}\) is an independent inverse subordinator with Bernstein function \(g\) (see [25] for more details). Now, we define the process \[Y^{g}_{f}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},t\geq 0.\] It is called the GTFCPP with Bernstein jumps. The LT of \(\{Y^{g}_{f}(t)\}_{t\geq 0}\) is given by \[\mathbb{E}[e^{-sY^{g}_{f}(t)}]=\mathbb{E}\left[e^{-\lambda E_{f}(t)\frac{g(s)}{g(s)+\eta}}\right], \tag{23}\] where \[\mathbb{E}[e^{-sX_{1}}]=\frac{\eta}{g(s)+\eta}.\] Next, we obtain the time-changed representation and the governing fractional differential equation of the GTFCPP with Bernstein jumps \(\{Y^{g}_{f}(t)\}_{t\geq 0}\) in the following theorem. 
**Theorem 4.4**.: _Let \(\{D_{g}(t)\}_{t\geq 0}\) be a Levy subordinator with Bernstein function \(g\) and \(\{Y^{\eta}_{f}(t)\}_{t\geq 0}\) be the GTFCPP with exponential jumps (15), independent of \(\{D_{g}(t)\}_{t\geq 0}.\) Then_ \[Y^{g}_{f}(t)\overset{d}{=}D_{g}(Y^{\eta}_{f}(t)). \tag{24}\] _The pdf \(P^{g}_{f}(x,t)\) of \(\{Y^{g}_{f}(t)\}_{t\geq 0}\) satisfies the following equation_ \[\eta\mathcal{D}^{f}_{t}P^{g}_{f}(x,t)=-\left[\lambda+\mathcal{D}^{f}_{t}\right]\mathcal{D}^{g}_{x}P^{g}_{f}(x,t),\] _with conditions_ \[P^{g}_{f}(x,0)=0,\ \ \mathbb{P}(Y^{g}_{f}(t)>0)=1-\mathbb{E}[e^{-\lambda E_{f}(t)}].\] Proof.: The relation (24) can be proved by taking the LT of \(\{D_{g}(Y^{\eta}_{f}(t))\}_{t\geq 0}\), which is given by \[\mathbb{E}[e^{-sD_{g}(Y^{\eta}_{f}(t))}]=\mathbb{E}[e^{-\lambda E_{f}(t)\frac{g(s)}{g(s)+\eta}}].\] This is equal to the LT given in (23). Hence by the uniqueness of the LT, the result follows. Now, we express the \(pdf\;P_{f}^{g}(x,t)\) using the conditional probability approach and then take a generalized R-L fractional derivative on both sides. We have that \[\mathbb{D}_{t}^{f}P_{f}^{g}(x,t) =\int_{0}^{\infty}l_{g}(x,y)\mathbb{D}_{t}^{f}P_{f}^{\eta}(y,t)dy\] \[=-\frac{1}{\eta}\left[\lambda+\mathcal{D}_{t}^{f}\right]\int_{0}^{\infty}l_{g}(x,y)\frac{\partial}{\partial y}P_{f}^{\eta}(y,t)dy \tag{25}\] \[=-\frac{1}{\eta}\left[\lambda+\mathcal{D}_{t}^{f}\right]\left(-l_{g}(x,y)P_{f}^{\eta}(y,t)|_{0}^{\infty}+\int_{0}^{\infty}\frac{\partial}{\partial y}l_{g}(x,y)P_{f}^{\eta}(y,t)dy\right),\] where \(l_{g}(x,t)\) is the density of the Levy subordinator \(\{D_{g}(t)\}_{t\geq 0}\), which satisfies the equation (see [32]) \[\mathcal{D}_{x}^{g}l_{g}(x,t)=-\frac{\partial}{\partial t}l_{g}(x,t),\;\;l_{g}(x,0)=\delta(x).\] We substitute the above equation in (25) and subsequently use (7) to get the desired result. ### Generalized fractional Poisson process of order \(k\) In this subsection, we assume that the jumps \(X_{i},\;i=1,2,\dots,\) are iid discrete uniform random variables with \(\mathbb{P}(X_{i}=j)=\frac{1}{k},\;j=1,2,\dots,k,\) and that the intensity rate of the underlying Poisson process \(\{N(t)\}_{t\geq 0}\) in the time-changed process \(\{N_{f}(t)\}_{t\geq 0}\) is \(k\lambda\). The process (10), now written as \[Y_{f}^{k}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},t\geq 0, \tag{26}\] is called the generalized fractional Poisson process of order \(k\) (GFCPPoK). This process was first defined and studied in [11]. The time-changed representation of the GFCPPoK is given by (see [11]) \[Y_{f}^{k}(t)\overset{d}{=}N^{k}(E_{f}(t)),t\geq 0,\] where \(\{N^{k}(t)\}_{t\geq 0}\) is the Poisson process of order \(k\) (PPoK). The \(pmf\) \(P_{f}^{k}(n,t)=\mathbb{P}[Y_{f}^{k}(t)=n]\) of \(\{Y_{f}^{k}(t)\}_{t\geq 0}\) satisfies the following fractional differential-difference equation (see [11, Proposition 7.4]), \[\mathcal{D}_{t}^{f}P_{f}^{k}(n,t) =-k\lambda\left(1-\frac{1}{k}\sum_{j=1}^{n\wedge k}B^{j}\right)P_{f}^{k}(n,t),\;n>0,\] \[\mathcal{D}_{t}^{f}P_{f}^{k}(0,t) =-k\lambda P_{f}^{k}(0,t),\] where \(B\) is the backward shift operator, i.e. \(B[P(n,t)]=P(n-1,t)\). 
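As a rough companion to the sample-path simulations discussed in the paper (Section 6), the following is a minimal Python sketch of one way to simulate the GFCPP when \(E_{f}\) is the inverse \(\alpha\)-stable subordinator. The function names (`stable_increments`, `inverse_stable_subordinator`, `gfcpp_path`), the Kanter representation for the stable increments and the grid-based first-passage approximation are our own illustrative assumptions, not the authors' code. With discrete uniform jumps on \(\{1,\ldots,k\}\) and rate \(k\lambda\) it produces paths of the process \(Y_{f}^{k}\) in (26).

```python
import numpy as np

def stable_increments(alpha, dt, size, rng):
    """Increments of a standard alpha-stable subordinator over a step dt, using
    the Kanter representation of a positive stable variable with LT exp(-s**alpha)."""
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    S = (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)) * \
        (np.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha)
    return dt ** (1.0 / alpha) * S

def inverse_stable_subordinator(alpha, t_grid, dr=1e-3, rng=None):
    """E_alpha(t) = inf{r : D_alpha(r) > t}, approximated on an r-grid of step dr."""
    rng = rng or np.random.default_rng()
    D = [0.0]
    while D[-1] <= t_grid[-1]:                 # grow D_alpha until it passes t_max
        D.append(D[-1] + stable_increments(alpha, dr, 1, rng)[0])
    # number of grid points with D <= t gives the first-passage index
    return dr * np.searchsorted(np.asarray(D), t_grid, side="right")

def gfcpp_path(alpha, lam, jump_sampler, t_grid, rng=None):
    """Sample path of Y_f(t) = sum_{i <= N(E_alpha(t))} X_i (stable special case)."""
    rng = rng or np.random.default_rng()
    E = inverse_stable_subordinator(alpha, t_grid, rng=rng)
    arrivals, s = [], rng.exponential(1.0 / lam)
    while s <= E[-1]:                          # Poisson arrivals in operational time
        arrivals.append(s)
        s += rng.exponential(1.0 / lam)
    jumps = jump_sampler(len(arrivals), rng)
    counts = np.searchsorted(arrivals, E, side="right")   # N(E_alpha(t))
    return np.concatenate(([0.0], np.cumsum(jumps)))[counts]

# GFPPoK example: discrete uniform jumps on {1,...,k}, Poisson rate k*lambda
k, lam, alpha = 3, 1.0, 0.7
t_grid = np.linspace(0.0, 5.0, 201)
path = gfcpp_path(alpha, k * lam, lambda n, g: g.integers(1, k + 1, n), t_grid)
```

Swapping the jump sampler for exponential draws (and using rate \(\lambda\) instead of \(k\lambda\)) gives, under the same assumptions, paths of \(Y_{f}^{\eta}\) from (15).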
### Generalized Polya-Aeppli process of order \(k\) Consider the GTFCPP with the \(X_{i}\)'s, \(i=1,2,\dots,\) iid truncated geometrically distributed random variables with success probability \(1-\rho\) and \(pmf\) given by \[\mathbb{P}[X_{i}=j]=\frac{1-\rho}{1-\rho^{k}}\rho^{j-1},\;\;j=1,2,\dots,k,\;\;\rho\in[0,1).\] The LT of \(X_{1}\) is given by \[\mathbb{E}[e^{-sX_{1}}]=\frac{(1-\rho)e^{-s}}{(1-\rho^{k})}\frac{1-\rho^{k}e^{-ks}}{1-\rho e^{-s}}.\] Note that when \(k\to\infty\) the truncated geometric distribution approaches the geometric distribution starting at \(1\) with success probability \(1-\rho\). We denote the process as \[Y_{f}^{\rho,k}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},t\geq 0. \tag{27}\] This is called the generalized Polya-Aeppli process of order \(k\) (GPAPoK). The LT of \(\{Y_{f}^{\rho,k}(t)\}_{t\geq 0}\) is given by \[\mathbb{E}[e^{-sY_{f}^{\rho,k}(t)}]=\mathbb{E}[e^{-\lambda E_{f}(t)(1-\mathbb{E}[e^{-sX_{1}}])}]=\mathbb{E}\left[e^{-\lambda E_{f}(t)\left(1-\frac{(1-\rho)e^{-s}}{(1-\rho^{k})}\frac{1-\rho^{k}e^{-ks}}{1-\rho e^{-s}}\right)}\right].\] The time-changed representation of the GPAPoK is \[Y_{f}^{\rho,k}(t)\stackrel{{ d}}{{=}}N_{A}^{k}(E_{f}(t)),t\geq 0,\] where \(\{N_{A}^{k}(t)\}_{t\geq 0}\) is the Polya-Aeppli process of order \(k\) (PAPoK) (see [7]). Further, we derive the differential equation for the GPAPoK. **Theorem 4.5**.: _The pmf \(P_{f}^{\rho,k}(n,t)=\mathbb{P}[Y_{f}^{\rho,k}(t)=n]\) satisfies the following fractional differential equation_ \[\mathcal{D}_{t}^{f}P_{f}^{\rho,k}(n,t) =-\lambda\left(1-\frac{1-\rho}{1-\rho^{k}}\sum_{j=1}^{n\wedge k}\rho^{j-1}B^{j}\right)P_{f}^{\rho,k}(n,t),\ \ n=1,2,\ldots,\] \[\mathcal{D}_{t}^{f}P_{f}^{\rho,k}(0,t) =-\lambda P_{f}^{\rho,k}(0,t),\] _with an initial condition \(P_{f}^{\rho,k}(n,0)=\delta_{n,0}\)._ Proof.: Writing the _pmf_ \(P_{f}^{\rho,k}(n,t)\) using the conditional probability approach, we have that \[P_{f}^{\rho,k}(n,t)=\int_{0}^{\infty}P_{\rho,k}(n,y)h_{f}(y,t)dy,\] where \(P_{\rho,k}(n,t)\) is the _pmf_ of the PAPoK \(\{N_{A}^{k}(t)\}_{t\geq 0}\). Taking the generalized Riemann-Liouville derivative (6) on both sides of the above equation, we get \[\mathbb{D}_{t}^{f}P_{f}^{\rho,k}(n,t) =\int_{0}^{\infty}P_{\rho,k}(n,y)\mathbb{D}_{t}^{f}h_{f}(y,t)dy\] \[=-\int_{0}^{\infty}P_{\rho,k}(n,y)\frac{\partial}{\partial y}h_{f}(y,t)dy \tag{28}\] \[=-P_{\rho,k}(n,y)h_{f}(y,t)|_{0}^{\infty}+\int_{0}^{\infty}\frac{\partial}{\partial y}P_{\rho,k}(n,y)h_{f}(y,t)dy.\] We know that (see [27]) the _pmf_ \(P_{\rho,k}(n,t)\) of the Polya-Aeppli process of order \(k\), \(\{N_{A}^{k}(t)\}_{t\geq 0}\), satisfies the following differential equation \[\frac{\partial}{\partial t}P_{\rho,k}(n,t)=-\lambda\left(1-\frac{1-\rho}{1-\rho^{k}}\sum_{j=1}^{n\wedge k}\rho^{j-1}B^{j}\right)P_{\rho,k}(n,t),\ \ n\geq 1.\] Substituting the above equation in (28) and using (7), we obtain the desired result. ### Fractional negative binomial process Let \(X_{i}\), \(i=1,2,\ldots,\) be a sequence of iid random variables with discrete logarithmic distribution, given by \[\mathbb{P}[X_{i}=n]=\frac{-1}{\log(1-q)}\frac{q^{n}}{n},\ \ n\geq 1,\ q\in(0,1).\] We denote the process as \[Y_{f}^{q}(t):=\sum_{i=1}^{N_{f}(t)}X_{i},t\geq 0.\] It is also known (see [4]) as the fractional negative binomial process with parameter \((1-q,\frac{\lambda}{\log(1-q)})\). 
The LT of \(\{Y_{f}^{q}(t)\}_{t\geq 0}\) is given by \[\mathbb{E}[e^{-sY_{f}^{q}(t)}]=\mathbb{E}\left[e^{-\lambda E_{f}(t)\left(1-\frac{\log(1-qe^{-s})}{\log(1-q)}\right)}\right].\] ## 5. Classifications based on arrivals In this section, we work out special cases of the GFCPP, defined in (10), by taking particular cases of the time-changed Poisson process \(\{N_{f}(t)\}_{t\geq 0}\). More specifically, we study two types of inverse subordinators \(\{E_{f}(t)\}_{t\geq 0}\), namely the inverse tempered \(\alpha\)-stable subordinator (ITSS) and the inverse of the inverse Gaussian (IG) subordinator. The jump distribution is kept general. Further, some results are mentioned by taking special cases of the jump distributions of \(X_{i},i=1,2,\ldots\). ### Tempered fractional CPP **Definition 5.1**.: _Consider the inverse subordinator (1) associated with the tempered stable Bernstein function (2) \(f(s)=(\mu+s)^{\alpha}-\mu^{\alpha},\ \alpha\in(0,1],\mu>0\), denoted by \(\{E_{\alpha,\mu}(t)\}_{t\geq 0}\). Let \(N_{\alpha,\mu}(t):=N(E_{\alpha,\mu}(t)),t\geq 0,\) be the tempered fractional Poisson process (TFPP) (studied in [12]). The process (10), defined by_ \[Y_{\alpha,\mu}(t):=\sum_{i=1}^{N_{\alpha,\mu}(t)}X_{i},t\geq 0, \tag{29}\] _is called the tempered fractional CPP (TFCPP), where \(X_{i},\ i=1,2,\ldots,\) are the iid jumps having common distribution \(F_{X}\)._ The _pmf_ \(P_{\alpha,\mu}(n,t)=\mathbb{P}[Y_{\alpha,\mu}(t)=n]\) satisfies the following tempered fractional differential equation (see [25]) \[\mathcal{D}_{t}^{\alpha,\mu}P_{\alpha,\mu}(n,t)=-\lambda P_{\alpha,\mu}(n,t)+\lambda\int_{-\infty}^{\infty}P_{\alpha,\mu}(n-x,t)F_{X}(x)dx,\] where \(\mathcal{D}_{t}^{\alpha,\mu}\) denotes the tempered C-D fractional derivative (a special case of (5)). We next give some distributional results for the TFCPP. **Theorem 5.1**.: _The mean, variance, and covariance of the TFCPP \(\{Y_{\alpha,\mu}(t)\}_{t\geq 0}\) are given by_ 1. \(\mathbb{E}[Y_{\alpha,\mu}(t)]=\lambda\mathbb{E}[X_{1}]\sum_{n=0}^{\infty}\frac{\mu^{\alpha}\gamma(\mu t;\alpha(1+n))}{\Gamma(\alpha(1+n))};\) 2. \(\mathrm{Var}[Y_{\alpha,\mu}(t)]=\lambda\mathbb{E}[X_{1}^{2}]\sum_{n=0}^{\infty}\frac{\mu^{\alpha}\gamma(\mu t;\alpha(1+n))}{\Gamma(\alpha(1+n))}+\lambda^{2}(\mathbb{E}[X_{1}])^{2}\mathrm{Var}[E_{\alpha,\mu}(t)]\) 3. \(\mathrm{Cov}[Y_{\alpha,\mu}(t),Y_{\alpha,\mu}(s)]=\lambda\mathbb{E}[X_{1}^{2}]\sum_{n=0}^{\infty}\frac{\mu^{\alpha}\gamma(\mu s;\alpha(1+n))}{\Gamma(\alpha(1+n))}+\lambda^{2}(\mathbb{E}[X_{1}])^{2}\mathrm{Cov}[E_{\alpha,\mu}(t),E_{\alpha,\mu}(s)].\) _where \(\gamma(a;b)\) is the incomplete gamma function._ Proof.: The results follow from Theorem 3.2 by substituting the value of \(\mathbb{E}[E_{\alpha,\mu}(t)]\) (see [26]) \[\mathbb{E}[E_{\alpha,\mu}(t)]=\sum_{n=0}^{\infty}\frac{\mu^{\alpha}\gamma(\mu t;\alpha(1+n))}{\Gamma(\alpha(1+n))}.\qed\] **Corollary 5.1**.: _Let \(\mathbb{E}[X_{i}]=0,\ i=1,2,\ldots\). Then the correlation function of the process is given by_ \[\mathrm{Corr}[Y_{\alpha,\mu}(t),Y_{\alpha,\mu}(s)]=\sqrt{\frac{\mathbb{E}[E_{\alpha,\mu}(s)]}{\mathbb{E}[E_{\alpha,\mu}(t)]}},\] _where_ \[\mathbb{E}[E_{\alpha,\mu}(t)]\sim\frac{t}{\alpha\mu^{\alpha-1}}\ \text{ as }\ t\rightarrow\infty.\] _Using (2.4), we obtain the correlation function of \(Y_{\alpha,\mu}(t)\) and \(Y_{\alpha,\mu}(s)\). 
It exhibits LRD property, i.e._ \[\lim_{t\rightarrow\infty}\frac{\mathrm{Corr}[Y_{\alpha,\mu}(t),Y_{\alpha,\mu} (s)]}{t^{-1/2}}\sim\mu^{(\alpha-1)/2}\alpha^{1/2}\sqrt{\mathbb{E}[E_{\alpha, \mu}(s)]}.\] Next, we discuss special cases for the TFCPP, where \(X_{i}\) follows some particular type of distribution. **Special Case 5.1**.: _When \(X_{i},\;i=1,2,\ldots,\) follow exponential distribution with parameter \(\eta>0\). The process \(\{Y_{f}^{\eta}(t)\}_{t\geq 0}\) (29) can be represented in the following notation_ \[Y_{\alpha,\mu}^{\eta}(t):=\sum_{i=1}^{N_{\alpha,\mu}(t)}X_{i},t\geq 0. \tag{30}\] _This process \(\{Y_{\alpha,\mu}^{\eta}(t)\}_{t\geq 0}\) can also be written in the time-changed representation as \(\{Y(N_{\alpha,\mu}(t))\}_{t\geq 0}\). The pdf \(P_{\alpha,\mu}^{\eta}(x,t)\) satisfies the following equation_ \[\eta\mathcal{D}_{t}^{\alpha,\mu}P_{\alpha,\mu}^{\eta}(x,t)=-\left[\lambda+ \mathcal{D}^{\alpha,\mu}(t)\right]\frac{\partial}{\partial x}P_{\alpha,\mu}^{ \eta}(x,t).\] _where \(\mathcal{D}_{t}^{\alpha,\mu}\) is the tempered C-D derivative of order \(\alpha\in(0,1)\) with tempering parameter \(\mu>0\) is defined by_ \[\mathcal{D}_{t}^{\alpha,\mu}g(t)=\frac{1}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{ 0}^{t}\frac{g(u)du}{(t-u)^{\alpha}}-\frac{g(0)}{\Gamma(1-\alpha)}\int_{t}^{ \infty}e^{-\mu r}\alpha r^{-\alpha-1}dr.\] _with conditions_ \[P_{\alpha,\mu}^{\eta}(x,0)=0,\;\;\mathbb{P}(Y_{\alpha,\mu}^{\eta}(t)>0)=1-e^{ -\lambda t}.\] **Remark 5.1**.: _When \(\mu=0\), the tempered \(\alpha\)-stable subordinator reduces to the \(\alpha\)-stable subordinator. Then the process (30) becomes the time-fractional compound Poisson process, as defined in [2]. The mean and covariance are given by_ \[\mathbb{E}[Y_{\alpha}^{\eta}(t)] =\frac{\lambda t^{\alpha}}{\eta\Gamma(1+\alpha)};\] \[\mathrm{Cov}[Y_{\alpha}^{\eta}(t),Y_{\alpha}^{\eta}(s)] =\frac{2\lambda s^{\alpha}}{\eta^{2}\Gamma(1+\alpha)}+\frac{ \lambda^{2}}{\eta^{2}}\mathrm{Cov}[E_{\alpha}(t),E_{\alpha}(s)],\;\;s<t.\] **Special Case 5.2**.: _When \(X_{i},\;i=1,2,\ldots\) are iid random variables with tempered Mittag-Leffler distribution ([19])_ \[f_{\beta,\eta,\nu}(x)=\lambda e^{-\nu x}\sum_{n=0}^{\infty}(-1)^{n}(\lambda- \nu^{\beta})^{n}\frac{x^{\beta(n+1)-1}}{\Gamma(\beta(n+1))},\;\lambda>\nu^{ \beta},x>0.\] _Then, process (10) is defined as_ \[Y_{\alpha,\mu}^{\beta,\eta,\nu}(t):=\sum_{i=1}^{N_{\alpha,\mu}(t)}X_{i},t\geq 0,\] _where \(N_{\alpha,\mu}(t)=N(E_{\alpha,\mu}(t))\) is tempered fractional Poisson process with rate parameter \(\lambda>0\) (see [12]). It is called a TFCPP with tempered Mittag-Leffler jumps. 
The LT of pdf of \(\{Y_{\alpha,\mu}^{\beta,\eta,\nu}(t)\}_{t\geq 0}\) is given by_ \[\mathbb{E}[e^{-sY_{\alpha,\mu}^{\beta,\eta,\nu}(t)}]=\mathbb{E}[e^{-\lambda E _{\alpha,\mu}(t)\frac{(s+\nu)^{\beta}-\nu^{\beta}}{\eta+(s+\nu)^{\beta}-\nu^{ \beta}}}],\] _where_ \[\mathbb{E}[e^{-sX_{1}}]=\frac{\eta}{\eta+(s+\nu)^{\beta}-\nu^{\beta}}.\] _An alternate representation of \(\{Y_{\alpha,\mu}^{\beta,\eta,\nu}(t)\}_{t\geq 0}\) is given by time-changing the tempered \(\beta\)-stable subordinator \(\{D_{\beta,\nu}(t)\}_{t\geq 0},\;\beta\in(0,1]\) with \(\{Y_{\alpha,\mu}^{\alpha,\mu}(t)\}_{t\geq 0}\), i.e._ \[Y_{\alpha,\mu}^{\beta,\eta,\nu}(t)\stackrel{{ d}}{{=}}D_{\beta, \nu}(Y_{\alpha,\mu}^{\eta}(t)).\] _The pdf \(P_{\alpha,\mu}^{\beta,\eta,\nu}(x,t)\) satisfies the following fractional differential equation_ \[\eta\mathcal{D}_{t}^{\alpha,\mu}P_{\alpha,\mu}^{\beta,\eta,\nu}(x,t)=-\left[ \lambda+\mathcal{D}_{t}^{\alpha,\mu}\right]\mathcal{D}_{x}^{\beta,\nu}P_{ \alpha,\mu}^{\beta,\eta,\nu}(x,t)\] _with initial condition_ \[P^{\beta,\eta,\nu}_{\alpha,\mu}(x,0)=0,\ \ \mathbb{P}(Y^{\beta,\eta,\nu}_{\alpha,\mu}(t) >0)=1-\mathbb{E}[e^{-\lambda E_{\alpha,\mu}(t)}].\] **Remark 5.2**.: _Here, we discuss the particular values of parameter of the above-introduced processes._ * _For_ \(\nu=0\)_, then the process is called time-changed_ \(\beta\)_-stable process, i.e._ \(\{D_{\beta}(Y^{\eta}_{\alpha,\mu}(t))\}_{t\geq 0}\)_._ * _For_ \(\mu=0\)_, the process behaves as time-changed tempered_ \(\beta\)_-stable subordinator, i.e._ \(\{D_{\beta,\nu}(Y^{\eta}_{\alpha}(t))\}_{t\geq 0}\)__ * _When_ \(\mu=0,\ \nu=0\)_, then the process behaves as time-changed in_ \(\beta\)_-stable subordinator_ \(\{D_{\beta}(t)\}_{t\geq 0}\) _with_ \(\{Y^{\eta}_{\alpha}(t)\}_{t\geq 0}\)_, i.e._ \(\{D_{\beta}(Y^{\eta}_{\alpha}(t))\}_{t\geq 0}\) _(see_ _[_2_]__)._ **Special Case 5.3**.: _Let \(X_{i},i=1,2,\dots,\) are as distributed in (22). Then the process_ \[Y^{g}_{\alpha,\mu}(t):=\sum_{i=1}^{N_{\alpha,\mu}(t)}X_{i},t\geq 0,\] _is called TFCPP with Bernstein jumps. The subsequent result follow from Theorem (4.4) as a particular case. This process has a time-changed representation, i.e._ \[Y^{g}_{\alpha,\mu}(t)\stackrel{{ d}}{{=}}D_{g}(Y^{\eta}_{\alpha, \mu}(t)).\] _The pdf \(P^{g}_{\alpha,\mu}(x,t)\) of \(\{Y^{g}_{\alpha,\mu}(t)\}_{t\geq 0}\) satisfies the following equation_ \[\eta\mathcal{D}^{\alpha,\mu}_{t}P^{g}_{\alpha,\mu}(x,t)=-\left[\lambda+ \mathcal{D}^{\alpha,\mu}_{t}\right]\mathcal{D}^{g}_{x}P^{g}_{\alpha,\mu}(x,t)\] _with conditions_ \[P^{g}_{\alpha,\mu}(x,0)=0,\ \ \mathbb{P}(Y^{g}_{\alpha,\mu}(t)>0)=1-\mathbb{E}[e ^{-\lambda E_{\alpha,\mu}(t)}].\] **Special Case 5.4**.: _Let \(X_{i},i=1,2,\dots\) be iid truncated geometrically distributed random variables. Using (27), we define the process_ \[Y^{\rho,k}_{\alpha,\mu}(t):=\sum_{i=1}^{N_{\alpha,\mu}(t)}X_{i},t\geq 0.\] _It is called as the tempered fractional PAPoK. Next, we mentioned the particular case of the Theorem (4.5) for the tempered fractional PAPoK. 
The pmf \(P^{\rho,k}_{\alpha,\mu}(n,t)=\mathbb{P}[Y^{\rho,k}_{\alpha,\mu}(t)=n]\) satisfy following fractional differential equation_ \[\mathcal{D}^{\alpha,\mu}_{t}P^{\rho,k}_{\alpha,\mu}(n,t) =-\lambda\left(1-\frac{1-\rho}{1-\rho^{k}}\sum_{j=1}^{n\wedge k} \rho^{j-1}B^{j}\right)P^{\rho,k}_{\alpha,\mu}(n,t),\] \[\mathcal{D}^{\alpha,\mu}_{t}P^{\rho,k}_{\alpha,\mu}(0,t) =-\lambda P^{\rho,k}_{\alpha,\mu}(0,t).\] _The time-changed representation of \(\{Y^{\rho,k}_{\alpha,\mu}(t)\}_{t\geq 0}\) by time-changing in PAPoK \(\{N^{k}_{A}(t)\}_{t\geq 0}\) with \(\{E_{\alpha,\mu}(t)\}_{t\geq 0}\) such that_ \[Y^{\rho,k}_{\alpha,\mu}(t)\stackrel{{ d}}{{=}}N^{k}_{A}(E_{ \alpha,\mu}(t)),t\geq 0.\] **Remark 5.3**.: _When \(\mu=0\), then \(\{Y^{\rho,k}_{\alpha,0}(t)\}_{t\geq 0}\) is called fractional FPAPok, defined in [15]._ **Special Case 5.5**.: _Let \(X_{i},\ i=1,2,\dots,\) be the discrete uniform distributed random variables. From equation (26), reduces the tempered fractional PPoK \(\{Y^{k}_{\alpha,\mu}(t)\}_{t\geq 0}\), which is introduced and studied in [11]._ **S ### Inverse IG fractional CPP **Definition 5.2**.: _Consider the inverse subordinator (1) associated with inverse Gaussian Bernstein function (2) \(f(s)=\delta(\sqrt{2s+\gamma^{2}}-\gamma)\), denoted by \(\{E_{\delta,\gamma}(t)\}_{t\geq 0}\). Let \(N_{\delta,\gamma}(t):=N(E_{\delta,\gamma}(t)),t\geq 0\) be the inverse IG process (see [33]). The process (10) defined by_ \[Y_{\delta,\gamma}(t):=\sum_{i=1}^{N_{\delta,\gamma}(t)}X_{i},t\geq 0, \tag{31}\] _is called as the inverse IG fractional CPP with \(X_{i},\;i=1,2,\dots,\) be the iid jumps having common distribution \(F_{X}\)._ Let \(\{D_{\delta,\gamma}(t)\}_{t\geq 0}\) be IG Levy process with the LT (see [8]) \[\mathbb{E}(e^{-sD_{\delta,\gamma}(t)})=e^{-t\delta(\sqrt{2s+\gamma^{2}}-\gamma)}.\] The Levy measure \(\nu_{\delta,\gamma}\) corresponding to the inverse Gaussian subordinator is given by (see [8]) \[\nu_{\delta,\gamma}(dx)=\frac{\delta}{\sqrt{2\pi x^{3}}}e^{-\gamma^{2}x/2} \mathbb{I}_{\{x>0\}}dx.\] Further, we define convolution type fractional derivative or non-local operator corresponding to the inverse of IG subordinator, we have \[\bar{\nu}_{\delta,\gamma}(s)=\nu_{\delta,\gamma}(s,\infty) =\int_{s}^{\infty}\frac{\delta}{\sqrt{2\pi x^{3}}}e^{-\gamma^{2}x /2}du,\;s>0,\] \[=\sqrt{\frac{2}{\pi}}\delta s^{-1/2}e^{-\gamma^{2}s/2}-\frac{ \delta\gamma}{\sqrt{\pi}}\Gamma\big{(}1/2;\gamma^{2}s/2\big{)},\] where \(\Gamma(a;b)=\int_{b}^{\infty}u^{a-1}e^{-u}dt\) is the upper incomplete gamma function. Using (5), the generalized C-D fractional derivative corresponding to the IG subordinator is of the form \[\mathcal{D}_{t}^{\delta,\gamma}V(t)=\frac{d}{dt}\int_{0}^{t}\;v(s)\left(\sqrt {\frac{2}{\pi}}\delta(t-s)^{-1/2}e^{-\gamma^{2}s/2}-\frac{\delta\gamma}{\sqrt {\pi}}\Gamma\big{(}1/2,\gamma^{2}(t-s)/2\big{)}\right)ds-v(0)\bar{\nu}_{ \delta,\gamma}(t). \tag{32}\] The generalized R-L derivative corresponding to the inverse of IG subordinator is \[\mathbb{D}_{t}^{\delta,\gamma}v(t)=\frac{d}{dt}\int_{0}^{t}\;v(s)\left(\sqrt {\frac{2}{\pi}}\delta(t-s)^{-1/2}e^{-\gamma^{2}(t-s)/2}-\frac{\delta\gamma}{ \sqrt{\pi}}\Gamma\big{(}1/2,\gamma^{2}(t-s)/2\big{)}\right)ds. 
\tag{33}\] **Theorem 5.2**.: _The pdf \(h_{\delta,\gamma}(x,t)\) of \(\{E_{\delta,\gamma}(t)\}_{t\geq 0}\) solves the following fractional differential equation_ \[\mathbb{D}_{t}^{\delta,\gamma}h_{\delta,\gamma}(x,t)=-\frac{\partial}{ \partial x}h_{\delta,\gamma}(x,t),x>0, \tag{34}\] _with initial condition_ \[h_{\delta,\gamma}(x,0)=\delta(x),\;h_{\delta,\gamma}(0,t)=\nu_{G}(t).\] Proof.: Taking LT on both sides of (33), it yields \[\mathcal{L}_{t}\{\mathbb{D}_{t}^{\delta,\gamma}v(t)\} =s\mathcal{L}_{t}(v(t))\left[\mathcal{L}_{t}\left(\sqrt{\frac{2}{ \pi}}\delta(t)^{-1/2}e^{-\gamma^{2}t/2}\right)-\mathcal{L}_{t}\left(\frac{ \delta\gamma}{\sqrt{\pi}}\Gamma\big{(}1/2,\gamma^{2}t/2\big{)}\right)\right]\] \[=\delta s\mathcal{L}_{t}(v(t))\left[\frac{2}{\sqrt{(2s+\gamma^{2 })}}-\frac{\gamma}{s}\frac{\sqrt{(2s+\gamma^{2})}-\gamma}{\sqrt{(2s+\gamma^{2 })}}\right]\] \[=\mathcal{L}_{t}(v(t))\delta(\sqrt{2s+\gamma^{2}}-\gamma).\] Now, applying the LT with respect to \(x\) on the both sides of (34), we get \[\mathbb{D}_{t}^{\delta,\gamma}\mathcal{L}_{x}[h_{\delta,\gamma}(x,t)](y)=-y \mathcal{L}_{x}[h_{\delta,\gamma}(x,t)](y)-h_{\delta,\gamma}(0,t).\] Again, taking the LT with respect to \(t\), we have that \[\delta(\sqrt{2s+\gamma^{2}}-\gamma)\mathcal{L}_{t}[\mathcal{L}_{x}[h_{\delta, \gamma}(x,t)](y)](s)=-y\mathcal{L}_{t}[\mathcal{L}_{x}[h_{\delta,\gamma}(x,t)]( y)](s)+\frac{\delta(\sqrt{2s+\gamma^{2}}-\gamma)}{s}.\] We obtain \[\mathcal{L}_{t}[\mathcal{L}_{x}[h_{\delta,\gamma}(x,t)](y)](s)=\frac{\delta( \sqrt{2s+\gamma^{2}}-\gamma)}{s\left(y+\delta(\sqrt{2s+\gamma^{2}}-\gamma) \right)},\] which is the Laplace-Laplace transform of the _pdf_\(h_{\delta,\gamma}(x,t)\) (see [33, Remark 2.1]). This completes the proof. **Theorem 5.3**.: _The pmf \(P_{\delta,\gamma}(n,t)=\mathbb{P}[Y_{\delta,\gamma}(t)=n]\) satisfy following fractional differential equation_ \[\mathcal{D}_{t}^{\delta,\gamma}P_{\delta,\gamma}(n,t)=-\lambda P_{\delta, \gamma}(n,t)+\lambda\int_{-\infty}^{\infty}P_{\delta,\gamma}(n-x,t)F_{X}(x)dx.\] Proof.: The proof is similar to the proof of Theorem 3.1 and hence it is omitted here. **Special Case 5.6**.: _Let \(X_{i},\ i=1,2,\dots,\) are exponentially distributed with parameter \(\eta\) in (31). Then, the process \(\{Y_{f}^{\eta}(t)\}_{t\geq 0}\) can be written as_ \[Y_{\delta,\gamma}^{\eta}(t):=\sum_{i=1}^{N_{\delta,\gamma}(t)}X_{i},\ t\geq 0. \tag{35}\] _This process \(\{Y_{\delta,\gamma}^{\eta}(t)\}_{t\geq 0}\) can also represented as \(\{Y(E_{\delta,\gamma}(t))\}_{t\geq 0}\). The pdf \(P_{\delta,\gamma}^{\eta}(x,t)\) satisfies the following equation_ \[\eta\mathcal{D}_{t}^{\delta,\gamma}P_{\delta,\gamma}^{\eta}(x,t)(x,t)=-\left[ \lambda+\mathcal{D}^{\delta,\gamma}(t)\right]\frac{\partial}{\partial x}P_{ \delta,\gamma}^{\eta}(x,t),\] _where \(\mathcal{D}_{t}^{\delta,\gamma}\) fractional derivative (32), with initial condition_ \[P_{\delta,\gamma}^{\eta}(x,0)=0,\ \ \mathbb{P}(Y_{\delta,\gamma}^{\eta}(t)>0)=1-e^ {-\lambda t}.\] **Special Case 5.7**.: _Let \(X_{i},i=1,2,\dots\) be Mittag-Leffler distributed random variables with parameter \(0<\beta<1\) and \(\eta>0\). 
Then, we define new process \(\{Y_{\delta,\gamma}^{\beta,\eta}(t)\}_{t\geq 0}\) such as_ \[Y_{\delta,\gamma}^{\beta,\eta}(t):=\sum_{i=1}^{N_{\delta,\gamma}(t)}X_{i},\ t \geq 0.\] _The LT of \(\{Y_{\delta,\gamma}^{\beta,\eta}(t)\}_{t\geq 0}\) is given by_ \[\mathbb{E}[e^{-sY_{\delta,\gamma}^{\beta,\eta}(t)}]=\mathbb{E}\left[e^{- \lambda E_{\delta,\gamma}(t)\frac{\beta}{\eta+s^{\beta}}}\right].\] _The process \(Y_{\delta,\gamma}^{\beta,\eta}(t)\) can be represented in terms of the \(\beta\)-stable subordinator, denoted by \(D_{\beta}(t)\), time-changed with independent \(\{Y_{\delta,\gamma}^{\eta}(t)\}_{t\geq 0}\) (35), i.e._ \[Y_{\delta,\gamma}^{\beta,\eta}(t)\stackrel{{ d}}{{=}}D_{\beta}(Y_{ \delta,\gamma}^{\eta}(t)),\ t\geq 0.\] _The pdf \(P_{\delta,\gamma}^{\beta,\eta}(x,t)\) satisfies the following fractional differential equation_ \[\eta\mathcal{D}_{t}^{\delta,\gamma}P_{\delta,\gamma}^{\beta,\eta}(x,t)=-\left[ \lambda+\mathcal{D}_{t}^{\delta,\gamma}\right]\mathcal{D}_{x}^{\beta}P_{ \delta,\gamma}^{\beta,\eta}(x,t)\] _with initial conditions_ \[P_{\delta,\gamma}^{\beta,\eta}(x,0)=0,\ \ \mathbb{P}(Y_{\delta,\gamma}^{\beta,\eta}(t) >0)=1-\mathbb{E}[e^{-\lambda E_{\delta,\gamma}(t)}].\] **Special Case 5.8**.: _Let \(X_{i},i=1,2,\dots\) are iid random variables with \(LT\)_ \[\mathbb{E}[e^{-sX_{1}}]=\frac{\eta}{\theta(\sqrt{2s+\chi^{2}}-\chi)+\eta}.\] _Note that the distribution of \(X_{i}\)'s coincides with the distribution of inter-arrival times of inverse IG subordinated Poisson renewal process. The new stochastic process \(\{Y^{\theta,\chi}_{\delta,\gamma}(t)\}_{t\geq 0}\) can be defined as_ \[Y^{\theta,\chi}_{\delta,\gamma}(t):=\sum_{i=1}^{N_{\delta,\gamma}(t)}X_{i},\ t\geq 0.\] _The LT of pdf of \(\{Y^{\theta,\chi}_{\delta,\gamma}(t)\}_{t\geq 0}\) is given by_ \[\mathbb{E}[e^{-sY^{\theta,\chi}_{\delta,\gamma}(t)}]=\mathbb{E}\left[e^{- \lambda E_{\delta,\gamma}(t)\frac{\theta(\sqrt{2s+\chi^{2}}-\chi)}{\eta+ \theta(\sqrt{2s+\chi^{2}}-\chi)}}\right].\] _The process \(\{Y^{\theta,\chi}_{\delta,\gamma}(t)\}_{t\geq 0}\) can be represented in terms of the IG subordinator, denoted by \(\{D_{\theta,\chi}(t)\}_{t\geq 0}\), time-changed with independent \(\{Y^{\eta}_{\delta,\gamma}(t)\}_{t\geq 0}\) (35), i.e._ \[Y^{\theta,\chi}_{\delta,\gamma}(t)\stackrel{{ d}}{{=}}D_{\theta, \chi}(Y^{\eta}_{\delta,\gamma}(t)),\ t\geq 0.\] _The pdf \(P^{\theta,\eta,\chi}_{\delta,\gamma}(x,t)\) satisfies the following fractional differential equation_ \[\eta\mathcal{D}^{\delta,\gamma}_{t}P^{\theta,\eta,\chi}_{\delta,\gamma}(x,t)= -\left[\lambda+\mathcal{D}^{\delta,\gamma}_{t}\right]\mathcal{D}^{\theta,\chi} _{x}P^{\theta,\eta,\chi}_{\delta,\gamma}(x,t)\] _with initial condition_ \[P^{\theta,\eta,\chi}_{\delta,\gamma}(x,0)=0,\ \ \mathbb{P}(Y^{\theta,\chi}_{ \delta,\gamma}(t)>0)=1-\mathbb{E}[e^{-\lambda E_{\delta,\gamma}(t)}].\] **Special Case 5.9**.: _Let \(X_{i},\ i=1,2,\dots,\) be the iid discrete uniform random variables and \(\{N_{\delta,\gamma}(t)\}_{t\geq 0}\) be the time-changed Poisson process with inverse IG subordinator. Then, the process (26) is defined,_ \[Y^{k}_{\delta,\gamma}(t):=\sum_{i=1}^{N_{\delta,\gamma}(t)}X_{i},\,,\ t\geq 0\] _and called as the inverse IG fractional CPP of order \(k\). 
The pmf \(P^{k}_{\delta,\gamma}(n,t)=\mathbb{P}[Y^{k}_{\delta,\gamma}(t)=n]\) satisfies the following fractional differential-difference equation_ \[\mathcal{D}^{\delta,\gamma}_{t}P^{k}_{\delta,\gamma}(n,t)=-k\lambda\left(1-\frac{1}{k}\sum_{j=1}^{n\wedge k}B^{j}\right)P^{k}_{\delta,\gamma}(n,t),\ n>0,\] \[\mathcal{D}^{\delta,\gamma}_{t}P^{k}_{\delta,\gamma}(0,t)=-k\lambda P^{k}_{\delta,\gamma}(0,t).\] **Special Case 5.10**.: _Let \(X_{i},i=1,2,\dots,\) be iid truncated geometrically distributed random variables. From the equation (27), we define the process_ \[Y^{\rho,k}_{\delta,\gamma}(t):=\sum_{i=1}^{N_{\delta,\gamma}(t)}X_{i},\ t\geq 0,\] _which is called the inverse IG fractional PAPoK. The following results can be obtained as a particular case of the Theorem (4.5). The pmf \(P^{\rho,k}_{\delta,\gamma}(n,t)=\mathbb{P}[Y^{\rho,k}_{\delta,\gamma}(t)=n]\) satisfies the following fractional differential equation_ \[\mathcal{D}^{\delta,\gamma}_{t}P^{\rho,k}_{\delta,\gamma}(n,t)=-\lambda\left(1-\frac{1-\rho}{1-\rho^{k}}\sum_{j=1}^{n\wedge k}\rho^{j-1}B^{j}\right)P^{\rho,k}_{\delta,\gamma}(n,t),\] \[\mathcal{D}^{\delta,\gamma}_{t}P^{\rho,k}_{\delta,\gamma}(0,t)=-\lambda P^{\rho,k}_{\delta,\gamma}(0,t).\] _We can express the process \(\{Y_{\delta,\gamma}^{\rho,k}(t)\}_{t\geq 0}\) in a time-changed form as follows_ \[Y_{\delta,\gamma}^{\rho,k}(t)\stackrel{{ d}}{{=}}N_{A}^{k}(E_{\delta,\gamma}(t)),\ t\geq 0.\] We now summarize the results obtained in this section in the following table.

## 6. Simulations

In this section, we simulate the sample trajectories of the special cases of the GFCPP \(\{Y_{f}(t)\}_{t\geq 0}\) (10). First, we reproduce an algorithm for generating the sample paths of the CPP \(\{Y(t)\}_{t\geq 0}\) (3) with jump size distribution \(F_{X}\).

```
Require: \(\lambda>0\), \(T\geq 0\), and \(\theta\) (parameter of \(F_{X}\)).
Ensure: \(Y(t)\), simulated sample paths for the CPP with jump size distribution \(F_{X}\).
Initialisation: \(t=0\), \(Y=0\) and \(v=0\).
1: while \(t<T\) do
2:   generate a uniform random variable \(U\sim U(0,1)\).
3:   set \(t\gets t+\left[-\frac{1}{\lambda}\log U\right]\).
4:   generate an independent random variable \(X\) with distribution \(F_{X}\) and parameter \(\theta\).
5:   set \(v\gets v+X\) and append \(v\) to \(Y\).
6: end while
7: return \(Y\).
```
**Algorithm 1** Simulation of the CPP

Here, \(Y\) stores the successive values of the process at its jump times, so its length gives the number of jumps that occurred up to time \(T>0\); a short Python sketch of Algorithm 1 is given below. Next, we present the algorithms for the processes time-changed by the IIGS and the ITSS, which use the corresponding algorithms mentioned above for the respective subordinators.
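The following is a minimal Python sketch of Algorithm 1 (function and variable names are ours, and the jump sampler is supplied by the caller); it is meant for illustration rather than as a reference implementation.

```python
import numpy as np

def simulate_cpp(lam, T, jump_sampler, seed=None):
    """Sample one CPP path on [0, T]: returns the jump times and the
    corresponding values of Y(t) (Algorithm 1). `jump_sampler(rng)` must
    return a single draw from the jump distribution F_X."""
    rng = np.random.default_rng(seed)
    t, v = 0.0, 0.0
    times, values = [], []
    while True:
        t += rng.exponential(1.0 / lam)   # inter-arrival time, equivalent to -log(U)/lam
        if t >= T:
            break
        v += jump_sampler(rng)            # add an independent jump from F_X
        times.append(t)
        values.append(v)
    return np.array(times), np.array(values)

# example: exponential jumps with parameter eta = 3 (cf. Special Case 5.6 before any time change)
times, path = simulate_cpp(lam=2.0, T=10.0, jump_sampler=lambda rng: rng.exponential(1.0 / 3.0))
```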
\begin{table}
\begin{tabular}{c c c} \hline \hline Jump Size Distributions & DDE & Time-changed Representations \\ \hline Exponential & \(\eta\mathcal{D}_{t}^{\delta,\gamma}=-\left[\lambda+\mathcal{D}_{t}^{\delta,\gamma}\right]\frac{\partial}{\partial x}\) & \(Y(E_{\delta,\gamma}(t))\) \\ Mittag-Leffler (ML) & \(\eta\mathcal{D}_{t}^{\delta,\gamma}=-\left[\lambda+\mathcal{D}_{t}^{\delta,\gamma}\right]\mathcal{D}_{x}^{\beta}\) & \(D_{\beta}(Y_{\delta,\gamma}^{\eta}(t))\) \\ Inter-times inverse of IGS & \(\eta\mathcal{D}_{t}^{\delta,\gamma}=-\left[\lambda+\mathcal{D}_{t}^{\delta,\gamma}\right]\mathcal{D}_{x}^{\theta,\chi}\) & \(D_{\theta,\chi}(Y_{\delta,\gamma}^{\eta}(t))\) \\ Inter-times Bernstein & \(\eta\mathcal{D}_{t}^{\delta,\gamma}=-\left[\lambda+\mathcal{D}_{t}^{\delta,\gamma}\right]\mathcal{D}_{x}^{g}\) & \(D_{g}(Y_{\delta,\gamma}^{\eta}(t))\) \\ Truncated geometric & \(\mathcal{D}_{t}^{\delta,\gamma}=-\lambda\left(1-\frac{1-\rho}{1-\rho^{k}}\sum_{j=1}^{n\wedge k}\rho^{j-1}B^{j}\right)\) & \(N_{A}^{k}(E_{\delta,\gamma}(t))\) \\ Discrete uniform & \(\mathcal{D}_{t}^{\delta,\gamma}=-k\lambda\left(1-\frac{1}{k}\sum_{j=1}^{n\wedge k}B^{j}\right)\) & \(N^{k}(E_{\delta,\gamma}(t))\) \\ \hline \end{tabular}
\end{table}
Table 2. Summary of results obtained in Section 5.2

Using Algorithms 1 and 2, we generate the sample paths for the chosen set of parameters given below.
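As a rough illustration of such a simulation (with arbitrary toy parameters of our own, not the paper's choices), one way to generate a path of the inverse-IG time-changed CPP with exponential jumps, \(Y^{\eta}_{\delta,\gamma}(t)=Y(E_{\delta,\gamma}(t))\), is to simulate the IG subordinator on a fine grid, invert it by first passage to obtain \(E_{\delta,\gamma}(t)\), and evaluate an independent CPP at those random times:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy parameters (ours, not the paper's)
lam, eta = 2.0, 3.0          # Poisson rate and exponential-jump parameter
delta, gam = 1.0, 1.0        # IG subordinator parameters in f(s) = delta*(sqrt(2s + gam^2) - gam)
T, n_grid = 10.0, 1000
t = np.linspace(0.0, T, n_grid)

# 1) IG subordinator D on a fine operational-time grid u: increments over du are
#    inverse-Gaussian with mean delta*du/gam and shape (delta*du)^2 (numpy's `wald`);
#    the grid is taken long enough that D comfortably exceeds T.
du = 0.01
u = np.arange(du, 10.0 * T * gam / delta, du)
D = np.cumsum(rng.wald(delta * du / gam, (delta * du) ** 2, size=u.size))

# 2) inverse subordinator by first passage: E(t) = inf{u : D(u) > t}
E = u[np.searchsorted(D, t, side='right')]

# 3) CPP with exponential jumps evaluated at the random times E(t)
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=20_000))   # Poisson arrival times
cum_jumps = np.cumsum(rng.exponential(1.0 / eta, size=20_000))  # running sums of the jumps
N = np.searchsorted(arrivals, E)                                # N(E(t))
Y = np.where(N > 0, cum_jumps[np.maximum(N - 1, 0)], 0.0)       # path of Y^eta_{delta,gam}(t)
```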
2305.13735
Aligning Large Language Models through Synthetic Feedback
Aligning large language models (LLMs) to human values has become increasingly important as it enables sophisticated steering of LLMs. However, it requires significant human demonstrations and feedback or distillation from proprietary LLMs such as ChatGPT. In this work, we propose a novel alignment learning framework with synthetic feedback not dependent on extensive human annotations and proprietary LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs with various sizes and prompts. Then, we use the RM to simulate high-quality demonstrations to train a supervised policy and further optimize the model with reinforcement learning. Our resulting model, Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms recent open-sourced models, which are trained on the outputs of InstructGPT or human-annotated demonstrations, in alignment benchmarks. In human evaluation, our model is preferred to Alpaca and Dolly-v2, 55.0% and 58.5% of the time, respectively. Further analyses demonstrate the efficacy and importance of synthetic feedback in our framework. The code is available at https://github.com/naver-ai/almost
Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, Minjoon Seo
2023-05-23T06:41:16Z
http://arxiv.org/abs/2305.13735v2
# Aligning Large Language Models through Synthetic Feedback ###### Abstract Aligning large language models (LLMs) to human values has become increasingly important as it enables sophisticated steering of LLMs, e.g., making them follow given instructions while keeping them less toxic. However, it requires a significant amount of human demonstrations and feedback. Recently, open-sourced models have attempted to replicate the alignment learning process by distilling data from already aligned LLMs like InstructGPT or Chat-GPT. While this process reduces human efforts, constructing these datasets has a heavy dependency on the teacher models. In this work, we propose a novel framework for alignment learning with almost no human labor and no dependency on pre-aligned LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs with various sizes and prompts. Then, we use the RM for simulating high-quality demonstrations to train a supervised policy and for further optimizing the model with reinforcement learning. Our resulting model, **A**ligned **L**anguage **M**odel with **S**ynthetic **T**raining dataset (ALMoST), outperforms open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are trained on the outputs of InstructGPT or human-annotated instructions. Our \(7\)B-sized model outperforms the \(12\)-\(13\)B models in the A/B tests using GPT-4 as the judge with about 75% winning rate on average. ## 1 Introduction Alignment learning has been an essential learning scheme to align the behaviors of large language models (LLMs) with human values like safety and truthfulness while following the intention of users accurately (Ouyang et al., 2022). Vanilla LLMs - those not aligned yet - often misunderstand user intentions or even produce unsafe and inaccurate responses. Desirable human values such as helpfulness, harmlessness, or honesty can be defined, and human demonstrations with these values are then used for the alignment learning (Askell et al., 2021; Bai et al., 2022). Typically, alignment learning consists of three stages: supervised policy fine-tuning (SFT), reward modeling, and reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022; Bai et al., 2022). The first stage is to train an initial policy model with supervised demonstrations collected by human annotators who take into account the target human values that need to be aligned. The second stage is reward modeling which uses human Figure 1: A procedure of reward modeling through synthetic feedback. We assume that the response from a larger LLM with more and better demonstrations might be better overall. We produce synthetic comparisons relying on the assumption, then train a reward model on the dataset. comparisons between various model outputs for the same input prompt. The resulting reward model (RM) can judge which model output is better according to human preferences that align with the desired values. Finally, with reinforcement learning, the RM is used to guide the initial policy model for human value alignment. However, the three-stage training recipe requires significant human effort, especially in the first two stages. More specifically, both the SFT and RM training stages must have an abundance of high-quality human demonstration and ranking datasets to get an initially well-working supervised policy. For instance, Ouyang et al. 
(2022) collect about 13k demonstrations and 33k comparison datasets from human annotators considering aligned model behaviors for this stage. On the other hand, Self-Instruct Wang et al. (2022) attempts to generate synthetic self-generated instruction datasets using in-context learning with a few seed demonstrations. Meanwhile, the release of LLaMA Touvron et al. (2023) brings upon many open-sourced aligned LLMs trained on the outputs of other aligned LLMs or human-annotated instructions. However, it still heavily depends on pre-aligned LLM APIs such as InstructGPT and Chat-GPT Ouyang et al. (2022); OpenAI (2023); Taori et al. (2023); Chiang et al. (2023) or intensive human annotations DattaBricks (2023); Kopf et al. (2023). In this paper, we introduce a novel framework for alignment learning that only requires minimal human labor and is not dependent on pre-aligned LLMs. Unlike the conventional alignment learning procedure, which first focuses on collecting human demonstrations, we construct synthetic comparison data first using the outputs from vanilla LLMs of various sizes and prompts, as shown in Figure 1. The rules of generating these synthetic ranking data originate from our hypothesis that the responses generated by larger, optimally prompted models are superior to those produced by smaller, inadequately prompted models, as reported by previous work Askell et al. (2021). We use an empirically designed heuristic filter considering response length to obtain better quality rankings. The role of this heuristic filter is crucial for the performance of reward modeling. Our RM trained on the synthetic comparisons performs about 90% accuracy of a fully supervised model on the test-set of HH-RLHF comparison dataset Bai et al. (2022). In the second stage, we introduce a Reward-Model-guided Self-Play (RMSP) to automatically produce high-quality demonstrations, which will be later used for training a supervised policy model. In particular, we simulate ideal conversations between a human user and an aligned LLM by in-context learning with few-shot demonstrations, similar to the previous approaches Wang et al. (2022); Askell et al. (2021). An important additional component is that we leverage the synthetic RM from the previous stage to ensure the quality of the model-to-model conversations with rejection sampling over the generated outputs Ouyang et al. (2022). We train LLaMA-7B on the synthetic demonstrations (SFT) and further optimize the model with rewards from the synthetic RM, namely, Reinforcement Learning from Synthetic Feedback (RLSF). Our **A**ligned **L**anguage **M**odel with **S**ynthetic **T**raining dataset (ALMoST) outperforms Alpaca Taori et al. (2023) - distilled from InstructGPT Ouyang et al. (2022) - and Dolly DataBricks (2023) and OpenAssistant Kopf et al. (2023) that trained on human-annotated demonstrations in the alignment-related benchmarks Askell et al. (2021); Lin et al. (2021); Chiang et al. (2023). Notably, our 7B model outperforms \(12\)-\(13\)B sized Alpaca, Dolly, and OpenAssistant in the A/B testing scenario using GPT-4 Chiang et al. (2023), with about 75% winning rate without data distillation from aligned LLMs nor intensive human annotations. We speculate the strong performance of our model is due to the empirical indicators of well-aligned behaviors that have been effectively incorporated into a strong backbone model through synthetic feedback. 
Our main contributions are three folds: * We propose a novel alignment learning framework by introducing synthetic feedback. It automatically constructs high-quality comparison and demonstration datasets without relying on human feedback and pre-aligned LLMs. * Our resulting model, ALMoST, shows well-aligned behaviors with human values in various evaluations, including HHH alignment Askell et al. (2021), TruthfulQA Lin et al. (2021), Vicuna evaluation Chiang et al. (2023). * ALMoST-7B outperforms Alpaca-\(13\)B, Dolly-\(12\)B, and OpenAssistant-\(12\)B, showing 75% winning rate on average in the A/B tests leveraging GPT-4. ## 2 Method In this section, we will describe detailed procedures of our framework as depicted in Figure 2. ### Step 1: Reward Modeling with Synthetic Feedback Unlike typical methods that collect demonstrations first, we start by generating synthetic comparison (feedback) datasets to train the reward model. Prompted BaselineAs we do not have aligned baselines available for the comparisons yet, we utilize HHH (Helpful, Harmless, and Honest) prompt devised by Askell et al. (2021). It contains 14 human-written conversations for guiding LLM alignment 1. We employ The HHH prompted LLaMA models to generate synthetic comparisons (Touvron et al., 2023). Footnote 1: gist.github.com/jareddk/2509330... Generating Synthetic ComparisonInstead of collecting human feedback, we rather generate synthetic comparisons based on naive assumptions according to empirical observations. Askell et al. (2021) demonstrate that the larger model performs somewhat better than the smaller model, and the model with longer prompts is better than the model with shorter prompts in terms of human preference. In short, we assume the quality of the response follows the rule of thumb: * Larger model \(>\) Smaller model * More few-shots \(>\) Less few-shots * Better demonstration \(>\) Worse demonstration For the same input \(x\), we first sample the responses \(Y=\{y_{1},y_{2},...,y_{|Y|}\}\) from the models with various configurations. Then, we apply the rule to choose the better response among the generated responses. More specifically, we involve \(\{7,13,30\}\)B LLMs with \(\{1,3,5\}\) shots of HHH demonstrations for the comparison. As illustrated in Figure 1, if we sample responses from (1) \(30\)B with 5 shots, (2) \(13\)B with 3 shots, and (3) 7B with 1 shot, the ranking becomes \(y_{1}>y_{2}>y_{3}\) according to our rule of thumb. Then, we can get a set of binarized comparisons, e.g., \(\{(y_{1},y_{2}),(y_{2},y_{3}),(y_{1},y_{3})\}\). We denote the former as a 'chosen' response (\(y_{1}\)) and the latter as a'rejected' response (\(y_{2}\)) in a comparison pair \((y_{1},y_{2})\). Post ValidationOur assumptions are often wrong because of the stochastic nature of the prompt-based generation. These noises in the dataset make the reward modeling unstable and divergent in the end. Thus, we come up with post validation method to filter out such noises. Figure 2: Overview of our proposed framework for alignment learning of LLMs. Unlike the conventional procedure that collects human demonstrations to train a supervised policy, we first conduct reward modeling with a synthetically generated comparison dataset (synthetic feedback). Then, the demonstration dataset is generated by simulation with the guidance of the reward model from Step 1. Finally, we train an aligned policy model with the synthetic demonstrations and further optimize the model against the reward model with reinforcement learning. 
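As a concrete illustration of the ranking-to-comparison step above (a sketch with our own naming, not the authors' code), the binarization of a rule-of-thumb ranking can be written as:

```python
from itertools import combinations

def binarize_ranking(ranked_responses):
    """Turn responses ordered from assumed-best to assumed-worst (e.g. the outputs of
    30B-5shot, 13B-3shot, 7B-1shot for the same query) into (chosen, rejected) pairs,
    following the rule of thumb that earlier entries are better."""
    return list(combinations(ranked_responses, 2))

# e.g. binarize_ranking(["y1", "y2", "y3"]) -> [("y1", "y2"), ("y1", "y3"), ("y2", "y3")]
```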
First, we devise **Heuristic Filter (HF)** based on prior knowledge. It discards bad responses containing or beginning with keywords such as "I don't know" or "well". Also, we empirically find that the better response usually has a longer length than the worse one. Especially if the response is short, it often tends to be a case of probabilistic generation failure. However, training RM only on comparisons with longer chosen responses would make the resulting model biased by length. Thus, we apply HF to take comparison pairs whose chosen response is longer than either the rejected one or \(M-S/2\), where \(M\) is the mean, and \(S\) is the standard deviation of the lengths of \(Y\) in the character level. This length constraint reduces the probability that short-generation would be stochastic generation failure by checking whether the length of each response is in the confidence interval. Furthermore, it does not fall into the length bias. Please see Appendix B for detailed examples. We will demonstrate the benefits in Section 4.2. Second, if available, we can leverage **As-is RM** for further data filtering. Specifically, we train another RM with a community QA dataset such as StackExchange (Askell et al., 2021; Beeching et al., 2023). Our preliminary study does not find the benefits of large-scale pre-training for RM discussed in Askell et al. (2021). Thus, we subsample 20k pairs from the pre-processed StackExchange dataset for our training 2. We keep the resulting synthetic comparisons only when the As-is RM agrees with the decision. Footnote 2: huggingface.co/datasets/lvwernx/stack-exchange-paired Reward ModelingFinally, we train the reward model based on the synthetic comparisons described above. We follow the ranked preference modeling objective from previous works (Askell et al., 2021; Ouyang et al., 2022). The objective is to make the reward model \(r_{\theta}\) assign a scalar value for the overall quality of a response \(y_{j}\) for a given query \(x\) comparative to its counterpart baseline response \(y_{k}\). The loss function is defined as follows: \[J(\theta)=-E_{(x,y_{j},y_{k})\sim D}log(\sigma(r_{\theta}(x,y_{j})-r_{\theta}( x,y_{k})))\] where \(D\) is the training set of synthetic comparisons and \(r_{\theta}(x,y)\) is the reward model's scalar output indicating the overall response quality \(y\) for its input \(x\). Implementation DetailsWe adopt the recipe of Self-Instruct (Wang et al., 2022) for initial query generation to start conversations with the assistant model. We write 10 manual queries for the seed demonstrations for this. Then, we conduct 10-shot (7 static shots and 3 dynamic shots from previously generated queries) generation based on LLaMA-30B. Then, we filter out inappropriate queries containing bad words and having a high overlap with previously generated queries. We generate 10k initial queries. More implementation details are in Appendix C. For response generation, we include five prompted models with the below configurations. * A. LLaMA-30B-Faithful-3shot * B. LLaMA-30B-HHH-5shot * C. LLaMA-13B-HHH-3shot * D. LLaMA-7B-HHH-3shot * E. LLaMA-7B-HHH-1shot For each query, we generate five responses from the models and take rankings, \(y_{A}>y_{B}>y_{C}>y_{D}>y_{E}\), reflecting the rule of thumb. The Faithful indicates our manually designed prompts consisting of three conversations responding more faithfully and longer while considering the response format, and the HHH indicates the prompts written by Askell et al. (2021). 
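For reference, the ranked preference objective above can be written in a few lines of PyTorch (a minimal illustration; the reward model `rm` in the usage comment is a hypothetical callable, not the authors' implementation):

```python
import torch.nn.functional as F

def ranking_loss(chosen_rewards, rejected_rewards):
    """J(theta) = -E[log sigmoid(r(x, y_chosen) - r(x, y_rejected))],
    averaged over a batch of synthetic comparison pairs."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# usage (hypothetical): loss = ranking_loss(rm(x, y_chosen), rm(x, y_rejected)); loss.backward()
```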
The detailed examples are in Appendix A. Finally, we produce 13k binarized synthetic comparisons after post-validation (HF and As-is RM). We train our reward model for 1 epoch with 1e-5 of the learning rate, 64 of batch size, and 1024 of maximum sequence length. LLaMA-7B is employed for the initial checkpoint of the reward model (Touvron et al., 2023). ### Step 2: Supervised Fine-Tuning In the second step, we propose a Reward-Model-guided Self-Play (RMSP) to simulate high-quality demonstrations, i.e., conversations between the user and AI assistant. The simulated demonstrations are used to supervised fine-tuning for the initially aligned policy model (SFT). Self-PlayThe basic simulation is enabled by turn-taking between the user and assistant role models with corresponding prompts, i.e., self-play. We continue to use the same prompted baseline, LLaMA-30B-Faithful-3shot, for the assistant role. In addition, we've made minor adjustments to the original HHH prompt (Askell et al., 2021) to suit the user's role better, LLaMA-30B-User-3shot 3. Starting from the initial queries generated in the first stage, the LLaMA-30B-Faithful-3shot generates responses for the queries. Then, the LLaMA-30B-User-3shot follows up the assistant's response. The turn-taking is continued until the maximum turn \(T\). Footnote 3: Please see Appendix A for the details of prompts. RM-guided Self-Play (RMSP)To ensure a more aligned response from the assistant, we suggest including the synthetic RM, trained in the first stage, in the loop, namely Reward-Model-guided Self-Play (RMSP). In this setup, the assistant model, LLaMA-30B-Faithful-3shot, first samples \(N\) responses for a given conversational context. Then, the RM scores the \(N\) responses, and the best-scored response is chosen as the final response for the simulation, i.e., the RM performs rejection sampling (best-of-\(N\) sampling) (Nakano et al., 2021; Ouyang et al., 2022). Other procedures are the same as the Self-Play. Please see Figure 8 for the examples. Implementation DetailsWe generate about 20k high-quality demonstrations using RMSP. We set the maximum turn to 2 for simplicity, focusing on the single-turn scenario. The number of rejection sampling \(N\) is set to 4 considering resource constraints 4. Then, we train LLaMA-7B on the generated demonstrations, i.e., a supervised policy fine-tuning (SFT). We use the same training configurations of Alpaca-7B (Touvron et al., 2023; Taori et al., 2023), based on Transformers Library (Wolf et al., 2020). Footnote 4: More details of the synthetic datasets are in Appendix C. ### Step 3: Reinforcement Learning from Synthetic Feedback (RLSF) In the last stage, we perform reinforcement learning from synthetic feedback (RLSF) to further align the SFT model using a reward signal from the synthetic RM. Following previous works (Ouyang et al., 2022; Bai et al., 2022), we use Proximal Policy Optimization (PPO) (Schulman et al., 2017). During this stage, a policy \(\pi_{\phi}\) autoregressively generate a response \(y\) given a prompt \(x\). Subsequently, a reward score \(r_{\theta}(x,y)\) is determined by the reward model \(r_{\theta}\). The training objective is to maximize the expected reward. \[E_{x\sim D,y\sim\pi_{\phi}(\cdot\mid x)}[r_{\theta}(x,y)]\] Stiennon et al. (2020) proposes that adding an estimated KL penalty term between the initial policy \(\rho\) and the policy \(\pi_{\phi}\) to \(r_{\theta}(x,y)\) can enhance performance. 
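A minimal sketch of this KL-penalized reward shaping (our own helper with an arbitrary placeholder coefficient, not the authors' code or their value of \(\lambda\)):

```python
def kl_shaped_reward(reward, logprob_policy, logprob_ref, kl_coef=0.05):
    """r_theta(x, y) - lambda * log(pi_phi(y|x) / rho(y|x)) for one sampled response;
    `logprob_*` are the summed token log-probabilities under the current policy and
    the frozen SFT (initial) policy, and kl_coef plays the role of lambda."""
    return reward - kl_coef * (logprob_policy - logprob_ref)
```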
This adjustment leads to the final objective as follows: \[E_{x\sim D,y\sim\pi_{\phi}(\cdot\mid x)}[r_{\theta}(x,y)-\lambda\log\left( \frac{\pi_{\phi}(y|x)}{\rho(y|x)}\right)],\] where \(\lambda\) is a KL coefficient. Implementation DetailsWe initialize the policy \(\rho\) with the SFT-tuned LLaMA-7B, as described in Section 2.2. The prompts for the PPO training are compiled by extracting only the prompts from the demonstration dataset in Table 6. Half of these are seen prompts, while the remaining half are unseen prompts, i.e., those not used in the SFT. For PPO, we adopt the implementation of trlX5. Footnote 5: github.com/CarperAI/trlx Footnote 6: github.com/google/BIG-bench ## 3 Evaluating Alignment of ALMoST We validate our resulting model, Aligned **L**anguage **M**odel with **S**ynthetic **T**raining dataset (ALMoST), in three alignment benchmarks, Static HHH evaluation in Big-Bench, TruthfulQA, and Vicuna evaluation (Askell et al., 2021; Srivastava et al., 2022; Lin et al., 2021; Chiang et al., 2023). ### Dataset Static HHH alignment and TruthfulQAAskell et al. (2021) introduce Static HHH alignment benchmark to measure how models are aligned well with the human values 6. Similar to the comparison dataset, the model should choose a more proper response for input between the two options based on human values. The dataset consists of three human value categories, helpful, harmless, and honest, and contains a misc (others) category. We include the dataset to get relationships of tension among the human values from the evaluation, although the entire dataset is just 221. Lin et al. (2021) propose TruthfulQA to measure how LLM generates truthful answers for a given question 7. It especially contains 817 adversarial questions to elicit imitative falsehood answers from LLMs. For simplicity, we evaluate the models with a multiple-choice setup (MC1) instead of a generative setup. Note that all the evaluation is based on zero-shot, which means we do not fine-tune the target dataset. Please see Appendix E for more details of the evaluation prompt. Footnote 6: github.com/google/BIG-bench Footnote 7: github.com/sylinrl/TruthfulQA Vicuna (GPT-4) EvaluationWe test our models using vicuna evaluation questions Chiang et al. (2023). It is 80 questions on various topics spanning general QA, writing, reasoning, etc 8. First, two different models generate an answer for each same question. Then, GPT-4 assesses the two answers by giving a 1-10 scalar score for the corresponding answer and providing appropriate explanations for the judgment OpenAI (2023). Even though it is not a rigorous evaluation, we can compare the overall responding qualities of the models with reasonable costs. Considering the positional bias of the GPT-4 assessment 9, we evaluate the same instance twice by reversing the order of the two answers. Footnote 8: github.com/lm-sys/FastChat Footnote 9: This repo notices that the GPT-4 evaluation has a strong positional bias in favor of the first response. BaselinesWe include recent open-sourced models to compare the aligned behaviors of the LLMs according to their backbone model and training dataset. **Alpaca** is the first open-sourced instruction-following model based on LLaMA Touvron et al. (2023); Taori et al. (2023). It is trained on the 52k synthetic instructions dataset generated by the already aligned LLM, Instruct-GPT Ouyang et al. (2022). To generate the dataset, the authors of Taori et al. (2023) utilize the recipe of Self-Instruct Wang et al. (2022). 
Similarly, **Vicuna** is trained on 70k SharedGPT dataset, which is the shared chatting logs between users with the ChatGPT, one of the powerful aligned models OpenAI (2023); Chiang et al. (2023). **Dolly-v2** is another open-sourced model trained on a 15k human-annotated instructions dataset DataBricks (2023). It is based on Pythia, another open-source LLM Biderman et al. (2023). **OpenAssistant (Oasst)** is an open-source project to build aligned LLM based on participants of the web community Kopf et al. (2023). It also releases an Oasst model (SFT) trained on the human-annotated dataset 10. Footnote 10: OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 ### Results Static HHH alignment and TruthfulQAOur models outperform Alpaca, Dolly-v2, and OpenAssistant without any distillation from the aligned LLM or intensive human annotations, as shown in Table 1. For all sizes, our ALMoSTs show consistently better accuracy in all HHH splits and TruthfulQA except for Vicuna trained on ChatGPT's outputs Chiang et al. (2023); OpenAI (2023). However, our RM shows excellent performance in choosing appropriate responses according to human values, even beating the Vicuna-7B. It is the consistent observation with Askell et al. (2021). Moreover, it is notable our ALMoST-PPO achieves the highest accuracy in the Helpful split of HHH evaluation, even including 13 billion models. When comparing our SFT and PPO-trained models, the PPO model improves helpfulness, harmlessness, and truthfulness while sacrificing honesty. Honesty and truthfulness \begin{table} \begin{tabular}{l c c c c c c|c} \hline \hline & & & \multicolumn{4}{c}{Static HHH Alignment} & \multicolumn{2}{c}{TruthfulQA} \\ \cline{4-9} Model & Backbone & Dataset by & Helpful & Harmless & Honest & Other & All & MC1 \\ \hline Dolly-v2 & Pythia-12B & Human & 67.8 & 46.6 & 50.7 & 62.8 & 56.6 & 15.2 \\ Oasst-v4 & Pythia-12B & Human & 59.3 & 56.9 & 47.5 & 69.8 & 57.5 & 23.3 \\ Vicuna & LLaMA-13B & ChatGPT & **78.0** & **89.7** & **70.5** & **81.4** & **79.6** & **63.3** \\ \hline Dolly-v2 & Pythia-7B & Human & 69.5 & 41.4 & 45.9 & 51.2 & 52.0 & 24.2 \\ Alpaca & LLaMA-7B & InstructGPT & 71.2 & 53.4 & 62.3 & 65.1 & 62.9 & 19.5 \\ Vicuna & LLaMA-7B & ChatGPT & 79.7 & **72.4** & 70.5 & 76.7 & 74.7 & 52.5 \\ ALMoST (SFT) & LLaMA-7B & LLaMA & 79.7 & 56.9 & 65.6 & 69.8 & 67.8 & 31.5 \\ ALMoST (PPO) & LLaMA-7B & LLaMA & **81.4** & 60.3 & 62.3 & 72.1 & 68.8 & 38.0 \\ \hline ALMoST (RM) & LLaMA-7B & LLaMA & 74.6 & 67.2 & **78.7** & **86.0** & **76.0** & **54.8** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results of Static HHH alignment and TruthfulQA (Multiple-Choice setup) Askell et al. (2021); An et al. (2021). We report accuracy for both datasets. Our ALMoSTs outperform recent open-sourced models, Alpaca, Dolly, and OpenAssistant Taori et al. (2023); DataBricks (2023); Kopf et al. (2023), trained on the outputs of InstructGPT or human-annotated demonstrations. Also, our RM shows a good performance in identifying proper responses aligned with human values, surpassing Vicuna trained on outputs of ChatGPT Chiang et al. (2023). Notably, our models only leverage synthetic datasets while not relying on the pre-aligned LLMs or extensive human annotations. look similar, but they are slightly different. The honesty is related to expressing uncertainty, while the truthfulness mainly measures how robust the model is against adversarial falsehood. 
Vicuna (GPT-4) EvaluationOne might say that the higher accuracy in multiple-choice problems does not guarantee the actual aligned behaviors. While human evaluation is the most precise metric to identify it, we conduct Vicuna evaluation leveraging GPT-4 (Chiang et al., 2023) instead, and the results confirm our findings once again, as shown in Figure 3. It shows the number of win, loss, and tie against other models from the perspective of the ALMoST-PPO. The ALMoST-PPO shows a significantly high win rate against all the open-sourced models except for Vicuna and ChatGPT. Moreover, the ALMoST-PPO still shows better performance compared to ALMoST-SFT indicating the efficacy of our RLSF. We also include examples of the GPT-4 evaluation in Appendix D. ## 4 Analysis ### Alignment Tax Askell et al. (2021); Bai et al. (2022) demonstrate the phenomenon of "alignment tax". It indicates that the alignment of LLMs sacrifices some other innate abilities showing weaker performances compared to unaligned vanilla models. We conduct two zero-shot NLP evaluations, Massive Multitask Language Understanding (MMLU) and LAMBADA (Hendrycks et al., 2020; Paperno et al., 2016), to investigate the alignment tax. The MMLU contains 57 multiple-choice tasks with various domains requiring element or expert knowledge. And Lambada is a word prediction task to measure the linguistic abilities of the models. In Table 2, we find our RLSF training (PPO) deteriorates performances in both datasets compared to vanilla LLM, LLaMA-7B, which is a similar observation of Bai et al. (2022). Bai et al. (2022) explain the smaller models than \(10\) billions often experience severe deterioration with the alignment learning. We believe scaling up the backbone model reduces the tax accordingly. ### RM Evaluation We further validate our RM on another comparison dataset, HH-RLHF (Bai et al., 2022). It contains various splits according to the development stage, e.g., the base set to build the initial policy or the online set collected with the deployed system. Specifically, we focus on the 'Helpful-base' sub-split, assuming we do not have a deployable system. Reward ModelingIn Table 3, we find our RM trained with synthetically generated comparison Figure 3: Results of Vicuna (GPT-4) evaluation (Chiang et al., 2023). Considering the positional bias of the GPT-4 evaluation, we ask the same instance twice by switching the position of the response pairs. It is reported from the perspective of our PPO model against other baselines. Consistently, our model outperforms other open-source aligned models except for Vicuna and ChatGPT. dataset achieves 90% performance of its upperbound trained on the full training dataset. Also, it achieves the same accuracy with the result fine-tuned on the single-turn subset (Helpful-base*). Please note HH-RLHF includes multi-turn context, while our synthetic dataset focuses on single-turn scenarios. Effect of Post ValidationWe conduct two types of post-validation to reduce noises in the synthetic comparisons described in Section 2.1. Table 4 shows that each filtering method contributes to the final reward model quality. Notably, we find the heuristic filter (HF) considering length distribution plays a crucial role in synthetic data generation. When we exclude the HF, the performance of RM drops about 10% point. Moreover, it does not fall into the length bias discussed in Section 2.1, in that it outperforms the lengthy baseline which always selects the longer response as the better response. 
RMSP vs Self-PlayWe inspect the benefits of RM-guided Self-Play (RMSP) compared to its counterpart without RM guidance, i.e., Self-Play. Specifically, we compare two supervised policies (SFT) trained on demonstrations generated by RMSP or Self-Play. In Table 5, we find that the SFT model trained with RMSP outperforms the model with Self-Play in various benchmarks. In GPT-4 evaluation comparing Alpaca-7b, only the model with RMSP shows a winning rate higher than 50%. Moreover, we find the importance of designing good prompts. If we use HHH prompt instead of the Faithful for the simulation, the performances for the alignment drop significantly. We include qualitative examples to compare the methods in Table 8. \begin{table} \begin{tabular}{l c} \hline \hline Train Dataset & Accuracy \\ \hline Random baseline & 50.0 \\ Lengthy baseline & 59.4 \\ \hline Synthetic Comparisons & **65.2** \\ - As-is RM & 63.3 \\ - Heuristic Filter & 55.5 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation results of post validation in the synthetic comparison generation. Helpful-base split is used for the evaluation Bai et al. (2022). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & & \multicolumn{4}{c}{MMLU} & \multicolumn{4}{c}{LAMBADA} \\ \cline{2-7} Model & Humanities & STEM & Social Sciences & Other & All & \\ \hline LLaMA-7B & 30.2 & 31.0 & 45.6 & 30.1 & 31.3 & 72.1 \\ Alpaca-7B & 40.3 & 35.7 & 44.4 & 42.7 & 40.3 & 64.1 \\ ALMoST-7B (SFT) & 32.0 & 30.9 & 33.3 & 30.2 & 31.5 & 68.0 \\ ALMoST-7B (PPO) & 30.4 & 28.3 & 29.2 & 27.9 & 28.9 & 65.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results of zero-shot MMLU and LAMBADA to inspect the ‘alignment tax’ of our models. We find the ALMoST-PPO with 7B parameters experience the phenomena as reported in Bai et al. (2022). \begin{table} \begin{tabular}{l c c} \hline \hline Train Dataset & \# instance & Accuracy \\ \hline Random baseline & - & 50.0 \\ Lengthy baseline & - & 59.4 \\ \hline \hline \multicolumn{3}{l}{_Zero-shot_} \\ StackExchange & 25,057 & 63.7 \\ Synthetic Comparisons & 13,687 & **65.2** \\ \hline \hline \multicolumn{3}{l}{_Full Fine-tuning_} \\ Helpful-base* & 11,738 & 65.2 \\ Helpful-base & 43,835 & 71.8 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of zero-shot reward modeling in the Helpful-base split of HH-RLHF dataset Bai et al. (2022). The lengthy baseline always chooses a longer response between the pairs. The \(*\) indicates the training data is a subset of the original, only including single-turn context. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Prompt & Static HHH & \% Win \\ \hline RMSP & Faithful & **67.8** & **54.3** \\ Self-Play & Faithful & 66.0 & 40.0 \\ Self-Play & HHH & 61.3 & 15.0 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of synthetic demonstration generation methods with and without RM guidance, i.e., rejection sampling over the assistant’s response. The % Win indicates the winning rate against Alpaca-7b in the GPT-4 evaluation. ## 5 Related Work Aligning Language Models with Human PreferencesLanguage models' 'alignment' with human preferences can be assessed by a number of criteria, including but not limited to helpfulness, honesty, harmlessness Askell et al. (2021), truthfulness Lin et al. (2021), and any other given instructions Ouyang et al. (2022). Various measures have been taken to achieve this alignment. Conditioning language models on human preferences Askell et al. (2021); Korbak et al. (2023); Liu et al. 
(2023) was found to improve models' capabilities of generating human-aligned text. Incorporating reward models Askell et al. (2021); Liu et al. (2022); Scheurer et al. (2023); Yuan et al. (2023) to tell how well the generated text reflects human values has enabled training better aligned language models and served as a crucial ingredient for another effective methodology - reinforcement learning from human feedback (RLHF). Finetuning language models with RLHF is one of the well-known approaches to align language models with human preferences that has been widely investigated in recent days Christiano et al. (2017); Ziegler et al. (2020); Ouyang et al. (2022); Bai et al. (2022); Stiennon et al. (2022); Glaese et al. (2022). Distillation from pre-aligned LLMsRecent open-sourced models such as Alpaca follow the recipe of Self-Instruct Wang et al. (2022) to reduce the burdens of collecting human demonstrations Taori et al. (2023); Peng et al. (2023). However, it generates the synthetic instruction datasets using already aligned LLMs, e.g., InstructGPT or ChatGPT Ouyang et al. (2022); OpenAI (2023), different from Self-Instruct, which uses a vanilla LLM, GPT-3 Brown et al. (2020). Similarly, Peng et al. (2023) try to distill GPT-4 outputs for the alignment. Vicuna is another open-sourced model trained on 70k SharedGPT datasets, which are publicly shared ChatGPT outputs by users Chiang et al. (2023). Also, Koala is trained on a combination of the Alpaca dataset, SharedGPT, and other public datasets such as HH-RLHF Geng et al. (2023). ## 6 Conclusion In this work, we propose a novel framework for aligning LLM with human values by introducing synthetic feedback. We identify better responses from vanilla LLMs with various sizes and prompts, relying on empirical prior knowledge. We first train a reward model with synthetically generated comparisons. Then, we produce another synthetic dataset to train an aligned policy using the reward model. Experimental results demonstrate the efficacy of our framework showing outstanding performances in the alignment benchmarks. We believe the strong performance of our model is derived from the effective incorporation of empirical indicators of well-aligned behaviors through synthetic feedback. Furthermore, our method is cost-effective in that it does not require extensive human demonstrations and depend on the already aligned LLMs.
2310.19910
Bayesian Simulation-based Inference for Cosmological Initial Conditions
Reconstructing astrophysical and cosmological fields from observations is challenging. It requires accounting for non-linear transformations, mixing of spatial structure, and noise. In contrast, forward simulators that map fields to observations are readily available for many applications. We present a versatile Bayesian field reconstruction algorithm rooted in simulation-based inference and enhanced by autoregressive modeling. The proposed technique is applicable to generic (non-differentiable) forward simulators and allows sampling from the posterior for the underlying field. We show first promising results on a proof-of-concept application: the recovery of cosmological initial conditions from late-time density fields.
Florian List, Noemi Anau Montel, Christoph Weniger
2023-10-30T18:24:25Z
http://arxiv.org/abs/2310.19910v1
# Bayesian Simulation-based Inference for ###### Abstract Reconstructing astrophysical and cosmological fields from observations is challenging. It requires accounting for non-linear transformations, mixing of spatial structure, and noise. In contrast, forward simulators that map fields to observations are readily available for many applications. We present a versatile Bayesian field reconstruction algorithm rooted in simulation-based inference and enhanced by autoregressive modeling. The proposed technique is applicable to generic (non-differentiable) forward simulators and allows sampling from the posterior for the underlying field. We show first promising results on a proof-of-concept application: the recovery of cosmological initial conditions from late-time density fields. ## 1 Introduction Recent developments in simulation-based machine learning are increasingly used for tackling difficult astrophysical and cosmological data analysis challenges (_e.g._, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]). While simulation-based inference (SBI) has primarily been employed to solve relatively low-dimensional (\(\lesssim 50\)-dimensional) parameter estimation tasks [4, 9, 20], it has yet to cover higher-dimensionality problems like image reconstruction, which are an essential component in astrophysical and cosmological data analysis. Here, we focus on the recovery of cosmological initial conditions from late-time density fields. This task is a challenging test case for new algorithms thanks to the non-linear, non-local mapping from the Gaussian target to the observation. Cosmic inflation predicts the density in the early Universe to be highly homogeneous, with tiny density fluctuations that are extremely well described as a Gaussian random field. These density perturbations then gradually grow over cosmic time due to gravity and eventually collapse into the non-Gaussian "Cosmic Web" structure observed today [31]. The reconstruction of the initial density field from late-time observations is an ill-posed problem (the early-to-late mapping is not injective on small scales, (_e.g._, [32]). Therefore, there is an entire _distribution_ of possible initial conditions consistent with a given late-time density field. Our contribution.We frame the task of field reconstruction (or, viewed from a non-physical point of view, image reconstruction) as a parameter inference problem. We combine the power of SBI in solving parametric inverse problems together with the scalability offered by autoregressive models. Autoregressive models have established their versatility in tackling high-dimensional distribution estimation tasks by breaking down the joint distribution into a product of conditionals [33, 34], and have been successful in conditional image modelling [35]. Additionally, we employ a Gibbs sampling algorithm based on exact data augmentation (GEDA) [36] to efficiently sample image parameter posteriors. We will formulate our method in a generic way to emphasize its applicability to a wide range of field/image reconstruction problems. Importantly, our approach accommodates arbitrary _non-differentiable_ forward simulators. Related work.The problem of inferring cosmological initial conditions has been studied since the late 1980s (see _e.g._ Refs. [37; 38; 39; 40; 41; 42] for classical papers), for instance by applying the least-action principle or using optimal transport. 
In the last decade, Bayesian models have been formulated for this task [_e.g._, 43], many of which rely on differentiable forward models [44; 45], in conjunction with Hamiltonian Monte Carlo sampling. Recently, machine learning methods such as convolutional neural networks [46], variational inference [47], recurrent inference machines [48], and score-based modeling [49] have also been explored in this context. ## 2 Methodology Background and problem setup. SBI methods tackle statistical inverse problems by estimating posterior distributions from model simulations. These methods do not require explicit modeling of the data likelihood, but instead access the information within the likelihood indirectly via a stochastic simulator, which maps input parameters to simulated data. Among various SBI algorithms (see Ref. [50] for a review and Ref. [51] for benchmarks), we will focus on neural ratio estimation (NRE), which rephrases posterior estimation into a binary classification problem [52; 53]. Footnote 1: We use the swyft NRE implementation, which can be found at [https://github.com/undark-lab/swyft](https://github.com/undark-lab/swyft). Let us assume we have a simple hierarchical simulator \[p(\mathbf{x}|\mathbf{z})p(\mathbf{z}) \tag{1}\] where \(\mathbf{x}\in\mathbb{R}^{N\times N}\) is observed and \(\mathbf{z}\in\mathbb{R}^{N\times N}\) are image parameters (pixel values). Here, \(p(\mathbf{x}|\mathbf{z})\) can include non-linear, non-local transformations and non-Gaussian noise, and it is only implicitly defined through a forward simulator. We will discuss how we can, for a given target observation \(\mathbf{x}_{o}\), estimate an approximate but computationally efficient Gaussian likelihood that locally resembles \(p(\mathbf{x}_{o}|\mathbf{z})\). A fast surrogate can then be leveraged for downstream image analysis tasks. Likelihood estimation. Firstly, in order to obtain locally optimal data summaries \(\mathbf{s}(\mathbf{x})\) for the image reconstruction task, we use NRE to estimate the marginal, pixel-wise ratios Footnote 2: In this work we use the following notation for ratio estimators \(\tilde{r}(a;b)=\frac{p(a,b)}{p(a)p(b)}=\frac{p(a|b)}{p(a)}\). If necessary, multiple variables are comma separated, for example \(\tilde{r}(a;b,c)=\frac{p(a,b,c)}{p(a)p(b,c)}\). \[\tilde{r}(s_{i}(\mathbf{x});z_{i})\equiv\frac{p(s_{i},z_{i})}{p(s_{i})p(z_{i})}\;. \tag{2}\] We assume here that both the joint and marginal distributions can be approximated as Gaussians, whereas the mapping \(s_{i}(\mathbf{x})\) is an arbitrary learnable function (usually a neural network). Means and covariances are estimated on-the-fly during training (similarly to batch normalization [54]) and are not represented as learnable parameters. Secondly, in order to obtain an estimate of the joint likelihood \(p(\mathbf{x}|\mathbf{z})\), we proceed as follows. We split the problem auto-regressively along the observation axis, and we use NRE to estimate \[\begin{split}\frac{p(\mathbf{x}|\mathbf{z})}{p(\mathbf{x})}&\simeq\frac{p(\mathbf{s}(\mathbf{x})|\mathbf{z})}{p(\mathbf{s}(\mathbf{x}))}=\prod_{i=1}^{N^{2}}\frac{p(s_{i}|\mathbf{s}_{1:i-1},\mathbf{z})}{p(s_{i}|\mathbf{s}_{1:i-1})}\\ &\simeq\prod_{i=1}^{N^{2}}\tilde{r}(s_{i};l_{i},z_{i})=\prod_{i=1}^{N^{2}}\frac{p(s_{i},l_{i},z_{i})}{p(s_{i})p(l_{i},z_{i})}\;,\end{split} \tag{3}\] where we have introduced \(l_{i}=(\mathbf{L}(\mathbf{s}))_{i}\) with \(\mathbf{L}\) a (generally non-linear) autoregressive function.
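The autoregressive function \(\mathbf{L}\) is left generic here; the experiment later in the paper uses an autoregressive convolution. As a purely illustrative sketch (the class name and the choice of PyTorch are ours, not part of the paper), a causally masked 2-D convolution of the kind commonly used for this purpose could look as follows.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Causally masked convolution: the output at pixel (i, j) only sees
    pixels that come strictly earlier in raster (row-major) order."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones(kh, kw)
        mask[kh // 2, kw // 2:] = 0.0  # current pixel and everything to its right
        mask[kh // 2 + 1:, :] = 0.0    # all rows below the current one
        self.register_buffer("mask", mask[None, None])

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# Example with an 11x11 receptive field, matching the kernel size quoted in Sec. 3.
L = MaskedConv2d(in_channels=1, out_channels=1, kernel_size=11, padding=5)
s = torch.randn(1, 1, 128, 128)   # stand-in for the summary field s(x)
l = L(s)                          # l_i depends only on s_1, ..., s_{i-1}
print(l.shape)                    # torch.Size([1, 1, 128, 128])
```

In the paper's notation, the output \(l=\mathbf{L}(\mathbf{s})\) would then feed the three-dimensional Gaussians \(p(s_{i},l_{i},z_{i})\) of Eq. 3.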
Again, we assume that the individual densities \(p(s_{i},l_{i},z_{i})\) are (three-dimensional multivariate) Gaussians. By rewriting the above components (for a complete derivation and definitions of \(\mathbf{Q}_{\text{like}}\) and \(\mathbf{b}\) see Appendix A), we can obtain the likelihood function in the so-called _information form_, \[\ln p(\mathbf{x}|\mathbf{z})=-\frac{1}{2}\mathbf{z}^{T}\mathbf{Q}_{\text{like}}\mathbf{z}+\mathbf{b}\mathbf{z}+C(\mathbf{s}). \tag{4}\] In this paper, we assume that the precision matrix \(\mathbf{Q}_{\text{like}}\) is diagonal. However, correlations between data summaries \(\mathbf{s}\) are accounted for through the autoregressive function mentioned above; also, each component \(s_{i}(\mathbf{x})\) of the data summary may depend on all components of \(\mathbf{x}\) and thus accounts for cross-pixel information. To enhance the robustness of Bayesian approaches in data analysis, likelihood tempering techniques are frequently employed to obtain conservative estimates [55; 56]. Tempering the likelihood amounts to raising it to a fractional power \(\gamma\in[0,1]\), leading to progressively coarser posterior samples, with the extreme case of \(\gamma=0\) (\(\gamma=1\)) corresponding to samples drawn from the prior (posterior) distribution. Posterior sampling. In our Bayesian framework and assuming Gaussian distributions, we combine the estimated likelihood with the known prior to derive the posterior in the information form \[\ln p(\mathbf{z}|\mathbf{x})=-\frac{1}{2}\mathbf{z}^{T}\underbrace{(\mathbf{Q}_{\text{like}}+\mathbf{Q}_{\text{prior}})}_{\mathbf{Q}_{\text{post}}}\mathbf{z}+\mathbf{b}\mathbf{z}+C^{\prime}(\mathbf{s})\, \tag{5}\] where we have assumed a zero-mean prior, consistent with the physical problem we study in Sec. 3. We use the conjugate gradient (CG) algorithm3 to compute the maximum-a-posteriori (MAP) estimate \(\mathbf{z}_{\text{MAP}}\) of the image parameters \(\mathbf{z}\) by solving the linear system \(\mathbf{Q}_{\text{post}}\mathbf{z}=\mathbf{b}\). The surrogate Gaussian posterior distribution is then given by \(p(\mathbf{z}|\mathbf{x})=\mathcal{N}(\mathbf{z}_{\text{MAP}},\mathbf{Q}_{\text{post}}^{-1})\). Once we have the posterior distribution in the above form, we simply have to sample from it. Various techniques have been presented over time to tackle the problem of efficient sampling from a high-dimensional Gaussian distribution (for a recent review, see Ref. [57]). To obtain the Gaussian posterior samples, we use a Gibbs sampler based on the generalized exact data augmentation algorithm (GEDA) [36]. GEDA solves the problem of high-dimensional Gaussian sampling specifically for distributions whose precision matrix can be expressed as \(\mathbf{Q}=\mathbf{Q}_{1}+\mathbf{Q}_{2}\) by exploiting specific properties of \(\mathbf{Q}_{1}\) and \(\mathbf{Q}_{2}\) (for details see Appendix B). Crucially, \(\mathbf{Q}_{\text{post}}\) satisfies these constraints. Footnote 3: We use a slightly modified implementation of the preconditioned CG algorithm from [https://github.com/sbarratt/torch_cg](https://github.com/sbarratt/torch_cg). ## 3 Experiment To demonstrate the efficiency of our method, we apply it to the task of reconstructing the initial conditions of the Universe. In this proof-of-concept study, we consider the two-dimensional case and assume Einstein-de Sitter cosmology (i.e. non-relativistic, collisionless matter only).
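The information-form computation above is compact enough to sketch numerically. The toy example below (ours, not the paper's code) combines a diagonal likelihood precision with a diagonal prior precision and solves \(\mathbf{Q}_{\text{post}}\mathbf{z}=\mathbf{b}\) with conjugate gradients; the shapes, the diagonal prior, and the use of SciPy's CG are illustrative assumptions, whereas the paper uses a preconditioned CG implementation in PyTorch and a prior that is diagonal in Fourier space.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 128 * 128                                    # number of pixels z_i
rng = np.random.default_rng(0)

# Stand-ins for the quantities produced by the ratio estimators:
q_like = rng.uniform(0.5, 2.0, size=n)           # diagonal of Q_like (assumed diagonal)
b = rng.normal(size=n)                           # information vector b
q_prior = np.ones(n)                             # toy zero-mean prior precision

# Q_post = Q_like + Q_prior, applied matrix-free.
Q_post = LinearOperator((n, n), matvec=lambda v: (q_like + q_prior) * v)

# The MAP estimate solves Q_post z = b (Eq. 5 in information form).
z_map, info = cg(Q_post, b)
assert info == 0                                  # CG converged

# With purely diagonal precisions a posterior sample can be drawn directly;
# the paper uses GEDA to handle the general (non-diagonal) case.
sample = z_map + rng.normal(size=n) / np.sqrt(q_like + q_prior)
print(np.linalg.norm(z_map), sample.shape)
```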
While our framework readily supports marginalizing over image parameters such as the power spectra of the target fields, we use a fixed power-law power spectrum for the initial density contrast \(\mathbf{z}=\mathbf{\delta}_{\text{ini}}\in\mathbb{R}^{128\times 128}\), which is the target of our inference. As a forward model, we use second-order Lagrangian perturbation theory (2LPT, see Refs. [58; 59] and Appendix C) and evolve the initial density to a time when non-linear structures have formed. The observation is then given by \(\mathbf{x}=\log_{10}[1.1+\mathbf{\delta}_{\text{final}}]+\mathbf{\varepsilon}\), where \(\mathbf{\delta}_{\text{final}}\) is the density contrast at final time, the logarithm is applied element-wise, and we add \(\mathbf{\varepsilon}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) with \(\sigma=0.15\) as a simplistic model for observational noise. To obtain pixel-wise summaries \(\mathbf{s}(\mathbf{x})\), we use a standard U-Net [60]. In our current implementation, we use an autoregressive convolution [_e.g._, 35] as the autoregressive function \(\mathbf{L}\). In our experiments, we observed that overly small or large kernel sizes result in somewhat biased posteriors. Interestingly, we obtain the best results with a physically motivated kernel size for the convolution, i.e. when choosing it in such a way that the "radius of influence" of the autoregressive convolution (i.e. roughly half the kernel size) matches the typical distance that particles (or fluid elements) have traveled by the end time, a quantity known as the displacement. Specifically, the mean displacement in our case is \(\sim 5\) pixels, so we take the kernel size to be \(11\times 11\) pixels. While we find this choice to be optimal, slight bias still occurs occasionally, potentially due to the spatial heterogeneity of the displacements. Therefore, to be conservative, we use a tempered likelihood with \(\gamma=0.5\), trading some of the statistical constraining power of our method for reduced bias. The importance of the specific choice of \(\mathbf{L}\) (and in particular its radius of influence) indicates that a locally adaptive or multiscale approach for \(\mathbf{L}\) is a promising avenue for future exploration. We train our model on 1000 \((\mathbf{x},\mathbf{z})\)-pairs for 15 epochs, which only takes \(\sim 3\) minutes on a laptop GPU for this experiment. Obtaining the MAP estimate with the CG algorithm takes less than 3 seconds. To obtain posterior samples with GEDA, we perform 300 sampling steps, which takes \(<1\) second per sample (and multiple samples can be drawn in batches). The left column of Fig. 1 shows a target initial density contrast \(\mathbf{z}_{o}\) (_top_), together with a resulting noisy observation at late time \(\mathbf{x}_{o}\) (_bottom_). In the two center panels, we plot samples drawn from the (tempered) posterior \(p(\mathbf{z}|\mathbf{x}_{o})\), together with an observation of each sample. The upper right panel shows the MAP estimate \(\mathbf{z}_{\text{MAP}}\). The initial density field is faithfully reconstructed on large scales whereas, as expected, small-scale information remains unconstrained. Finally, the lower right panel shows that the power spectra of the posterior samples are consistent with the target field. The excellent agreement between reconstructed and true power spectra on small scales is aided by the fact that the power spectrum (which is fixed in our example) directly enters the GEDA sampling (see Eq.
9 in Appendix B, where the prior precision matrix \(\mathbf{Q}_{2}\) is diagonalized by the Fourier transform, with the power spectrum on the diagonal of \(\mathbf{D}_{2}\)). We will present a detailed quantitative analysis of the results obtained with our framework in a separate publication. ## 4 Discussion and Conclusion We have introduced a framework for Bayesian field/image reconstruction by combining SBI, autoregressive modeling, and a Gibbs sampling algorithm based on exact data augmentation (GEDA). We presented promising results for a toy example related to reconstructing the initial conditions of the Universe. In view of its _remarkable speed, low simulation costs, and the fact that it works for general non-differentiable simulators_, we expect our method to be capable of handling significantly higher-dimensional problems, including 3D cosmological simulations. Figure 1: Reconstruction of cosmological initial conditions in 2D. _Top left:_ True initial density \(\mathbf{z}_{o}\). _Bottom left:_ True observation \(\mathbf{x}_{o}\), i.e. the (logarithm of the) late-time density evolved from the initial conditions \(\mathbf{z}_{o}\), corrupted by uncorrelated Gaussian noise. _Top center:_ Two samples drawn from the posterior \(p(\mathbf{z}|\mathbf{x}_{o})\). _Bottom center:_ Observations computed from the posterior samples shown above. _Upper right:_ Maximum-a-posteriori probability (MAP) estimate \(\mathbf{z}_{\text{MAP}}\) of the initial conditions. _Bottom right:_ Distribution of the reconstructed initial power spectrum. There are multiple promising avenues to be explored in future work. For many applications, such as the reconstruction of cosmological initial conditions, formulating the problem in Fourier space can be expected to produce significantly tighter posteriors, as the input-to-output mapping is exactly invertible on large to intermediate scales, and the stochasticity of the reconstruction only affects small scales. Alternatively, wavelets [_e.g._, 61] or other multiscale techniques could be exploited. In this context (but also more generally), other choices for the autoregressive function \(\mathbf{L}\) are worth exploring, which could, for instance, implement the hierarchy between the different scales. Another promising enhancement involves harnessing the sequential aspect of SBI techniques. In principle, a viable strategy is to employ an adaptive scheduler to control the value of \(\gamma\) to draw new training samples for sequential inference rounds. Moreover, being SBI-based, our proposed method is capable of marginalizing over cosmological parameters or, alternatively, of inferring them in addition to the phases of the initial random field. Let us also comment on the current limitations of our method: while the assumption of Gaussianity for the prior is justified for reconstructing cosmological initial conditions, using a Gaussian likelihood is an approximation whose justification depends on the specific task at hand. We take cross-pixel information into account through the summary statistics and an autoregressive function, but we currently model the precision matrix of the likelihood as being diagonal. In addition, the susceptibility of our framework to issues encountered in the context of autoregressive models such as exposure bias [62] merits further investigation.
Finally, we remark that the forward model generating "observations" \(\mathbf{x}\) need not be a physical one, and our framework also holds great promise for generic image reconstruction problems in which one image is inferred from another. ## Acknowledgments and Disclosure of Funding This work is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant agreement No. 864035 - UnDark). FL thanks Oliver Hahn and Cornelius Rampf for many insightful discussions.
2305.15344
Learning Answer Generation using Supervision from Automatic Question Answering Evaluators
Recent studies show that sentence-level extractive QA, i.e., based on Answer Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA) models, which generate answers using the top-k answer sentences ranked by AS2 models (a la retrieval-augmented generation style). In this paper, we propose a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). Specifically, we propose three strategies to transfer knowledge from these QA evaluation models to a GenQA model: (i) augmenting training data with answers generated by the GenQA model and labelled by GAVA (either statically, before training, or (ii) dynamically, at every training epoch); and (iii) using the GAVA score for weighting the generator loss during the learning of the GenQA model. We evaluate our proposed methods on two academic and one industrial dataset, obtaining a significant improvement in answering accuracy over the previous state of the art.
Matteo Gabburo, Siddhant Garg, Rik Koncel-Kedziorski, Alessandro Moschitti
2023-05-24T16:57:04Z
http://arxiv.org/abs/2305.15344v1
# Learning Answer Generation using Supervision from Automatic Question Answering Evaluators ###### Abstract Recent studies show that sentence-level extractive QA, i.e., based on Answer Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA) models, which generate answers using the top-\(k\) answer sentences ranked by AS2 models (a la retrieval-augmented generation style). In this paper, we propose a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). Specifically, we propose three strategies to transfer knowledge from these QA evaluation models to a GenQA model: (i) augmenting training data with answers generated by the GenQA model and labelled by GAVA (either statically, before training, or (ii) dynamically, at every training epoch); and (iii) using the GAVA score for weighting the generator loss during the learning of the GenQA model. We evaluate our proposed methods on two academic and one industrial dataset, obtaining a significant improvement in answering accuracy over the previous state of the art. ## 1 Introduction Recent research on retrieval-based Question Answering (QA) systems has been focused on two main tasks: (i) Answer Sentence Selection (AS2) e.g., Garg et al. (2020), which, given a question and a list of answer candidates, chooses the most relevant answer that correctly answers the question; and (ii) Machine Reading (MR) e.g., Chen et al. (2017), which, given a question and a reference text, involves finding a text span that directly answers the question. While effective, both strategies (AS2 and MR) have limitations: (i) the text might not include all the information necessary to answer a question, (ii) the text might include unnecessary, distracting information, or (iii) the text might express the answer in a convoluted (indirect) format. Additionally, the text style and sentiment may be inappropriate for answering, or might be structurally too dependent on longer discourse context to enable usage as a stand-alone answer. These drawbacks have motivated researchers to explore text generation systems for writing 'better' answers in the open-domain abstractive QA setting. For example, in the MR domain, RAG Lewis et al. (2020) generates an answer from a set of documents which are selected by dense passage retrieval models. For the domain of AS2, previous research has focused on summarizing answers from relevant paragraphs/evidences Lewis et al. (2020), or synthesizing information from the top ranked answer candidates of an AS2 system Hsu et al. (2021); Muller et al. (2022); Gabburo et al. (2022). The latter, termed GenQA, has shown improvements in answer generation from the perspective of both answering accuracy and style suitability. The main distinguishing feature of GenQA from a generation-based MR approach is the length of the answer: the former uses an entire sentence as the target answer, while the latter in practice uses a short text (primarily targeting entity names). Therefore GenQA offers a more general and challenging research setting for answer generation. Training effective GenQA models is made challenging by the cost and difficulty of obtaining large-scale, high-quality training data. This typically requires human annotators to read the questions and relevant top \(k\) retrieved paragraphs/sentences, and then re-write them into a self-contained and concise natural answer (sentence/paragraph). Recent research Vu and Moschitti (2021); Bulian et al.
(2022) has proposed effective automatic QA evaluation models based on transformer encoders for sentence-form answers. Training these QA evaluators only requires access to question-answer pairs with annotations of correctness/incorrectness of the answers. This style of data annotation is much cheaper to perform than writing high-quality answers for training GenQA models. In this work we explore the novel idea of using automatic QA evaluators for training GenQA models, which enables faster and cheaper model development. In this paper, we reduce the amount of data needed for training a GenQA model using supervision from Automatic QA Evaluators. Our first contribution is to propose GAVA: an automatic QA evaluation approach that extends AVA Vu and Moschitti (2021) by (i) exploiting multiple reference answers and (ii) evaluating LM-generated answers instead of extracted answers. This way, we obtain a more robust and accurate QA evaluator that can effectively supervise the training of GenQA models. We propose three novel methods to use GAVA for refining the training of GenQA. The first consists of (i) generating multiple possible answers using a baseline GenQA model for questions belonging to the GenQA training dataset, and (ii) then refining the set of generated answers by only retaining those with the highest GAVA scores (corresponding to "correct" or "high quality" answers). These generated answers are used as alternate gold standard answers (in addition to the annotators' written answers) to create additional training examples for GenQA. We term this approach GAVA-SDA (Static Data Augmentation). The second approach extends GAVA-SDA, performing data augmentation dynamically at every epoch instead of off-line before training. This is intuitively more effective, as the GenQA model continuously improves during training. Specifically, at every epoch, we use GAVA to score the list of generated answers along with the \(k\) input answer candidates. We then use the top scoring answer as the GenQA target and the next top-\(k\) scoring answers as inputs for GenQA. We term this approach GAVA-DDA (Dynamic Data Augmentation). The third approach uses GAVA as a scoring function for loss weighting during the training of GenQA. Specifically, we generate an answer using a GenQA model for a training sample, and weight the GenQA model loss of this instance using the GAVA score corresponding to the generated answer. Intuitively, this makes the GenQA model learn more from instances associated with higher GAVA-scoring answers (which correspond to "correct" or "high quality" answers). We term this approach GAVA-LW (Loss Weighting). We perform empirical evaluation on two academic and one industrial QA dataset (de-identified customer questions from Alexa personal assistant), and show that our three proposed techniques using GAVA for training a GenQA model produce significant improvements in answering accuracy over a baseline GenQA approach. We also show that the answers generated by these improved GenQA models consistently achieve higher GAVA scores on average than the baseline. We will release the code along with the trained GenQA and GAVA models at [https://github.com/amazon-science/wqa-genqa-gava](https://github.com/amazon-science/wqa-genqa-gava) to enable easy replication of our experimental results. ## 2 Related Work **Answer Generation:** Several research works Izacard and Grave (2021); Lewis et al.
(2020) have studied the problem of generating short answer spans (typically entity level) for MR systems. The most relevant of these works for GenQA is the work of Asai et al. (2022), that trains an answer generation model using the evidentiality of retrieved passages. Xu et al. (2021) uses decoder cross-attention patterns to generate extractive answer spans. Fajcik et al. (2021) generate answer spans by using a combination of a generative and extractive reader (aggregating information over multiple passages). An independent, but related line of research is question-based summarization, and there have been several research works in this field: Iida et al. (2019); Deng et al. (2020). Hsu et al. (2021) proposed the first formulation for generating complete answer sentences using evidences retrieved by an answer sentence selection (AS2) model. This model was termed GenQA, and it uses the top-\(k\) most relevant answer sentence candidates for a question as input context to an encoder-decoder model to generate a natural sounding complete answer sentence for this question. Muller et al. (2022) extend GenQA for multiple languages by using answer sentence candidates from multiple languages as input context for the GenQA model. Recently, Gabburo et al. (2022) propose training of GenQA models using unlabeled data by leveraging weak supervision from trained AS2 ranking models. This approach was shown to combine well with the supervised GenQA approach Hsu et al. (2021) to improve the answering accuracy. Note that all of these approaches are different from the ones described in the previous paragraph as they aim to generate complete answer sentences, and not just short answer spans. **Evaluation of QA Systems:** Token level simi
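Returning to the GAVA-LW strategy from Section 1: as a concrete illustration, the sketch below weights a per-example sequence-to-sequence loss by an evaluator score. The function and tensor names are ours, and the snippet assumes generic decoder logits and scores rather than the paper's actual GenQA or GAVA models.

```python
import torch
import torch.nn.functional as F

def gava_weighted_loss(logits, target_ids, gava_scores, pad_id=0):
    """Per-example generator loss weighted by an automatic QA-evaluator score.

    logits:      (batch, seq_len, vocab) decoder outputs
    target_ids:  (batch, seq_len) answer token ids (gold or generated)
    gava_scores: (batch,) evaluator scores in [0, 1] for each instance
    """
    token_loss = F.cross_entropy(
        logits.transpose(1, 2), target_ids,
        ignore_index=pad_id, reduction="none",
    )                                                   # (batch, seq_len)
    lengths = (target_ids != pad_id).sum(dim=1).clamp(min=1)
    per_example = token_loss.sum(dim=1) / lengths       # mean loss per answer
    return (gava_scores * per_example).mean()           # high-scoring answers
                                                        # contribute more

# Toy shapes only, to show the call signature.
batch, seq_len, vocab = 4, 12, 100
loss = gava_weighted_loss(
    torch.randn(batch, seq_len, vocab, requires_grad=True),
    torch.randint(1, vocab, (batch, seq_len)),
    torch.rand(batch),
)
loss.backward()
```

In the same spirit, the SDA and DDA variants would use the evaluator score as a filter on candidate answers rather than as a loss weight.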
2310.11394
Quantum Financial Modeling on Noisy Intermediate-Scale Quantum Hardware: Random Walks using Approximate Quantum Counting
Quantum computers are expected to contribute more efficient and accurate ways of modeling economic processes. Quantum hardware is currently available at a relatively small scale, but effective algorithms are limited by the number of logic gates that can be used, before noise from gate inaccuracies tends to dominate results. Some theoretical algorithms that have been proposed and studied for years do not perform well yet on quantum hardware in practice. This encourages the development of suitable alternative algorithms that play similar roles in limited contexts. This paper implements this strategy in the case of quantum counting, which is used as a component for keeping track of position in a quantum walk, which is used as a model for simulating asset prices over time. We introduce quantum approximate counting circuits that use far fewer 2-qubit entangling gates than traditional quantum counting that relies on binary positional encoding. The robustness of these circuits to noise is demonstrated. We compare the results to price change distributions from stock indices, and compare the behavior of quantum circuits with and without mid-measurement to trends in the housing market. The housing data shows that low liquidity brings price volatility, as expected with the quantum models.
Dominic Widdows, Amit Bhattacharyya
2023-10-17T16:54:31Z
http://arxiv.org/abs/2310.11394v2
# Quantum Financial Modeling on NISQ Hardware: Random Walks using Approximate Quantum Counting ###### Abstract Quantum computers are expected to contribute more efficient and accurate ways of modeling economic processes. Quantum hardware is currently available at a relatively small scale, but effective algorithms are limited by the number of logic gates that can be used, before noise from gate inaccuracies tends to dominate results. Some theoretical algorithms that have been proposed and studied for years do not perform well yet on quantum hardware in practice. This encourages the development of suitable alternative algorithms that play similar roles in limited contexts. This paper implements this strategy in the case of quantum counting, which is used as a component for keeping track of position in a quantum walk, which is used as a model for simulating asset prices over time. We introduce quantum approximate counting circuits that use far fewer 2-qubit entangling gates than traditional quantum counting that relies on binary positional encoding. The robustness of these circuits to noise is demonstrated. While this paper is mainly about robust simplified quantum circuit designs, we compare some aspects of the results with price change distributions from stock indices, and compare the behavior of circuits with and without mid-measurement to trends in the housing market. ## 1 Motivation: Quantum Finance Implementations in 2023 Quantum computers are expected to enable more sophisticated and accurate modeling of various financial situations. The reasons for the high expectations for quantum finance are in some cases thoroughly worked-out algorithmically: for example, Egger et al. (2020) survey applications including option pricing and risk management, where Monte Carlo simulation methods are commonly used, and explain how quantum algorithms for amplitude estimation offer a potential quadratic speedup (by reducing the number of samples needed for the variance of the probabilistic outcomes to converge). As with quantum factoring and search, there are solid reasons for expecting quantum computers to perform well at large-scale problems that are especially challenging for classical computing methods. In some cases, the proposed models are simple and concise enough to be simulated on classical hardware, and now in the early 2020's their behavior can be explored on real quantum computers. However, these models tend to be very small: for example, Zhu et al. (2022) use 6 trapped-ion qubits to perform generative modeling for correlated stock prices, and Stamatopoulos et al. (2020) use just 3 superconducting qubits for option pricing. The scale of such experiments has been particularly limited by quantum gate accuracy. For example, the 3-qubit circuit of Stamatopoulos et al. (2020) is optimized down to 18 2-qubit entangling gates and 33 single-qubit gates, but even with this small circuit, error rates in the results range from 62% raw, to 21% using Richardson extrapolation for error-correction. This is not surprising, since the accuracies of the single- and 2-qubit gates are estimated at 99.7% and 97.8% respectively, and \(0.997^{33}\times 0.978^{18}\approx 0.587\), so the compounded gate error rate is at least 40%. A safe implementation strategy might be to wait for large-scale fault-tolerant quantum computers to become available, but this runs the risk of missing opportunities in the meantime. Instead, researchers such as Stamatopoulos et al. (2020) and Zhu et al. 
(2022) try to use currently-available quantum hardware, and ask whether implementations can be made robust enough to provide value sooner. In the current NISQ era (Noisy Intermediate-Scale Quantum), the scarce resources include the number of qubits, and also, as seen above, the number of gates, and especially the number of 2-qubit entangling gates. Circuits are sometimes described in terms of width (number of qubits) and depth (number of gates, or layers of gates), and both need to be minimized. Quantum developers sometimes have many suggested designs to start from: quantum information processing has been explored as an academic field for some decades, and established literature provides many circuit recipes (Nielsen and Chuang, 2002). A natural strategy is to take such designs, consider their NISQ-era limitations, and see if there are alternatives that provide some of the same functionality using fewer qubits or gates. This paper develops some new examples of this approach, with the basic example of quantum counting. The central novel contribution of the paper is the set of approximate quantum counting circuits introduced in Section 5. The motivation is that quantum counting is used as a component in the implementation of quantum random walks, which are proposed as a model for stock prices, and also for _beliefs_ about the future value of stock prices, for the pricing of stock options. Beliefs are less exact than prices: it is not very important to make sure that an estimate of $1,000 comes $1 after an estimate of $999 and $1 before an estimate of $1,001; but it is important to make sure that these are all treated similarly, and that doubling any of them gives something in the region of $2,000. The circuits proposed in this paper demonstrate such properties, albeit approximately, but much more accurately than is currently possible on quantum computers that use positional binary representations for numbers that strictly follow the axioms of arithmetic. A distinct feature of quantum systems including quantum walks is that they behave differently when they are measured, compared to when they are left to evolve dynamically. Such behavior has been demonstrated with humans (Kvam et al., 2015; Yearsley and Pothos, 2016) in psychology experiments, and is an important feature in quantum economics (Orrell, 2020). Section 7 investigates the simulated behavior of the approximate counting circuits with mid-measurement, and shows that they exhibit desirable behavior (in this case, that more frequent measurement tends to reduce the chances of large changes). To begin with, the next few sections review some of the basic quantum logic gates and how they are put together into quantum circuits, the use of random walks and quantum walks in finance, and how these come together to emphasize the practical quantum counting problem. ## 2 Quantum Gates Used In This Paper In mathematical terms, the key features that distinguish quantum from classical computers are superposition and entanglement. This section gives a brief summary of how these properties are worked with in quantum circuits.
Some familiarity with quantum mechanics, especially Dirac notation, is assumed, so that \(|0\rangle\) and \(|1\rangle\) are the basis states for a single qubit whose state is represented in the complex vector space \(\mathbb{C}^{2}\); a 2-qubit state is represented in the tensor product space \(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\cong\mathbb{C}^{4}\) with basis states \(\left|00\right\rangle,\left|01\right\rangle,\left|10\right\rangle\) and \(\left|11\right\rangle\), 3-qubit states are represented in \((\mathbb{C}^{2})^{\otimes 3}\cong\mathbb{C}^{8}\) with basis states \(\left|000\right\rangle,\left|001\right\rangle,\ldots,\left|111\right\rangle\), and so on. For introductions to how linear algebra is written and used in quantum mechanics, see Nielsen and Chuang (2002, Ch 2), Orrell (2020, Ch 2). Quantum measurement is probabilistic: if \(\left|\phi\right\rangle\) is an eigenvector of a given measurement operator, then a system in the state \(\left|\psi\right\rangle\) is observed to be in the state \(\left|\phi\right\rangle\) with probability given by the squared modulus of their scalar product, \(|\left\langle\phi|\psi\right\rangle|^{2}\), and if this outcome is observed, the system is now in the state \(\left|\phi\right\rangle\). Superposition can be realized in a single qubit: the state \(\alpha\left|0\right\rangle+\beta\left|1\right\rangle\) is a superposition of the states \(\left|0\right\rangle\) and \(\left|1\right\rangle\), where \(\alpha\) and \(\beta\) are complex numbers, with \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\). Each single-qubit logic gate is a linear operator that preserves the orthogonality of the basis states and this normalization condition, and the group of such operators is \(U(2)\), the group of complex \(2\times 2\) unitary matrices. Single-qubit gates that feature prominently in this paper are shown in Figure 1. So single-qubit gates coherently manipulate the superposition state of an individual qubit. Entanglement is a property that connects different qubits. Since the 1930s, quantum entanglement has gone from a hotly-disputed scientific prediction, to a statistical property demonstrated with large ensembles, to a connection created between pairs of individual particles, to a working component in quantum computers. All modern quantum computers have some implementation of an entangling gate, and only one is really needed, because all possible 2-qubit entangled states can be constructed mathematically by combining appropriate single-qubit gates before and after the entangling gate. Furthermore, a single 2-qubit entangling gate and a set of single-qubit gates form a _universal gateset_ for quantum computing (Nielsen and Chuang, 2002, §4.5). The CNOT (controlled-NOT) gate of Figure 2 is the most common example of a 2-qubit gate in the literature. In the standard basis, its action is sometimes described as performing a NOT operation on the target qubit if the control qubit is in the \(\left|1\right\rangle\) state. Thus, as well as causing entanglement, it is sometimes thought of as a kind of conditional operator in quantum programming. Entanglement is the crucial property that distinguishes quantum computing from an algorithmic point of view, because predicting the probability distributions that result from quantum operations with entanglement can become exponentially hard for classical computers. In simpler terms, quantum computing is special because it offers special kinds of interference, not because it offers special kinds of in-between-ness.
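These gate definitions are small enough to check with a few lines of linear algebra. The snippet below is our illustration (not code from the paper): it writes the Hadamard, NOT, and CNOT gates as matrices, verifies unitarity, evaluates the measurement rule \(|\langle\phi|\psi\rangle|^{2}\), and checks the entangling combination of a Hadamard followed by a CNOT, which prepares the Bell state discussed in the next section.

```python
import numpy as np

# Single-qubit gates as 2x2 unitaries (cf. Figure 1).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
X = np.array([[0, 1], [1, 0]])                   # NOT / Pauli-X
assert np.allclose(H.conj().T @ H, np.eye(2))    # unitarity
assert np.allclose(X.conj().T @ X, np.eye(2))

ket0, ket1 = np.array([1, 0]), np.array([0, 1])

# Measurement rule: probability of observing |phi> is |<phi|psi>|^2.
psi = H @ ket0                                   # (|0> + |1>)/sqrt(2)
print(abs(np.vdot(ket0, psi)) ** 2, abs(np.vdot(ket1, psi)) ** 2)   # 0.5 0.5

# CNOT in the basis |00>, |01>, |10>, |11> (first qubit is the control).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
ket00 = np.kron(ket0, ket0)
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00      # Hadamard on qubit 0, then CNOT
print(np.round(bell, 3))                         # [0.707 0. 0. 0.707]
```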
Figure 1: Some standard single-qubit gates and their corresponding matrices, which operate on the superposition state \(\alpha\left|0\right\rangle+\beta\left|1\right\rangle\) written as a column vector \((\alpha,\beta)^{T}\). A quantum circuit consists of a register of qubits, and a sequence of logic gates that act on these qubits. For example, the circuit in Figure 3 prepares the famous Bell state (named after physicist John Bell, whose pioneering theorem motivated experiments that demonstrated real entanglement). Figure 3: Hadamard and CNOT gates in sequence make a quantum circuit that prepares the Bell state \(\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})\). It maps the input state \(\left|00\right\rangle\) to the state \(\frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle)\), which has the crucial 'entangled' behavior whereby if one qubit is measured to be in the \(\ket{0}\) state, the other qubit must also be in the \(\ket{0}\) state, and vice versa. There are many standard gate recipes and equivalences. In particular, larger operators are often thought of as distinct gates in their own right, an important example being the 3-qubit Toffoli gate of Figure 4. This is like an extended CNOT gate: it has 2 control qubits instead of 1, and performs an X-rotation / NOT operation on the target qubit if both the control qubits are in the \(\ket{1}\) state. Figure 4: The Toffoli gate diagram, showing that it performs a NOT operation on the target qubit if both the control qubits are in the \(\ket{1}\) state. On the right is one of its standard decompositions into CNOT and single-qubit gates. 5 CNOT gates are needed to implement one Toffoli gate. The decomposition in Figure 4 shows that 5 CNOT gates are needed for each Toffoli gate. There are variants of this, but as a general rule-of-thumb, the error-rate of a Toffoli gate will be at least 4 times the error-rate of the 2-qubit gates from which it is assembled. Toffoli gates are particularly important for binary arithmetic, as seen in Section 5. In the NISQ era, such considerations are pervasive: there is a ubiquitous tradeoff between circuit complexity (the number of gates needed to execute a given algorithm) and expected circuit accuracy (the more gates we use, the more inaccurate our results become). ## 3 Random Walks, Stock Prices, and Quantum Walks This section briefly reviews the role of random walks and quantum walks in the modeling of asset prices. For a more thorough introduction, see Orrell (2020, Ch 7, 8). A random walk is a mathematical process that constructs a path through some base space composed of a succession of randomly-chosen steps (Xia et al., 2019). Random walks have been used to model a range of scenarios including physical (Brownian) motion, population dynamics, and web browsing sessions, though they were first proposed for modeling prices of stocks in the Paris Bourse in the work of Bachelier (1900). This was formalized in the Black-Scholes model for pricing financial options: the paper introducing the Black-Scholes formula assumes that: The stock price follows a random walk in continuous time with a variance rate proportional to the square of the stock price. (Scholes and Black, 1973, §2(b)). A classical random walk with unit steps up-or-down leads to a binomial distribution, which at large scales is approximated by a corresponding normal distribution. Thus, for large simulations, the simplifying assumption of a fixed size for each step is immaterial, because the overall distribution is normal.
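That binomial-to-normal claim is easy to verify numerically. The following sketch (ours, purely illustrative, with an arbitrary choice of 250 unit steps) accumulates many classical ±1-step walks and compares their spread and tail quantile with the matching normal approximation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_steps, n_walks = 250, 100_000            # e.g. one 'year' of unit daily steps

steps = rng.choice([-1, 1], size=(n_walks, n_steps))
finals = steps.sum(axis=1)                 # binomially distributed end points

# The normal approximation is N(0, sqrt(n_steps)).
print("sample std:", round(finals.std(), 2),
      "normal approx:", round(np.sqrt(n_steps), 2))
print("95th percentile:", np.percentile(finals, 95),
      "normal approx:", round(stats.norm.ppf(0.95) * np.sqrt(n_steps), 1))
```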
However, the most standard formulation of the Black-Scholes model assumes that the price change for each unit of time is not fixed, but (log-)normally distributed. The Black-Scholes formula has been widely used as a pricing tool: indeed, over-reliance on the model, and the amounts of money entrusted to it, have been found to be significant contributors to the 2008 market crash (Cady, 2015; Wilmott and Orrell, 2017). One particular observation is that the assumption of a constant rate of volatility is not borne out by the long tail of variations in strike price, leading to the claim that a 'volatility smile' distribution is a more faithful model in practice (Orrell and Richards, 2023). Quantum walks have been proposed as an alternative that takes into account key factors including varying subjective beliefs about the future, and the transactions between different traders (Orrell, 2020, Ch 7). Quantum random walks were introduced in the 1990s (Aharonov et al., 1993) and have become a rich and established area of quantum modeling (Venegas-Andraca, 2012). Another anticipated benefit of these quantum walk models is that they will work natively on quantum computers, when large fault-tolerant quantum hardware is available (Orrell, 2021). Quantum walks are thus expected to be a powerful component in pricing models: for example, they may be used to model the input distributions on which the Monte Carlo methods proposed by Stamatopoulos et al. (2020) depend. Often the term 'quantum walk' is preferred to the term 'quantum random walk', not only for brevity, but because the internal state of a quantum walk is typically an entirely deterministic superposition of different states. For example, a walk that starts in position 0 with a 50-50 chance of going in either direction will, after one step, be in a superposition of the states representing positions \(-1\) and \(+1\), with equal amplitudes in the superposition. It is only the measurement outcome that is probabilistic, when one of these distinct possibilities is randomly selected. In the most standard presentation, a quantum walk consists of a _quantum walker_ and _quantum coin_. At each turn, the coin is tossed, and the walker's position moves depending on the coin's resulting state. A canonical example is an unrestricted discrete walk, where the positions correspond to integers, and each move is a single step, represented by incrementing the position integer by \(\pm 1\). This leads to an elegant expression for the _shift_ or _translation_ operator (Venegas-Andraca, 2012, Eq. 9) (Orrell, 2020, §7.1): \[\ket{0}_{c}\bra{0}\otimes\sum_{i}\ket{i+1}_{p}\bra{i}+\ket{1}_{c}\bra{1} \otimes\sum_{i}\ket{i-1}_{p}\bra{i}. \tag{1}\] The \(c\) and \(p\) subscripts refer to the coin and position registers. The positions are represented by integer states \(\ket{n}\) for \(n\in\mathbb{Z}\). Experiments in simulating quantum walks and harmonic oscillators on real quantum computers have been very small so far, restricted to just 2 qubits, and have reported very noisy results using superconducting hardware (Qiang et al., 2016; Kadian et al., 2021; Puengtambol et al., 2021). The reasons for this are explained in the next section. ## 4 Quantum Counting and the Challenge of Recording Position To simulate a quantum walk using repeated applications of the shift operator in Equation 1, we need to model tossing a coin, and tracking position. The coin-toss is easy for today's quantum computers to implement effectively.
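The ideal dynamics generated by the shift operator of Equation 1 are easy to simulate classically for small walks. The sketch below is our illustration (not the paper's code): it evolves a Hadamard-coin walk on integer positions, starting from a symmetric coin state so that the walk spreads both ways, and prints the measurement distribution, which shows the two-tailed shape referred to in the caption of Figure 6.

```python
import numpy as np

def hadamard_walk(steps=15):
    # State indexed as psi[coin, position]; positions run from -steps to +steps.
    n_pos = 2 * steps + 1
    psi = np.zeros((2, n_pos), dtype=complex)
    psi[0, steps] = 1 / np.sqrt(2)                 # symmetric initial coin state
    psi[1, steps] = 1j / np.sqrt(2)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = np.tensordot(H, psi, axes=([1], [0]))   # toss the quantum coin
        shifted = np.zeros_like(psi)
        shifted[0, 1:] = psi[0, :-1]               # coin |0>: position +1
        shifted[1, :-1] = psi[1, 1:]               # coin |1>: position -1
        psi = shifted

    return (np.abs(psi) ** 2).sum(axis=0)          # measurement distribution

dist = hadamard_walk(15)
for x, p in zip(range(-15, 16), dist):
    if p > 0.01:
        print(f"{x:+d}: {p:.3f}")
```

The difficulty discussed next is not this unitary evolution itself, but tracking the position register accurately on noisy hardware.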
For example, we use a single qubit and a Hadamard gate which acts like a 'beam-splitter', putting the coin into a superposition of \(|0\rangle\) and \(|1\rangle\) states. The bigger challenge for quantum computers today is tracking the position: in other words, the quantum counting problem (Haven et al., 2017, Ch 4). For non-negative numbers, the states \(|n\rangle\) can be associated with the energy levels in a harmonic oscillator (Jain et al., 2021). In theory this might connect the process of quantum counting with the use of motional modes for quantum information processing, but this is not yet available on commercial quantum computers (Chen et al., 2021). The most traditional way to represent numbers on a computer is to use some form of binary positional notation. For example, the binary expression 110 represents the number 6 (or the number 3, if the bits are read in reverse order). Quantum binary 'adder circuits' were designed by Feynman in the early papers that first motivated quantum mechanical computing Feynman (1986), but the ongoing presence of errors in NISQ-era machines limits the number of steps we can reliably count (Orts et al., 2020). Choosing a binary positional encoding, as used in classical computing, makes quantum counting very susceptible to 2-qubit gate errors, because manipulating binary encodings takes a lot of entanglement and coordination between qubits. To compute the sum of two binary numbers \(A\) and \(B\) of bitlength \(n\) using classical Boolean algebra, we add (XOR) the least significant bits, and then at each other position we add the corresponding bits along with a 'carry' from the previous stage, setting the output and passing a 'carry' on to the next position. Feynman (1986) explained the quantum gate operations needed for each such step in the quantum full adder circuit of Figure 5, and modern versions are optimized variants on this theme (Orts et al., 2020). Figure 5: A quantum full adder circuit, first introduced by Feynman (1986), uses 2 Toffoli and 3 CNOT gates, which is at least 11 2-qubit gates. If it takes 11 2-qubit gates for each full-adder, then adding two single-byte (8-bit) numbers using this approach uses \(\sim 80\) 2-qubit gates, so by the time such a register has successively added 10 numbers, the chances of an error are over 50% even with a two-qubit gate fidelity of 99.9%, which is on the high-end of performance estimates at the time of writing (IonQ Aria, 2022). Error rates with quantum counting can thus undermine the simulation of quantum walks, and block this application of quantum finance. The problem is demonstrated in Figure 6, which compares quantum walks with ideal outcomes and with noise. This shows that the vulnerability of the quantum counting process dominates after a handful of steps. There are many optimizations and alternatives. We expect progress in quantum hardware to enable greater fidelity and stability, and eventually mid-circuit quantum error-correction should make the current problems with quantum counting obsolete, but this comes at the cost of waiting for fault-tolerant quantum computing. Quantum addition algorithms can be optimized (Cuccaro et al., 2004; Gidney, 2018), and the incremental operation of counting can be made simpler than repeated full-register addition Li et al. (2014). An interesting benchmark challenge could be to design and evaluate quantum circuits and see how far they can count with \(>50\%\) fidelity, but that task is not undertaken here. Our basic motivation is that none of these methods simplify counting enough for many successive counting operations on nontrivial quantum registers to be performed accurately. Instead, this paper proposes alternative circuits that can be used to simulate steps and positions in a walk, without requiring exact counting. ## 5 Approximate Quantum Counting: Fault-Tolerant Circuits for Tracking Position By now, the central modeling problem of this paper should be clear: we would like to be able to model a quantum walk on a quantum computer, but the use of positional binary notation to represent integer quantities relies on 'increment' and 'decrement' operators that require too many entangling gates to give accurate results on current quantum hardware. To avoid these pitfalls, we introduce alternative circuit designs that can also be used for recording position in a random walk. Instead of trying to ensure that every move goes up and down by exactly one step on the position axis, the position register is incremented using some gate combination that is likely to move the position by some amount that is generally positive for upward steps and generally negative for downward steps. Another way to describe this is that instead of putting all the randomness in the coin toss and following this with deterministic shift operators, we toss a random coin and then combine this with a somewhat-random shift operator. For larger circuits, such methods can give a walk that goes up and down more reliably than the results we get if we try to insist that the position represented by the state \(|n\rangle\) is an accumulation of exactly \(n\) steps of unit length in that direction. ### Arc Counter Circuits This circuit design uses only single-qubit rotations throughout. Such a representation is sometimes called a rotation encoding (Schuld and Petruccione, 2021). It is particularly easy to implement for a modest number of features, and can be incremented as new feature weights are encountered. Figure 6: Ideal simulated quantum walk after 14 and 15 steps (left), compared with results simulated with expected noise (right). The ideal distribution has the two-tailed peaks characteristic of a quantum walk or harmonic oscillator, but this quickly gets lost with noise (right). To use a rotation encoding for counting, qubits are rotated through particular arc-lengths or angles at each incremental step. Each qubit is used to represent one digit in a binary register, where each bit is twice as significant as its predecessor. At each step, each qubit traverses an arc that is inversely proportional to that qubit's significance, as in Figure 7. This means that the \(n+1^{th}\) qubit rotates at half the rate of the \(n^{th}\) qubit. A good analogy for this representation is to think of each qubit as one of the hands on a traditional analog clock. On an analog clock, the minute hand cycles at 12 times the speed of the hour hand, and the second hand 60 times faster still, whereas in our binary clock, the ratio between the speed of rotation of each successive pair of 'hands' is 2:1. This analogy works well for the standard binary positional notation for integers, which can be thought of as a binary digital clock. In a digital register (or an abacus), the digits logically depend on one another for correct incrementing, because we need to know that one digit is full before we increment the next. By contrast, the hour hand on a clock does not 'carry' information when the minute hand completes a cycle; it just rotates at its own slower pace.
Thus the rotation encoding clock-based design requires much less coordination (and hence entanglement) between the qubits. This comes at a representational cost -- the register does not represent exact integers, and random variations in the outputs are expected, because many fractional angles are used throughout the circuit. (This is true for the basic counting operation, irrespective of whether the counting is coupled with a 'coin toss' operation.) Figure 8: Quantum walk results after different numbers of steps with arc counter circuit and an 8-qubit register Figure 7: Arc counter circuit Sample results from a quantum walk with an 8 qubit register are shown in Figure 8. Statistically noteworthy properties include: * The mean distance from the starting point generally increases with the number of steps. * In some cases, the position appears to jump ahead, because a high-order qubit is measured in the \(\ket{1}\) state. This can happen (with low probability) after just a single step. * There are sometimes peaks in the distributions after specific powers of two or their combinations (e.g., peaks at 48, 64, 96). It may be possible and desirable to find ways to smooth out these peaks. Since there are no 2-qubit entangling gates, error rates are lower, but there's also no physical quantum advantage from this design -- it is easy to model this distribution on a classical computer. It's possible that such distributions might be useful models for random processes, but this would not require quantum computers to simulate. ### Reversal and Superposition by Classical Post-Processing An important feature of the traditional 'binary adder' circuit components is that they are able to decrement (subtract) as well as increment (add). The arc counter circuit, and the others below, do not support this feature. The logical work to guarantee that a change from \(0111111\) to \(1000000\) happens in-concert for all of the bits is precisely what we've given up, which makes it much harder to orchestrate a difference between positive and negative steps with large distances. As noted by Haven et al. (2017, Ch 4), it is natural for quantum systems to have a lowest state which we may call \(\ket{0}\), and if we want to generate a full set of integers including the negative ones, these can be constructed as differences between positive natural numbers. This leads to an alternative method for simulating random walks that can evolve in both directions. We use two quantum circuits, one for the 'up' steps and one for the 'down' steps, and subtract the results from the down circuit from the results from the up circuit as a classical post-processing step. The 'up' and 'down' circuits can even be configured with different 'clock speeds', which has been done in the example of Figure 9. ### Arc Walk Counter Circuit This circuit design combines the arc rotations of the Arc Counter design above, with the 'Hadamard quantum coin' prevalent in the quantum walk literature. At each step, a Hadamard gate 'tosses a coin', and if the outcome is 'heads', or \(\ket{1}\), a controlled rotation is performed on each of the other qubits, following the same pattern of angles as in the Arc Counter circuit above. This gives the circuit pattern of Figure 10. Example results (8 counter qubits, 10 to 15 steps) are shown in Figure 11. On the left is a purely incrementing circuit, on the right is a two-way walk using the reversal and superposition technique above. 
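Because the basic arc counter involves no entanglement, its expected read-out can be computed qubit by qubit: after \(n\) steps the \(k\)-th qubit has been rotated through \(n\theta_{k}\), with \(\theta_{k}\) halving for each higher-order qubit, and reads 1 with probability \(\sin^{2}(n\theta_{k}/2)\). The sketch below is our illustration; the base angle of \(\pi/2\) per step for the lowest-order qubit is an assumption, one calibration that happens to reproduce the ideal arc counter averages reported later in Table 1 reasonably well.

```python
import numpy as np

def arc_counter_expected(steps, n_qubits=6, base_angle=np.pi / 2):
    """Expected measured value of the arc counter register after `steps` increments.

    Qubit k is rotated by base_angle / 2**k per step (each 'clock hand' turns
    at half the rate of the previous one), so it reads 1 with probability
    sin^2(steps * angle_k / 2), independently of the other qubits.
    """
    ks = np.arange(n_qubits)
    angles = base_angle / 2 ** ks
    p_one = np.sin(steps * angles / 2) ** 2
    return float(np.sum(2 ** ks * p_one))

def arc_counter_sample(steps, n_qubits=6, shots=1000, seed=0, base_angle=np.pi / 2):
    """Sample register read-outs shot by shot (no entanglement is involved)."""
    rng = np.random.default_rng(seed)
    ks = np.arange(n_qubits)
    p_one = np.sin(steps * (base_angle / 2 ** ks) / 2) ** 2
    bits = (rng.random((shots, n_qubits)) < p_one).astype(int)
    return bits @ (2 ** ks)

for n in range(1, 6):
    print(n, round(arc_counter_expected(n), 2), arc_counter_sample(n).mean())
```

With six qubits this gives roughly 1.1, 3.1, and 4.6 after one, two, and three steps, in line with the ideal arc counter column of Table 1, which also illustrates why no quantum hardware is needed to reproduce this particular distribution.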
### Random Jump Circuit In this class of examples, instead of using smaller rotations for higher-order qubits, we set up the circuit so that these qubits are changed less often. This can be done with and without a quantum coin controlling the gates. The example in Figure 12 uses a standard Hadamard quantum coin. At each step, a different target qubit in the counter register is selected, according to some weighted random sampling function. This function should prefer the lower-order qubits. In the example below, the selection was done with the distribution \(\{\frac{1}{2},\frac{1}{4},\frac{1}{8},\ldots\}\). Figure 9: Two-directional arc walk circuit from combining positive and negative distributions. Figure 10: Arc walk counter circuit. Figure 11: Example results for arc walk counter circuit (8 counter qubits, 10 to 15 steps). On the left is a purely incrementing circuit, on the right is a two-way walk using the reversal and superposition technique above. Figure 12: Random jump circuit. Note that the circuit creation now introduces classical randomness, in addition to the quantum randomness of the circuit measurement. Results for up to 9 steps, using an 8-qubit counter register, are shown in Figure 13. As expected, each walk can go both ways (because randomly flipping a bit can reduce as well as increase a number), though the average tendency for an individual walk is to increase. This is because the registers are initialized to zero, so the process randomly diffuses out from zero. ### Cascading Disjunction Circuits 'Cascading disjunction' is an extra circuit component that can be added to any of the counting circuits above. The idea uses a standard circuit component that performs Boolean disjunction, as in Figure 14. In this implementation, such gates are added between randomly chosen lower- and higher-order bits, using the same sampling distribution as in the Random Jump Circuits. This enables the values of lower-order qubits to propagate to higher-order qubits, which increases and tends to preserve progress in the walk, because the higher-order qubits are less likely to be randomly selected to be switched back to \(|0\rangle\) later. Figure 13: Random jump circuit results (positive only on the left, two-way walk using reversal and superposition on the right). Figure 14: Boolean in-place disjunction circuit, which sets the state of the higher order qubit to the Boolean OR of the input states of the lower and higher order qubits. The \(X\) gates surrounding the Toffoli gate implement the usual NOT operations to turn an AND conjunction operator into an OR disjunction operator \((A\lor B=\neg(\neg A\wedge\neg B))\). Finally, the swap gate and the reset to \(|0\rangle\) operations ensure that the output is written back into the higher order qubit, and the ancilla qubit is reset to its initial state. Using this construction several times requires mid-circuit reset, so that the ancilla qubit can be reused. Results of including this technique are shown in Figure 15 (positive only on the left, two-way walk using reversal and superposition on the right). The walks are still random, but propagate more reliably with the cascading disjunction components. Another variant of this technique would be to add a conjunction between the coin qubit and the target qubit, which sets the next higher qubit to \(|1\rangle\) before reversing the target qubit to \(|0\rangle\).
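Before turning to the preliminary results, the following classical sketch (ours, not the paper's circuits) prototypes the random-jump behaviour described above. The qubit-selection probabilities \(\{1/2,1/4,1/8,\ldots\}\) come from the text; the register size, shot loop, and the Bernoulli stand-in for the Hadamard coin are assumptions, and quantum interference effects are ignored.

```python
import numpy as np

def random_jump_walk(steps, n_qubits=6, shots=1000, seed=0):
    """Classical caricature of the random jump counter.

    At each step one register bit is chosen with probability proportional to
    1/2, 1/4, 1/8, ... (lower-order bits preferred) and flipped when a fair
    'coin' lands heads. This only illustrates the diffusion away from zero.
    """
    rng = np.random.default_rng(seed)
    weights = 0.5 ** np.arange(1, n_qubits + 1)
    weights /= weights.sum()                      # normalise the truncated series
    bits = np.zeros((shots, n_qubits), dtype=int)
    for _ in range(steps):
        targets = rng.choice(n_qubits, size=shots, p=weights)
        heads = rng.random(shots) < 0.5           # stand-in for the Hadamard coin
        rows = np.arange(shots)[heads]
        bits[rows, targets[heads]] ^= 1           # flip the selected bit
    return bits @ (2 ** np.arange(n_qubits))

for n in (1, 3, 5, 10):
    print(n, random_jump_walk(n).mean())
```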
Another variant of this technique would be to add a conjunction between the coin qubit and the target qubit, one that sets the next higher qubit to \(|1\rangle\) before reversing the target qubit to \(|0\rangle\). This would behave like a limited-carry operation: it performs some of the coordination between qubit values found in the traditional bit-counter circuits in the literature, but much less of it, and in a much more targeted fashion.

Figure 15: Cascading disjunction circuit results (positive only on the left, two-way walk using reversal and superposition on the right)

## 6 Preliminary Results of Counting Circuits, With and Without Noise

The big advantage of the simpler models presented here is that we can run them much more accurately (and quickly) on NISQ-era quantum hardware. Preliminary results for the different circuit designs presented in this paper are given in Tables 1 and 2. The goal of the new circuits is more reliable representation and incrementing of position, so the key comparison is with a binary counter itself, rather than the use of a binary counter going up and down in a random walk. The new methods are evaluated just with \(n\) positive steps, without subtracting the results of a walk in the other direction. The use of cascading disjunctions is included in the ideal 'random jump' circuit, though not in the noisy simulation, because the support for mid-circuit reset on the ancilla qubit is not guaranteed today.

The simulations used 6-qubit counting registers, so can represent numbers from 0 to 63. Ideal simulations can only be performed for small numbers of qubits (as a rule-of-thumb, as we pass 30 qubits we start to break the limits of classical simulation). In addition, the binary counter and arc counter circuits were run on a quantum computer with 11 trapped-ion qubits, described by Wright et al. (2019). These results are also shown in Table 2, with the label QPU (quantum processing unit). Note that 11 qubits is relatively small by today's standards: the number of reliable qubits in state-of-the-art machines in 2023 is at least in the 20s (IonQ Aria, 2022). The older machine was deliberately chosen for this experiment, because it highlights the frailty of exact binary counting circuits compared with approximate counting circuits.

Each run of a quantum circuit, including measuring the output into classical bits, is called a 'shot'. The random jump results were computed using 30 shots on each of 30 randomly-generated circuits, so these results include both classical and quantum randomness. The binary and arc circuits are generated deterministically, so these results include 1000 shots for a single circuit, and all the randomness is quantum. Key findings include:

* The binary counter results are perfect with ideal simulation, but are rendered useless in the noisy simulation. They quickly converge to a random number around 32, which is the average value of a uniformly random 6-qubit register.
* For all the other circuits, the difference between ideal and noisy results is much smaller.
* The average results for the arc counter and arc walk circuits are the most reliable for simulating a monotonically-increasing position, with or without noise.
* The random jump results also tend to increase, but tend to plateau and then move up and down randomly. (This randomness is smaller with a larger register.)
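The noisy-simulation numbers in Tables 1 and 2 come from the authors' own noise model and hardware; purely to illustrate the workflow, a generic depolarizing noise model can be attached to an Aer simulation of a 6-qubit arc counter along the following lines. The error rates here are arbitrary placeholders, not the parameters behind the tables.

```python
# Illustration only: attaching a generic depolarizing noise model to an Aer
# simulation of the arc-counter pattern (rx by pi/2**j on each counter qubit,
# once per step). Error rates are placeholders, not those used for Tables 1-2.
from math import pi
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.001, 1), ["rx", "x", "h"])
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.02, 2), ["cx"])

ideal = AerSimulator()
noisy = AerSimulator(noise_model=noise_model)

def average(counts, shots):
    return sum(int(key, 2) * n for key, n in counts.items()) / shots

num_qubits, steps, shots = 6, 10, 1000
qc = QuantumCircuit(num_qubits, num_qubits)
for _ in range(steps):
    for j in range(num_qubits):
        qc.rx(pi / 2**j, j)
qc.measure(range(num_qubits), range(num_qubits))

for label, backend in [("ideal", ideal), ("noisy", noisy)]:
    counts = backend.run(qc, shots=shots).result().get_counts()
    print(label, round(average(counts, shots), 2))
```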
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Steps} & Binary & Arc & \multirow{2}{*}{Arc walk} & \multirow{2}{*}{Random jump} & Random \\ & counter & counter & & jump & cascading \\ \hline 0 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ 1 & 1.000 & 1.108 & 0.491 & 4.386 & 1.133 \\ 2 & 2.000 & 3.276 & 1.522 & 3.383 & 5.281 \\ 3 & 3.000 & 4.742 & 2.079 & 8.488 & 5.783 \\ 4 & 4.000 & 6.764 & 2.826 & 11.450 & 6.341 \\ 5 & 5.000 & 8.233 & 3.833 & 9.317 & 13.539 \\ 6 & 6.000 & 10.564 & 4.860 & 12.468 & 15.510 \\ 7 & 7.000 & 10.955 & 6.476 & 13.964 & 16.366 \\ 8 & 8.000 & 12.664 & 6.458 & 9.804 & 21.479 \\ 9 & 9.000 & 15.097 & 6.958 & 14.261 & 16.837 \\ 10 & 10.000 & 17.836 & 7.810 & 12.261 & 21.919 \\ \hline \hline \end{tabular} \end{table} Table 1: Average distances traveled after \(n\) steps, ideal simulation \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Steps} & Binary & Binary & Arc & \multirow{2}{*}{Arc counter} & \multirow{2}{*}{Arc walk} & Random \\ & counter & counter & & (QPU) & & jump \\ \hline 0 & 0.000 & 0.00 & 0.000 & 0.002 & 0.000 & 0.000 \\ 1 & 15.606 & 21.946 & 1.312 & 1.94 & 2.562 & 1.742 \\ 2 & 25.123 & 32.771 & 3.461 & 4.412 & 3.859 & 2.618 \\ 3 & 30.504 & 32.802 & 5.488 & 5.782 & 6.223 & 5.919 \\ 4 & 29.921 & 33.603 & 6.824 & 6.236 & 8.721 & 9.279 \\ 5 & 31.046 & 32.212 & 8.931 & 10.208 & 10.773 & 10.250 \\ 6 & 31.732 & 33.317 & 10.442 & 12.984 & 12.627 & 10.981 \\ 7 & 30.837 & 32.978 & 11.183 & 12.389 & 14.978 & 12.401 \\ 8 & 31.894 & 33.775 & 13.148 & 13.682 & 15.898 & 14.568 \\ 9 & 30.99 & 32.989 & 14.614 & 15.504 & 17.300 & 12.684 \\ 10 & 31.912 & 34.101 & 18.459 & 19.666 & 20.907 & 17.216 \\ \hline \hline \end{tabular} \end{table} Table 2: Average distances traveled after \(n\) steps, noisy simulation and QPU The QPU results for binary and arc counting are compared graphically in Figure 16. This shows how quickly the binary counter becomes useless on a real QPU, whereas by contrast, the arc counter QPU results stay close to the ideal simulated results. ### Quantum Walk Distributions and Real Financial Data The notion that market returns follow a normal (or log-normal) distribution is standard and established in quantitative finance, even though it has been known for decades that heavy-tailed distributions are sometimes a better fit (Mandelbrot, 1963; Zi-Yi, 2017). In the traditional random walk model, daily changes in asset price are also assumed to follow a normal distribution, or even more simply, to take a constant step in either direction, and the accumulation of many such steps in a binomial distribution eventually approximates the normal distribution. In practice, daily relative changes in stock prices also tend to have a distribution where most values cluster around zero, but significant outliers cause a normal distribution fitted with the same mean and standard deviation to underestimate the density in the middle if the distribution. This is shown for the Dow Jones Industrial Average in Figure 17 (using data from the Yahoo! Finance API). An initial comparison shows that distributions of daily changes generated by the arc walk and random jump circuits also follow this heavy-tailed pattern, which is not modeled well by normal approximations. This does not show that the quantum approximate counting circuits give a better prediction of stock price changes on a specific day. 
But it does show that the distribution of possible changes can be better-adapted to real-world financial data, without the artificial constraint that daily changes should be normally or uniformly distributed.

Figure 16: Ideal and actual QPU results for binary counter and arc walk circuits. The QPU results are much closer to the ideal monotonically-increasing results for the arc counter, whereas they are useless after 2 steps for the binary counter.

It should be noted that the quantum results in Figure 17 are obtained with some parametrization and averaging, because all quantum job results are averaged over the number of shots, and the random jump circuits are averaged over a number of sample circuits as well. The impact of long-tail measurements depends on how large a sample we take. While this implies that there is some arbitrariness in results, it also means that parameters such as the number of qubits, circuits, and shots can be tuned to model particular datasets.

Figure 17: Distributions of relative changes in the Dow Jones Industrial Average, and in quantum approximate counting simulations

In related work, IonQ quantum computers have also been used to model the normal distribution itself, using a matrix product state technique that can readily be adapted to other distributions, because it relies on piecewise polynomial approximation (Iaconis et al., 2023). One of the longer-term promises of such work is that such distributions can be used as inputs for models such as the Monte Carlo simulations advocated by Egger et al. (2020). If we have a reliable circuit for preparing a particular distribution, then such a circuit could be used as input for Monte Carlo modeling by entangling its output with the simulated variables, rather than by sampling an individual number from the distribution and using this as a single 'classical' random input value.

## 7 Quantum Walks with Mid-Measurement: A Quantum Zeno Effect

A crucial difference between classical random walks and quantum walks is that quantum walks behave differently depending on when they are measured. There is no classical counterpart for this behavior, because a hallmark of classical systems is that their state is revealed but not changed by measurement. In theory, it is possible to prevent a quantum system from changing state at all, by measuring it at smaller and smaller intervals. As a simplest example, the rotation gate \(R_{X}\) from Figure 1 operating on a qubit in state \(\ket{0}\) produces the state

\[R_{X}(2\theta)\ket{0}=\cos(\theta)\ket{0}+i\sin(\theta)\ket{1}.\]

The probability of transitioning to the state \(\ket{1}\) is thus \(\sin^{2}(\theta)\), which tends to zero for small \(\theta\), and it is easy to see that if a larger angle is divided into smaller and smaller increments, the probability of observing a transition to the state \(\ket{1}\) in _any_ of these increments also tends to zero, because \(\lim_{n\to\infty}n\sin^{2}(\theta/n)=0\). This phenomenon is sometimes called the quantum Zeno effect, after Zeno's classical paradox of motion.

Of crucial interest for this paper, such effects have also been observed in psychology. Kvam et al. (2015) demonstrated that participants are likely to form less extreme judgments of moving scenes if asked to judge the motion in smaller time-frames, and Yearsley and Pothos (2016) demonstrated that participants evaluating evidence in a criminal trial are more likely to change their minds if several pieces of evidence are presented before asking for a decision.
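As a quick sanity check of this limit (a minimal sketch of the arithmetic, not from the paper): splitting the rotation \(R_{X}(2\theta)\) into \(n\) increments of \(R_{X}(2\theta/n)\), each followed by a measurement, the chance of ever leaving \(\ket{0}\) is \(1-\cos^{2n}(\theta/n)\), which is bounded above by \(n\sin^{2}(\theta/n)\) and shrinks as \(n\) grows.

```python
# Numerical check of the Zeno limit: splitting R_X(2*theta) into n measured
# increments of R_X(2*theta/n), the probability of ever observing |1> is
# 1 - cos(theta/n)**(2n), bounded above by n * sin(theta/n)**2.
import numpy as np

theta = np.pi / 2  # a single unsplit gate would flip |0> -> |1> with certainty
for n in (1, 2, 5, 10, 100, 1000):
    escape = 1 - np.cos(theta / n) ** (2 * n)
    bound = n * np.sin(theta / n) ** 2
    print(f"n={n:5d}   P(ever measure |1>) = {escape:.4f}   bound = {bound:.4f}")
```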
It is easy to add mid-circuit measurement to our quantum approximate counting circuits and to evaluate the results, at least in simulation. (The availability of mid-circuit measurement varies across quantum platforms currently, partly because the accuracy of the measurement and reset operations is hard to guarantee.) Example results are shown in Figure 18, simulating walks with 20 steps, with no mid-measurement, measurement every 7 steps, and measurement every step. The average positions reached by these walks were 35.6, 14.3, and 5.7 respectively, so as expected, the use of mid-measurement reduces the average distance traveled in the quantum walk. (It is not always this simple, particularly due to periodicity.) ### Mid-Measurement, Transactions, and the Housing Market In the quantum economics theory of Orrell (2020), the act of measurement is compared with fixing a transaction, and subjective opinions of value can vary like quantum states between transactions. By analogy with the quantum Zeno effect, we may expect that prices vary less if there is more frequent measurement, i.e. more frequent transactions. Evidence is presented by Orrell (2022); Orrell and Richards (2023) showing that price volatility is not constant, and that large variations correspond to uncertainty in value and wide ranges between different bid prices and asking prices. The range of prices offered to buy or sell an item is typically much more obvious in the housing market than the stock market, because each item for sale is priced and negotiated much more individually. A standard process involves the use of comparable sales or comps, in which transactions on properties nearby in time, space, and value are used to form a pricing estimate (Pagourtzi et al., 2003). If these nearby transactions correspond to measurements of the system, then the quantum Zeno effect would suggest that the outcome of this measurement is more certain if there are more nearby transactions. This effect was demonstrated in practice using the following modeling assumptions, and summary data published by Zillow. When a house sells for less than its original listing price, this indicates a difference between the seller's and the buyer's estimate of the house's value. Larger uncertainties in the market would support larger Figure 18: Arc counter circuit results, simulating the results of quantum walks with 20 steps, with and without mid-circuit measurement at the given positions. differences of opinion. Even when considering monthly averages of data, we would expect that a smaller number of sales in a given area would contribute to greater market uncertainty, and this should correlate with a greater difference between the list price and the sale price. By contrast, when a house sells for more than its original listing price, we assume that there are other factors involved: in particular, this situation is most common when there are other bids on the property from other potential buyers, so a minimum value is already established without the need for comparable transactions. Thus, we assume that the markets where lack of comparables is a primary factor in price uncertainty are those where the average sale price is less than the average list price. Data used to test these hypotheses was gathered from the Zillow Housing Data portal1. The datasets are summary statistics: counts and averages. These are only comparable within a given metro area: for example, 2000 sales in a month would be very low for New York, NY, and very high for Wichita, KS. 
Thus we compute correlations by comparing monthly statistics within each metro area.

Footnote 1: [https://www.zillow.com/research/data/](https://www.zillow.com/research/data/), accessed 2023-10-05.

The algorithmic steps are as follows (a minimal code sketch is given below):

* For each metro area:
  * For each month:
    * Collect the sales count and the average list-to-sale price ratio.
    * If the average list-to-sale price ratio is greater than 1, skip this month.
  * This gives a set of (count, ratio) pairs, e.g. [(822, 0.98), (785, 0.96), (803, 0.97)], etc.
  * Compute the Pearson correlation coefficient between these sales counts and list-to-sale price ratios.
* Gather the Pearson correlation coefficients into a histogram to see if there is a general trend.

The result is in Figure 19. Nearly all the correlations are strongly positive. This shows that, in cases where a house is sold for less than its asking price, there is a very strong correlation between the transaction volume and the closeness of the list and sale prices. This is in line with the trends expected from quantum economics models, in which various beliefs and opinions about value can evolve and diverge more when there are fewer transactions or measurements.

However, it is also easy to propose simple non-quantum models for this behavior. Fewer comparable samples should lead to greater sampling error and thus greater price uncertainty. One potential strategy for evaluating and distinguishing which approaches are better would be to consider the dynamics and evolution of prices in such models: for example, to see if aspects of the quantum Monte Carlo sampling reduction described by Egger et al. (2020) can be applied to the problem of making accurate price estimates with fewer comparable sales. Any quantum 'modeling advantage' on this problem would be especially compelling, because housing transactions are a naturally limited resource: we cannot simply train larger classical models for longer and assume they will give better results.

## 8 Conclusions and Future Work

This paper has introduced and explored quantum approximate counting circuits, as fault-tolerant alternatives to the traditional quantum walk design, particularly for the way position is tracked and incremented. The new designs presented here lack some of the mathematical elegance, and the theoretical results, that accompany the traditional quantum walk design; in particular, there are no longer unit increment and decrement operators that correspond to the ladder operators of a quantum oscillator. However, the enormous advantage of the simpler models presented here is that they behave much more accurately on NISQ-era quantum hardware, which could contribute to commercially advantageous applications of quantum computers in economics.

These are just prototype designs so far. The main next steps for this work are to evaluate the proposals more quantitatively, answering the following two questions:

1. How do results on NISQ-era quantum computers correspond to ideal or simulated results for small circuits, and what does this indicate about the expected behavior on quantum hardware for systems that are too big to simulate on classical hardware?
2. How do results compare with the distributions observed with real market behaviors?
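The code sketch referenced in the housing-market procedure of Section 7 might look as follows; the input file and column names are hypothetical stand-ins for the Zillow summary datasets.

```python
# A minimal sketch of the per-metro correlation computation described in
# Section 7. The file name and the columns ('metro', 'sales_count',
# 'sale_to_list_ratio') are hypothetical placeholders.
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("zillow_monthly_summary.csv")  # hypothetical pre-joined file

correlations = []
for metro, group in df.groupby("metro"):
    # Keep only months where the average sale price is below the list price.
    below_list = group[group["sale_to_list_ratio"] < 1]
    if len(below_list) < 3:
        continue  # too few points for a meaningful correlation
    r, _ = pearsonr(below_list["sales_count"], below_list["sale_to_list_ratio"])
    correlations.append(r)

# Histogram of correlation coefficients across metro areas (cf. Figure 19).
pd.Series(correlations).hist(bins=20)
plt.show()
```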
The ideal outcome of this research is that we would find quantum walk circuit designs that are robust enough to give better models of market behavior that include some of the benefits of quantum approaches noted by Orrell (2020), while being able to run on today's quantum hardware without waiting for error-correction. Given the crucial and explicit role that measurement plays in quantum models, it is possible that some of the earliest such quantum advantages will be apparent in markets where a small number of significant transactions can dramatically influence the price of a particular asset. An initial analysis suggests that the housing market may be an appropriate area to test this hypothesis.

Figure 19: Histogram showing correlations between larger numbers of transactions and smaller list-to-sale price differences. Data from Zillow Housing Data.

This work can be seen as part of a larger program to bring value in economic modeling on quantum computers. Other successes for quantum circuit designs include modeling and sampling from key distributions (Iaconis et al., 2023), and demonstrating particularly effective time-series models using copula functions implemented using entanglement (Zhu et al., 2022). Related work in cognitive science has demonstrated that simple quantum circuits can also be used to model decision-making processes (Widdows and Rani, 2022). In the next few years, it is likely that several such small components, being developed today, will be used as key building blocks in the first profitable applications of quantum computing in economics.

## 9 Acknowledgements

The author would like to thank Amit Bhattacharyya, Emmanuel Pothos, and David Orrell for interesting conversations and encouragement.

## 10 Funding

This work was funded by IonQ, Inc.
2304.02511
Towards the optimal beam dump experiment to search for feebly interacting particles
Future searches for new physics beyond the Standard Model are without doubt in need of a diverse approach and experiments with complementary sensitivities to different classes of models. One of the directions that should be explored is feebly interacting particles (FIPs) with masses below the electroweak scale. The interest in FIPs has significantly increased in the last ten years. Searches for FIPs at colliders have intrinsic limitations in the region they may probe, significantly restricting exploration of the mass range $m_{\text{FIP}} < 5-10$\,GeV/c$^2$. Beam dump-like experiments, characterized by the possibility of extremely high luminosity at relatively high energies and the effective coverage of the production and decay acceptance, are the perfect option to generically explore the ``coupling frontier'' of the light FIPs. Several proposals for beam-dump detectors are currently being considered by CERN for implementation at the SPS ECN3 beam facility. In this paper we analyse in depth how the characteristic geometric parameters of a beam dump experiment influence the signal yield. We apply an inclusive approach by considering the phenomenology of different types of FIPs. From the various production modes and kinematics, we demonstrate that the optimal layout that maximises the production and decay acceptance consists of a detector located on the beam-axis, at the shortest possible distance from the target defined by the systems required to suppress the beam-induced backgrounds.
Kyrylo Bondarenko, Alexey Boyarsky, Oleksii Mikulenko, Richard Jacobsson, Maksym Ovchynnikov
2023-04-05T15:36:44Z
http://arxiv.org/abs/2304.02511v2
# Towards the optimal beam dump experiment to search for feebly interacting particles ###### Abstract Future searches for new physics beyond the Standard Model are without doubt in need of a diverse approach and experiments with complementary sensitivities to different types of classes of models. One of the directions that should be explored is feebly interacting particles (FIPs) with masses below the electroweak scale. The interest in FIPs has significantly increased in the last ten years. Searches for FIPs at colliders have intrinsic limitations in the region they may probe, significantly restricting exploration of the mass range \(m_{\rm FIP}<5-10\,\mathrm{GeV/c^{2}}\). Beam dump-like experiments, characterized by the possibility of extremely high luminosity at relatively high energies and the effective coverage of the production and decay acceptance, are the perfect option to generically explore the "coupling frontier" of the light FIPs. Several proposals for beam-dump detectors are currently being considered by CERN for implementation at the SPS ECN3 beam facility. In this we paper we analyse in depth how the characteristic geometric parameters of a beam dump experiment influence the signal yield. We apply an inclusive approach by considering the phenomenology of different types of FIPs. From the various production modes and kinematics, we demonstrate that the optimal layout that maximises the production and decay acceptance consists of a detector located on the beam-axis, at the shortest possible distance from the target defined by the systems required to suppress the beam-induced backgrounds. ## 1 Introduction Despite the success of the Standard Model (SM) of particle physics, evidence for the existence of new physics beyond the Standard Model is already well established because the origin of neutrino oscillations, dark matter, and the baryon asymmetry of the Universe is not known. However, we have no solid predictions of where to search for it. New particles capable of resolving these problems can have masses from sub-eV to Planck scale and coupling constants with SM particles ranging many orders of magnitude. At this crossroad point of particle physics, it is essential to use efficiently available or planned experimental facilities to push forward different frontiers of physics, probing whole classes of models simultaneously. If the mass of a new particle is below the EW scale, it may be produced at accelerators not only as a resonance but also in decays of SM particles, such as heavy bosons \(W,Z,h\), as well as mesons \(\pi,D,B\). This makes this range of masses of new particles especially interesting from an experimental point of view. In this mass range, new particles would have escaped detection, not because of the limit on available accelerator energy, but because their creation is extremely rare. Numerous searches at past experiments as well as at the LHC constrain large values of coupling constants, which is why new particles of this type are often called feebly-interacting particles, or FIPs (see e.g. [1; 2]). FIPs can play a direct role in the beyond SM phenomena, like e.g. heavy neutral leptons or HNLs in the sub-EW mass range explain neutrino masses via sea-saw mechanism and matter-antimatter asymmetry via their out-of-equilibrium kinetics in the early Universe at a temperature above 100 GeV. They can also be a "portal" that connects the SM sector with a Dark Sector, i.e. 
the case in which the Dark Sector particles only interact with ordinary matter via the FIP mediator (see e.g. [3]). FIPs, with their tiny coupling constants, form a "coupling frontier" of particle physics, and are part of a whole class of SM extensions. If there is no particular reason (e.g. symmetry) for making these new particles stable, their lifetime scales with the coupling constant \(g\) and mass \(m_{\rm FIP}\) as \(\tau_{\rm FIP}\propto g^{-2}m_{\rm FIP}^{-\alpha}\) where \(\alpha=1-5\)[3]. Depending on the lifetime, different search strategies should be used to probe the FIP parameter space efficiently. In particular, particles with lifetimes \(c\tau_{\rm FIP}\gtrsim 1\) mm can be searched for via displaced-vertices schemes at the LHC and future colliders [4; 5; 6; 7]. The limitation on \(c\tau_{\rm FIP}\) from below comes from numerous backgrounds caused by events with SM particles, which occur at small displacements. The limitation from above is FIP-dependent. It is caused by two factors. First, the collider detectors have short decay volumes of the order of \({\cal O}(1\) m), typically defined by the dimensions of their inner trackers [4]. As a result, long-lived FIPs mainly decay well beyond the fiducial volume. Second, to reduce SM backgrounds, one must impose severe selection criteria on candidate events with FIPs, such as kinematic properties and specific final states, resulting in low signal efficiency. For instance, a typical selection efficiency of recent searches for HNLs at CMS [4] was of the order of 1%. Therefore, even if a long-lived FIP decays inside the tracker, the event will likely be outside the selection acceptance. Typically, the FIP lifetime and its production rate are controlled by the same coupling, i.e. a significantly long FIP lifetime also means a small production rate.1 Together with typically low signal efficiencies coming from the trigger and the event-versus-background selection, it is possible that the FIP production rate within the acceptance of the searches is insufficient to provide any sensitivity. This is the case of dark scalars with the coupling through mixing, dark photons, and axion-like particles [1]. Even for FIPs where the production is in principle sufficient (e.g., heavy neutral leptons [5]), future collider searches have a limited potential to probe the pa rameter space of GeV-scale FIPs. This is explained by the behavior of the FIP decay length \(c\tau_{\rm FIP}\gamma_{\rm FIP}\propto g^{-2}m_{\rm FIP}^{-\alpha-1}\) - for a given coupling \(g\), the lifetime rapidly increases with decreasing \(m_{\rm FIP}\), in other words, quickly reducing the FIP decay probability within the fiducial volume and hence the sensitivity. We illustrate these points in Fig. 1, where we show the parameter space of heavy neutral leptons (HNLs) and dark scalars mixing with Higgs bosons. Future development of new trackers [8; 9] at ATLAS, CMS, and LHCb may improve the LHC reach for light FIPs, but the remaining explorable region in the parameter space will still be large (see Sec. 6). Altogether, this suggests that we need a special experiment to search for FIPs in the GeV range. In this paper, we argue that the most suitable experimental setup to search for FIPs with mass below \(5\,\mathrm{GeV}\) is a beam dump experiment, where an extracted proton beam hits a dense target, and where the search is performed in a displaced decay volume. 
Although having much lower centre-of-mass energy of collisions than at colliders, beam dumps can deliver extremely high luminosity by operating with Figure 1: The potential of future collider searches to probe the parameter space of feebly-interacting particles in the plane FIP mass - FIP coupling to SM particles. The figures demonstrate that colliders cannot efficiently explore the parameter space of FIPs with mass of the order of GeV. **Left panel:** parameter space of heavy neutral leptons (HNLs) that mix with electron neutrinos. The lower bound (seesaw) is defined by their ability to generate masses for active neutrinos [10]. LHC in high luminosity phase [5; 7] and lepton colliders [6] are mainly sensitive to short-lived HNLs, with the typical lifetimes \(c\tau_{N}\lesssim\mathcal{O}(100~{}\mathrm{m})\), see text for details. The scaling of the HNL lifetime with the mass is \(\tau_{N}\propto m_{N}^{-5}U_{e}^{-2}\). As a result, colliders have poor sensitivity to HNL masses \(m_{N}\lesssim 10\:\mathrm{GeV}\). **Right panel**: dark scalars mixing with Higgs bosons. Given the strict event selection to cope with the backgrounds and the available triggers, the event rate with displaced vertices at colliders is insufficient to provide a competitive sensitivity. Instead, scalars may be searched for with prompt events at LHCb [11], even though only relatively large couplings are within reach. The parameter space of these examples and other feebly-interacting particles in the GeV range can instead be more efficiently explored in the coming years with beam dump experiments. a more intensive proton beam combined with a high-A/Z target. This means in particular that they are capable of delivering a large number of mesons within a relatively small forward solid angle, in particular \(B,D\), that may further decay into FIPs in the mass range of interest. There is no limitation on the decay volume length. It can easily be several tens of metres to cover a much larger lifetime acceptance than at collider detectors. Finally, backgrounds can be significantly reduced by placing the decay volume behind a chain of components designed to suppress beam-induced particle backgrounds, such as a hadron absorber and a muon deflector, considerably reducing the need to impose strict signal selection criteria. The best placement of such an experiment is the SPS accelerator at CERN, operating with a high-intensity proton beam of energy \(E_{p}=400\) GeV. Several proposals of beam dump experiments at the SPS have been made [12; 13; 14]. They differ in geometric parameters, including the placement with respect to the beam axis, the decay volume size, and the detector angular coverage, both in terms of production acceptance and decay acceptance. The experiments also differ in the choice of material for the proton target. Maximum production of FIPs, and simultaneously maximum suppression of background from pion and kaon decays to muons and neutrinos, are achieved with a target of the highest possible atomic mass and atomic number, as well as minimized internal cooling for density. However, to discuss the optimal experimental layout, we assume in this paper the same target material for all experiments. In this paper, we perform an analysis of the sensitivity to FIPs with respect to the geometric parameters of a beam dump experiment, in a maximally model-independent way, in order to find the optimal configuration. We start with an on-axis experiment specified in Table 1. Its close analog is the SHiP experiment [12; 15]. 
We then study how the FIP sensitivity is affected by changing the parameters from the table. To be model-independent, we consider a few FIP models covering a wide class of production mechanisms and decay modes. Our main results are summarized in Figures 5, 6, 7, 8. We demonstrate that the setup in Table 1 is optimal for searching for FIPs independently of their lifetime, \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(l_{\text{min}}\) & \(S_{\text{det}}\) & \(l_{\text{fid}}\) & \(l_{\text{det}}\) & \(r_{\text{displ}}\) \\ \hline 38 m & \(4\times 6\) m\({}^{2}\) & 50 m & 15 m & 0 m \\ \hline \end{tabular} \end{table} Table 1: Parameters of the hypothetical experiment used as a reference experiment in our estimates: the longitudinal distance from the target to the beginning of the decay volume; the transverse dimensions of the decay volume and the detector, the longitudinal length of the decay volume; the longitudinal length of the detector; the distance from the centre of the detector in the transverse plane to the beamline. Here and below, we assume that the decay volume is oriented parallel to the beamline, which is motivated by the typical constraints from available space and infrastructure. being also compatible with the absence of backgrounds. One of the main reasons for providing the largest signal acceptance is the on-axis placement. The off-axis location leads to a substantial loss of acceptance, and significantly worsens the ability to reconstruct properties of FIPs such as mass, spin, and decay modes, see Figure 8. The paper is organized as follows. In Sec. 2, we start from the expression for the number of events and discuss the FIP phenomenology (subsection 3), describing, in particular, their production and decay modes. In Sec. 4, we analyze how the number of events at the lower bound of the sensitivity varies with the experimental configuration, considering separately its placements on-axis (Sec. 4.1) and off-axis (Sec. 4.2) relatively to the beamline. In Sec. 4.3, we study the impact of the configuration on the potential to probe FIPs at the upper bound of the sensitivity. In Sec. 5, we apply our findings to compare the sensitivity of the experiments proposed at SPS, SHiP, SHADOWS, and HIKE. Finally, in Sec. 6, we make conclusions. Appendices contain all the relevant technical information on the phenomenology of FIPs and the calculations. ## 2 Number of signal events Let us start with the expression for the number of events in the regime of large lifetimes, where the typical decay length of FIPs, \(c\tau_{\rm FIP}\langle\gamma_{\rm FIP}\rangle\), is much larger than the characteristic scale of the experiments \(\simeq 100\;\mathrm{m}\) (the opposite cases of short lifetimes \(c\tau_{\rm FIP}\langle\gamma_{\rm FIP}\rangle\lesssim l_{\rm min}\) is discussed in Sec. 4.3). 
In this case, the number of events may be represented in the following schematic form (see Appendix A):

\[N_{\rm events}\approx N_{X,\rm prod}\times\frac{l_{\rm fid}\,m_{\rm FIP}\langle p_{\rm FIP}^{-1}\rangle}{c\tau_{\rm FIP}}\times\epsilon_{\rm geom}\times\epsilon_{\rm rec}\times{\rm Br}_{\rm vis} \tag{1}\]

Here, \(N_{X,\rm prod}=N_{\rm PoT}\times{\rm Br}(pp\to X)\) is the total number of the FIPs produced in the collisions, with \(N_{\rm PoT}\) being the number of protons on target, and \({\rm Br}(pp\to X)\) the probability of producing FIPs by any mechanism; the second factor is the decay probability in the regime of large lifetimes, with \(\langle p_{\rm FIP}^{-1}\rangle\) being the mean inverse momentum of the FIPs at the experiment; \(\epsilon_{\rm geom}\) is the overall geometric acceptance, following from the geometric limitations of the decay volume and detectors (discussed in detail below); \({\rm Br}_{\rm vis}\) is the branching ratio of the decay of FIPs into states visible at the given experiment; finally, \(\epsilon_{\rm rec}\) is the total reconstruction efficiency, i.e., the fraction of events within the geometric acceptance that may be reconstructed. Independently of the configuration, we will assume \(\epsilon_{\rm rec}=1\), and that \({\rm Br}_{\rm vis}\) includes all decays with at least two electrically charged particles or two photons. Considering two experiments located at the same facility (such that the beam energy and target configuration are the same), for the ratio of the numbers of events (1) one has

\[\frac{N_{\rm events,1}}{N_{\rm events,2}}\approx\frac{l_{\rm fid,1}}{l_{\rm fid,2}}\times\frac{\epsilon_{\rm geom,1}}{\epsilon_{\rm geom,2}}\times\frac{\langle E_{\rm FIP}^{-1}\rangle_{1}}{\langle E_{\rm FIP}^{-1}\rangle_{2}} \tag{2}\]

Therefore, to compare the lower bounds of sensitivity of two experiments, we need to understand the behavior of \(\epsilon_{\rm geom}\) and \(\langle E_{\rm FIP}^{-1}\rangle\).

### Geometric acceptance

Schematically, the geometric acceptance is given by

\[\epsilon_{\rm geom}\simeq\epsilon_{\rm FIP}\times l_{\rm fid}^{\rm eff}/l_{\rm fid}\times\epsilon_{\rm decay}, \tag{3}\]

see Fig. 2. The first factor is the FIP acceptance, i.e., the fraction of FIPs with trajectories pointing to the cross-section of the end of the detector.2 \(l_{\rm fid}^{\rm eff}\) is the effective fiducial volume length, i.e., the mean length inside the decay volume passed by FIPs pointing to the end of the detector. If the considered experiment is located on-axis, then \(l_{\rm fid}^{\rm eff}\approx l_{\rm fid}\). However, in the case of an off-axis placement parallel to the beamline, \(l_{\rm fid}^{\rm eff}\) gets effectively reduced. Finally, the last factor is the decay product acceptance, i.e., the fraction of decays of FIPs within \(\epsilon_{\rm FIP}\) with at least two of their decay products pointing to the end of the detector. Roughly, the decay products of FIPs with the gamma factor \(\gamma_{\rm FIP}\) have the opening angle3

Footnote 2: Naively, the mentioned definition of the FIP acceptance looks too restrictive, as it does not take into account the FIPs decaying inside the decay volume but not pointing to the detector. However, because of the 4-momentum conservation, FIPs that do not point to the detector typically cannot decay into particles pointing to the detector. Therefore, instead of considering FIPs decaying in any direction inside the decay volume, we considered only the FIPs that already point to the detector.
Footnote 3: For simplicity, we considered here 2-body decays; however, for 3-body decays, the situation is qualitatively similar. \[\Delta\theta_{\rm decay}\sim 2/\gamma_{\rm FIP} \tag{4}\] Figure 2: Illustration of the impact of different contributions to the geometric acceptance defined by Eq. (3). First, the FIPs produced by collisions of the proton beam with the fixed target must point to the detector (the red arrow). The fraction of such events is given by \(\epsilon_{\rm FIP}\). The effective length inside the decay volume passed by decaying FIPs (the dashed blue line) may differ significantly from the nominal decay volume length \(l_{\rm fid}\). This results in the factor \(l_{\rm fid,eff}/l_{\rm fid}\). Finally, the decay products of FIPs (the green arrows) also have to point to the detector, which is incorporated by \(\epsilon_{\rm decay}\). If this angle becomes comparable with the angle covered by the detector as seen from the FIP's decay point, then \(\epsilon_{\rm decay}\) would significantly reduce the event rate. ## 3 Kinematic distributions of FIPs To understand the behavior of \(\epsilon_{\rm FIP}\) and \(\epsilon_{\rm decay}\), we need to study in a model-independent way how FIPs may be produced in proton-target collisions and how they decay. To this extent, we consider different types of FIPs: dark photons, dark scalars, HNLs, and ALPs with photon coupling [1]. By considering all of them, we can perform the analysis in a maximally model-independent fashion. The dominant production mechanisms and decay modes are shown in Fig. 3 and in Table 2. The production channels are decays of mesons (light unflavoured mesons \(\pi,\eta,\eta^{\prime},\rho^{0}\), as well as heavy flavored mesons \(B,D\)), or the direct production through proton bremsstrahlung, Drell-Yan process, coherent proton-nucleus and photon-nucleus scattering. We generate the distribution of the light mesons at the SPS using the approach of [16] (see also [17]), and use the distribution of \(B,D\) mesons from [18]. We follow the description of the bremsstrahlung process from [19] and the coherent scattering from [16]. For the details on the derivation of the distributions of FIPs from these production channels, see Appendix A. The solid angle distributions \(df_{\rm FIP}/d\Omega_{\rm FIP}\sim df_{\rm FIP}/d\cos(\theta_{\rm FIP})\) of the FIPs produced by these mechanisms, as well as their energy distributions are shown in Fig. 4. Because of the kinematics of the collisions with a fixed target, the bulk of the distributions of FIPs are contained within a relatively small forward solid angle around the beam axis, being flat up to and quickly dropping at large angles \(\theta>\theta_{\rm flat}\). The direct production processes are characterized by very small typical transverse momentum compared to the momentum of the incoming proton. In the case of Drell-Yan and proton bremsstrahlung, it is of the order of \(m_{p}\sim 1\:{\rm GeV}\)[21]. Given very large typical energies of the FIPs produced by these mechanisms, \(E_{\rm FIP}\sim 100-200\) GeV, the angle is \(\theta_{\rm flat}\simeq 10\) mrad. For the Primakov production off nuclei and nucleons, Figure 3: Examples of production processes for various FIPs: (a) proton bremsstrahlung (dark photon \(V\)), (b) coherent scattering off nuclei (ALP with the photon coupling \(a\)), (c) decays of \(B\) mesons (HNLs \(N\), dark scalars). 
\(\theta_{\rm flat}\) is determined by typical transverse momenta carried by virtual photons, being \(\theta_{\rm flat}\simeq m_{p}/E_{\rm beam}=2.5\cdot 10^{-3}\) rad [22]. Light mesons produced in collisions have characteristic \(p_{T}\) of the order of \(\Lambda_{\rm QCD}\), and relatively small mean energy of order of \(\langle E\rangle\simeq 20-30\) GeV, which leads to \(\theta_{\rm flat}\) being of the order of a few tenths of mrad [16]. Heavy mesons \(D,B\) have large \(p_{T}\) of the order of their mass, but, at the same time, much larger characteristic energies. As a result, their distributions start dropping at even smaller angles \(\theta_{\rm flat}\lesssim 10\) mrad. For lighter masses, the FIPs produced by decays of \(B,D\) may have broader angular distribution. The reason is an additional transverse momentum of the order of the energy of the FIP at the rest frame of the decaying meson, which may be as large as the meson mass for FIPs with \(m_{\rm FIP}\ll m_{\rm meson}\). The decay modes of FIPs may differ by the number of the decay products and their phase space if the number of products is fixed, which comes from the ratio of the FIP-to-decay product masses and the matrix element of the decay. The main decay modes of the FIPs are provided in Table 2. ## 4 Effect on the number of signal events from varying parameters of the experiment In this section, we study the impact of varying the geometric parameters of the experiment on the number of events. To illustrate the effect of the decay products \begin{table} \begin{tabular}{|c|c|c|} \hline FIP & Prod. modes & Decay modes \\ \hline \multirow{2}{*}{**DP**\(V\)} & \multirow{2}{*}{\(\begin{cases}\pi^{0}/\eta\to V,\ m_{V}<m_{\eta}\\ {\rm Brem/DIS},\ m_{V}>m_{\eta}\end{cases}\)} & \multirow{2}{*}{\(\begin{cases}V\to ll\\ V\to 2\pi,3\pi,KK,m_{V}\lesssim 1\ {\rm GeV}\\ V\to qq,m_{V}\gtrsim 1\ {\rm GeV}\end{cases}\)} \\ \hline \multirow{2}{*}{**ALP\({}_{\gamma}\)**\(a\)} & \(\begin{array}{c}\gamma+Z\to a+Z\\ p+Z\to p+Z+a\end{array}\) & \multirow{2}{*}{\(\begin{cases}a\to\gamma\gamma\\ \end{cases}\)} \\ \hline \multirow{2}{*}{**Scalar**\(S\)} & \multirow{2}{*}{\(\begin{cases}K\to S+\pi,\ m_{S}<m_{K}-m_{\pi}\\ B\to S+X,\ m_{S}>m_{K}+m_{\pi}\end{cases}\)} & \multirow{2}{*}{\(\begin{cases}S\to ll\\ S<\pi\pi/KK,m_{S}<2\ {\rm GeV}\\ S\to qq,m_{S}>2\ {\rm GeV}\end{cases}\)} \\ \hline \multirow{2}{*}{**HNL**\(N\)} & \multirow{2}{*}{\(\begin{cases}K\to N+X,\ m_{N}<m_{K},\\ D\to N+X,m_{N}<m_{D_{s}},\\ B\to N+X,\ m_{N}>m_{D_{s}}\end{cases}\)} & \multirow{2}{*}{\(\begin{cases}N\to ll\nu\\ N\to\pi^{0}\nu,\eta\nu,\pi l,m_{N}\lesssim 1\ {\rm GeV}\\ N\to qq\nu,qql,m_{N}\gtrsim 1\ {\rm GeV}\end{cases}\)} \\ \hline \end{tabular} \end{table} Table 2: Dominant production and decay modes of GeV-scale FIPs in proton-proton collisions at the SPS. We consider the dark photon, the ALP with the photon coupling, the Higgs-like scalar with mixing coupling, and HNLs with the dominant coupling to electron neutrinos. For more details, see [20]. acceptance and the FIP angular distribution, we consider the following FIPs and properties: * HNLs with masses \(0.5\text{ GeV}\), \(1.5\text{ GeV}\) produced by decays of \(D\) mesons, and \(4\text{ GeV}\) produced by decays of \(B\). * Dark scalars with masses \(0.5\text{ GeV}\) and \(4\text{ GeV}\) produced by decays of \(B\). * Dark photons with masses \(m_{V}=300\text{ MeV}\) produced by decays of \(\eta\) and \(m_{V}=1\text{ GeV}\) by proton bremsstrahlung. 
* ALPs with photon coupling with masses \(m_{a}=300\text{ MeV}\) and \(1\text{ GeV}\). Figure 4: Kinematics of FIPs produced in proton-target collisions at the SPS. Molybdenum target is considered. **Top panels**: solid angle distributions \(df_{\text{FIP}}/d\Omega_{\text{FIP}}\sim df_{\text{FIP}}/d\cos(\theta_{\text{ FIP}})\) of various FIPs. Different masses are considered, corresponding to different production channels (Table 2). Note that the distribution of heavy HNLs with \(m_{N}>3\text{ GeV}\) is very similar to the distribution of scalars, because of the same mother particle and decay kinematics. The polar angle coverage of the detector of the reference setup 1 is indicated with arrows and the vertical dashed line. **Bottom panels**: energy spectra of the mesons producing FIPs, dark photons produced by the proton bremsstrahlung, and ALPs with photon coupling. For the case of heavy mesons \(B,D\), the distribution is shown assuming two different angular coverage: “on-axis” \(\theta<0.05\text{ rad}\), and “off-axis”, \(\theta>0.05\text{ rad}\), to demonstrate how the spectrum gets softer off-axis. See text and Ref. [20] for details. ### On-axis location Let us first assume the on-axis placement of the experiment. We analyze how the number of long-lived FIPs is affected by changing the distance to the decay volume \(l_{\rm min}\), and its length \(l_{\rm fid}\) (the case of short-lived FIPs with \(c\tau_{\rm FIP}\gamma_{\rm FIP}\lesssim l_{\rm min}\) is discussed in Sec. 4.3). The dependence of the number of events on \(l_{\rm min}\) is shown in Fig. 5 (top panels). We normalize all the values to the corresponding values of the SHiP-like experiment from Table 1. The main impact of \(l_{\rm min}\) is in defining the solid angle covered by the detector as seen from the target, \[\Omega_{\rm det\text{-target}}=S_{\rm det}/(l_{\rm min}+l_{\rm fid}+l_{\rm det })^{2}, \tag{10}\] Figure 5: The behavior of the number of signal events of a beam dump on-axis experiment at the SPS at the lower bound of the sensitivity (\(c\tau_{\rm FIP}\langle\gamma_{\rm FIP}\rangle\gg 100\) m, see Sec. 2 for details) under change of the distance to the decay volume \(l_{\rm min}\) (**top panels**) and its length \(l_{\rm fid}\) (**bottom panels**) for different models of FIPs. On the one hand, changing these parameters may have a significant impact for the backgrounds to be removed, the complexity of the setup, and costs. On the other hand, the maximal impact of these parameters on the number of events is small, \(<\mathcal{O}(2)\), see text for details. Therefore, we conclude that the optimization of these parameters should be a subject of background considerations and costs rather than the maximization of the number of FIP events. The other parameters defining the experimental setup – the transverse size of the decay volume and the detector dimensions – are summarized in Table 1. For convenience, we normalize the number of events to the one for the configuration from Table 1. and hence the fraction of FIPs \(\epsilon_{\rm FIP}\) pointing to the detector. If the FIP's angular distribution \(df_{\rm FIP}/d\Omega\) (Fig. 4) is flat within the angles covered by the detector (the case of light HNLs, dark scalars, and ALPs), then \(\epsilon_{\rm FIP}\propto\Omega_{\rm det\text{-target}}\). From Eq. (21), it follows that to increase \(N_{\rm events}\) for the SHiP-like configuration by a factor of two, it is necessary to decrease \(l_{\rm min}\) from \(l_{\rm min}=38\text{ m}\) to \(l_{\rm min}\approx 8\text{ m}\). 
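As a quick numerical cross-check of this statement, the following sketch uses only the solid-angle estimate above and the Table 1 parameters; it is a back-of-the-envelope illustration assuming \(\epsilon_{\rm FIP}\propto\Omega_{\rm det\text{-}target}\) for a flat angular distribution, not the full simulation behind Fig. 5.

```python
# Quick check of the claim that doubling the solid angle requires l_min ~ 8 m,
# using Omega = S_det / (l_min + l_fid + l_det)^2 with the Table 1 parameters
# (S_det = 4 x 6 m^2, l_fid = 50 m, l_det = 15 m). Back-of-the-envelope only.
import numpy as np

S_det, l_fid, l_det = 4.0 * 6.0, 50.0, 15.0

def omega(l_min):
    return S_det / (l_min + l_fid + l_det) ** 2

# l_min that doubles the solid angle relative to the reference l_min = 38 m:
l_min_doubled = (38.0 + l_fid + l_det) / np.sqrt(2) - (l_fid + l_det)
print(round(l_min_doubled, 1))                       # ~7.8 m, i.e. l_min ~ 8 m
print(round(omega(l_min_doubled) / omega(38.0), 2))  # ~2.0
```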
At the same time, if \(df_{\rm FIP}/d\Omega\) is collimated and falls at the boundaries (dark photon, ALPs, and heavy HNLs/dark scalars), the effect on \(\epsilon_{\rm FIP}\) is even smaller. On the other hand, the price of such a close placement would be a significant increase in the SM background. The effect of \(l_{\rm fid}\) is less trivial, see the bottom panels of Figure 5. It affects the product \(\epsilon_{\rm FIP}\times l_{\rm fid}\times\epsilon_{\rm decay}\). The impact on \(\epsilon_{\rm FIP}\) is similar to the \(l_{\rm min}\) case (Eq. (21)). The second factor comes from the decay probability of long-lived FIPs. In addition, by increasing \(l_{\rm fid}\) and maintaining the aperture of the detector constant, \(\epsilon_{\rm decay}\) decreases, and vice versa. Indeed, \(l_{\rm fid}\) enters the solid angle covered by the detector as seen from the beginning of the decay volume, \[\Omega_{\rm det\text{-fid}}=S_{\rm det}/(l_{\rm fid}+l_{\rm det})^{2} \tag{22}\] If \(l_{\rm fid}\) is too large, the opening angle between the FIP's decay products \(\Delta\theta_{\rm decay}\) (Eq. (4)) becomes comparable with the detector size, and such events do not contribute. Hence, \(l_{\rm fid}\times\epsilon_{\rm decay}\) remains constant under further increase of \(l_{\rm fid}\). As a result, the product \(\epsilon_{\rm FIP}\times\epsilon_{\rm decay}\times l_{\rm fid}\) first scales linearly with \(l_{\rm fid}\), then reaches maximum where \(\Delta\theta_{\rm decay}\) becomes comparable with the detector size, and then decreases as a result of decreasing \(\Omega_{\rm det\text{-target}}\). For the FIPs produced by decays of heavy mesons, the maximum is around \(l_{\rm fid}=50\text{ m}\). The situation is different for the FIPs produced by decays of \(\pi,\eta\), bremsstrahlung, or Primakov process. In these cases, either the FIPs have small masses or very large energies (Fig. 4), which leads to a very large \(\gamma_{\rm FIP}\). Hence, the suppression by \(\epsilon_{\rm decay}\) is not so severe, meaning that the number of events increases with \(l_{\rm fid}\) over a very large range. ### Off-axis location Let us now analyze the impact on the number of signal events when displacing the detector off-axis. As in the previous subsection, we first consider the configuration with the same dimensions and distance from the beam dump as in Table 1. We will consider only the parallel orientation of the decay volume and detector relative to the beamline, motivated by the limitations typically imposed by the infrastructure. Let us start with increasing the transverse displacement of the centre of the detector relative to the beamline, \(r_{\rm displ}\). We vary \(r_{\rm displ}\) from 0 to 5 m. Note that \(r_{\rm displ}\) is not the same as the off-axis displacement of the side of the decay volume. In particular, for the considered configuration, the latter is \(>0\) only if \(r_{\rm displ}>2\text{ m}\). For \(r_{\rm displ}=3\text{ m}\), the gap between the side of the decay volume and the beam axis is 1 m. The dependence of the number of events is shown in Fig. 6, top panels. For all FIP models considered, the number of events decreases with \(r_{\rm displ}\). The reason for this is that the FIP angular distribution decreases at the larger polar angles (Fig. 4) covered with the off-axis displacement \(r_{\rm displ}\). 
In the case that the whole detector is placed off-axis (\(r_{\rm displ}>2\) m, such that the side of the decay volume is entirely away from the beam axis), we would not only shift the detector in the domain of large \(\theta\), but also decrease the azimuthal coverage in the domain of small polar angles, which Figure 6: The behavior of the number of signal events of a beam dump experiment at the SPS at the lower bound of the sensitivity (\(c_{\rm 7FIP}\langle\gamma_{\rm FIP}\rangle\gg 100\) m, see Sec. 2) assuming an off-axis placement of the centre of its decay volume parametrized in terms of the displacement \(r_{\rm displ}\). From the figures, we see that independently on the FIP type, by increasing \(r_{\rm displ}\), the number of events decreases. This results from the very forward-pointing FIP angular distribution that falls at large polar angles (Fig. 4). Depending on the FIP, the decrease may be an order of magnitude or larger (**top panel**). It is impossible to compensate for this decrease by placing the experiment closer to the target (**bottom panel**): despite the increase of the solid angle covered by the detector, the minimal covered polar angle increases, which again results in a decrease of the FIP flux. The other parameters defining the experiment – transverse dimensions of the decay volume, and detector dimensions – are fixed as specified in Table 1. Note that \(r_{\rm displ}\) does not equal the off-axis displacement of the side of the decay volume. In particular, for the configuration considered, this displacement becomes non-zero only if \(r_{\rm displ}>2\) m. The displacement \(r_{\rm displ}=3\) m corresponds to 1 m gap between the side of the decay volume and beam axis. For convenience, we normalize the number of events to the one for the configuration specified in Table 1. further reduces the FIP acceptance. Finally, the acceptance gets further suppressed by the shortening of the effective decay volume length for FIPs that enter from the side (Eq. (3)). For heavy FIPs, the drop is more significant, which is explained by more forward-pointing angular distributions and smaller decay acceptance. The decrease in \(\epsilon_{\rm decay}\) is due to the softer energy spectrum at large polar angles. It leads to an increase of the typical opening angle between the decay products \(\theta\simeq 1/\gamma_{\rm FIP}\) (Eq. (4)), and hence a decrease in \(\epsilon_{\rm decay}\). It is also interesting to compare the effect of shortening \(l_{\rm min}\) in the case of off-axis and on-axis placements (Fig. 5 and discussion therein). The behavior of the number of events is illustrated in Fig. 6, bottom panels. Unlike the on-axis case, the number of events tends to grow if _increasing_\(l_{\rm min}\), which is again a result of the very forward-pointing angular distributions and decreasing \(\epsilon_{\rm decay}\) at the off-axis locations. ### Upper bound of the sensitivity for on-axis and off-axis Finally, let us also examine the role of the geometric parameters in determining the potential of the experiment to search for FIPs with large couplings, for which the typical FIP decay length is smaller than the distance to the decay volume, \(c\tau_{\rm FIP}\langle\gamma_{\rm FIP}\rangle\lesssim l_{\rm min}\). 
At the upper bound, the number of events is proportional to the following integral (see Appendix A): \[N_{\rm events}^{\rm upper\ bound}\propto\epsilon=\int dL\int dE_{\rm FIP} \frac{1}{c\tau_{\rm FIP}\gamma_{\rm FIP}}\exp\left[-\frac{L}{c\tau_{\rm FIP} \gamma_{\rm FIP}}\right]f_{E_{\rm FIP},L}^{l_{\rm min}}, \tag{7}\] where \(L\) is the modulus of the FIP decay position, and \(f_{E_{\rm FIP},L}^{l_{\rm min}}\equiv\left\langle\frac{df_{\rm FIP}}{dE_{\rm FIP }}\epsilon_{\rm decay}\right\rangle_{\theta}\) is the FIP distribution averaged over the angular coverage of the detector. The integral (7) effectively plays the role of geometric acceptance in the case of short-lived FIPs. It is sensitive to the high-energy tail of the FIP distribution (\(E_{\rm FIP}>\langle E_{\rm FIP}\rangle\)). We will start with the simpler case of the on-axis configuration. In this case, \(f_{E_{\rm FIP},L}^{l_{\rm min}}\) depends weakly on \(l_{\rm min}\) and on \(L\): \(f_{E_{\rm FIP},L}^{l_{\rm min}}\approx f_{E_{\rm FIP}}\). Indeed, independently of \(l_{\rm min}\), the detector covers the far-forward domain, which determines the high-energy tail of the distribution function. As a result, the only impact of decreasing \(l_{\rm min}\) comes from decreasing \(L_{\rm min}\approx l_{\rm min}\). Namely, the whole integral is saturated around its value \(L_{\rm min}\): \[\epsilon\approx\int dE_{\rm FIP}\exp\left[-\frac{l_{\rm min}}{c\tau_{\rm FIP} \gamma_{\rm FIP}}\right]f_{E_{\rm FIP}} \tag{8}\] Up to logarithmic corrections, at a fixed mass, the upper bound of the sensitivity, i.e., the smallest lifetimes that may be probed, scales as \(\tau_{\rm FIP}^{\rm upper}\propto l_{\rm min}^{-1}\). Let us now consider the off-axis case, concentrating on the case where the whole detector lies off-axis. For the detector dimensions in Table 1, this case would correspond to the transverse displacement of the centre of the detector \(r_{\rm displ}>3\) m. In this case, the situation is less trivial. First, the FIP energy spectrum becomes softer (remind Fig. 4) at large polar angles. As a result, the value of the \(E_{\rm FIP}\) integral in Eq. (4.3) at fixed \(L\) decreases compared to the off-axis case. Second, the FIPs pointing to the closest (to the beamline) part of the detector enter the decay volume from the side (Fig. 2). This also decreases \(f_{E_{\rm FIP},L}^{l_{\rm min}}\). This behavior is in tension with the exponent in Eq. (4.3): small values of \(L\) close to \(L_{\rm min}\) correspond to maximal polar angle and hence smaller energies/acceptance. This destructive interplay destroys the positive impact of decreasing \(l_{\rm min}\) on the potential to probe short FIP lifetimes. As a result, the fully off-axis configurations have a lower number of signal events than the on-axis, even given much smaller \(l_{\rm min}\). To illustrate these qualitative arguments, let us consider three setups: the on-axis experiment from Table 1, the same on-axis experiment but with \(l_{\rm min}=10\) m, and finally an off-axis experiment with \(l_{\rm min}=10\) m and \(r_{\rm displ}=3\) m. For the FIPs, Figure 7: The energy distribution of the function (4.5) determining the number of events at the upper bound of the sensitivity (\(c\tau_{\rm FIP}\langle\gamma_{\rm FIP}\rangle\lesssim l_{\rm min}\), where \(l_{\rm min}\) is the distance from target to the decay volume). 
The behavior of the integrand of (7), \[\frac{d\epsilon}{dE_{\rm FIP}}=\int dL\frac{1}{c\tau_{\rm FIP}\gamma_{\rm FIP}}\exp\left[-\frac{L}{c\tau_{\rm FIP}\gamma_{\rm FIP}}\right]f_{E_{\rm FIP},L}^{l_{\rm min}}, \tag{9}\] for the three setups is shown in Fig. 7. Obviously, the on-axis setup with \(l_{\rm min}=10\) m has the largest flux of FIPs. However, despite the fact that the off-axis decay volume is located \(\sim 4\) times closer to the beam dump than the on-axis setup with \(l_{\rm min}=38\) m, the off-axis setup has a \(\simeq 30\) times lower value of the total integral (7), following from the factors described above.

Figure 7: The energy distribution of the function (9) determining the number of events at the upper bound of the sensitivity (\(c\tau_{\rm FIP}\langle\gamma_{\rm FIP}\rangle\lesssim l_{\rm min}\), where \(l_{\rm min}\) is the distance from the target to the decay volume). Its shape and normalization depend on the value of the distance from the target to the decay volume \(l_{\rm min}\) and the high-energy tail of FIPs within the acceptance for the given experiment. When decreasing \(l_{\rm min}\) for the on-axis placement of the experiment, the high-energy tail would remain unchanged; as a result, we would increase the event rate. This may not be the case for an off-axis placement: the energy spectrum may become much softer and compensate for the decrease of \(l_{\rm min}\) (see text for details). To illustrate these points, we consider a dark scalar with mass \(m_{S}=3\) GeV and lifetime \(c\tau_{S}=0.05\) m at three different experimental setups at the SPS: the configuration from Table 1; the same configuration but with \(l_{\rm min}=10\) m; and the off-axis experiment with \(l_{\rm min}=10\) m, the displacement of the lower edge of its decay volume from the beamline of 1 m, and the decay volume length \(l_{\rm fid}=20\) m. The number of events at the closer on-axis experiment is larger, while at the off-axis experiment it is smaller, since the scalars have smaller energies.

## 5 Comparison between the experiment proposals under study at the CERN SPS

In this section, we make a comparison of the physics yields between the three experiment proposals that are currently being considered for implementation in the ECN3 beam facility at CERN's SPS accelerator: HIKE [13], SHADOWS [14], and SHiP [12]. All proposals are based on a similar detector setup in that they conceptually consist of large decay volumes followed by spectrometers and particle identification, together with various veto systems.

HIKE is primarily a kaon experiment located on-axis. It requires a specialised beam setup with a kaon target, a secondary beam line for the kaon selection, and an absorber of copper/iron for the remaining proton beam and secondary hadrons from the kaon target. HIKE's distance to the kaon target is defined by the optimisation for the kaon physics programme, resulting in a relatively large distance between HIKE's decay volume and the absorber. The kaon physics optimisation also imposes limitations on the maximum beam intensity, due to both the secondary beam line setup and the detector; the intensity is effectively four times lower than it would be for SHiP. HIKE is also proposed to partially operate in beam-dump mode for FIP physics. In this mode, the kaon target is moved aside to let the proton beam of the same intensity be directly dumped on the absorber. With its very small solid angle coverage, HIKE is mainly sensitive to dark photons and to ALPs with the photon coupling.
SHADOWS is an off-axis experiment that would be located alongside HIKE's kaon beam, downstream of the absorber, with the detector covering angles \(\theta\gtrsim 30\) mrad (Fig. 4). SHADOWS' distance to the proton absorber is defined by the infrastructure around the absorber, shielding requirements, a muon sweeper, and the subsequent beam line elements. SHADOWS would operate together with HIKE in beam-dump mode, with SHADOWS searching for the FIPs flying off-axis and HIKE for those produced in the far-forward region. In this respect, it is expected that the beam time for HIKE and SHADOWS is split between periods of kaon physics and beam-dump physics.

SHiP is instead a dedicated on-axis experiment with the detector located as close as possible to a compact target station housing a target of molybdenum/tungsten that is optimised for FIP physics. SHiP's distance to the target is defined by a hadron stopper with minimum depth, and a specialised magnetic muon deflector that sweeps the muon flux away from the fiducial volume. SHiP's location allows it to cover all the FIP production modes at the SPS.

In Fig. 8 we show the 90% CL sensitivities of HIKE\({}_{\rm dump}\) + SHADOWS and of SHiP to HNLs, dark scalars, dark photons, and ALPs with the coupling to photons. In addition, for SHiP, we include the iso-contours corresponding to \(N_{\rm events}=100\). Such a large number of events allows one not only to establish the existence of a new particle but also to identify its properties, such as the branching ratios of various decay channels, its precise mass, etc. Details of the sensitivity estimates are described in Appendix A.

We see that the lower bound of the sensitivity of SHADOWS+HIKE\({}_{\rm dump}\) is close to the 100 events line of SHiP. This may be easily understood using Eq. (1). Namely, for SHADOWS, the product \(N_{\rm PoT}\times l_{\rm fid}\) is 10 times smaller than at SHiP. The rest of the suppression in the number of events at SHADOWS comes from the off-axis placement parallel to the beamline, which significantly decreases \(\epsilon_{\rm FIP}\times\langle p_{\rm FIP}^{-1}\rangle\), by a factor of \(1/3-1/20\) depending on the FIP mass. For HIKE, the product \(N_{\rm PoT}\times l_{\rm fid}\) is only twice smaller than at SHiP. However, its detector covers a \(\sim 20\) times smaller solid angle as seen from the target and the beginning of the decay volume, which results in a much lower overall acceptance. Moreover, despite the fact that SHADOWS is placed much closer to the target, which could naively mean that it should be able to probe larger couplings of FIPs, Fig. 8 shows that this is not the case. One of the reasons for this is that FIPs flying off-axis have a much softer energy spectrum and hence a shorter typical decay length (see Fig. 6 and discussions therein). An additional reason is that the most energetic FIPs enter the decay volume from the side, resulting in a shorter flight path within the decay volume and thus suppressing the fraction of their decays inside the decay volume.

## 6 Conclusions

Future searches for new physics beyond the Standard Model are without a doubt in need of a diverse approach and experiments with complementary sensitivities to different classes of models, spanning both mass and coupling scales. A theoretically and experimentally attractive case for new physics that is largely unexplored but within reach at current facilities consists of particles with masses below the electroweak scale that may be produced at accelerators in decays of SM particles.
Their couplings should be small in order to avoid existing experimental constraints. Hence, they are often called feebly-interacting particles, or FIPs. FIPs are now being actively searched for at the LHC and will be searched for at future collider experiments. However, this type of search has limitations in probing the parameter space of long-lived FIPs in the mass range \(m_{\rm FIP}\lesssim 5-10\) GeV, see Fig. 1, mainly because of strong backgrounds and too short decay volumes as compared to the typical decay length of light FIPs. Beam dump-like experiments, characterized by the possibility of extremely high luminosity at relatively high energies and effective coverage of the production and decay acceptance, are the perfect setup to generically explore the "coupling frontier" and the case of FIPs with mass \(m_{\rm FIP}\lesssim 5-10\) GeV.

Figure 8: Comparison of the potential of the HIKE, SHADOWS, and SHiP proposals to explore the parameter space of FIPs in beam-dump mode. The FIP physics is here illustrated with HNLs (**top left panel**), dark scalars with the mixing coupling and with the quartic coupling fixed by \({\rm Br}(h\to SS)=10^{-3}\) (**top right panel**), dark photons (**bottom left**), and ALPs with the coupling to photons (**bottom right**). For all the models except for dark scalars, we use the 90% CL combined sensitivity of SHADOWS and HIKE\({}_{\rm dump}\) from the LoIs [13, 14], while for dark scalars we report our own estimates based on the SHADOWS configuration specified in the LoI (see Appendix A for details). For SHiP, we show two curves: the 90% CL sensitivity, and the domain \(N_{\rm events}>100\), where it may be possible to determine properties of FIPs such as their mass and decay branching ratios. The combined impact of the lower number of protons on the target and the non-optimal placement for SHADOWS and HIKE leads to a significant limitation on their physics potential. Their exclusion domain lies within the FIP identification domain of SHiP. Moreover, in the case of FIPs produced by decays of \(B\) mesons (such as, e.g., HNLs, dark scalars, and ALPs coupled to fermions), the reach of SHADOWS and HIKE may be overcome by future searches at LHCb [9] with triggers allowing the use of the muon stations as trackers. We show the sensitivity of such types of searches in the case of HNLs below the \(B\) meson mass.

Beam dump experiments can be equipped with long decay volumes and be located at some distance from the target to accommodate absorbers and deflectors of SM particles, as well as veto systems, to reduce the backgrounds. The best placement for such a FIP facility is the SPS accelerator at CERN, where the existing infrastructure and the currently available proton yield of up to \(4\times 10^{19}\) protons per year at \(400\,\mathrm{GeV}\) make it possible to implement and operate a state-of-the-art experiment at relatively low costs. There are currently three experiment proposals being considered for implementation in the ECN3 beam facility at CERN's SPS accelerator: HIKE [13], SHADOWS [14], and SHiP [12]. Their respective objectives and layouts are briefly summarised in Sec. 5. In order to determine the optimal experimental geometry, we have made an in-depth study of the dependence of the number of FIP signal events at the lower (Secs. 4.1, 4.2) and the upper (Sec. 4.3) bounds of the sensitivity, as a function of the length of the decay volume, the distance from the target, the transverse displacement with respect to the beamline, and other parameters.
We have performed the analysis for several classes of models of FIPs ("portals") with different production channels and decay modes, see Sec. 3. Given that all the proposals claim to reach a background-free regime, we do not consider backgrounds specifically, other than the constraints that such assumptions impose on the geometric parameters.

In particular, we analyzed the effect of displacing the detector off-axis for different portals. Generically, it leads to a decrease in the number of events for two specific reasons: first, the angular flux of FIPs decreases at large angles (Fig. 4), and second, it causes a geometric shortening of the effective length of the decay volume along trajectories pointing to the detector (Fig. 2). The impact of the off-axis placement depends on the dominant FIP production channel, i.e., whether the FIPs are produced by the decays of mesons (such as for HNLs, ALPs, and dark scalars) or directly in proton-target collisions (dark photons). In the former case, shifting the decay volume entirely to the side of the beam axis such that it covers polar angles \(\theta\gtrsim 10\) mrad leads to a loss of up to a factor of five in the number of events at the lower bound, depending on the FIP mass, as compared to the reference configuration in Table 1. In the latter case, the same decrease in yield is seen as soon as the side of the decay volume wall gets off-axis. Practically, this means that the off-axis configurations have no sensitivity to dark photons. In contrast, we have seen that changing the distance from the target to the decay volume, and its length, for the on-axis configuration does not affect the number of events at the lower bound by more than a factor of two (Fig. 5) over a broad range of parameters. On the other hand, significantly decreasing the distance to the target affects the complexity of the experiment and background suppression, and hence the cost. Therefore, optimizing the on-axis configuration is rather a subject of minimizing background and cost.

Finally, we have applied the analysis to the three ECN3 proposals (Sec. 5). In Fig. 8, we show the 90% CL sensitivities of HIKE\({}_{\rm dump}\) + SHADOWS and of SHiP to HNLs, dark scalars, dark photons, and ALPs with the photon coupling. We also include the projection of the sensitivity of the LHC experiments, taking LHCb as an example, for the case that new triggers will be developed, allowing, e.g., the use of the muon chambers as trackers for FIPs independently of their production channel [9]. In this case, the sensitivity of SHADOWS+HIKE may be limited in the domain of heavy FIPs with \(m\gtrsim 2\) GeV. For SHiP, we also include the iso-contours corresponding to \(N_{\rm events}=100\) in Fig. 8. Such a large number of events allows one not only to establish the existence of a new particle but also to identify its properties, such as the branching ratios of various decay channels, its precise mass, etc.

We conclude that the combination of lower beam intensity and non-optimal geometric placement for HIKE and SHADOWS worsens their potential to explore the "coupling frontier" compared to SHiP. In particular, in the domain of small couplings, where SHADOWS and HIKE may detect 1 event at 90% CL, SHiP would be able to reconstruct the FIP parameters such as mass, spin, and the probability of various decays. To stress the importance of the maximization of the FIP event yield, let us look closer at the model of HNLs.
The main motivation for the HNLs is that they provide a very simple explanation of the observed active neutrino masses, in a way very similar to the other fermions of the SM, i.e., by the mixing of left-handed and right-handed states via a Dirac mass term. For this, however, (i) at least two HNLs are required in order to explain the observed neutrino mass differences \(\Delta m^{2}_{\rm solar},\Delta m^{2}_{\rm atm}\); (ii) their coupling constants should be at the level presented in Fig. 1 as the "seesaw line". As we see, such couplings are very far from what will be accessed by the current experiments. It is, however, possible that the couplings of each of the HNLs are orders of magnitude larger, but that their contributions to the active neutrino masses cancel with high precision, due to a fine-tuning or a symmetry. The fine-tuning scale associated with this cancellation is \(\xi=U_{\rm seesaw}^{2}/U^{2}\), where \(U_{\rm seesaw}^{2}\sim 5\cdot 10^{-11}(1~{}{\rm GeV}/m_{N})\) is the seesaw line, which does not require fine-tuning. As shown in Fig. 9, beam dump experiments allow probing the part of the parameter space that is unreachable by collider experiments. In addition, we find that the layout of the SHiP experiment at the SPS ECN3 beam facility is close to optimal.

Figure 9: **The left panel**: the HNL parameter space reach of SHiP and SHADOWS in the plane \(\xi=U_{\rm seesaw}^{2}/U^{2}\)-HNL mass. The departure of \(\xi\) from the seesaw line \(\xi=1\) shows the scale of fine-tuning needed to explain neutrino masses by two HNLs with large couplings. **The right panel:** the same figure but with the reach of colliders included.

## Acknowledgements

AB is supported by the European Research Council (ERC) Advanced Grant "NuBSM" (694896). KB is partly funded by the INFN PD51 INDARK grant. OM is supported by the NWO Physics Vrij Programme "The Hidden Universe of Weakly Interacting Particles" with project number 680.92.18.03 (NWO Vrije Programma), which is (partly) financed by the Dutch Research Council (NWO). MO received support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 860881-HIDDeN.
2307.14623
BubbleML: A Multi-Physics Dataset and Benchmarks for Machine Learning
In the field of phase change phenomena, the lack of accessible and diverse datasets suitable for machine learning (ML) training poses a significant challenge. Existing experimental datasets are often restricted, with limited availability and sparse ground truth data, impeding our understanding of this complex multiphysics phenomena. To bridge this gap, we present the BubbleML Dataset \footnote{\label{git_dataset}\url{https://github.com/HPCForge/BubbleML}} which leverages physics-driven simulations to provide accurate ground truth information for various boiling scenarios, encompassing nucleate pool boiling, flow boiling, and sub-cooled boiling. This extensive dataset covers a wide range of parameters, including varying gravity conditions, flow rates, sub-cooling levels, and wall superheat, comprising 79 simulations. BubbleML is validated against experimental observations and trends, establishing it as an invaluable resource for ML research. Furthermore, we showcase its potential to facilitate exploration of diverse downstream tasks by introducing two benchmarks: (a) optical flow analysis to capture bubble dynamics, and (b) operator networks for learning temperature dynamics. The BubbleML dataset and its benchmarks serve as a catalyst for advancements in ML-driven research on multiphysics phase change phenomena, enabling the development and comparison of state-of-the-art techniques and models.
Sheikh Md Shakeel Hassan, Arthur Feeney, Akash Dhruv, Jihoon Kim, Youngjoon Suh, Jaiyoung Ryu, Yoonjin Won, Aparna Chandramowlishwaran
2023-07-27T04:47:05Z
http://arxiv.org/abs/2307.14623v2
# BubbleML: A Multi-Physics Dataset and Benchmarks for Machine Learning ###### Abstract In the field of phase change phenomena, the lack of accessible and diverse datasets suitable for machine learning (ML) training poses a significant challenge. Existing experimental datasets are often restricted, with limited availability and sparse ground truth data, impeding our understanding of this complex multiphysics phenomena. To bridge this gap, we present the BubbleML Dataset 1 which leverages physics-driven simulations to provide accurate ground truth information for various boiling scenarios, encompassing nucleate pool boiling, flow boiling, and sub-cooled boiling. This extensive dataset covers a wide range of parameters, including varying gravity conditions, flow rates, sub-cooling levels, and wall superheat, comprising 79 simulations. BubbleML is validated against experimental observations and trends, establishing it as an invaluable resource for ML research. Furthermore, we showcase its potential to facilitate exploration of diverse downstream tasks by introducing two benchmarks: (a) optical flow analysis to capture bubble dynamics, and (b) operator networks for learning temperature dynamics. The BubbleML dataset and its benchmarks serve as a catalyst for advancements in ML-driven research on multiphysics phase change phenomena, enabling the development and comparison of state-of-the-art techniques and models. Footnote 1: [https://github.com/HPCForge/BubbleML](https://github.com/HPCForge/BubbleML) ## 1 Introduction Phase-change phenomena, such as boiling, involve complex multiphysics processes and dynamics that are not fully understood. The interplay between bubble dynamics and heat transfer performance during boiling presents significant challenges in accurately predicting and modeling these heat and mass transfer processes. Machine learning (ML) offers the potential to revolutionize this field, enabling data-driven discovery to unravel new physical insights, develop accurate surrogate and predictive models, optimize the design of heat transfer systems, and facilitate adaptive real-time monitoring and control. The applications of ML in this domain are diverse and impactful. Consider the context of high-performance computing in data centers, where efficient cooling is critical. Boiling-based cooling techniques, such as two-phase liquid cooling, offer enhanced heat dissipation capabilities, ensuring reliable and optimal operation of power-intensive electronic components such as GPUs. Moreover, boiling phenomena play a crucial role in complex processes like nuclear fuel reprocessing, where precise modeling and prediction of boiling dynamics contribute to the safe and efficient management of nuclear waste. In the realm of water desalination, boiling processes are integral, playing a vital role in thermal desalination methods that provide clean drinking water in water-scarce regions. These advancements in pivotal areas such as thermal management, energy efficiency, and heat transfer applications, driven by ML techniques have far-reaching implications, empowering us to design more sustainable energy systems, enhance environmental preservation efforts, and advance engineering capabilities across various domains. To train data-driven ML algorithms effectively, we need large, diverse, and accurately labeled datasets. However, obtaining high-fidelity datasets that encompass a wide range of phase-change phenomena and operating conditions is a significant challenge. 
Boiling processes are highly sensitive to factors like surface properties, pressure, orientation, and working fluid composition [1]. Additionally, the chaotic nature of vapor interactions and occlusions makes quantifying boiling processes inherently difficult. Specialized experimental setups, involving instrumentation, sensors, and high-speed visualization techniques, come with substantial costs, further limiting the availability of extensive and accurate large-scale experimental data [59]. As a result, only a few well-funded research laboratories have access to precise ground truth data, and even then, this data often lacks fidelity and fails to capture detailed microscale dynamics, such as local bubble-induced turbulence and its impact on overall heat transfer. This scarcity of high-fidelity datasets poses challenges in designing accurate ML models for multiphase and phase change processes. While scientific ML (SciML) approaches can incorporate physical knowledge and constraints into the training process to reduce some of this data burden [44], the validation and quantification of uncertainty still rely on the availability of ground truth data. Therefore, there is an urgent need for open, diverse, and large-scale datasets to develop robust models and advance research in multiphysics problems such as phase change phenomena. Figure 1: **BubbleML Dataset.** Capturing diverse two-phase boiling phenomena with ground truth for key physical variables–velocity, temperature, and pressure. (a) Single bubble rising from a nucleation site on the heater surface. (b) Chaotic multi-bubble dynamics—merging, splitting. (c) Flow boiling transitions from bubbly to slug regime with increasing inlet velocity. The velocity and temperature fields are obtained by solving equations 1a and 1b, while the pressure field is obtained by solving the Poisson equation which ensures that continuity is satisfied. Simulations have played a key role as the third pillar of science in overcoming the inherent challenges faced by experimental studies in various scientific domains. High-fidelity multiscale data from simulations complement and enhance experimental measurements. In the field of phase change, simulations have successfully modeled transport equations for momentum, energy, and phase transition enabling accurate measurements of velocity, pressure, and temperature fields around bubbles [51, 27]. As a result, simulations serve as powerful tools for understanding and quantifying boiling. However, SciML researchers often setup their own simulations to generate ground truth solutions for training and testing their models, rather than relying on shared benchmark datasets. This is even common among major papers: [44, 30, 34, 26]. While this approach is reasonable for studying specific, simple partial differential equations (PDEs), real-world applications of PDE solvers and simulations often involve large-scale systems with complex multiphase physics and a combination of Dirichlet and Neumann boundary conditions [58]. These real-world problems require substantial domain expertise, engineering time, and computational resources. Performing such simulations independently is impractical or even infeasible for many ML researchers. This difficulty in dataset generation has led to a drought of SciML research to study "real-world" physics problems. Previous efforts to build benchmark datasets have primarily focused on single- and multiphysics problems with single-phase dynamics [61, 40, 9, 5]. 
As a response to the aforementioned challenges, we introduce the BubbleML Dataset 2, an extensive and innovative collection of data generated through Flash-X simulations [15]. This dataset encompasses a wide range of boiling phenomena, including nucleate boiling of single bubbles, merging bubbles, flow boiling in different configurations, and subcooled boiling. Figure 1 provides a visual glimpse into the diverse range of physical phenomena and variables covered by the dataset. To further enhance its applicability, the dataset covers various gravity conditions ranging from earth gravity to gravity at the International Space Station, different heater temperatures, and different inlet velocities. In total, we present around 80 simulations, each capturing a specific combination of parameters and conditions. In summary, the key contributions are as follows:

Footnote 2: Through Zenodo, a permanent DOI for the dataset: 10.5281/zenodo.8039786

**Multiphase and Multiphysics Dataset.** A comprehensive dataset encompassing a wide range of two-phase (liquid-vapor) phase change phenomena in boiling, with a focus on bubble and flow dynamics. This dataset will be of great interest to the scientific machine learning and thermal science communities.

**Real-world Validation.** Validation against experimental data to ensure the dataset's accuracy and reliability. This validation process enhances the dataset's fidelity and establishes a strong connection between simulation and real-world phenomena.

**Diverse Downstream Tasks.** BubbleML is designed to facilitate diverse downstream tasks. To demonstrate the dataset's potential, we provide two benchmark scenarios: optical flow for learning bubble dynamics and scientific machine learning using operator networks to learn temperature dynamics and estimate heat flux.

## 2 Related Work

**Scientific Machine Learning Datasets.** There have been several efforts to develop benchmark datasets for scientific machine learning tasks [61, 40, 9, 5, 56, 23]. Notably, the ERA5 atmospheric reanalysis dataset [23], curated by the European Centre for Medium-Range Weather Forecasts (ECMWF), provides hourly estimates of a large number of atmospheric, land, and oceanic climate variables since 1940. It is the most popular publicly available source for weather forecasting, facilitating the training of neural weather models such as FourCastNet [41], GraphCast [28], and ClimaX [38]. PDEBench [61] provides an impressive collection of datasets for 11 PDEs commonly encountered in computational fluid dynamics. Boundary conditions in scientific simulations play a crucial role in capturing the dynamics of the underlying physical systems. The majority of datasets in PDEBench utilize periodic boundary conditions. Although some datasets encompass Neumann or Dirichlet boundary conditions, none consider a combination of both, which presents a noteworthy gap in accurately modeling real-world scenarios. Another challenging problem is the modeling of turbulent Kolmogorov flows, and the dataset generated using JAX-CFD [25] is gaining popularity in benchmarking neural flow models [32, 60]. BlastNet [9], generated using the DNS solver S3D [8], focuses on simulating the behavior of a single fluid phase, solving for compressible fluid dynamics, combustion, and heat transfer. AirfRANS [5] is a dataset for studying the 2D incompressible steady-state Reynolds-Averaged Navier-Stokes equations over airfoils.
Current datasets have made commendable strides in addressing single- and multiphysics scenarios, and provide a valuable foundation for developing and evaluating SciML algorithms. Nonetheless, their scope falls short in capturing the range of behaviors and phenomena encountered in phase change physics. In contrast, BubbleML focuses on capturing the complex dynamics and physics associated with multiphase phenomena, particularly in the context of phase change simulations. Unlike many existing datasets that predominantly utilize a single type of boundary condition, BubbleML incorporates a combination of Dirichlet and Neumann boundary conditions [58]. This inclusion enables researchers to explore and model scenarios where multiple boundary conditions coexist, enhancing the realism and applicability of the dataset. Moreover, the presence of "jump" conditions along the liquid-vapor interface adds an additional layer of complexity. These conditions arise due to surface tension effects and require careful modeling to accurately capture the interface behavior [12, 13]. By incorporating such challenges, BubbleML provides a realistic and demanding testbed for ML models. **Optical Flow Datasets.** Optical flow estimation, a classical ill-posed problem [3] in image processing, has witnessed a shift from traditional methods to data-driven deep learning approaches. Middlebury [2] is a dataset with dense ground truth for small displacements, while KITTI2015 [37] provides sparse ground truth for large displacements in real-world scenes. MPI-Sintel [7] offers synthetic data with very large displacements, up to 400 pixels per frame. However, these datasets are relatively small for training deep neural networks. FlyingChairs [14], a large synthetic dataset, contains around 22,000 image pairs generated by applying affine transformations to rendered chairs on random backgrounds. FlyingThings3D [35] is another large synthetic dataset with approximately 25,000 stereo frames of 3D objects on different backgrounds. While these datasets have been instrumental in advancing data-driven optical flow methods, they primarily focus on rigid object motion in visual scenes and do not address the specific challenges posed by multiphase simulations. Efforts have been made to capture non-rigid motion in nature, such as piece-wise rigid motions seen in animals [29]. In boiling, the non-rigid dynamics of bubbles and the motion of liquid-vapor interfaces play a crucial role in the distribution and transfer of thermal energy. The BubbleML dataset provides a unique opportunity to explore and develop optical flow algorithms tailored to such dynamics. Unlike existing datasets, it offers a diverse range of bubble behaviors, including merging, growing, splitting, and complex interactions (see Figure 1). As a result, BubbleML fills a gap by providing challenging scenarios that involve phase change dynamics. The ability to accurately predict and forecast bubble dynamics has practical implications in various fields. ## 3 BubbleML: A Multiphase Multiphysics Dataset for ML In this section, we start by introducing the preliminary concepts underlying the SciML learning problem and give insights into the types of simulations and PDEs involved in this domain. Then, we present an overview of the dataset pipeline along with its validation against real-world experiments. ### Preliminaries A common application for SciML is approximating the solution of _boundary value problems_ (BVPs). 
BVPs are widely used to model various physical phenomena, including fluid dynamics, heat transfer, electromagnetics, and quantum mechanics [58, 57, 39, 20]. BVPs take the form: \(L[u(x)]=f(x),x\in\Omega\) and \(B[u(x)]=g(x),x\in\partial\Omega\). The goal is to determine the vector-valued solution function, \(u\). \(x\) is a point in the domain \(\Omega\) and may include a temporal component. The boundary of the domain is denoted as \(\partial\Omega\). The governing equation is described by the PDE operator \(L\), and the forcing function is denoted as \(f\). The _boundary condition_ (BC) is given by the boundary operator \(B\) and the boundary function \(g\). \(B[u]=g\) ensures the existence and uniqueness of the solution. There are three common types of BCs: periodic, Dirichlet, and Neumann. Periodic BCs enforce the equality of the solution at distinct points in the domain: \(u(x_{1})=u(x_{2})\). Dirichlet BCs specify the values of the solution on the boundary: \(u(x)=g(x)\). Neumann BCs enforce constraints on the derivatives of the solution: \(\partial_{n}u(x)=g(x)\) [58]. As seen in Figure 1, BubbleML combines both Dirichlet (no-slip walls, heater, inflow) and Neumann (outflow) boundaries, which impose constraints on flow and temperature dynamics. Additionally, the "jump conditions" that govern the transitions between the liquid and vapor phases use Dirichlet and Neumann boundaries [12].

### Overview of PDEs and Flash-X Simulation

A comprehensive description of the simulations is well beyond the scope of this paper and can be found in [12, 13]. We provide a concise description here as knowledge of the PDEs is important when training physics-informed models. The liquid (\(l\)) and vapor (\(v\)) phases of a boiling simulation are characterized by differences in fluid and thermal properties: density, \(\rho\); viscosity, \(\mu\); thermal diffusivity, \(\alpha\); and thermal conductivity, \(k\). The phases are tracked using a level-set function, \(\phi\), which is positive inside the vapor and negative in the liquid. \(\phi=0\) provides an implicit representation of the liquid-vapor interface, \(\Gamma\) (see Figure 1). The transport equations are non-dimensionalized and scaled using the values in the liquid and are given as \[\frac{\partial\vec{u}}{\partial t}+\vec{u}\boldsymbol{\cdot}\nabla\vec{u}=-\frac{1}{\rho^{\prime}}\nabla P+\nabla\boldsymbol{\cdot}\left[\frac{\mu^{\prime}}{\rho^{\prime}}\frac{1}{\text{Re}}\nabla\vec{u}\right]+\frac{\vec{g}}{\text{Fr}^{2}}+\vec{S}_{u}^{\Gamma}+S_{P}^{\Gamma} \tag{1a}\] \[\frac{\partial T}{\partial t}+\vec{u}\boldsymbol{\cdot}\nabla T=\nabla\boldsymbol{\cdot}\left[\frac{\alpha^{\prime}}{\text{Re}\,\text{Pr}}\nabla T\right]+S_{T}^{\Gamma} \tag{1b}\] where \(\vec{u}\) is the velocity, \(P\) is the pressure, and \(T\) is the temperature everywhere in the domain. The Reynolds number (Re), Froude number (Fr), and Prandtl number (Pr) are constants set for each simulation. Scaled fluid properties like \(\rho^{\prime}\) represent the local value of the phase scaled by the corresponding value in the liquid. Therefore, \(\rho^{\prime}\) is \(1\) in the liquid phase, and \(\rho_{v}/\rho_{l}\) in the vapor phase. The effect of surface tension is modeled using the Weber number (We) and incorporated by a sharp pressure jump, \(S_{P}^{\Gamma}\), at the liquid-vapor interface, \(\Gamma\). The effects of evaporation and saturation conditions on velocity and temperature, \(\vec{S}_{u}^{\Gamma}\) and \(S_{T}^{\Gamma}\), are modeled using a ghost fluid method [12].
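As a small illustration of how the phase-wise scaled properties entering Eqs. (1a)-(1b) can be assembled pointwise from the level-set field, here is a minimal sketch; the property ratios and the random level-set field are placeholders, not the FC-72 values used in the simulations:

```python
import numpy as np

def scaled_property(phi, vapor_to_liquid_ratio):
    """Phase-wise scaled property: 1 in the liquid (phi < 0) and the
    vapor/liquid ratio in the vapor (phi > 0), as used in Eqs. (1a)-(1b)."""
    return np.where(phi > 0.0, vapor_to_liquid_ratio, 1.0)

phi = np.random.randn(384, 384)            # stand-in level-set field
rho_prime = scaled_property(phi, 0.005)    # placeholder rho_v / rho_l
mu_prime = scaled_property(phi, 0.02)      # placeholder mu_v / mu_l
alpha_prime = scaled_property(phi, 10.0)   # placeholder alpha_v / alpha_l
print(rho_prime.shape, float(rho_prime.min()), float(rho_prime.max()))
```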
For a more detailed discussion of non-dimensional parameters and values, we refer the reader to Appendix D. The continuity equation is given by \(\nabla\cdot\vec{u}=-\hat{m}\nabla(\rho^{\prime})^{-1}\cdot\vec{n}\), where the mass transfer \(\hat{m}\) is computed using local temperature gradients in the liquid and vapor phases, \(\hat{m}=\text{St}(\text{Re}\,\text{Pr})^{-1}\big{[}\nabla T_{l}\cdot\vec{n}^{\Gamma}-k^{\prime}\nabla T_{v}\cdot\vec{n}^{\Gamma}\big{]}\), where \(\vec{n}^{\Gamma}\) is the surface normal vector to the liquid-vapor interface. The Stefan number, St, is another constant defined for the simulation and depends on the temperature scaling, \(\Delta T=T_{wall}-T_{bulk}\), and the latent heat of evaporation, \(h_{lv}\). Simulation data is scaled to dimensional values using the characteristic length \(l_{0}\), velocity \(u_{0}\), and temperature scale \((T-T_{bulk})/\Delta T\). Temporal integration is implemented using a fractional step predictor-corrector formulation to enforce incompressible flow constraints. The solver has been extensively validated and demonstrates an overall second-order accuracy in space [12, 13].

In thermal science, _heat flux_, measured as the integral of the temperature gradient across the heater surface (\(\overline{q}=\partial T/\partial y\)), serves as a vital indicator of boiling efficiency. It reflects the contribution from multiple sub-processes such as conduction, convection, microlayer evaporation, and bubble-induced turbulence. Identifying and managing each sub-process's impact to enhance \(\overline{q}\) is an open challenge [16, 24]. _Critical heat flux_ (CHF) signifies the peak heat flux before a sharp drop in efficiency occurs due to the formation of a vapor barrier (see Figure 5b). It is arguably the most important design and safety parameter for any heat-flux controlled boiling application [31]. Accurate heat flux modeling and prediction of the boiling crisis are paramount for the reliability of heat transfer systems [47, 71, 53].

The simulations in this study are implemented within the Flash-X framework [15, 12], and a dedicated environment is provided for running new simulations 3. The repository contains example configuration files for various multiphase simulations, including those used in this dataset. To ensure reproducibility, a lab notebook has been designed that organizes each study using configuration files for data curation. The lab notebook and Flash-X source code are open-source to allow for community development and contribution, enabling the creation of new datasets beyond the scope of this paper. The simulation archives store HDF5 output files and bash scripts that document the software environment and repository tags for reproducibility. The lab notebook also provides an option to package Flash-X simulations as standalone Docker/Singularity containers which can be deployed on cloud and supercomputing platforms without the need for installing third-party software dependencies. The latter is ongoing work towards software sustainability [11].

Footnote 3: [https://github.com/Lab-Notebooks/Outflow-Forcing-BubbleML](https://github.com/Lab-Notebooks/Outflow-Forcing-BubbleML)

### Dataset Overview

The study encompasses two types of boiling, namely pool boiling and flow boiling. Pool boiling represents fluid confined in a tank above a heater, resembling scenarios like cooling nuclear waste. The BCs for pool boiling include walls on the left and right, an outlet at the top, and a heater at the bottom.
In contrast, flow boiling models water flowing through a channel with a heater, simulating liquid cooling of data center GPUs. There is an inlet BC modeling flow into the system and an outlet. The fluid used for the simulations is FC-72 (perfluorohexane), an electrically insulating and stable fluorocarbon-based fluid commonly used for cooling applications in electronics operating at low temperatures (ranging from \(50^{\circ}\)C to \(100^{\circ}\)C). To explore various phenomena, different parameters such as heater temperature, liquid temperature, inlet velocity, and gravity scale are adjusted in each simulation. A summary of the dataset is presented in Table 1. Appendix E provides detailed illustrations of the boundary conditions and descriptions of each simulation for reference.

BubbleML stores simulation output in HDF5 files. Each HDF5 file corresponds to the state of a simulation at a specific instant in time and can be directly loaded into popular tensor types (e.g., PyTorch tensors or NumPy arrays) using BoxKit. BoxKit is a custom Python API designed for efficient management and scalability of block-structured simulation datasets [18, 10]. It leverages multiprocessing and cache optimization techniques to improve the read/write efficiency of data between disk and memory. Figure 2 provides an example of a boiling dataset and the corresponding workflow for enabling downstream tasks like scientific machine learning and optical flow. By operating on simulation data in manageable chunks that fit into memory, BoxKit significantly improves computational performance, particularly when handling large quantities of datasets.

Each simulation within the BubbleML dataset tracks the velocities in the x and y directions, temperature, and a signed distance function (SDF), \(\phi\), which represents the distance from the bubble interface. The SDF can be used to get a mask of the bubble interfaces or determine if a point is in the liquid or vapor phase. These variables are stored in HDF5 datasets. For instance, the temperature is stored in a tensor with shape \(t\times x\times y\times z\), which allows indexing with \(xyz\)-spatial coordinates or time. For 2D simulation datasets, the shape becomes \(t\times x\times y\). The HDF5 files also include any constants or runtime parameters provided to the simulation. Some of these parameters, such as thermal conductivity or Reynolds number, are constants used in the PDEs that govern the system. The inclusion of these variables and parameters in the dataset enables comprehensive analysis and modeling of the boiling phenomena. BubbleML follows the FAIR data principles [65] as outlined in appendix A.2. To ensure the accuracy of scientific simulations, it is also essential to validate against experimental observations due to inherent approximations in numerical solvers and simplified models of real-world phenomena. Appendix A.4 provides a comprehensive validation of the BubbleML dataset.

## 4 Benchmarks of BubbleML: Optical Flow and SciML

### Optical Flow

**Generation of Optical Flow Dataset.** Optical flow computes the velocity field of an image based on the relative movement of objects between consecutive frames. This method holds significant implications for downstream tasks, such as extracting side-view boiling statistics and applying SciML to real-world experimental data.
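As a minimal sketch of how such a simulation file might be read and the SDF turned into per-frame phase masks for these tasks, consider the snippet below; the file name and HDF5 key names are assumptions for illustration, and the released dataloaders 1 should be consulted for the actual layout:

```python
import h5py
import numpy as np
import torch

# Load one BubbleML-style simulation and build a vapor mask from the SDF.
# NOTE: "pool-boiling-sim0.hdf5", "temperature", and "dfun" are illustrative
# placeholders; check the released dataloaders for the actual key names.
with h5py.File("pool-boiling-sim0.hdf5", "r") as f:
    temperature = torch.from_numpy(np.array(f["temperature"]))  # (t, x, y)
    sdf = torch.from_numpy(np.array(f["dfun"]))                  # signed distance to interface

vapor_mask = sdf > 0            # SDF is positive inside the vapor phase
interface_band = sdf.abs() < 1  # thin band of cells around the liquid-vapor interface
print(temperature.shape, vapor_mask.float().mean().item())
```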
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Dim & Type - Physics & Sims & Domain & \multicolumn{2}{l}{Resolution} & Timesteps & Size \\ & & & (\(mm^{d}\)) & Spatial & \(\Delta t\) & & (GB) \\ \hline 2D & PB - Single Bubble & 1 & \(4.2\times 6.3\) & \(192\times 288\) & 0.5 & 500 & 0.5 \\ 2D & PB - Saturated & 13 & \(11.2\times 11.2\) & \(512\times 512\) & 1 & 200 & 24.2 \\ 2D & PB - Subcooled & 10 & \(8.4\times 8.4\) & \(384\times 384\) & 1 & 200 & 10.3 \\ 2D & PB - Gravity & 9 & \(11.2\times 11.2\) & \(512\times 512\) & 1 & 200 & 16.5 \\ 2D & FB - Inlet Velocity & 7 & \(29.4\times 3.5\) & \(1344\times 160\) & 1 & 200 & 10.7 \\ 2D & FB - Gravity & 6 & \(35\times 3.5\) & \(1600\times 160\) & 1 & 200 & 10.9 \\ 2D & PB - Subcooled\({}_{0.1}\) & 15 & \(8.4\times 8.4\) & \(384\times 384\) & 0.1 & 2000 & 155.1 \\ 2D & PB - Gravity\({}_{0.1}\) & 9 & \(11.2\times 11.2\) & \(512\times 512\) & 0.1 & 2000 & 163.8 \\ 2D & FB - Gravity\({}_{0.1}\) & 6 & \(35\times 3.5\) & \(1600\times 160\) & 0.1 & 2000 & 108.6 \\ 3D & PB - Earth Gravity & 1 & \(8.75^{3}\) & \(400^{3}\) & 1 & 57 & 122.2 \\ 3D & PB - ISS Gravity & 1 & \(8.75^{3}\) & \(400^{3}\) & 1 & 29 & 62.6 \\ 3D & FB - Earth Gravity & 1 & \(35\times 3.5^{2}\) & \(1600\times 160^{2}\) & 1 & 55 & 93.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of BubbleML datasets and their parameters. \(\Delta t\) is the temporal resolution in non-dimensional time (\(\Delta t=1=0.008\) seconds). For rationale behind the parameter choices, refer to appendix A.3. PB: pool boiling. FB: flow boiling.

Figure 2: **Dataset Curation and Workflow.** Flash-X multiphase simulations are executed and converted into unblocked HDF5 formats using the BoxKit library. The resulting dataset is publicly available 1, enabling downstream tasks like scientific machine learning and optical flow.

Although many datasets capturing spatiotemporal dynamical systems can be repurposed to create optical flow datasets, the inherent non-rigidity of bubbles introduces unique physical phenomena that are not prevalent in other datasets. For instance, consider the scenario where a bubble detaches from a heater surface: the bottom region of the bubble exhibits significantly higher velocity compared to the top region, resulting in a velocity gradient that forces the bubble into a spherical shape. At hotter heater temperatures, the deformation and detachment process might occur more frequently, leading to the different flow patterns and bubble behaviors illustrated in Figure 1.

We create an optical flow dataset from BubbleML to capture multiphase phenomena. Using the liquid-vapor phase distinction information from the simulations, we distinguish between the liquid and vapor phases. This enables the generation of image sequences wherein bubble trajectories are tracked across consecutive timesteps. Importantly, our data extraction is limited to bubble velocities per timestep, excluding fluid velocities. This focuses the learning task on capturing discernible objects (bubbles). The bubble velocities in non-dimensional units are converted to pixels-per-frame units (see appendix B.1) before being written to the widely used Middlebury [2] flow format, producing a sequence of images and flow files that resemble the Sintel dataset [7]. To facilitate the training and validation of optical flow models, PyTorch dataloaders are provided for the generated dataset 1. This allows for easy integration and fine-tuning of existing optical flow models using the BubbleML dataset.
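As a concrete illustration of that conversion and file format, here is a minimal sketch that writes one frame of bubble velocities to a Middlebury-style .flo file; the velocity arrays and the pixels-per-frame scale factor below are placeholders (the actual conversion is described in appendix B.1):

```python
import numpy as np

def write_flo(filename, flow):
    """Write an (H, W, 2) flow field in the Middlebury .flo format."""
    h, w, _ = flow.shape
    with open(filename, "wb") as f:
        np.array([202021.25], dtype=np.float32).tofile(f)   # .flo magic number
        np.array([w, h], dtype=np.int32).tofile(f)           # width, height
        flow.astype(np.float32).tofile(f)                    # interleaved (u, v) per pixel

# Hypothetical conversion from non-dimensional bubble velocity to pixels/frame:
# scale ~ (pixels per characteristic length) * u0 * (physical seconds per frame).
u_nd = np.random.rand(384, 384)      # stand-in x-velocity inside bubbles
v_nd = np.random.rand(384, 384)      # stand-in y-velocity inside bubbles
scale = 45.7                         # placeholder factor; see appendix B.1
write_flo("frame_0001.flo", np.stack([u_nd, v_nd], axis=-1) * scale)
```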
**Learning Bubble Dynamics.** We evaluate and fine-tune two state-of-the-art optical flow models, RAFT [62] and GMFlow [68], using the BubbleML optical flow dataset (B). We consider three different pre-trained models for each method: the first model is trained exclusively on FlyingChairs (C), the second trained on FlyingChairs and FlyingThings3D (C+T), and the third model which is fine-tuned for the Sintel Benchmark (C+T+S). To assess the performance of the trained models, we measure the end-point error and Table 2 summarizes the results for one dataset. Refer to Appendix B for results on the other datasets. Initially, the pre-trained models exhibit subpar performance on the BubbleML data. To address this, each model is fine-tuned for 3-4 epochs with a low learning rate of \(10^{-6}\). After fine-tuning, we observe a significant improvement in predictions for the test data (see Figures 7 and 8 in Appendix B). While all fine-tuned models tend to converge to similar levels of accuracy for pool boiling datasets, fine-tuning the pre-trained FlyingChairs models (C) on BubbleML (B) dataset gives the best results. This could be attributed to the similar nature of the datasets consisting of 2D objects in motion. In the case of flow boiling, the best results are achieved by fine-tuning models initially trained for the Sintel benchmark (C+T+S). Flow boiling images have an extremely high aspect ratio (8:1) which is similar to the Sintel (3:1) and the KITTI (4:1) datasets. Note that although training the models on the boiling dataset for more epochs improves performance on our specific task, it adversely affects the models' generalization capabilities, leading to increased errors on the other datasets. **Open problems.** Error analysis (B.3) highlights the shortcomings of state-of-the-art optical flow models in accurately capturing the turbulent dynamics of bubbles. Although fine-tuning improves the overall performance, the high errors at the bubble boundaries remain an ongoing challenge. This underscores the need for novel optical flow models that incorporate physical insights to accurately capture the complex and chaotic behavior of boiling. BubbleML bridges the gap for physics-informed optical flow datasets. 
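For reference, the end-point error reported in Table 2 is simply the mean Euclidean distance between predicted and ground-truth flow vectors; a minimal sketch, assuming (H, W, 2) arrays:

```python
import numpy as np

def end_point_error(flow_pred, flow_gt):
    """Average end-point error (EPE) between two (H, W, 2) flow fields."""
    diff = flow_pred - flow_gt
    return np.sqrt((diff ** 2).sum(axis=-1)).mean()

# Example with random stand-in flow fields (flow boiling resolution).
pred = np.random.randn(160, 1344, 2).astype(np.float32)
gt = np.random.randn(160, 1344, 2).astype(np.float32)
print(f"EPE: {end_point_error(pred, gt):.3f} px")
```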
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multirow{2}{*}{Chairs (Val)} & \multicolumn{2}{c}{Things (Val)} & \multicolumn{2}{c}{Sintel (Train)} & \multicolumn{2}{c}{KITTI (Train)} & \multirow{2}{*}{Boiling (Test)} \\ \cline{3-4} \cline{6-9} & & & Clean & Clean & Final & F1-EPE & F1-all & \\ \hline \multirow{2}{*}{C} & RAFT & 0.82 & 9.03 & 2.19 & 4.49 & 9.83 & 37.57 & 4.20 \\ & GMFlow & 0.92 & 10.23 & 3.22 & 4.43 & 17.82 & 56.14 & 4.73 \\ \hline \multirow{2}{*}{C+B} & RAFT & 0.91 & 11.22 & 2.55 & 5.16 & 13.7 & 44.44 & **2.33** \\ & GMFlow & 1.31 & 11.99 & 3.78 & 5.12 & 21.91 & 63.04 & **2.36** \\ \hline \multirow{2}{*}{C+T} & RAFT & 1.15 & 4.39 & 1.40 & 2.71 & 5.02 & 17.46 & 4.72 \\ & GMFlow & 1.26 & 3.48 & 1.50 & 2.96 & 11.60 & 35.62 & 7.98 \\ \hline \multirow{2}{*}{C+T+B} & RAFT & 1.28 & 7.69 & 1.69 & 2.95 & 9.96 & 23.61 & 2.38 \\ & GMFlow & 1.39 & 3.88 & 1.61 & 2.91 & 14.49 & 43.09 & 2.51 \\ \hline \multirow{2}{*}{C+T+S} & RAFT & 1.21 & 4.69 & 0.77 & 1.22 & 1.54 & 5.64 & 8.39 \\ & GMFlow & 1.53 & 4.09 & 0.95 & 1.28 & 3.04 & 13.61 & 14.65 \\ \hline \multirow{2}{*}{C+T+S+B} & RAFT & 1.37 & 6.59 & 0.89 & 1.60 & 1.83 & 6.44 & 2.34 \\ & GMFlow & 1.65 & 4.49 & 1.07 & 1.45 & 4.06 & 18.99 & 2.56 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of pre-trained and fine-tuned RAFT and GMFlow models on optical flow data (B) generated from PB-Saturated dataset. A 80:20 split results in 2000 training and 500 validation image pairs. The pre-trained models include C trained on FlyingChairs dataset, C+T trained further on FlyingThings3D, and C+T+S further fine-tuned for the Sintel test benchmark. Model+B represents models fine-tuned on the BubbleML optical flow dataset. ### Scientific Machine Learning **SciML Preliminaries.** Our SciML baseline experiments use _neural PDE solvers_ to learn temperature and flow dynamics. We focus on two classes of neural PDE solvers: (a) Image-to-image models, widely used in computer vision tasks, such as image segmentation [49]. These are not always suitable as PDE solvers, since they can only be used on a fixed resolution, but are still competitive in many baselines [19, 32]. (b) Neural operators are neural networks that learn a mapping between infinite dimensional function spaces. As they map functions to functions, neural operators are discretization invariant and can be used on a higher resolution than they were trained [26]. The seminal neural operator is the Fourier Neural Operator (FNO) [30]. Further details can be found in Appendix C. For both classes of models, we employ the auto-regressive formulation of a forward propagator, denoted as \(\mathcal{F}\). For timesteps \(\{t_{1},\dots,t_{max}\}\) discretized such that \(t_{k+1}-t_{k}=\Delta t\), the forward propagator \(\mathcal{F}\) maps the solution function \(u\) at \(k\) consecutive time steps \(\{t_{m-k},\dots,t_{m-1}\}\) to the solution at time \(t_{m}\). For brevity, we use \(u([t_{m-k},t_{m-1}])=\{u(t_{m-k}),\dots,u(t_{m-1})\}\). The operator \(\mathcal{F}\) can be approximated using a neural network \(\mathcal{F}_{\theta}\) parameterized by \(\theta\). This network is trained using a dataset of \(N\) ground truth solutions \(D=\{u^{(n)}([0,t_{max}]):n=1\dots N\}\). By applying a standard gradient descent algorithm, we find parameters \(\hat{\theta}\) minimizing some loss function of the predictions \(\mathcal{F}_{\hat{\theta}}\{u^{(n)}([t_{m-k},t_{m-1}])\}\) and the ground truth solutions \(u^{(n)}(t_{m})\). 
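A minimal sketch of this auto-regressive training objective with teacher forcing is shown below; the tiny convolutional model, field shapes, and hyperparameters are placeholders, not the benchmark configurations described in Appendix C:

```python
import torch
import torch.nn as nn

k = 5                                    # number of history timesteps
model = nn.Sequential(                   # placeholder forward propagator (stand-in for UNet/FNO)
    nn.Conv2d(k, 64, 3, padding=1), nn.GELU(), nn.Conv2d(64, 1, 3, padding=1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# temp: (T, H, W) temperature trajectory from one simulation (stand-in data).
temp = torch.randn(200, 384, 384)
for m in range(k, temp.shape[0]):
    history = temp[m - k:m].unsqueeze(0)          # u([t_{m-k}, t_{m-1}])
    target = temp[m].unsqueeze(0).unsqueeze(0)    # u(t_m)
    pred = model(history)
    loss = loss_fn(pred, target)                  # teacher forcing: history is ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```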
Thus, given solutions for \(k\) initial timesteps of an unseen function \(u\), we can obtain an approximation \(\mathcal{F}_{\hat{\theta}}\{u([0,t_{k-1}])\}\approx u(t_{k}=t_{k-1}+\Delta t)\). Using this approximation for \(t_{k}\), we can step forward to get \(\mathcal{F}_{\hat{\theta}}\{u([t_{1},t_{k}])\}\approx u(t_{k+1})\). This process is called _rollout_ and is repeated until reaching \(t_{max}\). While, in principle, rollout can be done for an arbitrary time, the quality of the approximation worsens with each step [32, 36]. We implement several strategies that attempt to mitigate this deterioration [6, 63]. However, achieving long and stable rollout is still an open problem.

**Baseline Implementations.** We implement several baseline image-to-image models, including UNet\({}_{\text{bench}}\) and UNet\({}_{\text{mod}}\), and neural operators, including FNO, UNO, F-FNO, and G-FNO. Detailed descriptions and comparisons of the models are included in Appendix C.1.

**Training Strategies.** Detailed descriptions of each of the training strategies we used are listed in Appendix C.2. We implement teacher-forcing training [66], temporal bundling, and the pushforward trick [6]. Models trained with the pushforward trick are prefixed with "P-". A discussion of hyperparameter settings can be found in Appendix C.3.

**Metrics.** We draw inspiration from PDEBench and adopt a large set of metrics that include the Root Mean Squared Error (RMSE), Max Squared Error, Relative Error, Boundary RMSE (BRMSE), and low/mid/high Fourier errors [61]. These metrics provide a comprehensive view of the physical dynamics, which may be missed when only using a "global" loss metric. For instance, when predicting temperature, we find that the max error can often be very high due to the presence of sharp transitions between hot vapor and cool liquid. Even a one-pixel misalignment in the model's prediction can cause the reported temperature to be the opposite extreme. Metrics which report a global average (e.g., RMSE) could mask these errors because they get damped by the average. We incorporate an additional _physics_ metric: the RMSE along bubble interfaces (IRMSE). The accuracy along both the domain and immersed boundaries is very important. Boundary conditions determine if the solution to a PDE exists and is unique. In the case of the multiphysics BubbleML dataset, accurate modeling of the system requires satisfying the conditions at the liquid-vapor interfaces.

**Learning Temperature Dynamics.** One application of SciML using the BubbleML dataset is to learn the dynamics of temperature propagation within a system. In this context, the system's velocities serve as a sourcing function, influencing the temperature distribution. Notably, UNet-based models perform best across all datasets (see Figure 3(c) and (d)). For a full listing of error metrics for each model and dataset pairing, refer to Appendix C.4. UNet models may have some advantage in predicting the interfaces and boundaries (IRMSE and BRMSE), because they naturally act as edge-detectors. The temperature also propagates smoothly, so it is likely unnecessary to use global filters, like the FNO variants. In contrast, FNO models rely on fast Fourier transforms and weight multiplication in the Fourier space, which, while capable of handling global and local information simultaneously, might not be as effective at capturing local, non-smooth features. Several recent studies report similar observations about auto-regressive UNet and FNO variants [32, 19, 36].
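Since the predicted temperature fields are also used downstream to estimate the heater heat flux \(\overline{q}\) (Sec. 3.2), here is a minimal sketch of that post-processing step; the grid spacing, heater location, and random stand-in field are assumptions for illustration:

```python
import numpy as np

def heater_heat_flux(temperature, dy, dx):
    """Average wall-normal temperature gradient along the heater.

    temperature: (ny, nx) field with the heater along the bottom row (y = 0).
    """
    dT_dy = (temperature[1, :] - temperature[0, :]) / dy   # one-sided difference at the wall
    heater_length = dx * (temperature.shape[1] - 1)
    return np.trapz(dT_dy, dx=dx) / heater_length          # integral over the heater / length

T = np.random.rand(384, 384)          # stand-in non-dimensional temperature snapshot
q_bar = heater_heat_flux(T, dy=8.4 / 384, dx=8.4 / 384)
print(f"q_bar ~ {q_bar:.3f}")
```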
The trained model can be a valuable tool to get fast estimates of heat flux, as discussed in Section 3.2. Heat flux is influenced by steep temperature gradients and dynamic temporal changes, which presents a challenging problem. To further validate our models, we perform cross-validation to predict the heat flux trends observed in Figure 5. For each heat flux prediction, we hold out a simulation and train a forward propagator on the remaining simulations within the dataset. Even with partial training (50 epochs for subcooled boiling models and 100 epochs for saturated boiling models), we achieve compelling results. The heat flux predictions by UNet\({}_{\text{bench}}\) remarkably track the expected trend, as seen in Figure 3(a).

**Learning Fluid Dynamics.** As an additional benchmark, we use the BubbleML dataset to train models to approximate both velocity _and_ temperature dynamics. This is a challenging problem. Results are shown in Appendix C.5. These follow similar training settings to the temperature-only models. Strikingly, however, we get nearly the opposite results to predicting only temperature: the UNet\({}_{\text{bench}}\) model struggles when predicting both velocity and temperature fields jointly, while UNet\({}_{\text{mod}}\) and the FNO variants perform comparatively better. All of the models have difficulty capturing the trails of condensation that form in the temperature field. The vapor trails form, but dissipate more quickly than expected. An example rollout of UNet\({}_{\text{mod}}\), trained using the pushforward trick, is shown in Figure 4. We see that the flow closely aligns with the ground-truth simulation.

**Open Problems.** We reiterate several open problems in SciML that BubbleML offers an avenue to explore. The first is the creation of a new class of _models that can learn multiple interrelated physics_. We find that while UNet architectures work well at predicting temperature and FNO variants work well at predicting velocity, neither excels at the joint prediction of temperature and velocity. The CNN-based UNet architectures outperform FNO and its variants when predicting temperature, potentially due to CNNs' capacity to naturally act as edge-detectors, and thus handle non-smooth interfaces more easily. On the other hand, FNO variants perform quite well at predicting velocities, but still struggle with temperature estimation, especially in capturing condensation trails. This is related to the second problem: _developing neural operators that can handle non-smooth and irregularly shaped interfaces_. FNO variants seem to encounter difficulties in modeling temperature fields, which have sharp jumps along bubble interfaces where the temperature transitions from cool liquid to hot vapor. Conversely, the velocity field appears relatively smooth, and thus may be composed of lower frequencies better captured by FNO variants. However, these models still miss sharp and sudden changes in velocity along bubble interfaces that are important for accurately modeling long-range dynamics. The third problem is improving _stability during long rollouts_. This is explored within the context of other datasets [32, 36], but it is particularly relevant for BubbleML. For instance, in subcooled boiling, after bubbles depart from the surface, they undergo condensation and generate vortices that gradually dissipate as they move upstream. To model these extended temporal processes accurately, autoregressive models must be stable across long rollouts.
However, we observe that models experience instability, leading them to slowly diverge from the ground truth. The BubbleML dataset presents an opportunity to study these challenges in SciML.

Figure 3: **Temperature and Heat Flux Prediction.** (a) Cross-validated heat flux \(\mathcal{H}/\mathcal{H}_{max}\) estimates for subcooled and saturated boiling. (b), (c), and (d) show results for the fully trained forward propagator. In (b), accuracy degradation is minimal, with spikes occurring during timesteps of violent turbulence caused by rapid bubble detachment from the heater surface. (c) and (d) compare frames from the Flash-X simulation and predictions by the forward propagator for subcooled boiling.

Figure 4: **Velocity and Temperature Rollout.** The left figure shows the first 80 timesteps of P-UNet\({}_{\text{mod}}\)'s rollout, where color indicates velocity and streamlines illustrate direction of flow. Both the flow magnitude and direction align exceptionally well with the ground truth. On the right, (a) and (b) show the rollout errors for temperature and velocity predictions. The prefix "P-" denotes that the model is trained with the pushforward trick [6]. Notably, UNet\({}_{\text{mod}}\) starts with slightly better initial accuracy, but it degrades more quickly than the model trained with the pushforward trick. P-UNet\({}_{\text{mod}}\) behaves more stably during rollout.

## 5 Conclusions and Limitations
This paper presents the BubbleML dataset, which fills a critical gap in ML research for multiphase multiphysics systems. By employing physics-driven simulations, the dataset offers precise ground truth information for a variety of boiling scenarios, encompassing a wide range of parameters and providing a comprehensive and diverse collection of data. BubbleML is validated against experimental observations and trends, establishing its reliability and relevance in multiphysics phase change research. The two BubbleML benchmarks demonstrate applications in improving the accuracy of optical flow estimation and SciML modeling encountered in multiphase systems. Importantly, BubbleML extends its impact beyond its immediate applications. It resonates with broader challenges in SciML, serving as a foundational platform to study several open problems. **Limitations.** Combining datasets might pose challenges due to their varying sizes. Because the resolution scales proportionally with the domain size, the constant relative spacing between grid cells allows the UNet model to be effectively trained on the merged boiling dataset. However, this approach does not extend to FNO, requiring domain decomposition methods [64] or downscaling strategies [48] to accommodate variable domain sizes. Note that the dataset is also exclusively composed of simulations due to the unavailability of experimental data with velocity, pressure, and temperature fields. Future work will involve collaboration with experimentalists to augment the dataset.

## 6 Acknowledgments and Disclosure of Funding
This work was partially supported by the National Science Foundation (NSF) under the award number 1750549, the Office of Naval Research (ONR) under grant number N00014-22-1-2063 (supervised by program manager Dr. Mark Spector), the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the US Department of Energy Office of Science and the National Nuclear Security Administration, and the Laboratory Directed Research and Development Program at Argonne National Lab, Office of Science, of the U.S.
Department of Energy under Contract No. DE-AC02-06CH11357. We gratefully acknowledge the GPU computing resources provided on HPC3, a high-performance computing cluster operated by the Research Cyberinfrastructure Center at the University of California, Irvine.
2306.03076
Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware
Existing methods to recover model accuracy on analog-digital hardware in the presence of quantization and analog noise include noise-injection training. However, it can be slow in practice, incurring high computational costs, even when starting from pretrained models. We introduce the Sensitivity-Aware Finetuning (SAFT) approach that identifies noise sensitive layers in a model, and uses the information to freeze specific layers for noise-injection training. Our results show that SAFT achieves comparable accuracy to noise-injection training and is 2x to 8x faster.
Lakshmi Nair, Darius Bunandar
2023-06-05T17:52:44Z
http://arxiv.org/abs/2306.03076v1
# Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware

###### Abstract
Existing methods to recover model accuracy on analog-digital hardware in the presence of quantization and analog noise include noise-injection training. However, it can be slow in practice, incurring high computational costs, even when starting from pretrained models. We introduce the Sensitivity-Aware Finetuning (SAFT) approach that identifies noise sensitive layers in a model, and uses the information to freeze specific layers for noise-injection training. Our results show that SAFT achieves comparable accuracy to noise-injection training and is 2\(\times\) to 8\(\times\) faster.

## 1 Introduction
Recent advances in analog-digital hardware are motivated by improving energy and speed efficiency for deep learning applications. However, such devices are often susceptible to the effects of analog noise and reduced precision (quantization), which impact the final model accuracy. One of the commonly used approaches for tackling this issue is noise-injection training. Here, the model is subjected to the perturbations caused by quantization and/or analog noise, by injecting some representative noise into the model's layers during training, to recover accuracy [1; 2; 3; 4]. Prior work has shown that loss of precision due to quantization can also be treated as "noise" and that models can be made resilient to this loss by retraining with noise injection, where the injected noise is proportional to the precision loss [4; 3]. However, noise-injection training can incur significant training time, even when starting from pretrained models. The speed of training can potentially be improved by training only a subset of the layers that are highly sensitive to noise, while freezing (i.e., disabling weight updates for) the rest. As shown in Figure 1(a) for ResNet50, some weights change substantially during noise-injection training, while others hardly change and could potentially be frozen (conceptually similar to transfer learning [5]). While prior work has looked at similar methods for speeding up training, they either focus on BERT-like models [6; 7], or rely on domain knowledge to identify noise sensitive layers [8]. Motivated by these observations, we seek to answer the question: "_Starting from a pretrained model, can we identify which layers are the most sensitive to noise, and retrain just those?_". Prior work in quantization has used KL-divergence to identify layers that are sensitive to quantization [9; 10; 11]. We present an alternate metric for measuring layer sensitivity to noise, by computing the _standard deviation_ of the output differences between the noisy/quantized model and the noise-free/unquantized model at each layer. We then introduce the Sensitivity-Aware Finetuning (SAFT) approach based on noise-injection training, which selects specific layers for training based on the layer sensitivity analysis.

## 2 Sensitivity-Aware Finetuning
The motivation behind SAFT is the observation that, after noise-injection training, the parameters of a model change significantly only for specific layers; e.g., Figure 1(a) shows this for ResNet50. Keeping such layers noise-free in the model results in larger accuracy improvements. This possibly indicates that the layers whose parameters change the most have higher noise sensitivity. Hence, we seek to retrain only the most noise sensitive layers to validate this hypothesis.
Our approach takes a pretrained model \(M\), its noisy version \(N\), and a sample batch of inputs \(X\) from the training data. Note that \(N\) refers to the model used during standard noise-injection training [1], where we inject noise into the weights during the forward pass to perturb the outputs. The input data is first passed through \(M\), and the inputs and outputs at every layer are stored. Then, the inputs at every layer of the _original model_ \(M\) are passed through the _corresponding layers of \(N\)_ (see Appendix Algorithm 1). The layer outputs for \(N\) are also saved, and the standard deviation1 of the differences between the outputs of \(M\) and \(N\) is computed per layer. The process flow is shown in Figure 1(b). Footnote 1: Noise mean is typically zero based on hardware models [12; 1; 3] Once the standard deviations are computed, SAFT involves: a) identifying the top \(k\) layers with the highest standard deviations; b) selectively training only the top \(k\) layers of \(N\) while freezing the parameters of the remaining layers. We note that \(k\) is an additional hyperparameter. The value of \(k\) can be determined by visualizing the standard deviation values in a plot, or it can be treated like the other hyperparameters and set using tools such as Tune [13]. Another consideration here is the batch size used for computing the statistics. The batch size should be sufficiently large to obtain a reasonable estimate of the noise sensitivity2. When large batches cannot be processed, data samples can be processed individually and stacked, and the statistics can then be accumulated over the stack. For training, we use the procedure in [1], and apply the backward gradient updates to noise-free weights. Footnote 2: We find that using the batch sizes typically used for training works well in most cases **Computational complexity of standard deviation based layer sensitivity analysis:** Existing layer sensitivity approaches start with the un-quantized model and proceed by quantizing a single layer at a time, evaluating the model accuracy in each case [14; 15]. The layers are then sorted in decreasing order of sensitivity and the most sensitive layers are skipped during quantization [14]. Existing software packages such as OpenVino have introduced the Accuracy Aware Algorithm, a slightly modified version of the approach starting with the quantized version of the model [15]. However, these approaches are brute-force methods that can take \(O(Nt)\) time for a model with \(N\) layers and evaluation time \(t\) for the sample of data. In contrast, the standard deviation based sensitivity analysis only requires a sample of data to be passed _once_ through the model, for a reduced complexity of \(O(t)\). For models with large \(N\) this can lead to significant speed improvements in layer sensitivity analysis.

## 3 Experiments
We evaluate SAFT on eight different models. In all cases, similar to prior work in [14], we only apply noise to the matrix-multiplication layers (such as Convolutions, Linear, etc.), and leave other layers such as batchnorm or activation layers noise-free. We also evaluate the use of KL-divergence as an alternate metric to standard deviation in our experiments. Similar to prior work, we evaluate SAFT with simulated hardware noise using both multiplicative and additive noise, wherein noise is injected into the weights [1; 2; 16].

Figure 1: Sensitivity-Aware Finetuning (SAFT): motivation (left) and description (right)
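The sensitivity analysis and layer freezing described in Section 2 can be sketched as follows. This is an illustrative reconstruction, assuming a PyTorch model whose noisy counterpart shares module names and injects noise into its weights during the forward pass; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def layer_noise_sensitivity(clean_model, noisy_model, batch):
    """Per-layer sensitivity: std of (noisy output - clean output), where the
    noisy layer is fed the *clean* model's layer inputs."""
    clean_model.eval()
    noisy_model.eval()
    cached = {}

    def save_io(name):
        def hook(module, inputs, output):
            cached[name] = (inputs, output)
        return hook

    handles = []
    clean_layers = dict(clean_model.named_modules())
    noisy_layers = dict(noisy_model.named_modules())
    for name, module in clean_layers.items():
        if isinstance(module, (nn.Conv2d, nn.Linear)):   # matrix-multiplication layers
            handles.append(module.register_forward_hook(save_io(name)))

    clean_model(batch)                                   # populate the cache
    for h in handles:
        h.remove()

    sensitivity = {}
    for name, (inputs, clean_out) in cached.items():
        noisy_out = noisy_layers[name](*inputs)          # clean inputs -> noisy layer
        sensitivity[name] = (noisy_out - clean_out).std().item()
    return sensitivity

def freeze_all_but_top_k(noisy_model, sensitivity, k):
    """Keep only the k most noise-sensitive matmul layers trainable."""
    top_k = set(sorted(sensitivity, key=sensitivity.get, reverse=True)[:k])
    layers = dict(noisy_model.named_modules())
    for name in sensitivity:
        for p in layers[name].parameters():
            p.requires_grad_(name in top_k)
```

After freezing, noise-injection finetuning would proceed as usual; only the trainable layers receive gradient updates, which is where the reported speedups would come from.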
We sample the noise \(N\) from both a Gaussian distribution with zero mean as in prior work [2], \(N\sim\mathcal{N}(0,\sigma)\), and from a Uniform distribution \(N\sim U[-r_{1},r_{1}]\). Our baseline noise-injection is implemented similarly to the approach in [1]. The parameters of the noise distributions for the different models are shown in Appendix Table 5. The specific noise parameters were chosen so as to result in a drop in the performance of all the models, which can then be recovered through training. Note that we set a fixed seed for all our training runs to ensure fair comparison. For SAFT, we compute the standard deviation values on a single batch of training data. We freeze \(total-k\) layers in a model during training, retraining only \(k\). We determined \(k\) empirically by visualizing the standard deviation plots and checking the number of layers that have a relatively high noise standard deviation. We seek to standardize this procedure in our future work. Table 4 in the Appendix shows the batch size and \(k\) values (\(\#\text{Frozen}=\#\text{Total}-k\)) used in our experiments. Some models require more layers to be trained than others owing to higher noise in more layers. Our experiments evaluate: _Given the exact same training parameters, does SAFT perform similarly to baseline noise-injection training?_ Note that our research question compares our approach to noise-injection training, rather than aiming for a predefined target performance, which noise-injection training has already been shown to achieve with sufficient epochs [1; 4; 14]. In our training experiments, we only train for a few epochs (1-5 epochs) to see if the performances of the two approaches match, whereas achieving close to the baseline noise-free FP32 performance takes many more epochs [14].

## 4 Results
Noise standard deviation plots for four models are shown in Figure 2. Stars \(\star\) indicate the layers that are trained, while the remaining are frozen. Similar to prior findings, the first and last set of layers in vision models exhibit high sensitivity [17; 18; 1]. We also see a "sawtooth" pattern in the vision models like ResNet, corresponding to the repeating blocks in the network, consistent with observations in prior work [12]. For MobileNet v3, quite a few of the convolution layers have a higher noise standard deviation compared to ResNet50.

\begin{table} \begin{tabular}{l|c|c c c|c c c|c} & & \multicolumn{3}{c|}{**Multiplicative Gaussian**} & \multicolumn{3}{c|}{**Additive Gaussian**} & \multicolumn{1}{c}{**SAFT**} \\ & **FP32** & **Untrained** & **Noise-inj** & **SAFT** & **Untrained** & **Noise-inj** & **SAFT** & **Speed \(\uparrow\)** \\ **ResNet18** & 69.8 & 68.7 & 69.1 & 69.0 & 66.0 & 67.4 & 67.8 & 2\(\times\) \\ **ResNet34** & 73.3 & 72.1 & 72.9 & 72.9 & 69.0 & 70.0 & 70.0 & 4\(\times\) \\ **ResNet50** & 76.1 & 74.9 & 75.8 & 75.6 & 70.7 & 73.1 & 73.1 & 8\(\times\) \\ **ResNeXt50** & 77.6 & 72.2 & 74.4 & 74.0 & 71.1 & 73.9 & 74.2 & 8\(\times\) \\ **MobileNet v3** & 74.0 & 70.8 & 71.7 & 71.6 & 70.9 & 72.7 & 72.6 & 5\(\times\) \\ **Faster RCNN** & 59.0 & 56.5 & 58.9 & 58.7 & 52.2 & 54.4 & 54.8 & 3\(\times\) \\ **Mask RCNN** & 56.0 & 52.0 & 55.3 & 55.6 & 48.5 & 53.6 & 54.9 & 3\(\times\) \\ **Bert Base** & 74.7 & 73.1 & 74.4 & 74.6 & 72.4 & 74.4 & 74.2 & 2\(\times\) \\ \end{tabular} \end{table} Table 1: Results comparing SAFT with baseline noise-injection training with Gaussian noise show similar performance. Here "Untrained" denotes performance before training, when noise is injected. Note that SAFT achieves accuracy close to noise-injection training while being 2\(\times\) to 8\(\times\) faster.

\begin{table} \begin{tabular}{l|c|c c c|c c c|c} & & \multicolumn{3}{c|}{**Multiplicative Uniform**} & \multicolumn{3}{c|}{**Additive Uniform**} & \multicolumn{1}{c}{**SAFT**} \\ & **FP32** & **Untrained** & **Noise-inj** & **SAFT** & **Untrained** & **Noise-inj** & **SAFT** & **Speed \(\uparrow\)** \\ **ResNet18** & 69.8 & 68.2 & 69.3 & 69.0 & 64.8 & 66.9 & 66.2 & 2\(\times\) \\ **ResNet34** & 73.3 & 71.8 & 72.7 & 72.6 & 67.1 & 68.5 & 68.0 & 4\(\times\) \\ **ResNet50** & 76.1 & 74.5 & 75.1 & 75.3 & 67.9 & 70.5 & 70.1 & 8\(\times\) \\ **ResNeXt50** & 77.6 & 71.3 & 73.0 & 73.1 & 73.0 & 75.7 & 75.3 & 8\(\times\) \\ **MobileNet v3** & 74.0 & 71.4 & 72.1 & 72.2 & 72.9 & 73.9 & 73.8 & 5\(\times\) \\ **Faster RCNN** & 59.0 & 57.1 & 58.2 & 58.2 & 56.2 & 57.0 & 57.2 & 3\(\times\) \\ **Mask RCNN** & 56.0 & 53.4 & 54.5 & 54.8 & 49.5 & 51.5 & 51.6 & 3\(\times\) \\ **Bert Base** & 74.7 & 68.9 & 72.0 & 72.4 & 62.2 & 72.1 & 72.7 & 2\(\times\) \\ \end{tabular} \end{table} Table 2: Results comparing SAFT with baseline noise-injection training for Uniform noise show similar performance. Here "Untrained" denotes performance before training, when noise is injected.

For Faster RCNN, we see that several layers in the "head" of the model, responsible for predicting the bounding box locations, are particularly sensitive. For Bert base, the sensitivity is quite spread out across the model, with several layers exhibiting high noise sensitivity. Specifically, 10 of the 20 trained layers are self-attention layers, with the remaining 10 being intermediate and output dense layers. For models like Bert, identifying the most sensitive layers can be tricky, and a larger proportion of layers has to be trained compared to other models. The corresponding training speed improvements are shown in Tables 1 and 2, where up to \(8\times\) speed improvements in training can be observed. In the case of a few models such as ResNet18 and RCNN, speedups of about \(2\times\) and \(3\times\) are observed. The actual amount of speedup depends on the processing time of each layer, which in turn depends on the size of the layer (i.e., # of parameters). Hence, a direct correlation between the size of the model (i.e., # of layers) and the speedup is difficult to establish. The final performance of SAFT in terms of the model metrics is shown in Tables 1 and 2 for Gaussian and Uniform noise respectively. We see that SAFT (training only the \(k\) selected layers) closely matches3 the performance of the full noise-injection training approach for all noise models, leading to improvements in terms of the metrics. An interesting finding here is that specific layers that do not form a continuous sequence can be independently trained. Typically in transfer learning a _continuous_ sequence of the last few layers, such as the last few convolutional and fully-connected layers, is retrained [5]. Footnote 3: Our results were confirmed with the Wilcoxon Signed Rank test (\(\alpha=0.05\)) Lastly, we train a few models with Gaussian noise injection, using KL-divergence for selecting the layers to freeze as opposed to using standard deviation (see Table 3). The results clearly show that freezing layers based on their standard deviations outperforms KL-divergence based layer selection.
Furthermore, computation of KL-divergence requires converting the activations into probability distributions, which can be avoided in the case of the standard deviation based method. Interestingly, using KL-divergence did not improve performance on vision models, although it did perform well on Bert base. It is possible that, since most layers in Bert base have high noise sensitivity (see Figure 2), KL-divergence chose and trained some of the noisiest layers, whereas the noisiest layers are much more specific and localized in the case of the vision models. Especially in such cases, KL-divergence could not perform as well as standard deviation in identifying these more specific layers.

Figure 2: Plots of the per-layer standard deviations for four models: ResNet50, MobileNet v3 large, Bert base, and Faster RCNN. The purple stars \(\star\) denote the layers that were selected for training while the rest were frozen. First and last layers of vision models have high standard deviations.

\begin{table} \begin{tabular}{l|c|c|c} & **Baseline FP32** & **Mult / Add (Std)** & **Mult / Add (KL-d)** \\ **ResNet34** & 73.3 & **72.9 / 70.0** & 71.9 / 68.2 \\ **ResNet50** & 76.1 & **75.6 / 73.1** & 74.2 / 69.4 \\ **Bert base** & 74.7 & **74.6 / 74.2** & 74.2 / 74.1 \\ \end{tabular} \end{table} Table 3: Results for finetuning with Gaussian injected noise using KL-divergence to freeze layers, compared with using standard deviation to select the layers to freeze. Standard deviation based layer freezing (shown in bold) outperforms layer selection using KL-divergence.

## 5 Conclusions and Future Work
We introduced Sensitivity-Aware Finetuning (SAFT) for fast finetuning of pretrained models to deal with noise. SAFT computes layer sensitivity using standard deviations and uses it to freeze the less sensitive layers. SAFT performs comparably to noise-injection training in terms of accuracy, while being faster at training. In the future, we will investigate additional metrics for SAFT, including combinations of metrics such as standard deviation and KL-divergence. We will also investigate techniques for easily identifying the \(k\) hyperparameter used in SAFT. We believe the layer sensitivity analysis can also be used for performing Partial Quantization and Quantization-Aware Training [14] in future work.
2302.11185
Quantum annealing with inequality constraints: the set cover problem
This paper presents two novel approaches for solving the set cover problem (SCP) with multiple inequality constraints on quantum annealers. The first method uses the augmented Lagrangian approach to represent the constraints, while the second method employs a higher-order binary optimization (HUBO) formulation. Our experimental analysis demonstrates that both approaches outperform the standard approach with slack variables for solving problems with inequality constraints on D-Wave quantum annealers. The results show that the augmented Lagrangian method can be successfully used to implement a large number of inequality constraints, making it applicable to a wide range of constrained problems beyond the SCP. The HUBO formulation performs slightly better than the augmented Lagrangian method in solving the SCP, but it is less scalable in terms of embeddability in the quantum chip. These findings could impact the use of quantum annealers for solving constrained optimization problems.
Hristo N. Djidjev
2023-02-22T07:39:51Z
http://arxiv.org/abs/2302.11185v1
# Quantum annealing with inequality constraints: the set cover problem ###### Abstract This paper presents two novel approaches for solving the set cover problem (SCP) with multiple inequality constraints on quantum annealers. The first method uses the augmented Lagrangian approach to represent the constraints, while the second method employs a higher-order binary optimization (HUBO) formulation. Our experimental analysis demonstrate that both approaches outperform the standard approach with slack variables for solving problems with inequality constraints on D-Wave quantum annealers. The results show that the augmented Lagrangian method can be successfully used to implement a large number of inequality constraints, making it applicable to a wide range of constrained problems beyond the SCP. The HUBO formulation performs slightly better than the augmented Lagrangian method in solving the SCP, but it is less scalable in terms of embeddability in the quantum chip. These findings could impact the use of quantum annealers for solving constrained optimization problems. _Keywords_: Set cover problem, Quantum annealing; Augmented Lagrangian method; Quadratic penalty method, D-Wave; Ising problem; QUBO; HUBO. Introduction ### Quantum annealing Quantum annealers, such as those produced by D-Wave Systems Inc., leverage quantum mechanical phenomena, including entanglement and tunneling, to tackle NP-hard optimization problems that are inherently challenging for classical computers. While D-Wave's machines are currently the most powerful quantum devices available, boasting over 5000 qubits, the state of Noisy Intermediate-Scale Quantum (NISQ) technology is still limited in its abilities and cannot yet outperform classical computers in solving practical problems due to high levels of noise and decoherence. However, each new generation of D-Wave quantum annealers is improving hardware to reduce noise and increase coherence time, enabling the machines to accurately solve progressively more complex problems. To solve a problem on a quantum annealer, it has to be formulated as a problem of the type \[\text{minimize }\textit{Is}(\mathbf{x})=\sum_{i<j}J_{ij}x_{i}x_{j}+\sum_{i}h_{i}x _{i}, \tag{1}\] where \(\mathbf{x}=\{x_{1},\ldots,x_{n}\}\), \(J_{ij}\) and \(h_{i}\) are real numbers, and variables \(x_{i}\) are either in \(\{-1,1\}\), in which case the problem is called an _Ising problem_, or in \(\{0,1\}\), when the formulation is called a _quadratic unconstrained binary optimization_ (_QUBO_) problem. The two representations are equivalent and can be easily converted into each other by using a linear variable transformation. Problem (1), which is a quadratic function of the variables \(x_{i}\), is known to be NP-hard [1] and many important optimization problems can be easily formulated as Ising or QUBO problems [21]. Such formulations are usually constructed in two steps. In the first step, the problem of interest is stated as a 0-1 quadratic programming problem, i.e., a problem to minimize a quadratic form of \(n\) binary variables subject to linear equality or inequality constraints. In the second step, that constrained problem is converted into an unconstrained one, which is necessary since problem (1) formulation doesn't allow constraints. Next we discuss methods to convert constrained problems into unconstrained ones. 
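For reference, formulation (1) over binary variables can be evaluated, and tiny instances brute-force minimised, with a few lines of code. The sketch below is purely illustrative and is not tied to D-Wave's tooling; the dictionary representation of \(h\) and \(J\) is an assumption of this sketch.

```python
from itertools import product

def qubo_energy(h, J, x):
    """Evaluate Is(x) = sum_{i<j} J_ij x_i x_j + sum_i h_i x_i for binary x."""
    energy = sum(h[i] * x[i] for i in h)
    energy += sum(J[i, j] * x[i] * x[j] for (i, j) in J)
    return energy

def spins_to_binary(s):
    """Map an Ising assignment s_i in {-1, +1} to binary x_i in {0, 1}."""
    return {i: (si + 1) // 2 for i, si in s.items()}

def brute_force_minimum(h, J, n):
    """Exhaustively minimise a small QUBO -- only viable for tiny n."""
    best = None
    for bits in product((0, 1), repeat=n):
        x = dict(enumerate(bits))
        e = qubo_energy(h, J, x)
        if best is None or e < best[1]:
            best = (x, e)
    return best

# Tiny example with a unique optimum at x0 = 0, x1 = 1 (energy -2.0).
h = {0: -1.0, 1: -2.0}
J = {(0, 1): 3.0}
print(brute_force_minimum(h, J, n=2))
```

On actual hardware the same \(h\) and \(J\) coefficients would be passed to the annealer rather than enumerated, but the energy function being minimised is the one shown here.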
### Handling constraints The _penalty method_ is the most commonly employed technique for dealing with constrained problems, in which the constraints are included in the objective function as penalty terms. For instance, a constraint \(\boldsymbol{a}^{\intercal}\boldsymbol{x}=b\) can be added to the objective as a penalty term \(P(\boldsymbol{x})=\mu(\boldsymbol{a}^{\intercal}\boldsymbol{x}-b)^{2}\), where \(\mu>0\) is a _penalty constant_. If \(\boldsymbol{x}\) satisfies the constraint, then \(P(\boldsymbol{x})=0\) and the penalty term doesn't change the value of the objective. But if \(P(\boldsymbol{x})\neq 0\), then \(P(\boldsymbol{x})>0\) and if the constant \(\mu\) is chosen large enough, then the penalty term will prevent \(\boldsymbol{x}\) to be a minimum of the combined objective. If we have an inequality constraint, we can first convert it into an equality one and then proceed as described above. For instance, a constraint \(\boldsymbol{a}^{\intercal}\boldsymbol{x}\leq b\) can be represented as \(\boldsymbol{a}^{\intercal}\boldsymbol{x}+d=b\), where \(d\geq 0\) is a new slack variable. But since problem (1) accept only binary variables and \(d\) can potentially be as large as \(b\), integer variable \(d\) has to be encoded using \(\lfloor\log b\rfloor+1\) binary variables. This may lead to a large number of new variables, especially if the problem has multiple inequality constraints. But with the size of the problem increasing, the limited availability of qubits and the reduced precision of current generation quantum devices can pose significant challenges [22]. Another issue with the penalty method is that a large penalty constant \(\mu\) can significantly impact the accuracy of quantum annealing. This is because all \(J_{ij}\) and \(h_{i}\) coefficients from (1) are normalized before being submitted to the annealer to satisfy hardware-imposed restrictions of the D-Wave device. Consequently, a large value for \(\mu\) leads to some coefficients of the resulting problem being very small in absolute value. This poses a challenge due to the analog nature of the quantum device and the finite precision of the digital-to-analog converter, resulting in further degradation of the accuracy of the quantum annealing. ### The set cover problem The problem we are considering in this paper, whose formulation involves dealing with multiple inequality constraints, is the _set cover problem (SCP)_. SCP is a classical optimization problem with multiple applications including in scheduling, resource allocation, logistics, and bioinformatics [30]. The weighted version of the problem is, given a set \(U\) of \(n\) elements \(\{1,\ldots,n\}\) and a set \(M\) of \(m\geq 2\) sets \(S_{i}\subset U\) with positive weights \(\textit{wt}_{i}\) such that \(\bigcup_{S\in M}S=U\), to find \(M^{*}\subseteq M\) for which \[\bigcup_{S_{i}\in M^{*}}S_{i}=U\] that minimizes \[\sum_{S_{i}\in M^{*}}\textit{wt}_{i}.\] In its unweighted version, all weights are one, and both versions are NP-hard [18]. In this paper we consider the weighted version of the SCP. ### Our objectives In this work, we propose two new approaches for more accurate constraint handling in solving larger SCP problems on current quantum annealers. The first approach, described in the next section, uses the augmented Lagrangian method to represent inequality constraints, as an alternative to the penalty method. 
The second approach formulates the SCP as a HUBO problem, which is similar to the QUBO formulation presented in (1) but permits higher degree monomials. These methods are intended to address the limitations imposed by the restricted number of qubits and the reduced precision of quantum annealers when handling a large number of variables. This work makes three main contributions. Firstly, we demonstrate that the augmented Lagrangian method can effectively handle a large number of inequality constraints when solving constrained problems with quantum annealing. While our application focuses on the SCP, this approach has broad applicability to other problem types. Secondly, we show that a HUBO representation can be leveraged to solve the SCP on a quantum annealer. This method can be extended to other problems with inequality constraints, although the problem must have certain structural characteristics. Lastly, our proposed methods produce QUBO or HUBO problems with only \(m\) variables, and our experimental results indicate that the current D-Wave Advantage machines can solve problems with up to 400 sets (variables). The paper is structured as follows. In Section 2, we give a brief literature review of related results. In Section 3, we provide a brief introduction to the augmented Lagrangian method and the HUBO formulation, and detail our proposed algorithms for handling constraints in quantum annealing. In Section 4, we present the experimental results, including a comparison of the accuracies achieved by our methods and those achieved by the penalty method. Finally, we conclude the paper with a summary of our findings and suggestions for future research directions. ## 2 Previous work Given the set cover problem's significance, many classical heuristic algorithms have been proposed for its solution. For the unweighted version of the problem, Johnson [16] and Lovasz [20] proposed algorithms that can find set covers with cost at most \(1+\ln d\) times the optimal one, where \(d\) is the maximum cardinality of sets in \(U\), while Chvatal [5] generalized their result for the weighted version. Feige [8] showed that the set cover cannot be approximated within a factor of \(\ln n\). A review of several algorithms for the SCP and comparison of their practical performances is given in [4]. There is much less work reported in the literature on quantum algorithms for the SCP. Lucas [21] describes a QUBO formulation for the SCP, among other NP-hard problems, but does not implement it on a quantum annealer. The number of variables for his formulation is \(m+n(\log m+1)\). For the related problem of _minimum vertex cover_, Pelofske et al. [23] design a quantum annealing algorithm that can deal with problems that are too large to fit onto the quantum hardware by decomposing them into smaller subproblems. Zhang et al. [32] propose quantum approximate optimization algorithm (QAOA), which uses a model for quantum computing different from quantum annealing, for the minimum vertex cover problem and apply it to graphs of ten vertices. However, the minimum vertex cover problem is simpler than set cover in the sense that its standard formulation involves a single equality constraint and no inequality ones. Cao et al. [3] consider the _set cover with pairs_ problem, introduced in [10], and propose a QUBO formulation for it that uses \(O(nm^{2})\) binary variables. 
They run a quantum annealing simulator on instances requiring no more than 19 variables, and also solve the QUBO problems using simulated annealing on the same set of instances. The augmented Lagrangian method (ALM) was introduced in 1969 by Hestenes and Powell [11, 25] and has been widely studied in the field of optimization. Different variants of the method have been applied to a wide range of problems, including quadratic programming [27], nonlinear programming [6], and convex optimization [15]. In quantum computing, ALM has been used in [26] for solving the quantum-chemical ground-state energy problem on a gate-based quantum computers. Yonaga et al. [31] use the alternating direction method of multipliers, a variant of ALM, to solve the quadratic knapsack problem. Djidjev [7] applies ALM for representing logical qubits in quantum annealers. We also use higher order binary optimization (HUBO) problem formulations, which have been the subject of intense research. Several works have focused on quadratization, or reduction of the HUBO to a quadratic form. Kolmogorov and Zabih [19] and Freedman and Drineas [9] show how to quadratize any monomial with a negative coefficient, regardless of its degree, by introducing a single auxiliary variable. Ishikawa [13] propose a method that results in a more efficient quadratization for positive monomials, utilizing approximately half the number of variables compared to previous techniques. Methods that don't introduce auxiliary variables but instead enumerate assignments to a small subset of the variables in order to reduce the degree include using deductions [28] and excludable local configurations [14]. Boros and Gruber [2] review the previous work on quadratization and propose new techniques. HUBO formulations have been used for quantum annealing by Pelofske et al. for Boolean tensor networks [24], by Mato for molecule unfolding, and Jun for prime factorization [17]. ## 3 Methods ### Using slack variables The standard approach to deal with the inequalities of the SCP is using slack variables in order to convert them to equalities and then use the quadratic penalty method to incorporate the resulting equalities into the objective function [21]. Specifically, we define a binary variable \(x_{i}\in\{0,1\}\) for each set \(S_{i}\in M\) that indicates whether the set is included in the final solution or not. Then the objective function to minimize is \[Q_{A}=\sum_{i=1}^{m}x_{i}wt_{i}.\] If we define \(\sigma_{i}=\{j\ |\ S_{j}\ni\,i\}\), the constraint that at least one of the selected sets covers element \(i\) is \(\sum_{j\in\sigma_{i}}x_{j}\geq 1\). We convert this into the equality \[\sum_{j\in\sigma_{i}}x_{j}=d_{i}+1,\] where \(d_{i}\) is a new integer variable in \([0,m-1]\). We encode \(d_{i}\) using \(k=\lfloor\log(m-1)+1\rfloor\) binary variables \(x_{i,\alpha}\). Then the QUBO encoding all the constraints is \[Q_{B}=\sum_{i=1}^{m}\bigg{(}\big{(}\sum_{j\in\sigma_{i}}x_{j}-\sum_{\alpha=0}^ {k}2^{\alpha}x_{i,\alpha}-1\big{)}^{2}\bigg{)}. \tag{2}\] Finally, we combine \(Q_{A}\) and \(Q_{B}\) into a single QUBO \[Q=Q_{A}+\mu\,Q_{B},\] where \(\mu\) is a constant satisfying \(\mu>\max\{\mathit{wt}_{i}\}\)[21]. Although the penalty constant \(\mu\) itself is usually not large, the issue is with the coefficients \(2^{\alpha}\), which may become as large as \(m\). In the next two sections, we describe the proposed new approaches. 
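Before moving on, the slack-variable construction just described can be assembled as a QUBO coefficient dictionary as follows. This is an illustrative sketch: variable naming and helper functions are not from the paper, and the loop runs over the \(n\) elements whose coverage constraints are being enforced.

```python
from collections import defaultdict
from math import floor, log2

def add_squared_linear(Q, terms, const, weight):
    """Add weight * (sum_j c_j v_j + const)^2 to the QUBO dict Q.

    Uses v^2 = v for binary variables, so diagonal terms fold into the
    linear coefficients Q[(v, v)]; the constant weight*const**2 is dropped.
    """
    for a, (va, ca) in enumerate(terms):
        Q[(va, va)] += weight * ca * (ca + 2 * const)
        for vb, cb in terms[a + 1:]:
            Q[(va, vb)] += weight * 2 * ca * cb

def set_cover_qubo_slack(sets, weights, n, mu):
    """Slack-variable QUBO for the SCP: Q = Q_A + mu * Q_B (a sketch).

    sets[i] holds the elements of S_{i+1}; variables are ('x', i) for
    set-selection bits and ('s', e, a) for the slack bits of element e.
    """
    Q = defaultdict(float)
    m = len(sets)
    for i, wt in enumerate(weights):                  # Q_A: total weight of chosen sets
        Q[(('x', i), ('x', i))] += wt
    n_slack_bits = floor(log2(m - 1)) + 1 if m > 1 else 1
    for e in range(1, n + 1):                         # one coverage constraint per element
        sigma = [i for i, S in enumerate(sets) if e in S]
        terms = [(('x', i), 1.0) for i in sigma]
        terms += [(('s', e, a), -float(2 ** a)) for a in range(n_slack_bits)]
        add_squared_linear(Q, terms, const=-1.0, weight=mu)
    return dict(Q)
```

In practice the tuple-keyed dictionary would be relabelled with integer indices before being sent to a sampler, and the choice \(\mu>\max\{\mathit{wt}_{i}\}\) from the text applies unchanged.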
### Augmented Lagrangian version #### 3.2.1 The general method The augmented Lagrangian method (ALM) for solving constrained problems combines the penalty method, used in the previous subsection, with the method of the Lagrangian multipliers. Specifically, in the case of inequalities, all inequality constraints of type \(c_{i}(\boldsymbol{x})=\boldsymbol{a}_{i}{}^{\intercal}\boldsymbol{x}-b_{i}\leq 0\), \(i=1,\ldots,n\), can be included into the objective as an additive term \[\boldsymbol{\lambda}^{\intercal}\boldsymbol{c}(\boldsymbol{x})+\frac{\mu}{2} ||\boldsymbol{c}(\boldsymbol{x})||^{2}=\sum_{i=1}^{n}\big{(}\lambda_{i}c_{i}( \boldsymbol{x})+\frac{\mu}{2}||c_{i}(\boldsymbol{x})||^{2}\big{)},\] where the coefficients \(\lambda_{i}\) are called _Lagrangian multipliers_, \(\boldsymbol{\lambda}=\{\lambda_{1},\ldots,\lambda_{n}\}\), and \(\boldsymbol{c}=\{c_{1},\ldots,c_{n}\}\). Coefficients \(\boldsymbol{\lambda}\) and \(\mu\) are estimated using an iterative procedure as described in Algorithm 1. Note that a version with both equality and inequality constraints is possible. In the next subsection we apply the method to the SCP. #### 3.2.2 Applying the AL method to the SCP First, we formulate the SCP problem as a \(0{-}1\) linear program with constraints and then we apply the AL method to get rid of the inequalities. As in Section 3.1, we define a binary variable \(x_{i}\) for each \(i\in[1,m]\) such that \(x_{i}=1\), if subset \(S_{i}\) is selected for the cover, or \(x_{i}=0\), otherwise. Then the SCP can be formulated as \[\underset{x_{i}}{\text{minimize}}\quad\sum_{i=1}^{m}x_{i}wt_{i} \tag{3}\] \[\text{subject to}\quad\sum_{j\in\sigma_{i}}x_{j}\geq 1,\;x_{j}\in \{0,1\},\;i=1,...,n. \tag{4}\] The corresponding augmented Lagrangian function, which is the new objective of the minimization problem, is \[\text{AL}(\mathbf{x})=\sum_{i=1}^{m}x_{i}wt_{i}+\sum_{i=1}^{n}\lambda_{i}\big{(}1 -\sum_{j\in\sigma_{i}}x_{j}\big{)}+\frac{\mu}{2}\sum_{i=1}^{n}\big{(}1-\sum_{j \in\sigma_{i}}x_{j}\big{)}^{2}\] \[=\sum_{i=1}^{m}x_{i}wt_{i}+\sum_{i=1}^{n}\Big{(}\lambda_{i}\big{(}1-\sum_{j \in\sigma_{i}}x_{j}\big{)}+\frac{\mu}{2}\big{(}1-\sum_{j\in\sigma_{i}}x_{j} \big{)}^{2}\Big{)}.\] Since \[\big{(}1-\sum_{j\in\sigma_{i}}x_{j}\big{)}^{2}=1+\sum_{j\in\sigma_{i}}x_{j}^{2}+2 \underset{j<k\in\sigma_{i}}{\sum}x_{j}x_{k}-2\sum_{j\in\sigma_{i}}x_{j},\] and using that \(x_{j}^{2}=x_{j}\) for \(x_{j}\in\{0,1\}\), we get \[\text{AL}(\mathbf{x}) =\sum_{i=1}^{m}x_{i}\textit{wt}_{i}+\sum_{i=1}^{n}\Big{(}-\lambda _{i}\sum_{j\in\sigma_{i}}x_{j}+\mu\underset{j<k\in\sigma_{i}}{\sum}x_{j}x_{k}- \frac{\mu}{2}\sum_{j\in\sigma_{i}}x_{j}\Big{)}+C,\] \[=\sum_{i=1}^{m}x_{i}\textit{wt}_{i}+\sum_{i=1}^{n}\Big{(}(- \lambda_{i}-\frac{\mu}{2})\sum_{j\in\sigma_{i}}x_{j}+\mu\underset{j<k\in\sigma _{i}}{\sum}x_{j}x_{k}\Big{)}+C, \tag{5}\] where \(C\) is a constant, which can be ignored when solving the optimization problem. From (5), we can get the coefficients \(J_{ij}\) and \(h_{i}\) of the QUBO representation (1) and solve that QUBO on a quantum annealer, updating parameters \(\mathbf{\lambda}\) and \(\mu\) at each iteration as specified in Algorithm 1. ### HUBO version Higher-order binary optimization (HUBO) is a generalization of quadratic unconstrained binary optimization (QUBO) to higher-order polynomials. Each HUBO problem can be converted into a QUBO problem by defining auxiliary variables that encode products of other variables in a way that leads to decreasing the polynomial degree. 
For instance, if monomial \(x_{1}x_{2}x_{3}\) is part of a HUBO, one can define a new variable \(u=x_{1}x_{2}\) and replace \(x_{1}x_{2}x_{3}\) by \(ux_{3}\). The constraint \(u=x_{1}x_{2}\) can be enforced using a well-known penalty quadratic function \(x_{1}x_{2}-2(x_{1}+x_{2})u+3u\)[12], resulting in the QUBO \[ux_{3}+\mu(x_{1}x_{2}-2(x_{1}+x_{2})u+3u),\] which can replace \(x_{1}x_{2}x_{3}\) for sufficiently large penalty \(\mu\). Applying this repeatedly can convert a HUBO of any degree into a QUBO. To formulate a HUBO version of the SCP, we start with the 0-1 linear program (3)-(4). The constraint \(\sum_{j\in\sigma_{i}}x_{j}\geq 1\) means that, for at least one \(j\in\sigma_{i}\), \(x_{j}=1\), which means that, for at least one \(j\in\sigma_{i}\), \(1-x_{j}=0\). Hence, the constraints (4) are equivalent to \[\prod_{j\in\sigma_{i}}(1-x_{j})=0,\;i=1,...,n,\] and \[\sum_{i=1}^{n}\prod_{j\in\sigma_{i}}(1-x_{j})=0. \tag{6}\] Define new binary variables \(y_{i}=1-x_{i}\), \(i=1,\ldots,m\). Replacing \(x_{j}\) in (6) and (3) with \(y_{j}\) and combining them into a single function, we get the HUBO formulation of the SCP \[\text{minimize}\bigg{(}\sum_{i=1}^{m}\mathit{wt}_{i}-\sum_{i=1}^{m}y_{i}\mathit{wt}_{i}+\mu\sum_{i=1}^{n}\prod_{j\in\sigma_{i}}y_{j}\bigg{)}, \tag{7}\] where \(\mu\) is a penalty coefficient. Clearly, the constant \(\sum_{i}\mathit{wt}_{i}\) can be ignored for the optimization. It is easy to see that it is enough to choose \(\mu>\max\{\mathit{wt}_{i}\}\). Assume that \(\mu>\max\{\mathit{wt}_{i}\}\) and, in a solution \(\{y_{i}\}\) of (7), there is an element \(e\) that is not covered, i.e., \(\prod_{j\in\sigma_{e}}y_{j}=1\). Then there will exist a set \(S_{i}\) with \(i\in\sigma_{e}\) that is not in the cover, i.e., \(y_{i}=1\) (since by the SCP definition \(\bigcup_{i}S_{i}=U\)). Adding \(S_{i}\) to the cover, i.e., changing \(y_{i}\) to \(0\), will change the value of the objective function by \(\mathit{wt}_{i}-\mu<0\), which contradicts the assumption that \(\{y_{i}\}\) is an optimal solution of (7).

## 4 Results
### Implementation of the algorithms
For our experiments, we use the D-Wave Advantage_system4.1 quantum annealer, which we call _DWA_ hereafter, available through the Leap quantum cloud service. The annealing parameters we use to control the annealing process are \(\texttt{num\_reads}=1000\) for the number of samples returned per call to the annealer, \(\texttt{annealing\_time}=100\), which sets the number of microseconds for the annealing time, and \(\texttt{chain\_break\_method}=\texttt{MinimizeEnergy}\). We also use the flux_biases parameter, which is used to help control some hardware biases. Unless explicitly stated otherwise, all other parameters are set to their default values. We test the proposed new algorithms and compare them against the standard approach on random instances of the SCP. To generate the test problems, our generator for SCP instances takes as input the number of sets \(m\), the number of elements \(n\), and the _coverage_ \(c\), defined as the average number of sets covering an element of \(U=\{1,\ldots,n\}\). The generator initially creates \(m\) empty sets and then randomly places elements in sets \(S_{i}\) until the following conditions are satisfied at completion: (i) each element of \(U\) is contained in at least two sets, (ii) each set contains at least one element, and (iii) \(\sum_{i}|S_{i}|\geq mc\). Finally, a random weight \(\mathit{wt}_{i}\in[0,1]\) for \(i=1,\ldots,m\) is assigned to each set \(S_{i}\).
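A generator following this description might look as follows; this is an illustrative sketch, and the authors' actual generator may differ in details such as the order in which the conditions are enforced.

```python
import random

def generate_scp_instance(m, n, c, seed=None):
    """Random weighted SCP instance roughly following the description above:
    every element lies in at least two sets, every set is non-empty, and
    sum_i |S_i| >= m*c (condition (iii))."""
    rng = random.Random(seed)
    sets = [set() for _ in range(m)]

    # (i) Each element must appear in at least two distinct sets.
    for e in range(1, n + 1):
        i, j = rng.sample(range(m), 2)
        sets[i].add(e)
        sets[j].add(e)
    # (ii) Every set must contain at least one element.
    for s in sets:
        if not s:
            s.add(rng.randint(1, n))
    # (iii) Keep adding random memberships until sum_i |S_i| >= m*c.
    while sum(len(s) for s in sets) < m * c:
        rng.choice(sets).add(rng.randint(1, n))

    weights = [rng.random() for _ in range(m)]        # wt_i in [0, 1]
    return sets, weights
```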
We use different size parameters for \(m\) in \(\{50,75,100,\ldots,400\}\) and, for each \(m\), we compute a set \(N_{m}\) of three values for \(n\) defined as \(N_{m}=\{\lceil 0.5m\rceil,\,\lceil 0.75m\rceil,\,m\}\). For the value of the coverage we use \(c=3\). For each type of experiment and combination of \(m\) and \(n\), we generate three random instances and average the values of the measured characteristic over the three instances. If the annealer returns an infeasible solution, i.e., one that doesn't correspond to a valid cover, we define as the cost of that solution the sum of all weights, which corresponds to the trivial cover consisting of all sets. The algorithms used in the experiments are the following (see Table 1).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Name & Inequalities method & Optimization method & Classical/quantum \\ \hline \hline SV\_QA & slack variable & quantum annealing & quantum \\ \hline AL\_SA & augmented Lagrangian & simulated annealing & classical \\ \hline AL\_QA & augmented Lagrangian & quantum annealing & quantum \\ \hline HUBO\_SA & HUBO & simulated annealing & classical \\ \hline HUBO\_QA & HUBO & quantum annealing & quantum \\ \hline \end{tabular} \end{table} Table 1: Implemented algorithms used for the experimental analysis.

SV_QA: based on the standard slack-variable formulation from Section 3.1 and implemented on a quantum annealer.
AL_SA: uses the augmented Lagrangian representation as a model and the simulated annealing method to do the optimization. _Simulated annealing_ [29] is a classical optimization method that uses a probabilistic approach to explore the solution space in search of a global minimum. To escape local minima, it uses gradual cooling from a high temperature to a low one, which changes the probability of accepting moves that increase the objective function from high in the beginning to low at the end. We have opted for simulated annealing in our experiments because, being a general-purpose heuristic for global optimization, it serves as a classical analogue of quantum annealing. To carry out our experiments, we utilized the implementation of simulated annealing that is included in the D-Wave Ocean software. The ALM parameters from Algorithm 1 are \(0.5\) for the initial \(\mu\), \(0\) for the initial \(\lambda\), and \(1.1\) for the increase factor \(\rho\).
AL_QA: the quantum version of AL_SA, combining ALM with quantum annealing. We use the same ALM parameters as AL_SA.
HUBO_SA: uses the HUBO formulation from Section 3.3. For solving the HUBO we use the simulated annealing solver of Ocean. Note that, since simulated annealing is not restricted to quadratic models, we don't need a conversion to a QUBO in this algorithm.
HUBO_QA: the quantum version of HUBO_SA, as it uses the same HUBO formulation of the SCP. But in order to solve on the DWA, we also need a HUBO-to-QUBO converter, for which we use the one supplied by the Ocean software.
In the next three subsections, we will analyze the implementations of the augmented Lagrangian and HUBO algorithms and compare all five algorithms with respect to the quality of the proposed solutions to the SCP.

### Augmented Lagrangian method iterations
Our focus here is on analyzing the iterations of ALM, specifically, examining the rate of decrease in the number of non-satisfied constraints as the iteration number increases. Figure 1 shows the percentage of uncovered elements (non-satisfied constraints) for AL_SA and AL_QA and for numbers of variables \(m\in\{50,100,150\}\).
The number of ALM iterations is set to \(10\). We observe that, in all cases, smaller values of \(n\) yield better performance. Although this may seem self-evident, we will see in the next subsection that decreasing the value of \(n\) makes the problem harder in other aspects. Regarding the dependence on the value of \(m\), we see for AL_QA that, as \(m\) increases, the percentage of uncovered elements also goes up. We cannot see such a clear trend for AL_SA. Comparing the AL_SA and AL_QA implementations, we see that, with respect to this criterion, AL_QA has better performance. Finally, we look at the value of the penalty factor \(\mu\) from Algorithm 1. The magnitude of \(\mu\) is important, especially in the case of quantum annealing, since it is used to scale up some coefficients, and large values of \(\mu\) can negatively affect the accuracy of the annealing, as discussed earlier. While Figure 1 doesn't directly show these values, they can be easily calculated given the iteration number \(i\); e.g., in our implementation, \(\mu(i)=0.5(1.1)^{i}\). Figure 1 also shows the average iteration number where the best solution was found. We observe that, in the case \(m=50\), the best-iteration numbers and the corresponding values of \(\mu\) are lower compared to \(m=100\) and \(m=150\), especially for algorithm AL_QA. However, we don't see a significant difference when we compare \(m=100\) and \(m=150\). One possible explanation is that the number of iterations, ten, may not be enough for some instances with large values of \(m\), and the best iteration number for \(m=150\) may be greater than \(10\). But increasing the number of iterations also increases the cost of the algorithm.

Figure 1: Number of elements not covered, per iteration step, for \(m\in\{50,100,150\}\). Shown at the top is AL_SA and at the bottom it is AL_QA. A "\(\times\)" symbol indicates the average iteration number at which the best solution for the corresponding problem size combination has been obtained.

### Number of variables and embeddability
The computational complexity of a classical optimization algorithm goes up with the number of variables of the instance. For quantum algorithms, a larger number of variables usually means lower quality of the solution. But a large number of variables may also mean that the QUBO does not fit on the quantum device, so the problem may not be solvable at all. Two key factors influence the sizes of problems that can be solved on a quantum computing device: the number of qubits and the connections between them. The DWA device has more than 5000 qubits, with each qubit connected to no more than 15 other qubits. The sparsity of connections means that, for most problems, several connected qubits have to be combined into a single logical qubit to represent one binary variable. Hence, the sizes of the problems that can be solved on the DWA, in terms of the number of binary variables, may be much smaller than the number of available qubits. In this section, we analyze what sizes of problems are solvable for each of the three quantum algorithms: SV_QA, AL_QA, and HUBO_QA. First, let us analyze the number of variables. SV_QA uses a QUBO formulation with \(m+n(\log m+1)\) variables [21]. AL_QA also uses a QUBO formulation and its number of binary variables is \(m\). Finally, HUBO_QA uses a HUBO formulation with \(m\) variables.
But in order to convert it to a quadratic form, one should define a number of auxiliary variables to reduce the degree of the monomials, so the final number of variables is higher. The number of auxiliary variables depends on the implementation, and we don't have a formula for the one used by the Ocean software. But we can analyze it experimentally by counting the number of final variables for each test instance. Figure 2, bottom, shows the number of binary variables for HUBO_QA for different SCP sizes. For each value of \(m\), we observe that the number of binary variables increases with decreasing value of \(n\). While this might look counter-intuitive, it is based on the fact that the number of auxiliary variables depends on the degrees of the monomials in the HUBO, as each monomial of degree larger than two needs additional variables to quadratize it. Furthermore, the degree of the \(i\)-th monomial is the size of the set \(\sigma_{i}\) (see (7)), which is the number of all sets containing element \(i\). Smaller values of \(n\) mean more sets covering each element on average and, hence, higher degree monomials. We can also see that the number of variables for HUBO_QA grows, when \(m\) is varied between 50 and 400, from 95.5 to 794.2 for \(n=m\), and from 133.6 to 1170.4 for \(n=m/2\). Figure 2, top and middle, displays the number of couplers (quadratic coefficients) for SV_QA and AL_QA, respectively. We don't plot the number of QUBO variables for these algorithms as they can be calculated using explicit formulas, which are \(m+n(\log m+1)\) and \(m\). The number of couplers is important since, without all-to-all connectivity, a larger number of couplers usually means that more qubits are needed to represent a single QUBO variable, which reduces the sizes of the embeddable problems. We observe a similar pattern, which is that the number of couplers increases when \(m\) goes up or \(n\) goes down. Specifically, when \(n\) decreases, the average size of \(\sigma_{i}\) increases, while the number of couplers for element \(i\) goes up as roughly \((|\sigma_{i}|+\log m)^{2}\) for SV_QA and \(|\sigma_{i}|\) for AL_QA, see (2) and (5). Specifically, for \(m=275\) and \(n=138\), the number of couplers for SV_QA is 7794 and for AL_QA it is 2326. Whether a particular instance of the SCP can be embedded in the DWA chip depends on both the _number_ of variables and the number of couplers, but it also depends on the specific _connection patterns_, e.g., the locality of the connections. Since the connection patterns are hard to quantify, the ultimate criterion for evaluating and comparing the algorithms we use is the likelihood for problems with given values of \(n\) and \(m\) to be embeddable. In Figure 2, the colors indicate what portion of the ten random problems generated for each specific combination of \(n\) and \(m\) could be successfully embedded. We can observe that, for \(m\) up to 150, all algorithms produce embeddable QUBOs 100% of the time. At \(m=175\), SV_QA could still embed all problems, except one instance for \(n=138\), and the other two algorithms can embed all instances 100% of the time. Despite that single infeasible instance, we take \(m=175\) to be the largest value of \(m\) for which all methods are able to produce solutions, and in the next subsection we compare the methods with respect to their accuracy for \(m\) up to 175.

Figure 2: Comparison of the three quantum algorithms with respect to the number of variables or couplers. On the top is SV_QA, in the middle it is AL_QA, and at the bottom it is HUBO_QA. The colors show what fraction of the problem instances are embeddable into the DWA chip.
For values of \(m\) between 200 and 275, all instances of AL_QA and HUBO_QA are embeddable, while none of the SV_QA instances are. Although HUBO_QA has more embeddable instances for \(m\geq 300\), both methods have a 33% embeddability rate for \(m=400\), and for \(m=425\), no instances are embeddable. It is noteworthy that HUBO_QA performs just slightly worse than AL_QA with respect to embeddability in the DWA, despite having a substantially higher number of QUBO variables (ranging from two to three times more, depending on \(n\)).

### Solution quality comparison
Figure 3 shows the average costs of the covers computed by the five algorithms discussed in this paper for different values of \(m\) and \(n\). To make the comparisons easier, the weights on the sets for each value of \(m\) have been renormalized so that the bar height for HUBO_QA is one. Overall, in terms of solution quality, HUBO_QA performs the best, while SV_QA performs the worst. To provide a more detailed analysis, we compare the performance of quantum and classical algorithms, followed by a comparison of the algorithms based on the optimization method used. Our analysis compares the performance of quantum and classical versions of our proposed methods, specifically AL_QA vs. AL_SA and HUBO_QA vs. HUBO_SA. We found that quantum annealing (QA) consistently outperforms simulated annealing (SA), particularly in the most challenging case where \(n=\lceil m/2\rceil\) and when comparing AL_QA with AL_SA. On the other hand, HUBO_SA is just slightly worse than HUBO_QA. A possible explanation is that, while the HUBO_QA performance degrades as the size of the problem increases due to the HUBO-to-QUBO conversion, our implementation of HUBO_SA directly applies simulated annealing to the HUBO representation and does not suffer from an increased number of variables. Finally, let us compare the algorithms with respect to the methods used to implement the optimization constraints: slack variables (SV_QA), augmented Lagrangian (AL_SA and AL_QA), and HUBO (HUBO_QA and HUBO_SA). The slack variables approach finds worse solutions than the others. Except in one case, \(m=175\), where AL_SA performs slightly worse, the solutions found by SV_QA have costs about twice as high, on average, as the costs of the solutions produced by the other methods. In contrast to the other methods, the performance of SV_QA is relatively consistent across different values of \(n\) when \(m\) is fixed, resulting in little variation in the quality of the solutions obtained. The HUBO method performed the best, with both the QA and SA implementations finding better solutions than the other methods. ALM, in particular its QA version, performed slightly worse, but seems to be a viable alternative.

## 5 Conclusion
This paper focuses on the set cover problem and the challenge of implementing multiple inequality constraints on a quantum annealer. We compare the standard approach based on slack variables [21] with two new approaches based on the augmented Lagrangian method (ALM) and higher-order binary optimization (HUBO), respectively. Our experimental analysis shows that both new approaches outperform the standard approach. The HUBO approach finds the highest quality solutions and is easy to implement.
However, unlike ALM, which is more general and can be applied straightforwardly to other problems with inequality constraints, the HUBO formulation relies on the specific structure of the set cover problem and may not be applicable to other problems with inequality constraints. Also, the HUBO approach is less scalable than ALM in terms of embeddability in the quantum chip of D-Wave. Figure 3: Comparison of the methods based on the set cover costs of computed solutions. Each method and combination of \(m\in\{50,75,\ldots,175\}\) is represented by a bar, with shading indicating a different value of \(n\). We also demonstrate that even with a large number of inequality constraints, the augmented Lagrangian method may be a viable approach for solving constrained problems on a quantum annealer. We perform experiments with problems having up to 400 constraints and find good quality solutions. However, the quality of solutions tends to degrade slowly with increasing \(n\) and \(m\), but it is much more affected by the number of quadratic couplers of the QUBO, which is determined by the ratio of \(n/m\). Our results could be applicable to solving other optimization problems with constraints on quantum annealers. Future research directions could include improving the implementation of the augmented Lagrangian optimization procedure (Algorithm 1), for instance, by updating the stopping criterion or the method for updating the values of \(\mu\) and \(\mathbf{\lambda}\). Additionally, the conversion from HUBO to QUBO could be improved to produce fewer auxiliary variables. The Ocean implementation we used is good, but better implementations are possible, especially those that take into account the specific structure of the HUBO. ## Acknowledgments This work was supported by grant number KP-06-DB-11 of the Bulgarian National Science Fund and by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project 20210114ER. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (contract No. 89233218CNA000001).
2305.05157
A Generalized Covering Algorithm for Chained Codes
The covering radius is a fundamental property of linear codes that characterizes the trade-off between storage and access in linear data-query protocols. The generalized covering radius was recently defined by Elimelech and Schwartz for applications in joint-recovery of linear data-queries. In this work we extend a known bound on the ordinary covering radius to the generalized one for all codes satisfying the chain condition -- a known condition which is satisfied by most known families of codes. Given a generator matrix of a special form, we also provide an algorithm which finds codewords which cover the input vectors within the distance specified by the bound. For the case of Reed-Muller codes we provide efficient construction of such generator matrices, therefore providing a faster alternative to a previous generalized covering algorithm for Reed-Muller codes.
Ben Langton, Netanel Raviv
2023-05-09T03:50:55Z
http://arxiv.org/abs/2305.05157v1
# A Generalized Covering Algorithm for Chained Codes

###### Abstract

The covering radius is a fundamental property of linear codes that characterizes the trade-off between storage and access in linear data-query protocols. The generalized covering radius was recently defined by Elimelech and Schwartz for applications in joint-recovery of linear data-queries. In this work we extend a known bound on the ordinary covering radius to the generalized one for all codes satisfying the chain condition--a known condition which is satisfied by most known families of codes. Given a generator matrix of a special form, we also provide an algorithm which finds codewords which cover the input vector(s) within the distance specified by the bound. For the case of Reed-Muller codes we provide efficient construction of such generator matrices, therefore providing a faster alternative to a previous generalized covering algorithm for Reed-Muller codes.

Covering codes; Reed-Muller codes.

## I Introduction

The covering radius of a code is the minimum integer \(r\) such that any vector in the space is within Hamming distance at most \(r\) from a codeword of the code. This fundamental property of codes is very well understood [1], and has applications in low-access algorithms for linear queries in databases [1, 4]. Motivated by joint-recovery of multiple linear queries simultaneously, the _generalized_ covering radius was recently introduced in [2, 3]. Roughly speaking, the \(t\)-th generalized covering radius is the maximum number of coordinates in which \(t\) vectors can differ from \(t\) codewords, across all \(t\)-subsets of the code; for \(t=1\) this definition specifies the (ordinary) covering radius. While little is known about the generalized covering radius for most codes, upper and lower bounds were established for Reed-Muller codes in [3] and for some other codes and values of \(t\) in [2]. They also provided an algorithm which, given a set of \(t\) vectors, finds \(t\) codewords within their bound for Reed-Muller codes. As noted in [2], the generalized covering radius is closely related to Generalized Hamming Weights (GHW), introduced by Wei in [9, 10]. In this paper we directly link the two by showing that for every code which satisfies the _chain condition_, the generalized covering radius can be bounded using the GHWs. The chain condition [10] asserts that there exists a generator matrix which realizes the GHWs, and is satisfied by most important families of codes. Our bound follows from combining the bound in [6] for GHWs with one of the equivalent definitions of the generalized covering radius given in [2]. Our bound implies an efficient algorithm that, for a given set of \(t\) vectors, finds a corresponding set of \(t\) codewords which differ from the \(t\) vectors by at most the value of the bound. This algorithm requires a _chained generator matrix_, a generator matrix of a special form which is guaranteed to exist in all chained codes. We further show how a chained generator matrix for Reed-Muller codes can be found easily. This results in a generalized covering algorithm for Reed-Muller codes, which exponentially improves the runtime of the one given in [3], albeit with a potentially negative impact on performance. Our algorithm also applies to \(q\)-ary Reed-Muller codes for \(q>2\), that were not addressed by [3].
Finally, since the algorithm provides codewords up to the value of the bound, which might be larger than the generalized covering radius of the code, we computationally compare our bound to the best known ones for Reed-Muller codes [3]. Our experiments suggest that the given bound outperforms the best known ones for Reed-Muller codes in some parameter regimes.

## II Preliminaries

### _The Generalized Covering Radius_

The \(t\)-th generalized covering radius \(R_{t}(C)\) [2] is defined as follows, where \([t]\) denotes \(\{1,\ldots,t\}\), and \(\binom{[n]}{r}\) denotes the family of subsets of size \(r\) of \([n]\). **Definition 1**.: _[_2_]_ _Let \(C\) be an \([n,k]\) code over \(\mathbb{F}_{q}\). For \(t\in\mathbb{N}\), the \(t\)-th generalized covering radius \(R_{t}(C)\) is the minimal integer \(r\) such that for every \(v_{1},v_{2},..,v_{t}\in\mathbb{F}_{q}^{n}\), there exist codewords \(c_{1},c_{2},...,c_{t}\in C\) and \(I\in\binom{[n]}{r}\) such that \(\operatorname{supp}(v_{i}-c_{i})\subseteq I\) for all \(i\in[t]\)._ It can be readily verified that this definition specializes to the well-known covering radius by setting \(t=1\). Intuitively, the \(t\)-th generalized covering radius \(r\) is the maximum number of coordinates in which \(t\) vectors can differ from the \(t\) codewords which minimize this number. The quantity \(R_{t}(C)\) can be defined in multiple equivalent ways, out of which we make use of the following one in the sequel. **Definition 2**.: _[_2_]_ _Let \(C\) be an \([n,k]\) linear code over \(\mathbb{F}_{q}\) with generator matrix \(G\), and \(C_{t}\) be the \([n,k]\) linear code over \(\mathbb{F}_{q^{t}}\) with generator matrix \(G\). Then \(R_{t}(C)=R_{1}(C_{t})\)._ We present a few basic results about the generalized covering radius below. We first have that the generalized covering radii are monotone increasing for a given code. We omit the reference to a specific code \(C\) whenever unnecessary. **Theorem 1**.: _[_2_]_ \(R_{1}\leq R_{2}\leq...\leq R_{n-k}=n-k\)_._ Clearly, in order to cover any \(t\) given vectors, one can use the ordinary covering radius \(t\) times, which gives rise to the next theorem. The crux of studying \(R_{t}\) is in cases in which this inequality is strict. **Theorem 2**.: _[_2_]_ _Let \(C\) be an \([n,k]\) code over \(\mathbb{F}_{q}\). Then for all \(t_{1}\), \(t_{2}\in\mathbb{N}\), we have \(R_{t_{1}+t_{2}}\leq R_{t_{1}}+R_{t_{2}}\)._ For example, these two theorems readily imply the generalized covering radii of the Hamming code, which is known to have a covering radius of 1. By the fact that \(R_{1}=1\) and \(R_{n-k}=n-k\), and Theorems 1 and 2, we have that \(R_{t}=t\) for \(t\leq n-k\).

### _Generalized Hamming Weights and Chained Codes_

The generalized Hamming weights, introduced in [9], extend the minimum distance of a code in a similar way as the generalized covering radius extends the ordinary covering radius. Recall the definition of the support of a code, \(\text{supp}(C)=\{i:\exists(x_{1},...,x_{n})\in C,x_{i}\neq 0\}\), and define the following. **Definition 3**.: _[_9_]_ _The \(r\)-th generalized Hamming weight of a code \(C\) is \(d_{r}(C)=\min\{|\text{supp}(D)|:D\subseteq C,\dim(D)=r\}\)._ The _chain condition_ is then defined as follows.
**Definition 4**.: _[_10_]_ _An \([n,k]\) linear code \(C\) with GHWs \(d_{1}(C),d_{2}(C),...,d_{k}(C)\) satisfies the chain condition (abbrv: chained code) if there are \(k\) linearly independent vectors \(c_{1},c_{2},...c_{k}\) such that \(d_{r}(C)=|\bigcup_{i=1}^{r}\text{supp}(c_{i})|\) for every \(r\in\{1,\ldots,k\}\)._ Intuitively, the span of each \(t\) prefix of the basis \(c_{1},\ldots,c_{k}\) is a subcode which realizes the minimum in Definition 3. Using such basis as rows of a generator matrix, we define the following. **Definition 5**.: _For a chained code \(C\), a generator matrix \(\Gamma\) with rows \(c_{1}\) (top row) through \(c_{k}\) (bottom row) is called a chained generator matrix if each \(c_{i}\) ends with \(n-d_{i}\) zeros, where \(d_{i}=|\bigcup_{j=1}^{i}\text{supp}(c_{j})|\)._ **Remark 1**.: _Given \(c_{i}\)'s which realize the GHW hierarchy (i.e., satisfy the condition in Definition 4), we can make each \(c_{i}\) end with at least \(n-d_{i}\) zeros by permuting the columns so that in each row the \(d_{i}-d_{i-1}\) columns which are new in the support are moved to the end of the nonzero part of each row, starting with \(c_{1}\). Therefore, every chained code has a chained matrix; these matrices will be useful in the sequel for providing a simple generalized covering algorithm._ For the following theorems, let \(C\) be an \([n,k]\) code and \(J\) be a subset of the coordinates of \(C\) with \(|J|<n-d_{1}\). We also assume a generator matrix of \(C\) of the form \[\begin{bmatrix}g(C_{0})&0\\ A&g(C_{J})\end{bmatrix},\] where \(g(C_{J})\) is a generator matrix of \(C_{J}\), the projection of \(C\) onto the coordinates \(J\), and \(g(C_{0})\) is the generator matrix of \(C_{0}\), the subcode of \(C\) which is 0 on \(J\) (but does not contain the coordinates in \(J\)). With this in mind, we give the following lemma, which is used for the generalized covering algorithm. The proof is given for completeness. **Lemma 1**.: _[_8_]__\(R_{1}(C)\leq R_{1}(C_{J})+R_{1}(C_{0})\)._ Proof.: Let \(v=(v_{0},v_{J})\) be an arbitrary vector of length \(n\), where \(v_{J}\) is of length \(|J|\) and \(v_{0}\) is of length \(n-|J|\). Then there exists a codeword of \(C\) of the form \((a,c_{J})\), where \(c_{J}\in C_{J}\) satisfies \(d_{H}(v_{J},c_{J})\leq R_{1}(C_{j})\) (where \(d_{H}\) denotes Hamming distance). Furthermore, there is a codeword of the form \((c_{0},0)\), where \(c_{0}\in C_{0}\), such that \(d_{H}(v_{0}+a,c_{0})\leq R_{1}(C_{0})\). Therefore, \(v\) is of distance at most \(R_{1}(C_{J})+R_{1}(C_{0})\) from \((a,c_{J})+(c_{0},0)\). Furthermore, as proven in [6], the generalized Hamming weights are related to the covering radius by the following bound. A full proof is given in order to clarify subsequent parts of the paper. **Theorem 3**.: _[_6_]_ _Let \(C\) be an \([n,k]\) chained code with GHWs \(d_{0}=0,d_{1},d_{2},...,d_{k}\). Then the covering radius \(R(C)\) of \(C\) satisfies_ \[R(C)\leq n-\sum_{r=1}^{k}\Bigl{\lceil}\frac{d_{r}-d_{r-1}}{q}\Bigr{\rceil}.\] Proof.: Let \(C\) be an \([n,k]\) chained code (Definition 4) with GHWs \(d_{1},d_{2},\ldots,d_{k}\), and let \(\Gamma(C)\) be a chained generator matrix of \(C\) with rows \(c_{1},\ldots,c_{k}\) (Definition 5), arranged as in Remark 1. Further, for \(r\in[k]\) let \(M_{r}\) be the top-left \(r\times d_{r}\) submatrix of \(\Gamma(C)\), let \(s_{r,1},\ldots,s_{r,r}\in\mathbb{F}_{q}^{d_{r}}\) be its rows (numbered top to bottom), and let \(C_{r}\) be the row-span of \(M_{r}\). 
Since \(C_{k}=C\), the theorem can be proved by induction on the dimension of the code, as follows. In the base case, notice that \(C_{1}=\text{span}\{s_{1,1}\}\), and \(s_{1,1}\in\mathbb{F}_{q}^{d_{1}}\) has no zero entries. Fix an arbitrary vector \(v\in\mathbb{F}_{q}^{d_{1}}\) and denote \(s_{1,1}=(\sigma_{1},\ldots,\sigma_{d_{1}})\). By the pigeonhole principle, the multi-set \(\{\{v_{j}/\sigma_{j}\}\}_{j=1}^{d_{1}}\) contains some element at least \(\lceil d_{1}/q\rceil\) times, and let \(\lambda\) be that element. It follows that \(d_{H}(\lambda s_{1,1},v)\leq d_{1}-\lceil d_{1}/q\rceil\), which implies that \(R(C_{1})\leq d_{1}-\lceil d_{1}/q\rceil\) and concludes the base case. In the inductive step, we assume that \(R(C_{r})\leq d_{r}-\sum_{i=1}^{r}\lceil(d_{i}-d_{i-1})/q\rceil\) and observe that by construction \[M_{r+1}=\begin{pmatrix}M_{r}&0\\ \multicolumn{2}{c}{s_{r+1,r+1}}\end{pmatrix},\] where the block \(M_{r}\) occupies the first \(d_{r}\) columns, the zero block occupies the remaining \(d_{r+1}-d_{r}\) columns of the first \(r\) rows, and the bottom row \(s_{r+1,r+1}\) spans all \(d_{r+1}\) columns. Since the last \(d_{r+1}-d_{r}\) elements of \(s_{r+1,r+1}\) must be nonzero (since \(M_{r+1}\) does not have zero columns), we have that \[R(C_{r+1})\leq R(C_{r})+d_{r+1}-d_{r}-\lceil(d_{r+1}-d_{r})/q\rceil\] by Lemma 1, and another similar use of the pigeonhole principle. Using our induction hypothesis, this implies that \[R(C_{r+1})\leq d_{r}-\sum_{i=1}^{r}\lceil(d_{i}-d_{i-1})/q\rceil+d_{r+1}-d_{r}-\lceil(d_{r+1}-d_{r})/q\rceil=d_{r+1}-\sum_{i=1}^{r+1}\lceil(d_{i}-d_{i-1})/q\rceil,\] which completes the proof. A _covering algorithm_ is evident from the proof of Theorem 3 and Lemma 1. The algorithm receives a word \(v\in\mathbb{F}_{q}^{n}\) to cover, and outputs a codeword within distance at most \(n-\sum_{r=1}^{k}\left\lceil\frac{d_{r}-d_{r-1}}{q}\right\rceil\) from \(v\). The algorithm requires a chained generator matrix, and proceeds by covering \(v\) sequentially by \(C_{1},C_{2},\ldots,C_{k}\), by finding the proper scalar multiple which covers the corresponding part of \(v\), and subtracting the covering codeword from what is left to cover. An example is given in Appendix A.

### _Reed-Muller codes_

Reed-Muller codes are a central topic in coding theory, and defined as follows. **Definition 6**.: _For a field \(\mathbb{F}_{q}\) and integers \(r\leq(q-1)m\) an \(RM_{q}(r,m)\) code is defined as the set of vectors_ \[RM_{q}(r,m)\triangleq\{(f(\alpha))_{\alpha\in\mathbb{F}_{q}^{m}}:f\in\mathbb{F}_{q}[x_{1},x_{2},\ldots,x_{m}],\deg(f)\leq r\}.\] _Furthermore, because \(x^{q}=x\), we only consider polynomials where the degree of each \(x_{i}\) is less than \(q\)._ The binary codes \(RM_{2}(r,m)\) can also be defined recursively using the so-called "\((u,u+v)\) construction" \[RM_{2}(r,m)\triangleq\{(u,u+v):u\in RM_{2}(r,m-1),v\in RM_{2}(r-1,m-1)\}.\] Additionally, the GHWs for binary Reed-Muller codes are known. Let \(\rho(r,m)=\sum_{i=0}^{r}\binom{m}{i}\) be the dimension of an \(RM(r,m)\) code, and define the _canonical_ \((r,m)\)_-representation_ (\((r,m)\)-representation, for short) of a number \(t\) as follows: **Theorem 4**.: _[_9_]_ _Given \(r,m\), any \(0\leq t\leq\rho(r,m)\) can be written as_ \[t=\sum_{i=1}^{k}\rho(r_{i},m_{i})\] _where the \(r_{i}\) are decreasing, and \(m_{i}-r_{i}=m-r-i+1\). In addition, this representation is unique._ **Example 1**.: _The canonical representation of \(7\) is \(7=\rho(1,4)+\rho(0,2)+\rho(0,1)\), since we have that \(\rho(1,4)=5\), \(\rho(0,2)=1\), and \(\rho(0,1)=1\)._ Theorem 4 is used to characterize the GHW hierarchy of Reed-Muller codes.
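Before the characterization that follows (Theorem 5), here is a small computational sketch of the representation in Theorem 4. It builds the terms greedily, at each step taking the largest admissible \(\rho(r_{i},m_{i})\) with the prescribed value of \(m_{i}-r_{i}\), which reproduces Example 1; the greedy construction and the ambient parameters \((r,m)=(2,5)\) (not stated in Example 1) are our own reading and should be checked against [9] for general parameters.

```python
from math import comb

def rho(r, m):
    """Dimension of RM(r, m): sum_{i=0}^{r} C(m, i)."""
    return sum(comb(m, i) for i in range(r + 1))

def canonical_representation(t, r, m):
    """Greedy sketch of the (r, m)-representation of t from Theorem 4:
    at step i pick the largest rho(r_i, m_i) <= remaining value, subject to
    m_i - r_i = m - r - i + 1."""
    terms = []
    i = 1
    while t > 0:
        codeg = m - r - i + 1          # required value of m_i - r_i at step i
        ri = 0
        while rho(ri + 1, ri + 1 + codeg) <= t:
            ri += 1
        terms.append((ri, ri + codeg))
        t -= rho(ri, ri + codeg)
        i += 1
    return terms

# Example 1 of the paper: 7 = rho(1,4) + rho(0,2) + rho(0,1)
print(canonical_representation(7, r=2, m=5))   # -> [(1, 4), (0, 2), (0, 1)]
# Theorem 5 (below) then gives the corresponding GHW as the sum of 2**m_i:
print(sum(2 ** mi for _, mi in canonical_representation(7, r=2, m=5)))  # -> 22
```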
**Theorem 5**.: _[_9_]__\(d_{t}(C)=\sum_{i=1}^{k}2^{m_{i}}\), where \(t=\sum_{i=1}^{k}\rho(r_{i},m_{i})\)._ Similarly, a slightly more involved expression for the GHWs of \(q\)-ary Reed-Muller codes is known for \(q>2\) [5], and will be discussed in the sequel.

## III The bound and the algorithm

### _A simple bound_

In this section we devise a bound on the generalized covering radius of a given code using its GHWs. The bound is based on Theorem 3 alongside Definition 2, and Lemma 2 which follows, whose proof is trivial given the following alternative definition of \(d_{r}(C)\). **Definition 7**.: _[_9_]__\(d_{r}(C)=\min\{|I|:I\subseteq[n],|I|-\operatorname{rank}(H_{I})\geq r\}\), where \(H\) is a parity check matrix and \(H_{I}\) denotes the submatrix with columns of indices in \(I\)._ **Lemma 2**.: _Let \(C\) be an \([n,k]_{q}\) linear code with generator matrix \(G\), and for \(1\leq t\leq n-k\) let \(C_{t}\triangleq\{xG|x\in\mathbb{F}_{q^{t}}^{k}\}\). Then, the GHWs of \(C\) and \(C_{t}\) coincide._ Proof.: Since \(C\) and \(C_{t}\) have the same parity check matrix, it follows trivially from the above definition that they also have the same GHWs. An equally simple proof can be derived from Lemma 4 of [7]; however, we have not explored whether there are further connections to this work. We are now in a position to state the bound for the generalized covering radius. It is a straightforward combination of Lemma 2 with Theorem 3 and Definition 2. **Theorem 6**.: \[R_{t}(C)\leq n-\sum_{r=1}^{k}\Bigl\lceil\frac{d_{r}-d_{r-1}}{q^{t}}\Bigr\rceil\triangleq\mu_{t}(C).\] Proof.: Since the GHWs of \(C\) and \(C_{t}\) coincide, and the generator matrices of \(C\) are also the generator matrices of \(C_{t}\), it follows that \(C_{t}\) satisfies the chain condition. Thus, one can apply Theorem 3 to \(C_{t}\) and conclude the proof. In the following subsections the bound from Theorem 6 is used to obtain an efficient generalized covering algorithm for any chained code, and then specialized to binary and nonbinary Reed-Muller codes.

### _The algorithm._

The following algorithm follows the outline of the one which follows from Theorem 3 for the ordinary covering problem, an example of which is given in Appendix A. It applies to any chained code, and returns a set of codewords which cover the input words up to the value defined by the bound in Theorem 6. We assume that a chained matrix (Def. 5) is given as input; as an example, in the sequel it is shown how a chained matrix can be found for Reed-Muller codes. In this algorithm we assume some fixed basis \(b_{1},\ldots,b_{t}\) of \(\mathbb{F}_{q^{t}}\) over \(\mathbb{F}_{q}\), and denote by \(\binom{[n]}{\leq\ell}\) the family of all subsets of \([n]\) of size at most \(\ell\). **Theorem 7**.: _The vectors \(\ell_{1},\ldots,\ell_{t}\) returned by Algorithm 1 are codewords of \(C\), and there exists \(I\in\binom{[n]}{\leq\mu_{t}(C)}\) such that \(\operatorname{supp}(v_{i}-\ell_{i})\subseteq I\) for every \(i\in[t]\)._ Proof.: Observe that \(u-u_{0}=-\sum_{i=1}^{k}a_{i}r_{i}\) for some \(a_{i}\)'s in \(\mathbb{F}_{q^{t}}\), and hence there exist \(a_{i,j}\)'s in \(\mathbb{F}_{q}\) such that \[u-u_{0}=-\sum_{i=1}^{k}\left(\sum_{j=1}^{t}a_{i,j}b_{j}\right)r_{i}=\sum_{j=1}^{t}b_{j}\left(-\sum_{i=1}^{k}a_{i,j}r_{i}\right).\] Therefore, since the \(r_{i}\)'s are rows in a generator matrix of \(C\), it follows that \(\ell_{j}=-\sum_{i=1}^{k}a_{i,j}r_{i}\) is a codeword of \(C\) for every \(j\in[t]\).
To show that a set \(I\in\binom{[n]}{\leq\mu_{t}(C)}\) exists, i.e., that \(\{\ell_{j}\}_{j=1}^{t}\) are a \(t\)-covering of \(\{v_{j}\}_{j=1}^{t}\) within the bound, first observe that since \(u-u_{0}=-\sum_{i=1}^{k}a_{i}r_{i}\), it follows that \(u-u_{0}\in C_{t}\). Hence, we have found a codeword in \(C_{t}\) which covers \(u_{0}\), and since \(-\sum_{i=1}^{k}a_{i}r_{i}=\sum_{i=1}^{t}b_{i}\ell_{i}\) by definition, the respective Hamming distance is \[d_{H}(u_{0},-\sum_{i=1}^{k}a_{i}r_{i}) =d_{H}(u_{0},\sum_{i=1}^{t}b_{i}\ell_{i})\] \[=w_{H}(\sum_{i=1}^{t}b_{i}v_{i}-\sum_{i=1}^{t}b_{i}\ell_{i})\] \[=w_{H}(\sum_{i=1}^{t}b_{i}(v_{i}-\ell_{i}))\] \[=\big{|}\bigcup_{i=1}^{t}\operatorname{supp}(v_{i}-\ell_{i}) \big{|}\leq\mu_{t}(C),\] where the last step follows by identical arguments as in the proof of Theorem 3. Hence, the set \(I\) on which \(u_{0}\) and \(\sum_{i=1}^{t}b_{i}\ell_{i}\) differ belongs to \(\binom{[n]}{\leq\mu_{t}(C)}\), which concludes the proof. ### _Complexity analysis_ **Theorem 8**.: _Algorithm 1 runs in \(O(ntk\log(q)^{2})\) time._ Proof.: To find the element \(a\in\mathbb{F}_{q^{t}}\) in the for-loop, recall that \(\{r_{i,j}\}_{j\in\{d_{i-1}+1,...,d_{i}\}}\) are nonzero for every \(i\in[k]\), and compute \(\{\{u_{j}r_{i,j}^{-1}\}_{j\in\{d_{i-1}+1,...,d_{i}\}}\). Let \(a\) be the most frequently occurring value in this multi-set; it is readily verified that the required maximum is obtained by this value. Computing \(r_{i,j}^{-1}\) takes \(O(\log(q)^{2})\) time by the extended Euclidean algorithm, and multiplication of the \(u_{j}r_{i,j}^{-1}\) takes \(O(t\log(q)^{2})\) time. This leads to complexity \(O(t\log(q)^{2}\cdot(d_{i}-d_{i-1}))\) for each \(i\). Summation over all \(i\)'s yields \(O(\log(q)^{2}\cdot n)\) due to the telescopic sum. Furthermore, computing \(u\) and \(u-a\cdot r_{i}\) for each \(i\) takes \(n(t\log(q)^{2}+t\log(q))\) time, and this computation is done \(k\) times, taking \(O(ntk\log(q)^{2})\) time in total. Thus, in total the algorithm runs in \(O(ntk\log(q)^{2})\) time. ## IV Application to Reed Muller codes In this section we specialize our techniques for Reed-Muller codes due to their importance and their interesting covering properties [3]. The challenge in applying our techniques for any code \(C\) is finding the generator matrix \(\Gamma(C)\). This allows us to apply our algorithm to any Reed-Muller code, as opposed to just binary ones. First, we show how to find a chained matrix for \(q\)-ary RM codes, which allows us to use our algorithm. Then, for binary RM codes, we devise an extension of [3] using our algorithm, which improves upon [3] exponentially. ### _Chained generator matrices for Reed-Muller codes_ For \(RM_{2}(r,m)\), there exists a chained generator matrix whose \(j\)'th row is the evaluation of the \(j\)'th monomial of degree \(r\) or less, according to _anti-lexicographic_ order; the full details are given in [9, Thm. 7]. For the general \(q\)-ary case, we provide the following generalization of the \((u,u+v)\) construction. **Theorem 9**.: _Let \(RM_{q}(r,m)\) be a \(q\)-ary RM code with generator matrix \(G_{r,m}\). 
Then letting \(w=\min(r,q-1)\) and \(\mathbb{F}_{q}=\{0,\gamma^{0},\gamma^{1},\ldots,\gamma^{q-2}\}\), we have that \(G_{r,m}=\)_ \[\begin{bmatrix}(\gamma^{q-2})^{w}G(r-w,m-1)&...&0^{w}G(r-w,m-1)\\ \vdots&\vdots&\vdots\\ (\gamma^{q-2})^{0}G(r,m-1)&...&0^{0}G(r,m-1)\end{bmatrix}.\] Intuitively, Theorem 9 holds by considering the multivariate polynomial ring in which the Reed-Muller code exists as a univariate polynomial ring over the last variable \(x_{m}\). Then, using a standard basis consisting of all monomials of total degree \(r\) or less, order the rows based on the degree of \(x_{m}\), and order the columns based on the field element that \(x_{m}\) evaluates to. When viewing the generator matrix in this block form, we see that because \(G(r-i,m-1)\) is a subcode of \(G(r,m-1)\), we can "subtract" lower block rows (in the block form) from upper block rows (this corresponds to subtracting matching monomials from each other, after evaluating them at \(x_{m}\)). However, we cannot use upper block rows to row-reduce lower block rows. This leads to the following theorem. **Theorem 10**.: _There exists a row reduction (in block form) after which \(G_{r,m}\) becomes lower triangular in the block form presented in Theorem 9._ Proof.: We present such row-reduction as follows, where the block-columns are numbered from \(0\) (rightmost) to \(q-1\) (leftmost). The \(0^{\prime}\)th block-column is already in lower-triangular form, since \(0^{0}G(r,m-1)\) (the bottom block) is the only nonzero block in that block-column. Next, in the \(1\)'st block-column we use multiples of \((\gamma^{0})^{1}G(r-1,m-1)\) to zero-out all the blocks \((\gamma^{0})^{2}G(r-2,m-1),\ldots,(\gamma^{0})^{w}G(r-w,m-1)\) above it; this is possible due to the subcode property mentioned above. In general, in block-column \(i\geq 1\) we use multiples of \((\gamma^{i-1})^{i}G(r-i,m-1)\) to zero-out all the blocks \((\gamma^{i-1})^{i+1}G(r-i-1,m-1),\ldots,(\gamma^{i-1})^{w}G(r-w,m-1)\) above it; this does not spoil the zero blocks to the right of block-column \(i\) since they are already zero. Given this lower triangular form, we have that a chained matrix for \(RM_{q}(r,m)\) can be obtained recursively from RM codes of lower order. While seeing that it is a generator matrix of \(RM_{q}(r,m)\) is rather straightforward, showing that it indeed realizes the GHWs of \(RM_{q}(r,m)\) requires a dive into their definition from [5]. **Theorem 11**.: _Consider an \(RM_{q}(r,m)\) with generator matrix \(G_{r,m}\). Then_ \[\Gamma(G_{r,m})=\begin{bmatrix}\Gamma(G(r-w,m-1))&...&0\\ \vdots&\ddots&\vdots\\ \Gamma(G(r,m-1))&...&\Gamma(G(r,m-1))\end{bmatrix}.\] Proof.: We can see that this is a generator matrix directly from Theorem 9, and will prove that it has the correct form using induction. Our base cases are when \(r=0\) or when \(r=(q-1)m\). When \(r=0\), the generator matrix is just a row of all \(1\)'s, and when \(r=(q-1)m\), then we have the identity matrix, both of which are chained. In the inductive case, it is obvious that if the matrix is lower triangular, and the matrices on the diagonal blocks are chained (for the respective codes spanned by them), then the overall matrix is of the correct form. It remains to be seen, however, if the rows of this matrix realize the correct generalized hamming weights. In [5], the generalized Hamming weights of \(q\)-ary Reed-Muller codes are established. Furthermore, they give a recursive algorithm which can be used to compute the GHWs. 
We see that if our matrix in fact realizes these weights, then it would also induce a recursive way of computing the GHWs (by computing the GHWs of each of the diagonal blocks). In fact, doing so follows the exact same recursive algorithm that is given in [5, Remark 6.9]. We will go through the four parts of the algorithm to show this. The algorithm runs on a \(4\)-tuple \((r,u,v,m)\). We will view the algorithm as finding the \(r\)-th row of \(\Gamma(G_{r,m})\) by recursively adding up the lengths of the blocks. In this case \(u,m\) are the code parameters of the current block, \(v\) is the block row of the matrix (also the degree of the variable we are considering), and \(r\) is the row whose length we are trying to find. The four steps enumerated in the paper are the following:

i) \(v=0\): In this case, we have made it to the last row of the matrix, so we must move to the next variable and make a recursive call on \(u,m-1\).

ii) \(v>u\): the degree of the variable we are considering is higher than the maximal degree of polynomials in the code; therefore, since there are no variables of degree higher than \(u\), we set \(v:=u\).

iii) \(r>\rho(u-v,m-1)\): then \(r\) is greater than the number of rows in the block row we are considering, so we can compute the length of \(G_{u-v,m-1}\) (the element on the diagonal of the row we are considering), and move on to the next row, adjusting \(r\) by the number of rows we just computed. The tuple that is split off in this step is simply the length of \(G_{u-v,m-1}\). At the end of the algorithm all of these lengths are added up, equivalent to adding up the lengths of the blocks on the block diagonal of \(\Gamma(G_{r,m})\).

iv) \(r<\rho(u-v,m-1)\): Then the row we need is within the block row we are considering, so we must recurse on the block element within the row we are considering to get to the correct row.

We see that these four steps, equivalent to the ones described in the paper, describe finding the support of the first \(r\) rows of \(\Gamma(G_{u,m})\). Therefore \(\Gamma(G_{u,m})\) realizes the GHW hierarchy of \(RM_{q}(r,m)\).

### _A modification to Elimelech's covering algorithm_

Elimelech's algorithm only applies to the binary \(RM_{2}(r,m)\), and relies on its recursive \((u,u+v)\) construction. Roughly speaking, it receives as input a \(t\times n\) matrix of \(t\) vectors to cover, splits it in half lengthwise, and calls a routine named RECURSIVE to cover each of the halves individually. Additionally, it attempts to employ the subadditive property mentioned in Theorem 2 using a routine called SUBADDITIVE, which calls RECURSIVE with each row of the input matrix, and returns the minimum of the two. In the base case it requires a brute force computation of the minimum distance between all codewords of \(RM_{2}(1,m)\) and a fixed vector \(v\), which makes the complexity quadratic in the length \(n=2^{m}\) and exponential in \(t\). While Algorithm 1 can be applied directly to any \(RM_{2}(r,m)\), a better alternative exists. Specifically, we follow the recursive structure of Elimelech's algorithm and only replace the base case by Algorithm 1 (for which a chained matrix is easily computable). This modification reduces the complexity to be linear in both \(n\) and \(t\) but comes at a cost--the output codewords will cover the input vectors only up to the bound in Theorem 6. Hence, in Appendix B we numerically compare known bounds from [3] against Theorem 6, and show improvement in many cases. Let COVER be the algorithm from [3, Alg.
1], for which we have the following. **Theorem 12**.: _[_3_, Thm. 25]_ _For any \(t,r,m\in\mathbb{N}\), \(\text{COVER}(v,r)\) has complexity_ \[O(t2^{t(1)(\log(n)+1)}(2^{t+1}-1)^{-r}+tn\log(n))).\] Let \(\text{COVER}^{\prime}\) be the version of COVER with the modified base case, for which we have the following. **Theorem 13**.: _For any \(t,r,m\in\mathbb{N}\), \(\text{COVER}^{\prime}(v,r)\) has complexity \(O(tn\log(n))\)._ Proof.: This proof follows the same methodology as the proof of Theorem 12. We analyze the complexity of RECURSIVE\((v,r)\), denoted by \(T(t,r,m)\), in an inductive manner. We have two base cases. When \(r=m\) we have that \(T(t,m,m)=c^{\prime}\) for some constant \(c^{\prime}\). When \(r=1\) we apply Algorithm 1 in \(O(nkt)\) time with \(k=\log(2^{m})+1=\log(n)+1\). Hence, both base cases run in \(O(tn\log(n))\) time. In our inductive step, we assume that the claim holds for \(T(t,r-1,m-1)\) and \(T(t,r,m-1)\), and prove that it holds for \(T(t,r,m)\). The algorithm splits a matrix of size \(t\times n\) in half and then calls two recursive instances. Thus, for some constants \(c\) and \(c^{\prime}\) we have that \[T(t,r,m) =c^{\prime}tn+T(t,r-1,m-1)+T(t,r,m-1)\] \[\leq c^{\prime}tn+2ct(n/2)\log(n/2)=O(tn\log(n)).\] This completes the proof of RECURSIVE\((v,r)\). To complete the proof of the overall algorithm, notice that SUBADDITIVE is merely \(t\) consecutive calls to RECURSIVE with \(t=1\), and hence it does not increase the overall complexity.
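To illustrate the covering procedure that both Algorithm 1 and the modified base case rely on, the sketch below implements the ordinary (\(t=1\)) covering step implied by the proofs of Lemma 1, Theorem 3 and Theorem 8: working from the bottom row of a chained generator matrix upwards, it picks the most frequent quotient \(u_{j}r_{i,j}^{-1}\) on the \(d_{i}-d_{i-1}\) new columns of each row and subtracts the corresponding scalar multiple. It assumes a prime field and dense rows, and it is an illustration of the procedure described in the text rather than the authors' reference implementation; the generalized (\(t>1\)) version of Algorithm 1 proceeds analogously over \(\mathbb{F}_{q^{t}}\).

```python
from collections import Counter

def cover_t1(v, rows, d, q):
    """Sketch of the ordinary (t = 1) covering step behind Theorem 3 / Algorithm 1.

    v    : word to cover, a list of length n over Z_q (q prime, so inverses exist)
    rows : rows c_1..c_k of a chained generator matrix; row i is zero after
           column d[i]-1 and nonzero on its d[i]-d[i-1] "new" columns
    d    : GHW profile [d_1, ..., d_k]
    Returns (covering codeword, Hamming distance to v)."""
    n, k = len(v), len(rows)
    u = list(v)                       # residual still to be covered
    c = [0] * n                       # accumulated covering codeword
    for i in range(k - 1, -1, -1):    # bottom row first, as in Lemma 1's recursion
        lo, hi = (d[i - 1] if i > 0 else 0), d[i]
        # pigeonhole step: most frequent quotient u_j * r_{i,j}^{-1} on the new columns
        quotients = Counter((u[j] * pow(rows[i][j], q - 2, q)) % q for j in range(lo, hi))
        a = quotients.most_common(1)[0][0]
        for j in range(n):            # subtract a * c_i from the residual
            u[j] = (u[j] - a * rows[i][j]) % q
            c[j] = (c[j] + a * rows[i][j]) % q
    return c, sum(1 for x in u if x != 0)

# Tiny binary example: a chained generator matrix of a [4, 2] code with GHWs d = [2, 4];
# Theorem 3 guarantees a distance of at most 4 - (1 + 1) = 2 to any input word.
rows = [[1, 1, 0, 0],
        [1, 0, 1, 1]]
print(cover_t1([1, 0, 1, 0], rows, d=[2, 4], q=2))   # e.g. ([1, 0, 1, 1], 1)
```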
2302.01396
Overview of phase-field models for fatigue fracture in a unified framework
In the last ten years, the phase-field method has gained much attention as a novel method to simulate fracture due to its straightforward way allowing to cover crack initiation and propagation without additional conditions. More recently, it has also been applied to fatigue fracture due to cyclic loading. This publication gives an overview of the main phase-field fatigue models published to date. We present all models in a unified variational framework for best comparability. Subsequently, the models are compared regarding their most important features. It becomes apparent that they can be classified in mainly two categories according to the way fatigue is implemented in the model - that is as a gradual degradation of the fracture toughness or with an additional term in the crack driving force. We aim to provide a helpful guide for choosing the appropriate model for different applications and for developing existing models further.
Martha Kalina, Tom Schneider, Jörg Brummund, Markus Kästner
2023-02-02T20:17:36Z
http://arxiv.org/abs/2302.01396v3
# Overview of phase-field models for fatigue fracture in a unified framework ###### Abstract The phase-field method has gained much attention as a novel method to simulate fracture due to its straightforward way allowing to cover crack initiation and propagation without additional conditions. More recently, it has also been applied to fatigue fracture due to cyclic loading. This publication gives an overview of the main phase-field fatigue models published to date. We present all models in a unified variational framework for best comparability. Subsequently, the models are compared regarding their most important features. It becomes apparent that they can be classified in mainly two categories according to the way fatigue is implemented in the model - that is as a gradual degradation of the fracture toughness or as an additional term in the crack driving force. We aim to provide a helpful guide for choosing the appropriate model for different applications and for developing existing models further. keywords: Phase-field, Fracture, Fatigue, Review, Variational + Footnote †: journal: Engineering Fracture Mechanics ## 1 Introduction Fatigue fracture is the main cause of failure in engineering structures [1]. A fatigue crack usually undergoes three stages [2]: The crack initiation stage, followed by stable crack propagation and sudden residual fracture. For many engineering components, the structure is designed to withstand crack initiation, e. g. with the help of component S-N curves (also called Wohler curves). But especially in thin-walled parts, the resistance against fatigue crack growth can be decisive for the design process as well. Often, Paris curves [3], which describe the fatigue crack growth rates in the material, are used to estimate the crack growth for a given number of load cycles, e. g. within one inspection interval. However, more advanced techniques for the estimation of crack growth are currently under development. Modern simulation techniques like cohesive zone models [4] and XFEM [5] suffer from the problem of describing the topology of evolving cracks and require either a predefined crack path or complex enriched shape functions in order to capture the crack. From this perspective, the phase-field method for fracture is advantageous as it describes the crack topology with an additional field variable. The emerging coupled problem covers crack initiation, deflection, branching and merging of cracks in a straightforward way. Due to its flexibility, this method has gained attention and advancement in the past ten years. After the pioneering works of Francfort and Marigo[6] and Bourdin et al. [7; 8] regarding the variational formulation of fracture and the regularisation of the crack geometry, as well as Miehe et al. [9; 10] regarding model formulation and implementation, a variety of different approaches to phase-field modelling of static brittle fracture have been published, see [11] for an overview. The various extensions to ductile fracture are reviewed in [12], see furthermore [13] for an overview of viscous phase-field models. More recently, fatigue fracture has also been a topic of intensive research in the phase-field community. It is the aim of this work to give an overview of the models, explain differences and highlight the loading types and scope of application they might be suitable for. When discussing the modelling of fatigue cracks it is important to consider the different mechanisms that lead to fracture, depending on material and loading type. 
Under small loading amplitudes, material can withstand large numbers of load cycles (high cycle fatigue, HCF). The material behaviour is then macroscopically mostly elastic. On the other hand, in low cycle fatigue (LCF), load amplitudes are higher, leading to significant inelastic effects, especially around the crack tip. The transition from HCF to LCF depends on the material. For metals, \(10^{2}\) to \(10^{4}\) load cycles are considered to be LCF [2]. LCF cracks are correlated best with elastic-plastic strain quantities while HCF cracks are mostly stress-controlled [2]. Furthermore, not only the load amplitude, but also the mean load, the multiaxiality of the loading and the crack opening mode [14] can have significant influence on fatigue life. The same applies to crack closure effects caused by plastic deformation and roughness of the crack flanks, among others [15]. Historically, due to their great industrial relevance, metals are the materials studied best regarding their fatigue behaviour. Fatigue in metals arises from plasticity [2]. For LCF, macroscopic plastic deformations accompany the crack. However, even for macroscopic stresses below the elastic limit - typical for HCF - stress concentrations at defects on the grain-scale occur, which lead to plastic microdeformations [16]. This effect causes cyclic work-hardening or softening of the material, i. e. increasing or decreasing stress amplitudes in a strain-controlled experiment, compared to the monotonic stress-strain curve [2]. Crack initiation in metals is caused by dislocations in the polycrystalline material. These dislocations accumulate in permanent slip bands, driven by shear stress components and finally lead to material separation [2]. Slip bands often form at stress concentrations, e. g. at notches, imperfections, voids and inclusions [14]. Merging of these initially microscopic cracks finally leads to macroscopic crack initiation. This initiation phase can take up to ninety percent of the component's fatigue life [2]. Afterwards, the crack evolves into a so-called _long crack_, i. e. visible crack, with alternating plastic slips on each flank [16], which is well described by the Paris law [17], and then finally undergoes sudden residual fracture. Fatigue in polymers, on the other hand, is mainly caused by formation of cavities and cavitations. Macromolecules are degraded progressively. Although the mechanisms leading to fatigue in polymers are manifold and strongly depend on the type of polymer, damage is mostly controlled by shear, principal strains and the hydrostatic part of the stress tensor. In contrast to metals, cracks can evolve under compressive or hydrostatic stress [2]. Elastomers in particular call for the use of finite strain measures and rate-dependent models even in fatigue simulations. The majority of the phase-field fatigue models mentioned in this overview is either meant for or at least applied to metals, yet there are also a few for other material classes. The different models mainly vary concerning their fatigue variable, which describes the cyclic loading history of the material, and the way this fatigue variable is incorporated into the model. With regard to the latter, this paper identifies two main model classes most phase-field fatigue models fit into: Those with degraded fracture toughness (_type A_) and those with additional crack driving force (_type B_). This paper presents all models in a unified framework to allow for better comparability and to discuss common features and differences.
This ought to be a helpful basis for further development of phase-field models for cyclic loads. Furthermore it is meant as a guide for choosing a model for a component of a certain material undergoing a specific loading type. Further demands regarding the simulation time or physical rigorousness of the model may also to be taken into consideration. The paper is structured as follows. Section 2 outlines a general framework for phase-field fatigue models which comprises most models presented later. A variational formulation is used. In addition, a short overview of other derivation strategies is given subsequently. Section 3 includes a short description of all mentioned models as well as a table listing model features for clarity. Section 4 discusses the main model features. The characteristics of model type \(A\) and \(B\) (see above) are emphasised by a numerical example. The paper terminates with conclusion and outlook. \begin{tabular}{l l l} \multicolumn{3}{l}{**Nomenclature**} \\ \(\alpha\) & Isotropic hardening & \(\mathbf{\sigma}^{\rm ov}\) & Overstress \\ \(\bar{\tau}\) & Traction vector & \(\mathbf{\sigma}_{+}\) & Tensile stress part \\ \(\bar{f}\) & Volume force & \(\mathbf{\sigma}_{-}\) & Compressive stress part \\ \(\bar{u}\) & Displacement boundary conditional & \(\sigma^{\rm y}\) & Yield stress \\ \(\mathcal{B}\) & Domain & \(\mathbf{\alpha}\) & Kinematic hardening \\ \(\mathcal{E}\) & Generating functional & \(\mathbf{\chi}\) & Backstress for kinematic hardening \\ \(\mathcal{E}_{\rm ext}\) & Work of external forces & \(\mathbf{\Phi}\) & Damper strain \\ \(\mathcal{F}\) & Fatigue variable & \(\tilde{g}\) & Degradation function of \(W^{B}_{\rm fat}\) \\ \(\mathcal{H}\) & History variable of crack driving force & \(\mathbf{n}\) & Normal vector \\ \(\mathcal{W}\) & Conditions for variational principle & \(\mathbf{q}_{\alpha}\) & Set of plastic variables \\ \(\Delta\) & Dissipative part of \(W\) & \(\mathbf{u}\) & Displacement \\ \(\Delta^{\rm p}\) & Energy density of plastic dissipation & \(\mathbf{x}\) & Location \\ \(\Delta^{\rm reg}\) & Regularisation part of \(W\) & \(a,b,q,\kappa\) & Material constants \\ \(\ell\) & Regularisation length & \(a_{0}\) & Initial crack length \\ \(\mathbf{\varepsilon}\) & Total strain & \(c_{\omega}\) & Constant of \(\gamma_{\ell}\) \\ \(\mathbf{\varepsilon}^{\rm e}\) & Elastic strain & \(d\) & Fradcture phase-field \\ \(\mathbf{\varepsilon}^{\rm p}\) & Plastic strain & \(d_{n}\) & Phase-field of last timestep \\ \(\eta\) & Viscous regularisation constant & \(F\) & Load (force) \\ \(\gamma_{\ell}\) & Regularised crack surface energy density & \(f^{\rm p}\) & Plastic yield function \\ \(G_{\rm c}\) & Fracture toughness & \(f^{d}\) & Yield condition of phase-field problem \\ \(\kappa\) & Fatigue degradation parameter & \(G\) & Energy release rate \\ \(\lambda\) & Plastic multiplier & \(g\) & Degradation function \\ \(\lambda^{\infty}\) & Penalty parameter & \(H\) & Fatigue function of model version "B" \\ \(\omega\) & Local part of \(\gamma_{\ell}\) & \(h\) & Fatigue degradation function of model version "A" \\ \(\partial\mathcal{B}\) & Boundary & \(p\) & Stress associated with isotropic hardening \\ \(\partial\mathcal{B}^{\rm N}\) & Neumann boundary & \(R\) & Load ratio \\ \(\phi^{\rm p}\) & Plastic dissipation potential & \(t,\tau\) & Time \\ \(\phi^{\rm reg}\) & Energy density of regularisation & \(t_{n}\) & Last timestep \\ \(\phi^{\rm visc}\) & Viscous dissipation potential & \(W\) & Generating energy density functional \\ \(\Pi^{\tau}\) & 
Incremental rate form of \(\mathcal{E}\) & \(w\) & Measurement of CT specimen \\ \(\psi\) & Free energy density & \(W^{A}_{\rm fat}\) & \(W_{\rm fat}\) for model version "A" \\ \(\psi^{\rm e}_{+}\) & Tensile part of elastic energy density & \(W^{B}_{\rm fat}\) & \(W_{\rm fat}\) for model version "B" \\ \(\psi^{\rm e}_{-}\) & Compressive part of elastic energy density & \(W_{\rm el}\) & Elastic part of \(W\) \\ \(\psi^{\rm p}\) & Energy density of hardening & \(W_{\rm fat}\) & Fatigue part of \(W\) \\ \(\mathbf{\sigma}\) & Stress & \(W_{\rm frac}\) & Fracture part of \(W\) \\ \(\mathbf{\sigma}^{*}\) & Undamaged stress & \(W_{\rm pl}\) & Plastic part of \(W\) \\ \(\mathbf{\sigma}^{\rm eq}\) & Equilibrium stress & \(3\) & \(W_{\rm reg}\) & Regularisation part of \(W\) \\ \end{tabular} ## 2 General framework for phase-field fatigue models The respective models are compared using a general phase-field framework for fatigue fracture outlined in the following. Besides the way of integrating fatigue, the models cover a variety of modelling features including various types of plasticity and, albeit few of them, viscous behaviour. At first, the derivation of the governing equations in this chapter is limited to elastic-plastic cyclic behaviour, an alternative for viscous behaviour is given later. However, since most models use viscous regularisation for numerical reasons, it is included standardly. Some other deviations from the general derivation presented here occur for a few models and will become clear in Section 3. Nomenclature from the original papers is commonly abandoned for the sake of comparability. The way of derivation and nomenclature partly follow [18] and [12], though not strictly. The modelling framework is presented using a variational framework. Still, a brief overview of other ways of derivation is given at the end of the section. ### Model derivation via variational framework The domain under consideration is \(\mathcal{B}\subset\mathbb{R}^{n}\) with its boundary \(\partial\mathcal{B}\) and material points described by location \(\mathbf{x}\) at time \(t\). In a small strain setting, the total strain \(\mathbf{\varepsilon}(\mathbf{x},t)\) can be decomposed additively into elastic strain \(\mathbf{\varepsilon}^{\mathrm{e}}(\mathbf{x},t)\) and plastic strain \(\mathbf{\varepsilon}^{\mathrm{p}}(\mathbf{x},t)\) \[\mathbf{\varepsilon}\coloneqq\frac{1}{2}\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{\top} \right)=\mathbf{\varepsilon}^{\mathrm{e}}+\mathbf{\varepsilon}^{\mathrm{p}} \tag{1}\] with \(\mathbf{u}(\mathbf{x},t)\) being the displacement. Plastic deformations can lead to hardening, which is described by the kinematic and isotropic hardening variables \(\mathbf{\alpha}(\mathbf{x},t)\) and \(\alpha(\mathbf{x},t)\), respectively. The plastic variables are summarised in the set \(\mathbf{q}_{\alpha}=\{\mathbf{\varepsilon}^{\mathrm{p}},\mathbf{\alpha},\alpha\}\). Cracks are described in a regularised manner using the phase-field variable \(d(\mathbf{x},t)\), with intact material being marked by \(d=0\) and fully fractured material marked by \(d=1\). The cyclic loading and damage history is described by a scalar fatigue variable \(\mathcal{F}(\mathbf{x},t)\). Dependencies on space, time and other variables are omitted hereafter, if not particularly necessary. 
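As a minimal numerical illustration of the kinematics in Eq. (1), the following snippet evaluates the small-strain tensor from an arbitrarily assumed displacement gradient and splits it into elastic and plastic parts; the numbers carry no physical meaning.

```python
import numpy as np

# Minimal numerical check of Eq. (1): eps = sym(grad u) = eps_e + eps_p.
grad_u = np.array([[0.002,  0.001, 0.0],
                   [0.000, -0.001, 0.0],
                   [0.000,  0.000, 0.0]])   # assumed displacement gradient
eps   = 0.5 * (grad_u + grad_u.T)           # total small-strain tensor
eps_p = np.zeros((3, 3))                    # plastic strain, e.g. before any yielding
eps_e = eps - eps_p                         # elastic part entering W_el
print(eps_e)
```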
#### Energy functional In order to set up a variational principle later on, a generating functional of energy density type \[W(\mathbf{\varepsilon},d,\nabla d,\dot{d},\mathbf{q}_{\alpha},\dot{\mathbf{q}}_{\alpha}; \mathcal{F})\coloneqq\psi(\mathbf{\varepsilon},d,\mathbf{q}_{\alpha})+\Delta(\mathbf{ \varepsilon},d,\nabla d,\dot{d},\mathbf{q}_{\alpha},\dot{\mathbf{q}}_{\alpha}; \mathcal{F}) \tag{2}\] is defined which consists of a free energy density \(\psi\) and a dissipative part \(\Delta\). From the Clausius-Duhem inequality \[\mathbf{\sigma}:\dot{\mathbf{\varepsilon}}-\frac{\partial\psi}{\partial\mathbf{ \varepsilon}}:\dot{\mathbf{\varepsilon}}-\frac{\partial\psi}{\partial\mathbf{q}_{ \alpha}}:\dot{\mathbf{q}}_{\alpha}-\frac{\partial\psi}{\partial d}\,\dot{d}\geq 0 \tag{3}\] we can identify \[-\frac{\partial\psi}{\partial\mathbf{\varepsilon}^{\mathrm{p}}}=\frac{\partial \psi}{\partial\mathbf{\varepsilon}}=:\mathbf{\sigma}\quad-\frac{\partial\psi}{ \partial\mathbf{\alpha}}=:\mathbf{\chi}\quad-\frac{\partial\psi}{\partial\alpha}=:p \tag{4}\] the stress \(\mathbf{\sigma}\), a backstress tensor \(\mathbf{\chi}\) for kinematic hardening and the stress-like quantity \(p\) associated with isotropic hardening. For clarity, the generating density functional \(W\) is here decomposed into \[W\coloneqq W_{\mathrm{el}}(\mathbf{\varepsilon}^{\mathrm{e}},d)+W_{\mathrm{pl}}( \mathbf{\varepsilon},d,\mathbf{q}_{\alpha},\dot{\mathbf{q}}_{\alpha})+W_{\mathrm{frac}}(d, \nabla d)+W_{\mathrm{fat}}(d,\nabla d;\mathcal{F})+W_{\mathrm{reg}}(\dot{d}) \tag{5}\] the elastic free energy \(W_{\mathrm{el}}\), the plastic part \(W_{\mathrm{pl}}\), the contributions from fracture \(W_{\mathrm{frac}}\) and fatigue \(W_{\mathrm{fat}}\), respectively, and the viscous regularisation \(W_{\mathrm{reg}}\). The **elastic** energy density \[W_{\mathrm{el}}(\mathbf{\varepsilon}^{\mathrm{e}},d)\coloneqq g(d)\,\psi_{+}^{ \mathrm{e}}(\mathbf{\varepsilon}^{\mathrm{e}})+\psi_{-}^{\mathrm{e}}(\mathbf{ \varepsilon}^{\mathrm{e}}) \tag{6}\] consists of a degraded part (often the tensile part) \(\psi_{+}^{\mathrm{e}}\) with the degradation function \(g(d)\) and a part (often the compressive part) \(\psi_{-}^{\mathrm{e}}\), which remains undegraded. For this split, various concepts are used by the models compared here, the most common one being the split by Amor et al. [19]. For the stress1 it follows Footnote 1: Some models use a different stress definition, see Section 4.7. \[\boldsymbol{\sigma}(\boldsymbol{\varepsilon}^{\mathrm{e}})\coloneqq\frac{ \partial W_{\mathrm{el}}}{\partial\boldsymbol{\varepsilon}^{\mathrm{e}}}=g(d) \boldsymbol{\sigma}_{+}(\boldsymbol{\varepsilon}^{\mathrm{e}})+\boldsymbol {\sigma}_{-}(\boldsymbol{\varepsilon}^{\mathrm{e}}) \tag{7}\] while the (virtually) undamaged stress is \[\boldsymbol{\sigma}^{*}(\boldsymbol{\varepsilon}^{\mathrm{e}})\coloneqq \boldsymbol{\sigma}_{+}+\boldsymbol{\sigma}_{-}. \tag{8}\] The energy density related to **plasticity2** Footnote 2: Some models also have dependencies on \(\nabla\alpha\) in case of gradient plasticity or an explicit strain measure for ratchetting, e. g. Ulloa et al. [20]. 
\[W_{\mathrm{pl}}(\boldsymbol{\varepsilon},d,\boldsymbol{q}_{\alpha},\dot{\boldsymbol{q}}_{\alpha})\coloneqq g(d)\psi^{\mathrm{p}}(\boldsymbol{\varepsilon},\boldsymbol{q}_{\alpha})+g(d)\Delta^{\mathrm{p}}(\boldsymbol{\varepsilon},d,\boldsymbol{q}_{\alpha},\dot{\boldsymbol{q}}_{\alpha}) \tag{9}\] consists of a hardening contribution \(\psi^{\mathrm{p}}\) and a dissipative contribution \(\Delta^{\mathrm{p}}\) \[\Delta^{\mathrm{p}}(\boldsymbol{\varepsilon},d,\boldsymbol{q}_{\alpha},\dot{\boldsymbol{q}}_{\alpha})=\int\limits_{0}^{t}\phi^{\mathrm{p}}(\boldsymbol{\varepsilon},d,\boldsymbol{q}_{\alpha},\dot{\boldsymbol{q}}_{\alpha})\;\mathrm{d}\tau \tag{10}\] which follows from a plastic dissipation potential \(\phi^{\mathrm{p}}\). Usually, but not always, both are degraded by the same degradation function \(g(d)\) as the elastic contribution. The dissipation potential can e. g. be derived from the principle of maximum dissipation. _Remark_.: In order to create an explicitly viscous model (such as in Loew et al. [21, 22]), \(\psi^{\mathrm{p}}\) and \(\phi^{\mathrm{p}}\) can be substituted by their viscous counterparts, e. g. \[\psi^{\mathrm{visc}}=\int\limits_{0}^{t}\boldsymbol{\sigma}^{\mathrm{ov}}:\dot{\boldsymbol{\varepsilon}}\;\mathrm{d}\tau\quad\text{and}\quad\phi^{\mathrm{visc}}=\boldsymbol{\sigma}^{\mathrm{ov}}:\dot{\boldsymbol{\Phi}} \tag{11}\] with the non-equilibrium stress \(\boldsymbol{\sigma}^{\mathrm{ov}}\) and the inelastic variable set now including the viscous strain \(\boldsymbol{q}_{\alpha}=\boldsymbol{\Phi}\). The damage dissipation density due to formation of **crack surface** is given by \[W_{\mathrm{frac}}(d,\nabla d)\coloneqq G_{\mathrm{c}}\gamma(d,\nabla d) \tag{12}\] wherein \(G_{\mathrm{c}}\) is the fracture toughness and the regularised crack surface density \(\gamma\) is \[\gamma(d,\nabla d)\coloneqq\frac{1}{c_{\omega}}\left(\frac{\omega(d)}{\ell}+\ell\nabla d\cdot\nabla d\right). \tag{13}\] For the latter, the two most common formulations are so-called Ambrosio-Tortorelli [23] (AT) 1 with \(c_{\omega}=\frac{3}{8},\omega(d)=d\) and AT 2 with \(c_{\omega}=\frac{1}{2},\omega(d)=d^{2}\). See [24] and the literature cited therein for possible other choices for the local part of the dissipated fracture energy density \(\omega(d)\). The viscous **regularisation** term \[W_{\mathrm{reg}}=\Delta^{\mathrm{reg}}=\int\limits_{0}^{t}\phi^{\mathrm{reg}}(\dot{d})\;\mathrm{d}\tau\quad\text{with}\quad\phi^{\mathrm{reg}}=\frac{1}{2}\eta\dot{d}^{2} \tag{14}\] ensures numerical stability in cases of rapidly evolving cracks. Finally, for the **fatigue** contribution \(W_{\mathrm{fat}}\), most models studied in this paper3 use one of the two following structures. Footnote 3: Except for Aygün et al. [25] and Lo et al. [26], see Section 3. In models of _type A_, the fracture toughness is gradually degraded by a fatigue degradation function \(h(\mathcal{F})\leq 1\), \[W^{A}_{\mathrm{fat}}(d,\nabla d;\mathcal{F})\coloneqq\bigl(h(\mathcal{F})-1\bigr)\,G_{\mathrm{c}}\,\gamma(d,\nabla d),\] so that \(W_{\mathrm{frac}}+W^{A}_{\mathrm{fat}}=h(\mathcal{F})\,G_{\mathrm{c}}\,\gamma(d,\nabla d)\). In models of _type B_, an additional crack driving force is introduced via the fatigue function \(H(\mathcal{F})\), \[W^{B}_{\mathrm{fat}}(d;\mathcal{F})\coloneqq\tilde{g}(d)\,H(\mathcal{F}),\] with a degradation function \(\tilde{g}(d)\) that most models choose equal to \(g(d)\). In both cases, the fatigue variable \(\mathcal{F}\) accumulates a measure of the cyclic loading history, e. g. of the strain or stress oscillation over the cycles. From the energy density \(W\) and the work of external forces \(\mathcal{E}_{\mathrm{ext}}\), the generating functional \(\mathcal{E}\) and its incremental rate form \(\Pi^{\tau}\) over the time step \([t_{n},t]\) are constructed.
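Before proceeding to the variational principle, the following one-dimensional, point-wise sketch may help to make the two fatigue couplings tangible: it evaluates the homogeneous (\(\nabla d=0\)) energy density for a type \(A\) and a type \(B\) model. The quadratic degradation \(g(d)=\tilde{g}(d)=(1-d)^{2}\), the AT 2 choice, the purely tensile state and the specific functions \(h\) and \(H\) are assumptions made only for this illustration; they are not prescribed by the framework.

```python
# 1D sketch of how fatigue enters the energy density W in the two model classes.
# Assumed for illustration: g(d) = g_tilde(d) = (1 - d)**2, AT2 (omega = d**2,
# c_omega = 1/2), a purely tensile state psi_plus = 0.5*E*eps**2, and simple
# choices for the fatigue functions h and H.

E, Gc, ell = 210e3, 2.7, 0.1          # Young's modulus, toughness, length scale

def gamma(d, grad_d, c_omega=0.5):
    """Regularised crack surface density of Eq. (13) for the AT2 choice."""
    return (d**2 / ell + ell * grad_d**2) / c_omega

def W_type_A(eps, d, F, grad_d=0.0):
    """Type A: the fracture toughness is degraded by h(F) <= 1."""
    h = max(1.0 - 0.5 * F, 0.1)                    # assumed degradation function
    return (1 - d)**2 * 0.5 * E * eps**2 + h * Gc * gamma(d, grad_d)

def W_type_B(eps, d, F, grad_d=0.0):
    """Type B: an additional, fatigue-driven crack driving force g_tilde(d)*H(F)."""
    H = 10.0 * F                                   # assumed fatigue function
    return (1 - d)**2 * (0.5 * E * eps**2 + H) + Gc * gamma(d, grad_d)

for F in (0.0, 0.5, 1.0):
    print(F, W_type_A(1e-3, 0.2, F), W_type_B(1e-3, 0.2, F))
```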
The incremental variational principle reads \[\{\mathbf{u},d,\mathbf{q}_{\alpha}\}=\arg\left\{\min_{\mathbf{u}\in\mathcal{W}_{\bar{\mathbf{u}}}} \,\min_{d\in\mathcal{W}_{\bar{\mathbf{u}}}}\,\min_{\mathbf{q}_{\alpha}\in\mathcal{W}_{p }}\Pi^{\tau}(\mathbf{\varepsilon},d,\nabla d,\dot{d},\mathbf{q}_{\alpha},\dot{\mathbf{q}}_{ \alpha};\mathcal{F})\right\} \tag{20}\] with the spaces of admissible functions, including conditions for the displacement on the boundaries and irreversibility of the phase-field \[\mathcal{W}_{\bar{u}} \coloneqq\{\mathbf{u}\in\mathbb{R}^{3}\,|\,\mathbf{u}=\bar{\mathbf{u}}\text{ on } \partial\mathcal{B}^{\mathrm{D}}\} \tag{21}\] \[\mathcal{W}_{d_{n}} \coloneqq\{d\in\mathbb{R}\,|\,d\geq d_{n}\}\] (22) \[\mathcal{W}_{p} \coloneqq\{\mathbf{q}_{\alpha}\in\mathbb{R}^{n}\}. \tag{23}\] Next, stationarity conditions for the displacement, the plastic variables and the phase-field are exploited one by one in order to derive the model equations. #### Displacement The variational derivative \(\delta_{u}\) of \(\Pi^{\tau}\) (19) with respect to the displacement field yields the weak form of the mechanical equilibrium equation \[\delta_{u}\Pi^{\tau}=\frac{\partial}{\partial\mathbf{u}}\Pi^{\tau}\delta\mathbf{u}+ \frac{\partial}{\partial\nabla\mathbf{u}}\Pi^{\tau}\delta\nabla\mathbf{u}=\int\limits_ {\mathcal{B}}\left[\mathbf{\sigma}:\delta\mathbf{\varepsilon}-\bar{f}\cdot\delta\mathbf{u }\right]\,\,\mathrm{d}v-\int\limits_{\partial\mathcal{B}^{\mathrm{N}}}\bar{ \tau}\cdot\delta\mathbf{u}\,\,\mathrm{d}a=0 \tag{24}\] with the variations of displacement and strain \(\delta\mathbf{u}\) and \(\delta\mathbf{\varepsilon}\). Applying Gauss' theorem retrieves its local form \[\nabla\cdot\mathbf{\sigma}+\bar{\mathbf{f}}=\mathbf{0}\text{ in }\mathcal{B} \tag{25}\] with the boundary condition \(\mathbf{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}}\) on \(\partial\mathcal{B}^{\mathrm{N}}\) with \(\partial\mathcal{B}=\partial\mathcal{B}^{\mathrm{D}}\cup\partial\mathcal{B}^{ \mathrm{N}}\) and \(\varnothing=\partial\mathcal{B}^{\mathrm{D}}\cap\partial\mathcal{B}^{\mathrm{N}}\). #### Plasticity Variation with respect to the plastic variables yields \[\delta_{p}\Pi^{\tau} =\frac{\partial}{\partial\mathbf{q}_{\alpha}}\Pi^{\tau}\delta\mathbf{q} _{\alpha}+\frac{\partial}{\partial\dot{\mathbf{q}}_{\alpha}}\Pi^{\tau}\delta\dot{ \mathbf{q}}_{\alpha} \tag{26}\] \[=\int\limits_{\mathcal{B}}\left\{\frac{\partial\psi}{\partial\mathbf{ q}_{\alpha}}\,\delta\mathbf{q}_{\alpha}+\int\limits_{t_{n}}^{t}\left[\frac{ \partial\phi^{\mathrm{p}}}{\partial\mathbf{q}_{\alpha}}\delta\mathbf{q}_{\alpha}+\frac{ \partial\phi^{\mathrm{p}}}{\partial\dot{\mathbf{q}}_{\alpha}}\delta\dot{\mathbf{q}}_{ \alpha}\right]\,\,\mathrm{d}\tau\right\}\,\mathrm{d}v\] (27) \[=\int\limits_{\mathcal{B}}\left\{\left(\frac{\partial\psi}{ \partial\mathbf{q}_{\alpha}}+\frac{\partial\phi^{\mathrm{p}}}{\partial\dot{\mathbf{q}} _{\alpha}}\right)\delta\mathbf{q}_{\alpha}+\int\limits_{t_{n}}^{t}\left(\frac{ \partial\phi^{\mathrm{p}}}{\partial\mathbf{q}_{\alpha}}-\left(\frac{\partial\phi^{ \mathrm{p}}}{\partial\dot{\mathbf{q}}_{\alpha}}\right)^{\cdot}\right)\delta\mathbf{q}_{ \alpha}\,\,\mathrm{d}\tau\right\}\,\mathrm{d}v=0. \tag{28}\] Assuming the limiting case \(t\to t_{n}\), the condition \[\frac{\partial\psi}{\partial\mathbf{q}_{\alpha}}+\frac{\partial\phi^{\mathrm{p}}} {\partial\dot{\mathbf{q}}_{\alpha}}=0 \tag{29}\] must hold. This equation is known as Biot's equation and is the basis for deriving the evolution of the plastic variables. 
For clarity, this is demonstrated with an exemplary dissipation potential taken from Aygun et al. [25] \[\phi^{\mathrm{p}}(\dot{\mathbf{\varepsilon}}^{\mathrm{p}},\dot{\mathbf{\alpha}})= \sigma^{\mathrm{y}}||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||+\frac{b}{2}\left( \dot{\mathbf{\varepsilon}}^{\mathrm{p}}+\dot{\mathbf{\alpha}}\right)^{2},\quad\sigma^{ \mathrm{y}}=\text{const.},\,\sigma^{\mathrm{y}}>0. \tag{30}\] The plastic set in this case contains \(\mathbf{q}_{\alpha}=\{\mathbf{\varepsilon}^{\mathrm{p}},\mathbf{\alpha}\}\). For the case \(||\mathbf{\varepsilon}^{\mathrm{p}}||\neq 0\), it follows \[\frac{\partial\psi}{\partial\mathbf{\varepsilon}^{\mathrm{p}}}+\frac{ \partial\phi^{\mathrm{p}}}{\partial\mathbf{\varepsilon}^{\mathrm{p}}}=0:\quad\mathbf{ \sigma}=\frac{\partial\phi^{\mathrm{p}}}{\partial\mathbf{\varepsilon}^{\mathrm{p}} }=\sigma^{\mathrm{y}}\frac{\dot{\mathbf{\varepsilon}^{\mathrm{p}}}}{||\dot{\mathbf{ \varepsilon}^{\mathrm{p}}}||}+b\left(\mathbf{\varepsilon}^{\mathrm{p}}+\dot{\mathbf{ \alpha}}\right) \tag{31}\] \[\frac{\partial\psi}{\partial\mathbf{\alpha}}+\frac{\partial\phi^{ \mathrm{p}}}{\partial\dot{\mathbf{\alpha}}}=0:\quad\mathbf{\chi}=\frac{\partial\phi^{ \mathrm{p}}}{\partial\dot{\mathbf{\alpha}}}=b\left(\mathbf{\varepsilon}^{\mathrm{p}}+ \dot{\mathbf{\alpha}}\right). \tag{32}\] From the difference (31)\(-\)(32) we get \[\mathbf{\sigma}-\mathbf{\chi}=\sigma^{\mathrm{y}}\frac{\dot{\mathbf{\varepsilon}^{ \mathrm{p}}}}{||\dot{\mathbf{\varepsilon}^{\mathrm{p}}}||}. \tag{33}\] Defining the plastic multiplier \(\lambda=||\dot{\mathbf{\varepsilon}^{\mathrm{p}}}||\) and the yield function \(f^{\mathrm{p}}=||\mathbf{\sigma}-\mathbf{\chi}||-\sigma^{\mathrm{y}}\) we obtain the evolution equation for the plastic strain and the Karush-Kuhn-Tucker (KKT) conditions \[\dot{\mathbf{\varepsilon}^{\mathrm{p}}}=\lambda\frac{\partial f^{\mathrm{p}}}{ \partial\mathbf{\sigma}}\quad\text{and}\quad\lambda\geq 0,\,f^{\mathrm{p}} \leq 0,\,\lambda f^{\mathrm{p}}=0. \tag{34}\] Subsequently, the consistency condition \(\lambda\dot{f}^{\mathrm{p}}=0\) follows from the KKT. For a detailed derivation including both cases \(||\dot{\mathbf{\varepsilon}^{\mathrm{p}}}||\neq 0\) and \(||\dot{\mathbf{\varepsilon}^{\mathrm{p}}}||=0\) see A. Please note that this model happens to be rate-dependent and was chosen only due to its simple structure. Further, see B for an alternative way of deriving the plastic model equations via a dissipation potential following the principle of maximum dissipation. _Phase-field_ Stationarity conditions w. r. t. the phase-field variable yield the weak form, here for the example AT 2, \[\delta_{d}\Pi^{\tau}= \frac{\partial}{\partial d}\Pi^{\tau}\delta d+\frac{\partial}{ \partial\nabla d}\Pi^{\tau}\delta\nabla d=0 \tag{35}\] \[= \int_{\mathcal{B}}\bigg{\{}\bigg{[}g^{\prime}(d)\left(\psi^{ \mathrm{e}}_{+}+\psi^{\mathrm{p}}+\Delta^{\mathrm{p}}\right)+\tilde{g}^{ \prime}(d)H(\mathcal{F})+h(\mathcal{F})\frac{G_{\mathrm{c}}}{\ell}d\bigg{]}\,\delta d\] \[\qquad+h(\mathcal{F})G_{\mathrm{c}}\ell d_{,l}\,\delta d_{,l}+ \int\limits_{t_{n}}^{t}\eta d\,\delta\dot{d}\,\mathrm{d}\tau\bigg{\}}\mathrm{d }v, \tag{36}\] further demanding \(\dot{d}\geq 0\). 
The limiting case \(t\to t_{n}\) now leads to the evolution equation \[\eta\dot{d}=G_{\mathrm{c}}h(\mathcal{F})\left(\ell\Delta d-\frac{d}{\ell} \right)+G_{\mathrm{c}}\ell\nabla d\nabla h(\mathcal{F})-g^{\prime}(d)\big{(} \psi^{\mathrm{e}}_{+}+\psi^{\mathrm{p}}+\Delta^{\mathrm{p}}\big{)}-\tilde{g}^ {\prime}(d)H(\mathcal{F}) \tag{37}\] and the boundary condition \(\nabla d\cdot\mathbf{n}=0\). In order to ensure \(\dot{d}\geq 0\), most models use the history variable approach [10]. Adopting the prevalent case \(\tilde{g}(d)=g(d)\), the history variable \(\mathcal{H}\) can be introduced as \[\eta\dot{d}=G_{\mathrm{c}}h(\mathcal{F})\left(\ell\Delta d-\frac{d}{\ell} \right)+G_{\mathrm{c}}\ell\nabla d\nabla h(\mathcal{F})-g^{\prime}(d)\underbrace{ \max_{\tau\in[0,t]}\left(\psi^{\mathrm{e}}_{+}(\tau)+\psi^{\mathrm{p}}(\tau)+ H(\mathcal{F},\tau)+\Delta^{\mathrm{p}}(\tau)\right)}_{\mathcal{H}}. \tag{38}\] This formulation is actually not variationally consistent. See appendix C for an alternative penalisation approach proposed by [29]. _Model equations and variables_ As shown, the variational principle yields a general set of governing equations. All model variables are displayed in Table 1, while Table 2 lists all resulting model equations. ### Alternative ways of model derivation Apart from the incremental variational principle presented here, there are many other ways to derive a phase-field fracture model and the publications mentioned in this paper already cover a wide variety of derivation methods. Since this can impede the comparison of models, it is helpful to demonstrate the analogies and how the different approaches are intertwined. Figure 1 gives an overview of different paths for model derivation for a phase-field model for fatigue fracture, possibly also including elastic-plastic material behaviour. The derivation used in Section 2.1 is highlighted in red ("Way 1"). Quantities that can serve as starting points for general modelling choices are marked in blue. Although it is beyond the scope of this paper to repeat the derivation of the model with all strategies, a few common approaches are listed in the following: * The plastic dissipation potential can be derived by first setting the yield condition and using it then as a constraint for the optimisation following the principle of maximum dissipation. See B. Marked in green as "Way 2" in Figure 1. * Not only elastic-plastic material behaviour, but also the phase-field problem can be modelled using yield equations. Noii et al. [18] show that the evolution equation of the phase-field model can be reformulated to \(\eta\dot{d}=f^{d}\). The yield function \(f^{d}\) being the difference between a (crack) driving force and a (crack) resisting force offers convenient starting point for modelling decisions due to its physical interpretability. See also Miehe et al. [30] for a formulation based on yield functions for both phase-field and plasticity. * The energetic formulation based on a local stability condition and a local energy balance is also a popular way to derive the set of model equations, as shown in [32]. See Figure 1, top right corner. 
\begin{table}
\begin{tabular}{l l l} & **Variable** & **Conjugate variable** \\ \hline Elasticity & Displacement \(\mathbf{u}\) & \\ & Elastic strain \(\mathbf{\varepsilon}^{\text{e}}\) & \(\mathbf{\sigma}=\dfrac{\partial\psi}{\partial\mathbf{\varepsilon}^{\text{e}}}\) \\ \hline Plasticity & Plastic strain \(\mathbf{\varepsilon}^{\text{p}}\) & \(\mathbf{\sigma}=-\dfrac{\partial\psi}{\partial\mathbf{\varepsilon}^{\text{p}}}\) \\ & Kinematic hardening variable \(\mathbf{\alpha}\) & \(\mathbf{\chi}=-\dfrac{\partial\psi}{\partial\mathbf{\alpha}}\) \\ & Isotropic hardening variable \(\alpha\) & \(p=-\dfrac{\partial\psi}{\partial\alpha}\) \\ \hline Fracture & Phase-field \(d\) & \(\zeta^{d}=-\dfrac{\partial\psi}{\partial d}\) \\ & Phase-field gradient \(\nabla d\) & \\ \hline Fatigue & Fatigue damage \(\mathcal{F}\) & \\ \end{tabular}
\end{table} Table 1: Overview of model variables and their respective conjugate variables for general phase-field framework for fatigue fracture.
\begin{table}
\begin{tabular}{l l} \hline & Free energy density \(\psi=g\,\psi^{\rm e}_{+}+\psi^{\rm e}_{-}+\psi^{\rm p}\) \\ & Strain definition \(\mathbf{\varepsilon}=\mathbf{\varepsilon}^{\rm e}+\mathbf{\varepsilon}^{\rm p}=\frac{1}{2}\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{\top}\right)\) \\ & Stress \(\mathbf{\sigma}=\frac{\partial W_{\rm el}}{\partial\mathbf{\varepsilon}^{\rm e}}\) \\ \hline Equilibrium & Equilibrium \(\nabla\cdot\mathbf{\sigma}+\bar{\mathbf{f}}=\mathbf{0}\) \\ & Boundary conditions \(\mathbf{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}}\) on \(\partial\mathcal{B}^{\rm N}\), \(\mathbf{u}=\bar{\mathbf{u}}\) on \(\partial\mathcal{B}^{\rm D}\) \\ \hline Plasticity & Hardening variables \(\mathbf{\chi}=-\frac{\partial\psi}{\partial\mathbf{\alpha}}\), \(p=-\frac{\partial\psi}{\partial\alpha}\) \\ & Yield function \(f^{\rm p}(\mathbf{\sigma},\mathbf{\chi},p)\), often \(f^{\rm p}=\sqrt{\frac{3}{2}||{\rm dev}(\mathbf{\sigma})-{\rm dev}(\mathbf{\chi})||^{2}}-\sigma^{\rm y}+p\) \\ & Flow rules and hardening laws, often \(\dot{\mathbf{\varepsilon}}^{\rm p}=\lambda\mathbf{n}_{\rm p}\), \(\dot{\mathbf{\alpha}}=\lambda\mathbf{n}_{\rm p}\), \(\dot{\alpha}=\lambda\) with \(\mathbf{n}_{\rm p}=\dfrac{\frac{3}{2}\left({\rm dev}\,\mathbf{\sigma}-{\rm dev}\,\mathbf{\chi}\right)}{\sqrt{\frac{3}{2}||{\rm dev}\,\mathbf{\sigma}-{\rm dev}\,\mathbf{\chi}||^{2}}}\) \\ & KKT and consistency conditions \(f^{\rm p}\leq 0\), \(\lambda\geq 0\), \(f^{\rm p}\lambda=0\), \(\lambda\dot{f}^{\rm p}=0\) \\ \hline Fracture & Evolution equation (including yield function) \\ & \(+\) irreversibility \(\dot{d}\geq 0\) \\ & or KKT \(f^{d}\leq 0\), \(\dot{d}\geq 0\), \(f^{d}\dot{d}=0\) \\ & Boundary conditions \(\nabla d\cdot\mathbf{n}=0\) \\ \hline Fatigue & Evolution of fatigue variable \(\dot{\mathcal{F}}\) \\ \hline \end{tabular}
\end{table} Table 2: Overview of model equations for general phase-field framework for fatigue fracture.
Figure 1: Scheme of different ways of model derivation for a phase-field model for fatigue fracture, explicitly covering elastic-plastic material behaviour. Balance equations are marked in green, while quantities suitable for implementing general modelling choices are marked in blue. Derived quantities are white. Highlighted are two possible ways of deriving model equations.
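To make the interplay of the governing equations collected in Table 2 more concrete, the following minimal sketch (not taken from the paper; a purely elastic, homogeneous AT 2 material point with \(\nabla d=0\), no plastic or fatigue contributions, and purely illustrative parameter values) evaluates the history-variable form (38) of the phase-field evolution in a simple staggered loop.

```python
import numpy as np

# Illustrative material parameters (not from the paper)
E, Gc, ell = 210e3, 2.7, 0.1           # Young's modulus [MPa], Gc [N/mm], length scale [mm]

def psi_plus(eps):
    """Tensile elastic strain energy density in 1-D: psi+ = 1/2 E <eps>_+^2."""
    return 0.5 * E * max(eps, 0.0) ** 2

def phase_field_homogeneous(H):
    """Quasi-static, homogeneous AT 2 stationarity with g(d) = (1-d)^2:
       -2 (1 - d) H + Gc/ell * d = 0   ->   d = 2H / (Gc/ell + 2H)."""
    return 2.0 * H / (Gc / ell + 2.0 * H)

# Prescribed strain history: loading, partial unloading, reloading
strain_history = np.concatenate([np.linspace(0.0, 8e-3, 50),
                                 np.linspace(8e-3, 4e-3, 25),
                                 np.linspace(4e-3, 1e-2, 50)])

H = 0.0                                 # history variable, cf. Eq. (38) without fatigue terms
for eps in strain_history:
    H = max(H, psi_plus(eps))           # irreversibility via the history maximum
    d = phase_field_homogeneous(H)      # staggered phase-field update
    sigma = (1.0 - d) ** 2 * E * eps    # degraded stress, cf. the stress definition in Table 2

print(f"final damage d = {d:.3f}")
```

The sketch only reproduces the structure of the staggered treatment; in an actual boundary value problem the gradient term and, for the fatigue models, the terms in \(\mathcal{F}\) enter the phase-field solve as well.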
\begin{table} \end{table} Table 3: Free energy density \(W\), fatigue function \(h(\mathcal{F})\) or \(H(\mathcal{F})\), fatigue variable \(\mathcal{F}\) and phase-field evolution equation of each reviewed model, in the unified notation of Section 2.1.
## 3 Overview of models
This section gives an overview of most phase-field models for fatigue fracture published to date. If models share a very similar structure, only one of them is chosen as the representative example. Energy density \(W\), fatigue function \(h(\mathcal{F})\) or \(H(\mathcal{F})\), fatigue variable \(\mathcal{F}\) and phase-field evolution equation of each model, according to the unified notation introduced in Section 2.1, are listed in Table 3. Please note that a similar table for \(A\)-models is presented in [31]. Additionally, a short description of each is given in the following. The models are categorised into type \(A\) and type \(B\) according to their distinct fatigue terms as introduced in Section 2.1, and those that have a unique structure that does not belong to either of the aforementioned categories. ### Type A Carrara _et al._[27] This model was one of the first \(A\)-type models to be published. It is essentially a generalisation of the model by Alessi et al. [32] to 3D. It is a purely elastic model and therefore suitable for brittle material behaviour and HCF. Due to its general and simple nature, many of the following \(A\)-type models refer to this one. Both the fatigue degradation functions and the fatigue variable based on the accumulated strain energy density have been used in other models. The fatigue variable accumulates only during loading which is ensured by a Heaviside function. \(\mathcal{F}\) starts accumulating from the first load cycle. This has to be considered during model fitting in order to be consistent also during static loading. The authors were able to show that applied mean load can shift the Paris curve and that parameter \(\kappa\) in the fatigue degradation function controls the Paris parameters \(C\) and \(m\). Aldakheel _et al._[33] This model is very similar to Carrara's. Another model with the same structure but for piezo-electric materials and therefore with a coupling to an electric field was published in Tan et al. [34]. Seiler _et al._[35; 36] The fatigue variable of this model is formulated in cycle domain rather than in time domain, describing the fatigue process continuously instead of simulating each loading and unloading phase. Therefore, a representative (often constant) loading instead of an oscillatory loading is used, saving computational time. See Section 4.6 for further explanation. Fatigue damage is calculated based on a structural durability concept which requires Wohler curves as an input.
The local elastic-plastic stress-strain state is approximated with the help of cyclic stress-strain curves. This allows for a simplified modelling of crack tip plasticity as long as the plastic zone stays small. The model is therefore especially suitable for the transitional range between LCF and HCF. Grossmann-Ponemon _et al._[37] This model is based on Mesgarnejad et al. [38]. Its fatigue variable is also of a continuous type, formulated in cycle domain. The model parameters depend on the load ratio \(R\) between load minimum and maximum within a load cycle, which has to be specified as an input. Therefore, the model reproduces mean load effects. It is evaluated both for a cubic and the standard AT 2 degradation function \(g(d)\). Fatigue accumulation is inhibited in strongly degraded areas with \(d\geq 0.5\), thereby preventing further degradation within the zone of very high strain energy density. In another model variant, fatigue accumulation is only allowed in non-intact material where \(d>0\). The earlier paper of Mesgarnejad et al. [38] proposed a formulation which degraded only the \(d\)- and not the \(\nabla d\)-term of the crack surface density. Hasan _and_ Baxevanis _[_39_]_ Although formulated with a fatigue degradation function \(h(\mathcal{F})\), this model resembles the structure of a \(B\)-type model. This becomes obvious when stating the evolution equation with \(h(\mathcal{F})=1/(1+\kappa\mathcal{F})\) \[0=h(\mathcal{F})G_{\mathrm{c}}\left(2\ell\Delta d-\frac{d}{2\ell}\right)-g^{ \prime}(d)\mathcal{H}=G_{\mathrm{c}}\left(2\ell\Delta d-\frac{d}{2\ell}\right) -g^{\prime}(d)(1+\kappa\mathcal{F})\mathcal{H} \tag{39}\] if the gradient term \(\nabla h(\mathcal{F})\) is neglected. The model is consistent for monotonic loading (see also Section 4.2) and is able to reproduce both Paris and Wohler behaviour. Seles _et al._[_40, 41, 42_] This model includes an elastic-plastic material law with isotropic and kinematic hardening of Chaboche type. Since the accumulating plastic strain energy density \(\psi^{\mathrm{p}}\) is part of the crack driving force, plastic processes promote crack growth regardless of the fatigue variable, which only depends on the elastic strain energy density \(\psi^{\mathrm{e}}_{+}\). In this way, this model covers both LCF - driven by plastic strains - and HCF due to small stress amplitudes which cause no macroscopic plastic effects. The model automatically reproduces mean load effects due to the nature of its fatigue variable and Paris behaviour with its parameter \(\mathcal{F}_{\infty}\) controlling the Paris parameter \(C\). Consistency with monotonic loading is ensured by accumulating fatigue damage only during unloading. In order to reduce computational time, a cycle skipping technique by Cojacaru and Karlsson[43] is applied, see also Section 4.6. This is of particular importance for ductile phase-field models which have even higher computational times than brittle ones. Ulloa _et al._[_20_] Here, the model formulation is also based on the framework by Alessi[32]. This ductile phase-field model includes multi-surface kinematic hardening, gradient-enhanced isotropic hardening and softening as well as an explicit ratchettting strain variable. In contrast to Seles, the fatigue variable accumulates from both the elastic and plastic strain energy density \(\psi^{\mathrm{e}}_{+}\) and \(\psi^{\mathrm{p}}\), strengthening the influence of plastic strains on the crack evolution. 
Again, LCF is mainly driven by plastic strains while HCF is driven by the fatigue variable. The first load cycle already leads to fatigue degradation. Khalil _et al._[44, 45] This model is an extension of Carrara's model to elastic-plastic material behaviour described by a Chaboche model with isotropic and nonlinear kinematic hardening. The fatigue variable accumulates from the current temporal maximum \(\mathcal{H}=\max_{t}\psi^{\mathrm{e}}_{+}(t)\) and the plastic strain energy density \(\psi^{\mathrm{p}}\). The general model formulation can recover both AT 1 and AT 2 as well as a cohesive zone model for suitable parameter choices. Instead of the staggered solution scheme used most frequently, the authors present a new pseudo-monolithic quasi-Newton scheme. Alessi _and_ Ulloa [31] The authors introduce a new class of phase-field fatigue models with a strong link to fracture mechanics. Due to elastic material behaviour, they are suitable for HCF only. Still, microstructural ductile effects around the crack tip are acknowledged by introducing a fatigue degradation zone. Following the idea that for HCF these effects are limited to a small zone around the crack tip, fatigue damage is only accumulated within the zone, covering microstructural effects in a phenomenological way. The authors formulate four requirements on the model's behaviour which are met by four functions contributing to the fatigue variable. See Section 4.2 for details. Using the example of a stationary crack they are able to correlate analytical and numerical results from their model with the Paris law. Thereby, they establish direct relations between model parameters and model behaviour, e. g. mean stress dependence and incline of the Paris curve can be controlled by a parameter each. Relying on Griffith's fracture theory, they are able to establish a new solution strategy: In each increment in which the energy release rate \(G\) satisfies \(G\leq h(\mathcal{F})G_{\mathrm{c}}\), no crack propagation can take place and only \(\mathcal{F}\) is accumulated. If instead \(G>h(\mathcal{F})G_{\mathrm{c}}\), the solution is not admissible. In that case, a solution is sought under the condition \(G=h(\mathcal{F})G_{\mathrm{c}}\). The fatigue crack growth shows three stages: Initial damage accumulation, transient evolution of the crack and, finally, stable crack propagation. ### Type B Amendola _et al._[46] This model is derived in Ginzburg-Landau form. The additive contribution to the crack driving force is controlled by a fatigue variable depending on the strain energy density. The authors also present a model variant for the non-isothermal case. Caputo _and_ Fabrizio [47] This model is very similar to Amendola's apart from the stress definition and the fracture surface density. Schreiber _et al._[48, 49] This \(B\)-type model obtains its fatigue damage from Wohler curves. The application of representative loads instead of cycle-wise simulations allows for an accelerated computation. An efficient control for the number of load cycles per increment is presented. The effect of mean loads can be incorporated by using the mean load ratio of the external load in damage accumulation. Being a brittle model, it is only suitable for HCF. Since the fracture and fatigue contributions are interpreted as part of the free energy density, additional stress terms arise from the second law of thermodynamics.
With the fatigue variable \(\mathcal{F}(\mathbf{\varepsilon})\) depending on the strain due to the empirical fatigue concept used, the stress has to be defined as \[\mathbf{\sigma}=g(d)\mathbb{C}\mathbf{\varepsilon}+g(d)qb\langle\mathcal{F}-\mathcal{ F}_{\mathrm{min}}\rangle^{b-1}\frac{\partial\mathcal{F}}{\partial\mathbf{ \varepsilon}}. \tag{40}\] The additional stress contributions are interpreted as micro stresses due to microscopic fatigue mechanisms. See Section 4.7 for more details. This model was extended to incorporate thermal effects in Yan et al. [50]. Loew _et al._[21] This model is - in contrast to most other models described here - meant not for metals but for rubber. Due to the nature of this material, the model is formulated in a large strain setting and is of viscous, i. e. rate-dependent nature. The stress \[\mathbf{\sigma}=g(d)\left(\mathbf{\sigma}^{\mathrm{eq}}+\sum_{\alpha=1}^{m}\mathbf{\sigma }_{\alpha}^{\mathrm{ov}}\right) \tag{41}\] contains therefore an additional overstress part. Although this model is a \(B\)-type model, in an earlier publication [51], the authors introduced a model variant without an additional fatigue term, where fatigue fracture was exclusively driven by viscous effects, in the form of an accumulating viscous energy density. See the model Aygun et al. [25] below for a similar concept in plasticity. However, the newer publications include the viscous strain energy density \(\psi^{\mathrm{visc}}\) not only in the crack driving force but also in an additional fatigue variable, strengthening the effect of viscosity on fatigue crack growth. A cycle jump technique is used (at least for the elastic case), introducing an explicit and an implicit acceleration scheme with adaptive jump control [22]. Haveroth _et al._[52, 53] The authors present a comprehensive model framework including plasticity covering non-isothermal conditions and time-rate and inertia effects. Fatigue is incorporated as an extra (phase) field variable with its evolution equation derived from the second law of thermodynamics instead of applying a phenomenological evolution law like most other models. The fatigue phase-field is interpreted as micro-damage variable covering micro-cracks and -voids while the regular phase-field for fracture describes macro- and meso-cracks. Simulations can be accelerated through a cycle jump technique. ### Other models Aygun _et al._[25] This model is included exemplarily for all standard ductile phase-field models which model fatigue effects without an explicit fatigue variable. Fracture is only driven by an accumulating plastic energy density \(\psi^{\mathrm{p}}\) in the crack driving force. In this case, an Armstrong-Frederick elastic-plastic material law is used. Naturally, these types of models are only suitable for LCF with significant plastic strains. This model is rate-dependent. Another example for a fatigue model without a fatigue variable is Schroder et al. [54] for concrete or cementious materials. Tsakmakis and Vormwald [55] also showed the ability of their ductile phase-field model derived in the framework of so-called non-conventional thermodynamics to cover fatigue fracture. Lo _et al._[26] This model represents a totally different type of phase-field fatigue model. Unlike the \(A\)- and \(B\)-type models, neither additional crack driving terms nor degradation of the fracture toughness are used. 
Instead, the viscosity parameter \(\eta\), which is only seen as a numerical damping parameter in most other quasi-static phase-field models, controls the fatigue crack growth. It is a function fitted to Paris curves which serve as an input for the model. No cycle-wise simulation is performed; the load is applied statically instead. The model uses a linear approximation of the crack surface. ## 4 Discussion This section discusses the most important characteristics and modelling choices, which present similarities and differences between the models listed in the previous section. Apart from an example for the differentiation in \(A\)- and \(B\)-models, the comparison remains on a theoretical level. For extensive numerical examples we refer to the original publications. ### A- and B-models The most distinct feature of the models is the way their fatigue variable is implemented in an originally static phase-field model. Most models reviewed here are of either the \(A\)- or \(B\)-type as introduced in Section 2.1. In the following, a numerical comparison between the two is performed in order to investigate the behaviour of both model types in a cyclic simulation. #### 4.1.1 Numerical setup Both models are tested with a Compact Tension (CT) geometry displayed in Figure 2 with assumed plane strain state. The initial crack is applied as a Dirichlet condition for the phase-field. The mesh is refined in the area of crack growth to a minimum element size of 0.3 mm. The specimen is loaded with load cycles of constant force amplitude with maximum load \(F=2\,\mathrm{kN}\) and a load ratio between minimum and maximum load \(R=-1\). Construction steel is assumed as a material; the corresponding elastic and fracture parameters are listed in Figure 2. The material model is purely elastic. As a fatigue variable, the one by Seiler et al. [56] based on the local strain approach is chosen exemplarily for both models. See also Figure 2 for the parameters used to determine the fatigue variable. The fatigue degradation function for model \(A\) is set as in [56] to \[h(\mathcal{F})=(1-h_{\mathrm{min}})\,(1-\mathcal{F})^{\kappa}+h_{\mathrm{min}} \tag{42}\] while the additive energy term for model \(B\) is \[W_{\mathrm{fat}}^{B}=g(d)\,H(\mathcal{F})=g(d)\,b\,\mathcal{F}^{\xi}, \tag{43}\] see Figure 2 for parameters. Fatigue variable and fatigue functions are chosen arbitrarily and are not the subject of this analysis of the model types. The coupled problem is solved using an alternate minimisation algorithm with error control for the iteration over both fields. #### 4.1.2 Results and discussion Figure 3 shows the simulation results for both models after different numbers of load cycles. The difference in total lifetime is a matter of parametrisation and not due to the model types. The pre-existing initial crack grows into a fatigue crack with stable cyclic crack growth, until, finally, it evolves into unstable residual fracture. In this last stage, the crack proceeds under monotonic load without evolution of the fatigue variable, as becomes apparent from the distribution of the fatigue variable. Both the initial crack and the residual crack show an ideal regularised phase-field profile determined by the characteristic length scale \(\ell\). Miehe et al. [9] demonstrated analytically that this regularisation has to be of exponential nature in order to be a solution to the phase-field differential equation.
The profile of the initial crack phase-field is plotted in the diagram on the right of Figure 3, marked in orange. However, the section of cyclic crack growth shows different profiles for the two model versions. Model \(A\) yields a - compared to the ideal crack - narrowed crack profile. This becomes evident in both the phase-field contour plot and the green graph in the diagram on the right, Figure 3. It can be explained with the weak form of the phase-field problem (here for the elastic case) \[0=\int_{\mathcal{B}}\left\{\left[g^{\prime}(d)\psi_{+}^{\mathrm{e}}+h(\mathcal{F})\frac{G_{\mathrm{c}}}{\ell}d\right]\delta d+\underline{h(\mathcal{F})G_{\mathrm{c}}\ell\,\nabla d\cdot\delta(\nabla d)}+\int\limits_{t_{n}}^{t}\eta\dot{d}\,\delta\dot{d}\,\mathrm{d}\tau\right\}\mathrm{d}v. \tag{44}\] The fatigue degradation function \(h(\mathcal{F})\) reaches very low values \(\ll 1\) in most parametrisations in the literature; here its minimal value is \(h_{\mathrm{min}}=0.05\). It affects the phase-field gradient term underlined in the equation above. This term is meant to regularise the problem and thereby controls the shape of the phase-field profile. When this term is now weakened due to the fatigue degradation function, the profile develops more freely. In the present case this leads to a narrowing of the crack profile as the crack evolves within the narrow "corridor" of lowered fracture toughness controlled by the fatigue variable \(\mathcal{F}\), see also its contour plot. Grossmann-Ponemon et al. [37] and Hasan and Baxevanis [39] also observe this crack narrowing compared to the brittle model.
Figure 2: Material parameters (left) and geometry of CT specimen (right) for simulations with models \(A\) and \(B\).
Irregularities due to heterogeneous or non-constant \(G_{\mathrm{c}}\) appear not only in fatigue models. See e. g. [57] for a rate-dependent fracture toughness and [58] for the effect an inhomogeneous distribution of \(G_{\mathrm{c}}\) has on the effective crack resistance, as well as [31] for a discussion on how they consider \(G_{\mathrm{c}}\) not as a material parameter but a material function and an overview of different reasons for non-constant \(G_{\mathrm{c}}\). Model \(B\), on the other hand, shows a widening of the crack profile. In the phase-field evolution equation of this model variant \[\eta\dot{d}=G_{\mathrm{c}}\left(\ell\Delta d-\frac{d}{\ell}\right)-g^{\prime}(d)\underbrace{\left(\psi_{+}^{\mathrm{e}}+H(\mathcal{F})\right)}_{\mathcal{H}} \tag{45}\] the fatigue term appears within the crack driving force \(\mathcal{H}\). This leads to a very direct coupling between the fatigue contribution \(H(\mathcal{F})\) and the phase-field distribution. The contour plots of phase-field \(d\) and fatigue variable \(\mathcal{F}\) therefore show a very similar distribution. Hence, due to the strong coupling, the nature of the fatigue variable is even more decisive for the crack appearance for the \(B\)-type model than it is for other model classes. Then again, the model type and the fatigue functions \(h(\mathcal{F})\) and \(H(\mathcal{F})\) also influence the fatigue variable, which becomes clear from the two different distributions of \(\mathcal{F}\) - which is in this case derived from the strain - for the two model versions. Schreiber [59] also observes the crack widening for their \(B\)-type model for small deviations of their ideal fatigue function.
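To make the different coupling of the two fatigue terms more tangible, a small sketch (not from the paper; homogeneous material point, gradient terms dropped, quasi-static limit, all parameter values purely illustrative) evaluates the local counterparts of (44) and (45) with the functions (42) and (43).

```python
Gc, ell = 2.7, 0.1                      # illustrative fracture parameters
h_min, kappa = 0.05, 0.5                # assumed parameters of h(F), Eq. (42)
b_fat, xi = 50.0, 1.0                   # assumed parameters of H(F), Eq. (43)

def h(F):
    """Fatigue degradation function of model A, Eq. (42)."""
    return (1.0 - h_min) * (1.0 - F) ** kappa + h_min

def H(F):
    """Additive fatigue energy of model B, Eq. (43) without the factor g(d)."""
    return b_fat * F ** xi

def d_model_A(psi_e, F):
    """Homogeneous stationarity of (44): -2(1-d) psi_e + h(F) Gc/ell * d = 0."""
    return 2.0 * psi_e / (h(F) * Gc / ell + 2.0 * psi_e)

def d_model_B(psi_e, F):
    """Homogeneous, quasi-static limit of (45): -2(1-d)(psi_e + H(F)) + Gc/ell * d = 0."""
    drive = psi_e + H(F)
    return 2.0 * drive / (Gc / ell + 2.0 * drive)

psi_e = 5.0                             # fixed elastic driving energy density
for F in (0.0, 0.5, 0.9):
    print(f"F={F:.1f}:  d_A={d_model_A(psi_e, F):.3f}  d_B={d_model_B(psi_e, F):.3f}")
# Both model types drive d towards 1 with growing F, model A via a reduced
# crack resistance and model B via an increased crack driving force.
```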
Both the widening and the narrowing of the phase-field profile can lead to deviation of the crack energy which is not (and doesn't necessarily have to be) in accordance with the regularisation of static phase-field models. The crack growth rate can also be affected and responds sensitively to the nature and distribution of the fatigue variable and the fatigue function. An important difference between the model types is also that for \(A\)-type models, the fatigue degradation function \(h(\mathcal{F})\) has obviously to be within the range \([0,1]\), whereas the \(B\)-type fatigue function \(H(\mathcal{F})\) has no upper boundary and its order of magnitude must be calibrated during parametrisation. As shown, both model types entail numerical difficulties reflected in their phase-field profile. The choice of a model variant should eventually be based upon the desired physical interpretation: Some model approaches and applications are suited for a reduction of the material's crack resistance while others go with an increase of the crack driving force compared to the static case. ### Fatigue variable Besides the basic model structure, the fatigue variable \(\mathcal{F}\) is the second most important choice in the model. Most models studied here use either a variation of the accumulated strain energy density (Carrara et al. [27], Grossmann-Ponemon et al. [37], Loew et al. [21] etc.) or an empirical fatigue concept (Schreiber et al. [48], Seiler et al. [35]). Figure 3: Results of phase-field fatigue simulation with \(A\)- and \(B\)-model. Initial setup with pre-defined phase-field crack and results after simulation of \(N\) load cycles. Cross sections on the right show phase-field profile within ideal pre-defined crack (orange) and fatigue crack (green). Model \(A\) narrows the phase-field profile compared to static crack while model \(B\) leads to widening of the phase-field crack. The energy density is an obvious choice due to its easy accessibility in a material routine. Xu et al. [60] explain its suitability from a microscopic point of view: The crack growth rate of short cracks depends on the microstructural crack path and the local crack propagation rate. Conveniently, the stored energy density happens to be a microstructure-sensitive driving force due to being a measure of the energy stored in the lattice structure available to eventually create new crack surface [60]. With a single crystal plasticity slip system, they show that the stored energy density depends on the Burgers vector and the critical resolved shear stress, two characteristics for the microstructure of the material. Furthermore, it is consistent with fracture mechanics, being related to the stress intensity factor which is shown to control fatigue crack growth [60]. They were also able to show experimentally that stored energy at the crack tip (determined with the help of DIC measurements) leads to a higher crack propagation rate. The models that use the strain energy density for the fatigue variable differ from each other regarding the conditions for damage accumulation. Some only accumulate during loading (when the micro cracks evolve, supposedly, Carrara et al. [27]) or only during unloading (Seles et al. [40]) in order to be consistent with models for static loading: Loaded with a purely monotonic load, no fatigue damage should be accumulated. Moreover, most models use the degraded tensile strain energy density \(g(d)\psi_{+}^{\mathrm{e}}\). Some use it without degradation (Seles et al. [40]). 
In this case, \(\mathcal{F}\) accumulates further even when a phase-field crack has already formed. Alessi and Ulloa [31] present a modular scheme to construct a fatigue variable based on the strain energy density in order to fulfill their four requirements towards the model behaviour. They are met by four functions contributing to \(\mathcal{F}\), respectively. Firstly, they treat the singularity of \(\psi^{\mathrm{e}}\) at the crack tip by smoothing it out within a certain zone, the fatigue degradation zone. Outside the zone, no (or close to no) fatigue variable is accumulated. This is meant to phenomenologically replicate the microstructural ductile effects, which mainly occur around the crack tip. Further, two functions specify the damaging loading types and the mean stress effect shifting the Paris curve in vertical direction, respectively. An additional exponential function controls the incline of the Paris curve. In this way, the phenomena of fatigue crack growth can be tuned individually. The other group of models obtain their fatigue variable through empirical lifetime estimation concepts for engineering components. They use data from standardized experiments, i. e. Paris curves, Wohler curves and strain Wohler curves as input data. Conveniently, this incorporates additional information about the fatigue behaviour of the material into the model. However, the models still include parameters to be fitted to experimental results, usually as a part of the fatigue function \(h(\mathcal{F})\) or \(H(\mathcal{F})\). Due to their underlying assumptions, these concepts allow for an accelerated model implementation, see Section 4.6. In brief, the latter models use a damage evaluation based on remaining lifetime [16]. This requires some sort of normalization of a lifetime-describing variable. The former models based on the strain energy density, on the other hand, do without such a normalization and accumulate \(\mathcal{F}\) "en passant", but have to use more arbitrary parameters without a direct relation to experimental quantities. Multiaxial, possibly even non-proportional loads might call for fatigue variables which can replicate stressing and damage history varying in direction. Even though the strain energy density contains the full stress state, the expression lacks information about direction. Traditional life estimation concepts, on the other hand, are often applied with critical plane concepts [61], accumulating fatigue damage for several discrete directions individually. So far, this has not been exploited yet for phase-field fatigue models, though. ### Fatigue functions \(h(\mathcal{F})\) and \(H(\mathcal{F})\) The fatigue functions \(h(\mathcal{F})\) and \(H(\mathcal{F})\) usually contain the most important parameters for model fitting. Those are thresholds or control the progressive or degressive evolution of the fatigue contribution. In this way, they often influence the inclination and shift of the resulting Paris curve and/or Wohler curve. The distinction between \(A\)- and \(B\)-models and therefore \(h(\mathcal{F})\) and \(H(\mathcal{F})\) functions is only for illustrative purposes: The two formulations can be converted into each other by a suitable choice of the functions. Hasan and Baxevanis [39] chose \(h(\mathcal{F})\) in a way that creates a \(B\)-type model, see Section 3.1. The other way around, by setting \(H(\mathcal{F})=h(\mathcal{F})G_{\mathrm{c}}\gamma(d,\nabla d)\) one recovers the \(A\)-type model.
\(A\)-type models describe the weakening of the material through a gradual decrease of fracture toughness \(G_{\mathrm{c}}\). To date, all functions \(h(\mathcal{F})\) are arbitrary choices since no model is based on an experimental measurement of the degrading fracture toughness yet. ### Treatment of plasticity Strictly speaking, the range of application of an elastic phase-field model is limited to brittle materials or ductile materials (such as metals) only for HCF. Therefore, plasticity is often incorporated in the models in the form of a plastic energy density \(\psi^{\mathrm{p}}\), describing the accumulated energy due to hardening. In the phase-field evolution equation (37) it appears in the static crack driving force. Already this effect alone can describe cyclic material degradation under (comparatively high) cyclic loads leading to phase-field cracks, as shown in Aygun et al. [25]. If combined with a fatigue variable depending on the elastic energy density, this can cover a wide range of loads from LCF to HCF. Some models double the effect of plasticity on the crack evolution by including \(\psi^{\mathrm{p}}\) also in the fatigue variable \(\mathcal{F}\) (Ulloa et al. [20]). This allows for more modelling flexibility and is motivated by the fact that plastic processes also drive static cracks (therefore ensuring consistency with monotonic loading) while at the same time, they influence fatigue qualities of the material, especially on the microscopic scale. Microscopic plastic effects have not been modelled explicitly so far since multiscale phase-field modelling of fracture remains a challenging task, i. a. due to being very computationally intensive. ### Irreversibility The problem of crack irreversibility is a frequently discussed matter in the phase-field community. Different approaches to ensure \(\dot{d}\geq 0\) (the strictest formulation) exist, such as the history variable approach and the penalty parameter, see Section 2.1. While most phase-field fatigue models use a history parameter to formally ensure \(\dot{d}\geq 0\), it is of minor importance in practice. Fatigue cracks at sub-critical loads are driven by a fatigue variable which is ever-increasing anyway. ### Acceleration methods for saving computational time Reducing computational time is crucial in cyclic phase-field fatigue simulations, especially if elastic-plastic material models are involved. Cycle-by-cycle simulations are not feasible for components of practical relevance, and this is not limited to HCF. The models mentioned here address this problem mainly in two ways: Through representative loads (Schreiber et al. [48], Seiler et al. [35]) and through the cycle jump method (Seles et al. [40], Loew et al. [21], Haveroth et al. [53]). The latter is a general acceleration concept described by Cojacaru and Karlsson [43]. As shown in Figure 4 in green, a few cycles are simulated explicitly before the variables of interest - the fatigue variable, plastic hardening variables etc. - are extrapolated over a certain number of cycles. Then again follow properly simulated cycles. One difficulty is the choice of an appropriate jump size as a compromise between simulation time and accuracy, especially considering the often sudden nature of crack evolution.
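A minimal sketch of this extrapolation logic is given below (not taken from any of the cited works; the per-cycle increment of the fatigue variable and the fixed jump size are placeholder assumptions, in practice the jump size is chosen adaptively).

```python
import numpy as np

def simulate_cycle(F, load_amplitude):
    """Placeholder for one fully resolved load cycle: returns the updated
    fatigue variable. An assumed increment growing with F is used here."""
    return F + 1e-4 * load_amplitude ** 2 * (1.0 + 5.0 * F)

n_explicit, delta_N = 3, 200            # cycles resolved explicitly, jump size
F, cycle = 0.0, 0
while F < 1.0 and cycle < 50_000:
    # (1) resolve a few cycles explicitly and record the increments
    increments = []
    for _ in range(n_explicit):
        F_new = simulate_cycle(F, load_amplitude=1.0)
        increments.append(F_new - F)
        F, cycle = F_new, cycle + 1
    # (2) linear extrapolation of F over delta_N skipped cycles
    dF_dN = np.mean(increments)
    F, cycle = F + delta_N * dF_dN, cycle + delta_N

print(f"fatigue variable reached F={F:.2f} after about {cycle} cycles")
```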
Simulations with representative loads, on the other hand, are controlled by continuous fatigue instead of continuous time. As shown in Figure 4 in red, not a single cycle is simulated explicitly. Instead, the load applied is a representative load, usually some sort of envelope curve of the real load function. The lack of information due to this simplification is compensated by assumptions which are mostly based on empirical fatigue concepts (see also Section 4.2). This can be an assumption of the stress-strain behaviour and the amount of damage depending on the area inside the stress-strain hysteresis (Seiler et al. [35]) or the damaging effect of load cycles according to their stress amplitude (Schreiber et al. [48]), completed by cyclic material data such as Wohler curves. In this way, the damage contribution of each cycle can be calculated from the stress-strain state at the representative load, possibly complemented with information regarding the load such as the ratio \(R\) between maximum and minimum load. The choice of an appropriate representative load is always based on assumptions, such as the assumption that the most intensive crack driving state at the critical crack front occurs at maximum load. Especially in the case of variable amplitudes, several load levels might be necessary (e. g. maximum and minimum load) in order to quantify the damage contribution of that load cycle. This method has the greatest accelerating effect in the case of at least sectionwise constant load amplitudes, since, in that case, several load cycles can be combined in one increment if crack growth rates are small. A new increment is only necessary when significant crack growth has happened and changed the strain state in the specimen. Lastly, Lo et al. [26] use an entirely different strategy, where they do not extrapolate fatigue damage \(\mathcal{F}\), but directly work with crack propagation rates fitted to Paris curves.
Figure 4: Comparison of acceleration techniques: Representative loads and cycle jump method. Depiction of applied loads is based on [48].
### Additional stress terms While in most models the stress is defined as \(\boldsymbol{\sigma}(\boldsymbol{\varepsilon})=\frac{\partial W_{\text{el}}}{\partial\boldsymbol{\varepsilon}}\), Schreiber et al. [48] and Haveroth et al. [53] introduce additional stress terms. The reason for this lies in their definition of the free energy density \(\psi\), which includes - in contrast to the definition used in Section 2.1 - fracture and fatigue terms, e. g. in Schreiber et al. \[\psi=g(d)\,\psi_{+}^{\text{e}}+\psi_{-}^{\text{e}}+G_{\text{c}}\gamma+\tilde{g}(d)H(\mathcal{F}). \tag{46}\] Evaluating the Clausius-Duhem inequality \(\boldsymbol{\sigma}:\dot{\boldsymbol{\varepsilon}}-\dot{\psi}\geq 0\) yields \[\underbrace{\boldsymbol{\sigma}:\dot{\boldsymbol{\varepsilon}}-\frac{\partial\psi}{\partial\boldsymbol{\varepsilon}}:\dot{\boldsymbol{\varepsilon}}-\frac{\partial\psi}{\partial\mathcal{F}}\dot{\mathcal{F}}}_{\text{(a)}}\;\underbrace{-\frac{\partial\psi}{\partial d}\dot{d}-\frac{\partial\psi}{\partial\nabla d}\cdot(\nabla d)^{\boldsymbol{\cdot}}}_{\text{(b)}}\geq 0. \tag{47}\] Supposing the common dependency \(\mathcal{F}(\boldsymbol{\varepsilon})\), the stress is defined from (a) \(\overset{!}{=}0\) as \[\boldsymbol{\sigma}=\frac{\partial\psi}{\partial\boldsymbol{\varepsilon}}+\frac{\partial\psi}{\partial\mathcal{F}}\frac{\partial\mathcal{F}}{\partial\boldsymbol{\varepsilon}}, \tag{48}\] which is in this case \[\boldsymbol{\sigma}=g(d)\mathbb{C}\boldsymbol{\varepsilon}+g(d)qb\langle\mathcal{F}-\mathcal{F}_{\text{min}}\rangle^{b-1}\frac{\partial\mathcal{F}}{\partial\boldsymbol{\varepsilon}}.
\tag{49}\] Term (b) yields \[-\frac{\delta\psi}{\delta d}\dot{d}\geq 0, \tag{50}\] leading to the phase-field evolution equation. Additional stress terms entail the issue of physical interpretation of those terms. Schreiber et al. interpret them as microscopic stresses. However, even with this widespread extended definition of the free energy density \(\psi\), additional stress terms are usually avoided by assuming \(\mathcal{F}\) to be constant in time for the considered time step, i. e. independent of \(\boldsymbol{\varepsilon}\). This assumption is valid considering that \(\mathcal{F}\) changes on a large time scale (over the course of several load cycles) compared to e. g. the strain oscillating in each load cycle. ### Range of application Finally, we want to give a list of models addressing certain types of scenarios and problems in simulation. #### Material class While most models published are designed for metals, a few other material classes are addressed as well: * Elastomers: Loew et al. [21] with rate-dependent behaviour and a large strain setting. * Concrete and rock: Schroder et al. [54] with a Drucker-Prager yield criterion and unsymmetric tension-compression behaviour. * Piezoelectric solids: Tan et al. [34] with coupling to electric field. #### Loading For ductile materials like metals, high loading amplitudes (LCF) cause plasticity around the crack tip, therefore calling for an elastic-plastic material model. For low loading amplitudes (HCF) and brittle materials, an elastic model is sufficient. * Elastic: Carrara et al. [27], Grossmann-Ponemon et al. [37], Hasan and Baxevanis [39], Amendola et al. [46], Schreiber et al. [48], Lo et al. [26] * Elastic-plastic: Aygun et al. [25], Seles et al. [40], Ulloa et al. [20], Khalil et al. [44], Haveroth et al. [53] #### Observed phenomena and challenges * Material behaviour dependent on deformation rate: Loew et al. [21], Haveroth et al. [53] * Bauschinger effect (kinematic hardening): Aygun et al. [25], Seles et al. [40], Ulloa et al. [20], Khalil et al. [44] * Ratcheting: Ulloa et al. [20] * Temperature-dependent fatigue behaviour: Amendola et al. [46], Haveroth et al. [53], Yan et al. [50] * Acceleration techniques for computational time: Seiler et al. [35], Seles et al. [40], Schreiber et al. [48], Loew et al. [22], Haveroth et al. [53], Lo et al. [26] * Concentration-dependent material behaviour: Ai et al. [62] implemented a coupled chemo-mechanical fatigue fracture model to simulate cracking in lithium-ion batteries. The phase-field fatigue part is equivalent to Carrara et al. [27]. ## 5 Conclusion In recent years, many groups have addressed the issue of fatigue fracture with a large variety of phase-field models. This paper puts the models published to date into a common variational framework. Based on that, the model structures and characteristics are compared. This paper is meant to provide a basis for both choosing a model type for a specific simulation task and for developing phase-field models further. Similarities and differences between the models are discussed. Thereby, two main model classes based on the model structure are identified: Firstly, \(A\)-type models that degrade the fracture toughness gradually in order to describe the continuous weakening of the material due to cyclic loading. And secondly, the \(B\)-type models characterised by an additional crack driving force compared to the static models, which allows the fatigue crack to propagate at low fatigue loads.
A numerical study shows that both model types actually suffer from fundamental problems regarding the regularised crack profile: While the \(A\)-type models' degradation of the regularisation term in the phase-field evolution equation leads to narrower crack profiles compared to static cracks, \(B\)-type models show an unintended broadening of the crack profile due to the direct link between the distribution of the fatigue variable and the final crack profile. Eventually, the choice between both model types should follow the preferred physical explanation of the incorporation of fatigue into the phase-field structure: While some might find a weakening of the material, associated with a decrease of total energy of the system, more plausible, others might prefer an additional fatigue energy contribution. The second-most important modelling choice is the fatigue variable itself. Most groups choose the accumulated strain energy density as a fatigue measure. Not only is this quantity easily accessible, but its significance as a measure of the stored energy available for the forming of new crack surface is also straightforward. However, some models use empirical fatigue concepts instead. These incorporate additional cyclic material data in the calculation. The empirical assumptions inherent to the concepts actually allow for an acceleration scheme of the fatigue simulation. Alternatively, cycle jump concepts are widely used. Essential for the choice of model are the material and the loading conditions. While most models are meant for metals, some also exist for other material classes. In the case of low cycle fatigue with high loading amplitudes, elastic-plastic material models are to be favoured due to their ability to model the significant plasticity at the crack tip. Elastic-plastic phase-field models differ in their way of incorporating plasticity in the fatigue variable. By now, most models are able to reproduce typical phenomena observed in cyclic fracture experiments: Wohler curves describing the lifetime of components as well as Paris curves for the crack propagation rates can be reproduced. Mean load effects are captured by some models. Still, the simulation of fatigue cracks remains a challenging task, not only with the phase-field method. All models studied here are phenomenological and macroscopic. It is up to future works to develop models approaching the fatigue phenomenon from a more physical point of view, which always has to be - at least in part - microscopic. Multiscale models have not been developed yet due to the immense computational power required for phase-field fatigue simulations. This is due to the required fineness of the meshes for phase-field simulations in general and, on the other hand, to the high number of load cycles to be simulated, inherent to cyclic loads. Especially for 3D simulations and elastic-plastic material behaviour, this problem sets the limits for simulations today. ## Acknowledgements This work was supported by the Deutsche Forschungsgemeinschaft (DFG) via the project _Experimental analysis and phase-field modelling of the interaction between plastic zone and fatigue crack growth in ductile materials under complex loading_ (grant number KA 3309/12-1). The authors are grateful to the Centre for Information Services and High Performance Computing (ZIH) of TU Dresden for providing its facilities for high throughput calculations. The authors thank Franz Dammass for comprehensive discussions and comments on the topic.
## Appendix A Derivation of plastic equations
For the exemplary plastic dissipation potential \[\phi^{\mathrm{p}}(\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}},\dot{\boldsymbol{\alpha}})=\sigma^{\mathrm{y}}||\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}}||+\frac{b}{2}\left(\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}}+\dot{\boldsymbol{\alpha}}\right)^{2},\quad\sigma^{\mathrm{y}}>0\] (A.1) the plastic equations are to be derived. From Biot's equation (29) follows for the plastic conjugate variables \[\boldsymbol{\sigma}=\frac{\partial\phi^{\mathrm{p}}}{\partial\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}}}\quad\text{and}\quad\boldsymbol{\chi}=\frac{\partial\phi^{\mathrm{p}}}{\partial\dot{\boldsymbol{\alpha}}}=b\,(\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}}+\dot{\boldsymbol{\alpha}}).\] (A.2) For \(||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||\neq 0\) the stress is \[\mathbf{\sigma}=\sigma^{\mathrm{y}}\frac{\dot{\mathbf{\varepsilon}}^{\mathrm{p}}}{||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||}+b\left(\dot{\mathbf{\varepsilon}}^{\mathrm{p}}+\dot{\mathbf{\alpha}}\right).\] (A.3) From the difference (A.3)-(A.2) we get \[\mathbf{\sigma}-\mathbf{\chi}=\sigma^{\mathrm{y}}\frac{\dot{\mathbf{\varepsilon}}^{\mathrm{p}}}{||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||}\quad\mathrm{and}\quad||\mathbf{\sigma}-\mathbf{\chi}||=\sigma^{\mathrm{y}}.\] (A.4) Defining \(\lambda=||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||\) and \(f^{\mathrm{p}}=||\mathbf{\sigma}-\mathbf{\chi}||-\sigma^{\mathrm{y}}\) we obtain \[\dot{\mathbf{\varepsilon}}^{\mathrm{p}}=\lambda\frac{\mathbf{\sigma}-\mathbf{\chi}}{\sigma^{\mathrm{y}}}=\lambda\frac{\partial f^{\mathrm{p}}}{\partial\mathbf{\sigma}}\] (A.5) \[\lambda=||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||\geq 0\] (A.6) \[f^{\mathrm{p}}=||\mathbf{\sigma}-\mathbf{\chi}||-\sigma^{\mathrm{y}}=0\] (A.7) \[\lambda f^{\mathrm{p}}=0.\] (A.8) For the case \(||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||=0\) the derivative \(\mathbf{\sigma}=\frac{\partial\phi^{\mathrm{p}}}{\partial\dot{\mathbf{\varepsilon}}^{\mathrm{p}}}\), especially the problematic term \(\frac{\partial}{\partial\dot{\mathbf{\varepsilon}}^{\mathrm{p}}}||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||\), has yet to be defined. Here, the fact that the absolute value \(||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||\) is a convex function can be exploited. Using convex analysis, its derivative is defined as \[\frac{\partial}{\partial\dot{\mathbf{\varepsilon}}^{\mathrm{p}}}||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||\bigg{|}_{\dot{\mathbf{\varepsilon}}^{\mathrm{p}}=0}=\mu\mathbf{n}\quad\mathrm{with}\ 0\leq\mu\leq 1,||\mathbf{n}||=1.\] (A.9) Vividly speaking, the derivative of the absolute value function at its kink is defined with arbitrary direction.
With \[\mathbf{\sigma}=\sigma^{\mathrm{y}}\mu\mathbf{n}+b\left(\dot{\mathbf{\varepsilon}}^{\mathrm{p}}+\dot{\mathbf{\alpha}}\right)\] (A.10) and (A.2) follows \[\mathbf{\sigma}-\mathbf{\chi}=\sigma^{\mathrm{y}}\mu\mathbf{n}\quad\mathrm{and}\quad\frac{||\mathbf{\sigma}-\mathbf{\chi}||}{\sigma^{\mathrm{y}}}=\mu.\] (A.11) From \(0\leq\mu\leq 1\) follows \[||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||=0\] (A.12) \[\lambda=||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||=0\] (A.13) \[||\mathbf{\sigma}-\mathbf{\chi}||-\sigma^{\mathrm{y}}\leq 0\to f^{\mathrm{p}}\leq 0\] (A.14) \[\lambda f^{\mathrm{p}}=0.\] (A.15) From the sets of equations for the two cases (A.5)-(A.8) and (A.12)-(A.15) follow for all \(||\dot{\mathbf{\varepsilon}}^{\mathrm{p}}||\) the evolution equation for the plastic strain and the KKT conditions \[\dot{\mathbf{\varepsilon}}^{\mathrm{p}}=\lambda\frac{\partial f^{\mathrm{p}}}{\partial\mathbf{\sigma}}\quad\mathrm{and}\quad\lambda\geq 0,f^{\mathrm{p}}\leq 0,\lambda f^{\mathrm{p}}=0.\] (A.16) The time derivative of (A.15) for the case \(f^{\mathrm{p}}=0\) yields the consistency condition \(\lambda\dot{f}^{\mathrm{p}}=0\).
## Appendix B Derivation of plastic model equations via dissipation potential
From the Clausius-Duhem inequality \[\mathbf{\sigma}:\dot{\mathbf{\varepsilon}}-\frac{\partial\psi}{\partial\mathbf{\varepsilon}}:\dot{\mathbf{\varepsilon}}-\frac{\partial\psi}{\partial\mathbf{\varepsilon}^{\mathrm{p}}}:\dot{\mathbf{\varepsilon}}^{\mathrm{p}}-\frac{\partial\psi}{\partial\mathbf{\alpha}}:\dot{\mathbf{\alpha}}-\frac{\partial\psi}{\partial\alpha}\dot{\alpha}\geq 0 \tag{B.1}\] we can identify the plastic conjugate variables \[-\frac{\partial\psi}{\partial\mathbf{\varepsilon}^{\mathrm{p}}}=:\mathbf{\sigma},\quad-\frac{\partial\psi}{\partial\mathbf{\alpha}}=:\mathbf{\chi},\quad-\frac{\partial\psi}{\partial\alpha}=:p. \tag{B.2}\] A yield function \[f^{\mathrm{p}}(\mathbf{\sigma},\mathbf{\chi},p;d)\coloneqq\sqrt{\frac{3}{2}||\mathrm{dev}(\mathbf{\sigma})-\mathrm{dev}(\mathbf{\chi})||^{2}}-\sigma^{\mathrm{y}}+p \tag{B.3}\] is defined. The evolution equations for the internal variables can now be derived e.g. from the principle of maximum plastic dissipation \[\phi^{\mathrm{p}}=\sup_{\mathbf{\sigma},\mathbf{\chi},p,\lambda,z}\hat{\phi}^{\mathrm{p}}=\sup_{\mathbf{\sigma},\mathbf{\chi},p,\lambda,z}\left\{\mathbf{\sigma}:\dot{\mathbf{\varepsilon}}^{\mathrm{p}}+\mathbf{\chi}:\dot{\mathbf{\alpha}}+p\,\dot{\alpha}-\lambda(f^{\mathrm{p}}(\mathbf{\sigma},\mathbf{\chi},p;d)+z^{2})\right\}, \tag{B.4}\] constrained by the yield function \(f^{\mathrm{p}}\) using a Lagrange multiplier \(\lambda\) and a slack variable \(z\). The supremum requires the partial derivatives of \(\hat{\phi}^{\mathrm{p}}\) with respect to \(\mathbf{\sigma},\mathbf{\chi},p,\lambda,z\) to be \(0\) as well as \(\frac{\partial^{2}\hat{\phi}^{\mathrm{p}}}{\partial z^{2}}\leq 0\). This yields the flow rule and the hardening laws \[\dot{\mathbf{\varepsilon}}^{\mathrm{p}}=\lambda\frac{\partial f^{\mathrm{p}}}{\partial\mathbf{\sigma}}=\lambda\mathbf{n}^{\mathrm{p}},\quad\dot{\mathbf{\alpha}}=\lambda\frac{\partial f^{\mathrm{p}}}{\partial\mathbf{\chi}}=\lambda\mathbf{n}^{\mathrm{p}}\quad\text{and}\quad\dot{\alpha}=\lambda\frac{\partial f^{\mathrm{p}}}{\partial p} \tag{B.5}\] with direction tensor \(\mathbf{n}^{\mathrm{p}}\), as well as the KKT conditions \[f^{\mathrm{p}}\leq 0,\quad\lambda\geq 0\quad\text{and}\quad f^{\mathrm{p}}\lambda=0.
\] (B.6) and, for \(f^{\mathrm{p}}=0\), the consistency condition \(\lambda\dot{f}^{\mathrm{p}}=0\) follows. ## Appendix C Phase-field equation with penalty approach An alternative penalisation approach to ensure \(\dot{d}\geq 0\) is proposed by [29]. A modified energy functional \[\tilde{\mathcal{E}}(\mathbf{\varepsilon},d,\nabla d,\dot{d},\mathbf{q}_{\alpha},\dot{\mathbf{q}}_{\alpha};\mathcal{F})=\mathcal{E}+\frac{\lambda^{\infty}}{2}\int\limits_{\mathcal{B}}\langle\dot{d}\rangle_{-}^{2}\ \mathrm{d}v\] (C.1) is introduced. The penalty term with penalty parameter \(\lambda^{\infty}\) and \(\langle x\rangle_{-}\coloneqq\min(0,x)\) yields the evolution equation \[\eta\dot{d}=G_{\mathrm{c}}h(\mathcal{F})\left(\ell\Delta d-\frac{d}{\ell}\right)+G_{\mathrm{c}}\ell\nabla d\nabla h(\mathcal{F})-\lambda^{\infty}\langle\dot{d}\rangle_{-}-g^{\prime}(d)\underbrace{\left(\psi_{+}^{\mathrm{c}}(\mathbf{\varepsilon}^{\mathrm{e}})+\psi^{\mathrm{p}}(\alpha,\mathbf{\alpha})+H(\mathcal{F})+\Delta^{\mathrm{p}}\right)}_{\mathcal{H}}.\] (C.2)
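To make the role of the yield function, flow rule and KKT conditions derived in Appendices A and B concrete, here is a minimal one-dimensional return-mapping sketch. It assumes linear elasticity with modulus `E` and linear kinematic hardening with modulus `H`; both parameters, and the algorithm itself, are illustrative assumptions and not the paper's own scheme.

```python
# Minimal 1D return mapping for a yield function f^p = |sigma - chi| - sigma_y
# with the KKT conditions lambda >= 0, f^p <= 0, lambda * f^p = 0.
import numpy as np

E, H, sigma_y = 200.0, 10.0, 1.0          # assumed material parameters

def return_map(strain_history):
    eps_p, chi = 0.0, 0.0
    stresses = []
    for eps in strain_history:
        sigma_tr = E * (eps - eps_p)              # elastic trial stress
        f_tr = abs(sigma_tr - chi) - sigma_y      # trial yield function
        if f_tr <= 0.0:                           # elastic step: lambda = 0
            sigma = sigma_tr
        else:                                     # plastic step: enforce f^p = 0
            dlam = f_tr / (E + H)                 # consistency condition
            n = np.sign(sigma_tr - chi)           # flow direction df^p/dsigma
            eps_p += dlam * n                     # flow rule
            chi += H * dlam * n                   # kinematic hardening law
            sigma = E * (eps - eps_p)
            assert abs(abs(sigma - chi) - sigma_y) < 1e-9   # f^p = 0 holds
        stresses.append(sigma)
    return stresses

print(return_map(np.linspace(0.0, 0.02, 11)))
```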
2307.04430
On central orderings
We define the notion of central orderings for a general commutative ring $A$, which generalizes the notion of central points of irreducible real algebraic varieties. We study a central and a precentral locus, which both live in the real spectrum of the ring $A$ and allow us to state central Positivstellens\"atze in the spirit of Hilbert's 17th problem.
Goulwen Fichou, Jean-Philippe Monnier, Ronan Quarez
2023-07-10T09:09:34Z
http://arxiv.org/abs/2307.04430v1
# On central orderings ###### Abstract. We define the notion of central orderings for a general commutative ring \(A\), which generalizes the notion of central points of irreducible real algebraic varieties. We study a central and a precentral locus, which both live in the real spectrum of the ring \(A\) and allow us to state central Positivstellensatze in the spirit of Hilbert's 17th problem. Key words and phrases: real algebraic geometry, orderings, real spectrum, real algebra 2020 Mathematics Subject Classification: 06F25, 14P99, 13A99, 26C99 The authors have received support from the Henri Lebesgue Center ANR-11-LABX-0020-01 and the project Enum-Geom ANR-18-CE40-0009. In section 3, we study central ideals which have already been defined in [15]. This subcategory of real prime ideals has recently been used in [9] to develop the theory of central seminormalization (a real version of the seminormalization introduced by Traverso [17]). The motivation was the property that central ideals (which are in particular real ideals) behave much better than real ideals when we consider integral extensions of rings. Similarly to Dubois' notion of central point of a real algebraic variety, we consider the notion of central orderings introduced in [2], as the elements of \(\operatorname{Spec}_{r}A\) which are in the closure of \(\operatorname{Spec}_{r}\mathcal{K}(A)\) for the topology of the real spectrum. The set of central orderings, denoted by \(\operatorname{Spec}_{c}A\), is a closed subset of \(\operatorname{Spec}_{r}A\) and the supports of central orderings are exactly the central prime ideals of \(A\). We study these central orderings, whose definition, of a topological nature, is not easy to handle in order to prove algebraic statements such as Positivstellensatze. This motivates us to introduce another sort of orderings, which we call precentral, defined by a simple and natural algebraic condition. The precentral orderings are those orderings which contain the cone \(A\cap\sum\mathcal{K}(A)^{2}\) and hence they form a superset of the central orderings. The set of precentral orderings, denoted by \(\operatorname{Spec}_{pc}A\), is also a closed subset of \(\operatorname{Spec}_{r}A\) and the supports of precentral orderings are again exactly the central prime ideals of \(A\). In section 4, we study the differences between central and precentral orderings, giving characterizations of these two kinds of orderings. Although these orderings are distinct in general, it appears that they coincide for real algebraic varieties of dimension less than or equal to two. In section 5, we give some precentral Positivstellensatze which come naturally from the algebraic nature of precentral orderings. One of the main results of the paper is as follows. **Theorem A**.: _Let \(f_{1},\ldots,f_{r}\in A\) and \(f\in A\). Denote by \(P\subset A\) the cone \((A\cap\sum\mathcal{K}(A)^{2})[f_{1},\ldots,f_{r}]\) and by \(\Lambda\subset\operatorname{Spec}_{pc}A\) the set \(\{\alpha\in\operatorname{Spec}_{pc}A\mid f_{1}(\alpha)\geq 0,\ldots,f_{r}(\alpha)\geq 0\}\). Then_ 1. \(f\geq 0\) _on_ \(\Lambda\) _if and only if_ \(fq=p+f^{2m}\) _for some_ \(p,q\) _in_ \(P\) _and_ \(m\in\mathbb{N}\)_._ 2. \(f>0\) _on_ \(\Lambda\) _if and only if_ \(fq=1+p\) _for some_ \(p,q\) _in_ \(P\)_._ 3. \(f=0\) _on_ \(\Lambda\) _if and only if_ \(f^{2m}+p=0\) _for some_ \(p\) _in_ \(P\) _and_ \(m\in\mathbb{N}\)_._ As a consequence, we also obtain some central Positivstellensatze when the positivity conditions on central and precentral orderings coincide. 
In particular, we get geometric central Positivstellensatze for algebraic varieties of dimension less than or equal to two. The study done in section 4 shows that we cannot differentiate central and precentral orderings by the global positivity of a single function. This enables us to state a general version of Hilbert's 17th property. **Theorem B**.: _Let \(f\in A\). The following properties are equivalent:_ 1. \(f\geq 0\) _on_ \(\operatorname{Spec}_{c}A\)_._ 2. \(f\geq 0\) _on_ \(\operatorname{Spec}_{pc}A\)_._ 3. \(f\in\sum\mathcal{K}(A)^{2}\)_._ Note that when \(A\) is the coordinate ring of an irreducible affine algebraic variety \(V\) over a real closed field \(R\), the previous properties are equivalent to \(f\geq 0\) on \(\operatorname{Cent}(V(R))\). Using the abstract formalism developed here, we are able to extend our Positivstellensatze to geometric settings other than real algebraic varieties, namely the Nash and the real analytic settings. The final section 6 deals with continuous rational functions. As previously recalled, one knows that an \(f\in R[x_{1},\ldots,x_{n}]\) which is nonnegative on \(R^{n}\) is a sum of squares of rational functions. From [11] it appears that \(f\) is in fact a sum of squares of rational functions which can be extended continuously to the whole \(R^{n}\). We then study the question of adding a continuity requirement to the third property of Theorem B, and prove, surprisingly, that it is not always possible. Nevertheless, we establish a continuous central version of Hilbert's 17th property when the nonnegativity is assumed on the whole real spectrum. Throughout the paper, \(R\) denotes a real closed field and all the rings are commutative and contain \(\mathbb{Q}\). ## 2. Preliminaries on real algebra In this section we revisit real algebra (introduced in [4] and [12]) from the angle of ideals that are convex with respect to the cone of sums of squares. In this section \(A\) is a ring. ### Preordering, convexity, convex and real ideals and the support mapping **Definition 2.1**.: A cone of \(A\) is a subset \(P\) of \(A\) such that \(P+P\subset P\), \(P\cdot P\subset P\) and \(A^{2}\subset P\). A cone \(P\) is called proper if \(-1\not\in P\). Note that the set \(\sum A^{2}\) of sums of squares is the smallest cone of \(A\). In case \(-1\not\in\sum A^{2}\), we say that \(A\) is a formally real ring, which also means that it admits a proper cone. Another example of major interest in the paper is, if \(A\) is an integral domain with fraction field \(\mathcal{K}(A)\), the cone \(A\cap\sum\mathcal{K}(A)^{2}\) of elements in \(A\) that are sums of squares of elements of \(\mathcal{K}(A)\). This cone plays a crucial role in the paper; it will be denoted simply by \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\). We will encounter the notion of cone generated by a subset. Let \(P\) be a cone of \(A\). If \(S\subset A\) then \(P[S]=\{\sum\limits_{i=1}^{n}t_{i}s_{i}\mid t_{i}\in P,\ s_{i}\in S\}\) is the smallest cone of \(A\) containing \(P\) and \(S\). If \(S=\{f_{1},\ldots,f_{k}\}\) then we also denote \(P[S]\) by \(P[f_{1},\ldots,f_{k}]\). Recall that for a given cone \(P\) of \(A\), the set \(P\cap-P\) is called the support of \(P\) and is denoted by \(\operatorname{supp}(P)\). **Proposition 2.2**.: _We have a support map_ \[\operatorname{supp}:\operatorname{Cone}(A)\to\operatorname{Ideal}(A),\ P\mapsto\operatorname{supp}(P)\] _which preserves inclusions._ _Let \(\varphi:A\to B\) be a ring morphism. 
The diagram_ \[\begin{array}{ccc}\operatorname{Cone}(B)&\stackrel{{\operatorname{supp}}}{{\to}}&\operatorname{Ideal}(B)\\ \downarrow&&\downarrow\\ \operatorname{Cone}(A)&\stackrel{{\operatorname{supp}}}{{\to}}&\operatorname{Ideal}(A)\end{array}\] _is commutative, where the vertical arrows are the natural maps \(\operatorname{Ideal}(B)\to\operatorname{Ideal}(A)\), \(I\mapsto\varphi^{-1}(I)\), and \(\operatorname{Cone}(B)\to\operatorname{Cone}(A)\), \(P\mapsto\varphi^{-1}(P)\)._ Proof.: The fact that the support map is well-defined follows directly from the formula \[xy=\frac{1}{4}x(y+1)^{2}-\frac{1}{4}x(y-1)^{2}\] for \(x,y\in A\). The commutativity of the diagram is straightforward. Note that the support map sends a proper cone to a proper ideal. We are more generally interested in characterizing the image of the support map. Note that this map is in general not surjective: for example, the prime ideal \((x^{2}+1)\subset\mathbb{R}[x]\) is not the support of a cone, since otherwise this cone would not be proper. To this aim, we recall the notion of convexity of an ideal relative to a given cone [4]. **Definition 2.3**.: Let \(P\) be a cone of \(A\). An ideal \(I\) of \(A\) is called \(P\)-convex if \[p_{1}+p_{2}\in I\text{ with }p_{1}\in P\text{ and }p_{2}\in P\Rightarrow p_{1}\in I\text{ and }p_{2}\in I.\] The support of a cone \(P\) is always convex for this cone, and it is easy to check that it is even the smallest \(P\)-convex ideal. We give an elementary property about convexity that will be useful in the sequel. **Lemma 2.4**.: _Let \(P\) and \(Q\) be cones of \(A\) with \(P\subset Q\), and \(I\subset A\) be a \(Q\)-convex ideal. Then \(I\) is \(P\)-convex._ The following result is useful to study the image of the support map. **Lemma 2.5**.: _Let \(P\) be a cone and \(I\) be an ideal of \(A\). There exists a cone \(Q\) of \(A\) such that \(P\subset Q\) and \(\operatorname{supp}(Q)=I\) if and only if \(I\) is \(P\)-convex. In this situation, \(I+P\) is the smallest cone containing \(P\) with support \(I\) and it satisfies \(I+P=P[I]\)._ Proof.: Assume that \(I\) is \(P\)-convex. The point is to prove that \(\operatorname{supp}(I+P)=I\). To prove the non-obvious inclusion, let \(q=a+b\in\operatorname{supp}(I+P)\) with \(a\in I\) and \(b\in P\). So \(q=a+b\in-(I+P)\) and thus \(b\in-(I+P)\). We have \(b=-a^{\prime}-b^{\prime}\) with \(a^{\prime}\in I\) and \(b^{\prime}\in P\) and it follows that \(b+b^{\prime}\in I\). Since \(I\) is \(P\)-convex, \(b\in I\) and thus \(q\in I\). This proves that \(I\) is the support of \(I+P\). The converse implication comes from Lemma 2.4. We now answer the question asked above concerning the image of the support map. **Theorem 2.6**.: _The image of the support map \(\operatorname{supp}:\operatorname{Cone}(A)\to\operatorname{Ideal}(A)\) is the set of \(\sum A^{2}\)-convex ideals of \(A\)._ Proof.: For \(P\) a cone of \(A\), \(\operatorname{supp}(P)\) is \(P\)-convex and thus \(\sum A^{2}\)-convex since \(\sum A^{2}\subset P\). Let \(I\) be a \(\sum A^{2}\)-convex ideal of \(A\). Then the cone \(I+\sum A^{2}\) has support \(I\) by Lemma 2.5. There exists a notion of radical ideal with respect to a cone, which amounts to convexity with respect to the cone together with classical radicality. **Definition 2.7**.: Let \(P\) be a cone of \(A\). An ideal \(I\) of \(A\) is called \(P\)-radical if \[a^{2}+p\in I\text{ with }a\in A\text{ and }p\in P\Rightarrow a\in I.\] By [4, Prop. 4.2.5], this means equivalently that the ideal is radical and \(P\)-convex. 
For instance, a real ideal, which is by definition a \(\sum A^{2}\)-radical ideal, is radical and \(\sum A^{2}\)-convex. Our interest in the notion of \(\sum A^{2}\)-convex ideals is motivated by the natural feeling that some non-real ideals seem closer to being real (like the ideal \((x^{2})\) in \(\mathbb{R}[x]\)) than others (e.g. the ideal \((x^{2}+1)\) in \(\mathbb{R}[x]\)). And indeed one may check that the ideal \((x^{2})\) is \(\sum\mathbb{R}[x]^{2}\)-convex. ### Orderings, real and Zariski spectra and the support mapping We denote by \(\operatorname{Spec}A\) (resp. \(\operatorname{R-Spec}A\)) the (resp. real) Zariski spectrum of \(A\), i.e. the set of all (resp. real) prime ideals of \(A\). The set of maximal (resp. maximal and real) ideals is denoted by \(\operatorname{Max}A\) (resp. \(\operatorname{R-Max}A\)). We endow \(\operatorname{Spec}A\) with the Zariski topology, whose closed sets are generated by the sets \(\mathcal{V}(f)=\{\mathfrak{p}\in\operatorname{Spec}A\mid f\in\mathfrak{p}\}\) for \(f\in A\). The subsets \(\operatorname{R-Spec}A\), \(\operatorname{Max}A\) and \(\operatorname{R-Max}A\) of \(\operatorname{Spec}A\) are endowed with the induced Zariski topology. We also denote \(\mathcal{V}(I)=\{\mathfrak{p}\in\operatorname{Spec}A\mid I\subset\mathfrak{p}\}\) for \(I\) an ideal of \(A\). For \(\mathfrak{p}\in\operatorname{Spec}A\), we denote by \(k(\mathfrak{p})\) the residue field at \(\mathfrak{p}\), i.e. the fraction field of \(A/\mathfrak{p}\). **Definition 2.8**.: A proper cone \(P\) is called an ordering if it satisfies \[ab\in P\Rightarrow a\in P\text{ or }-b\in P.\] The set of orderings of \(A\) is denoted by \(\operatorname{Spec}_{r}A\). We recall the principal properties of orderings. **Proposition 2.9**.: _[_4_]_ _Let \(P\) be an ordering of \(A\). We have_ 1. \(P\cup-P=A\)_._ 2. \(\operatorname{supp}(P)\) _is a real prime ideal of_ \(A\)_._ 3. \(\overline{P}=\{\overline{a}/\overline{b}\in k(\operatorname{supp}(P))\mid ab\in P\}\) _is an ordering of_ \(k(\operatorname{supp}(P))\) _such that_ \(P=\varphi^{-1}(\overline{P})\)_, where_ \(\varphi:A\to k(\operatorname{supp}(P))\) _is the canonical morphism and_ \(\overline{a}\)_,_ \(\overline{b}\) _denote the classes of_ \(a\) _and_ \(b\) _in_ \(A/\operatorname{supp}(P)\)_._ 4. _There exists a morphism_ \(\alpha:A\to R_{\alpha}\) _such that_ \(R_{\alpha}\) _is a real closed field,_ \(\ker\alpha=\operatorname{supp}(P)\) _and_ \(P=\alpha^{-1}((R_{\alpha})_{+})\)_._ _Conversely, given \(\alpha:A\to R_{\alpha}\) a morphism into a real closed field, \(P_{\alpha}=\alpha^{-1}((R_{\alpha})_{+})\) is an ordering of \(A\) with support \(\ker\alpha=\operatorname{supp}(P_{\alpha})\)._ Thus one can equivalently see an ordering of \(A\) as a morphism into a real closed field. We will use this identification throughout the paper. By [4, Thm. 4.3.7], \(A\) is formally real if and only if \(\operatorname{Spec}_{r}A\neq\emptyset\) if and only if \(\operatorname{R-Spec}A\neq\emptyset\). Let \(\alpha\in\operatorname{Spec}_{r}A\). For \(a\in A\), we set \(a(\alpha)\geq 0\) if \(a\in P_{\alpha}\), \(a(\alpha)>0\) if \(a\in P_{\alpha}\setminus\operatorname{supp}(P_{\alpha})\), and \(a(\alpha)=0\) if \(a\in\operatorname{supp}(P_{\alpha})\). A set of the form \[\mathcal{S}(f_{1},\dots,f_{k})=\{\alpha\in\operatorname{Spec}_{r}A\mid f_{1}(\alpha)>0,\dots,f_{k}(\alpha)>0\}\] for \(f_{1},\dots,f_{k}\) some elements of \(A\), is called a basic open subset of \(\operatorname{Spec}_{r}A\). 
A basic open subset of the form \(\mathcal{S}(f)\), for \(f\in A\), is called principal. The real spectrum \(\operatorname{Spec}_{r}A\) is a topological space for the topology whose open sets are generated by the basic open sets. In the sequel, if \(T\) is a subset of \(\operatorname{Spec}_{r}A\) then we will denote by \(\overline{T}\) the closure of \(T\) for the real spectrum topology. A constructible subset of \(\operatorname{Spec}_{r}A\) is a finite boolean combination of basic open sets. Given two orderings \(\alpha\) and \(\beta\), one says that \(\beta\) specializes to \(\alpha\), and writes \(\beta\to\alpha\), when \(P_{\beta}\subset P_{\alpha}\). An equivalent characterization from [4, Prop. and Defn. 7.1.18] is \((P_{\alpha}\setminus(-P_{\alpha}))\subset(P_{\beta}\setminus(-P_{\beta}))\), and another is \(\alpha\in\overline{\{\beta\}}\). As a consequence, closed subsets of the real spectrum are closed under specialization, and conversely a constructible subset which is closed under specialization is closed [4, Prop. 7.1.21]. The restriction of the support map \(\operatorname{supp}:\operatorname{Cone}(A)\to\operatorname{Ideal}(A)\) to orderings gives a map \(\operatorname{supp}:\operatorname{Spec}_{r}A\to\operatorname{Spec}A\) whose image is contained in \(\operatorname{R-Spec}A\). We complete here the study of this support map initiated in Theorem 2.6 and Proposition 2.2. **Proposition 2.10**.: _The support map \(\operatorname{supp}:\operatorname{Spec}_{r}A\to\operatorname{Spec}A\) is continuous and its image is \(\operatorname{R-Spec}A\)._ _A morphism \(\varphi:A\to B\) induces natural maps \(\operatorname{R-Spec}(B)\to\operatorname{R-Spec}(A)\) and \(\operatorname{Spec}_{r}B\to\operatorname{Spec}_{r}A\) and hence a commutative diagram:_ \[\begin{array}{ccc}\operatorname{Spec}_{r}B&\stackrel{{\operatorname{supp}}}{{\to}}&\operatorname{R-Spec}B\\ \downarrow&&\downarrow\\ \operatorname{Spec}_{r}A&\stackrel{{\operatorname{supp}}}{{\to}}&\operatorname{R-Spec}A\end{array}\] Proof.: Let \(\mathfrak{p}\) be a real prime ideal. Since \(\mathfrak{p}\) is \(\sum A^{2}\)-convex, it follows from [4, Prop. 4.3.8] that there exists an ordering with support equal to \(\mathfrak{p}\). This proves \(\operatorname{supp}(\operatorname{Spec}_{r}A)=\operatorname{R-Spec}A\). The continuity follows from [4, Prop. 7.1.8] or [12, Prop. 4.11]. For the convenience of the reader, we recall from [4, Prop. 4.4.1] the formal Positivstellensatz, a key tool that we will use several times in the paper. **Theorem 2.11**.: _Let \(A\) be a commutative ring. In \(A\) consider a subset \(H\), a monoid \(M\) generated by the \((b_{j})_{j\in L}\) and an ideal \(I\) generated by the \((c_{k})_{k\in T}\)._ _There is no \(\alpha\in\operatorname{Spec}_{r}A\) such that \(H\subset P_{\alpha}\), \(\forall j\in L\)\(b_{j}\notin\operatorname{supp}(\alpha)\), and \(\forall k\in T\)\(c_{k}\in\operatorname{supp}(\alpha)\) if and only if we have an identity_ \[p+b^{2}+c=0\] _where \(p\in\sum A^{2}[H]\), \(b\in M\), \(c\in I\)._ ### Real spectrum of geometric rings Let us recall how the real points of a variety are related to the real spectrum of its coordinate ring. Assume \(V=\operatorname{Spec}R[V]\) is an affine algebraic variety over \(R\) with coordinate ring \(R[V]\). We denote by \(V(R)\) the set of real closed points of \(V\), i.e. the subset of \(\mathfrak{p}\in V\) such that \(k(\mathfrak{p})=R\). 
We have inclusions \[V(R)\hookrightarrow\text{R-Spec}\,R[V]\hookrightarrow\text{Spec}\,R[V]\] that make \(V(R)\) a topological space for the (induced) Zariski topology. The real zero sets \(\mathcal{Z}(f)=\mathcal{V}(f)\cap V(R)\), for \(f\in R[V]\), generate the closed subsets of \(V(R)\) for the Zariski topology. If \(T\) is a subset of \(V(R)\) then we will denote by \(\overline{T}^{Z}\) the closure of \(T\) for the Zariski topology. Since \(R[V]=R[x_{1},\dots,x_{n}]/I\) for a radical ideal \(I\subset R[x_{1},\dots,x_{n}]\), we get an inclusion \[V(R)\hookrightarrow R^{n}\] that identifies \(V(R)\) with a closed subset of \(R^{n}\) for the Zariski topology and also for the Euclidean topology. Recall that the unique ordering on \(R\) gives rise to an order topology on the affine spaces \(R^{n}\) called the Euclidean topology [4], in a similar way to the Euclidean topology on \(\mathbb{R}^{n}\), even though the topological space \(R\) is not connected (except in the case \(R=\mathbb{R}\)) and the closed interval \([0,1]\) is in general not compact. If \(T\) is a subset of \(V(R)\) then we will denote by \(\overline{T}^{E}\) the closure of \(T\) for the Euclidean topology. An element of \(V(R)\) can also be seen as a morphism from \(R[V]\) to \(R\). So we get a third inclusion \[V(R)\hookrightarrow\text{Spec}_{r}\,R[V]\] that identifies \(V(R)\) with a subset of \(\text{Spec}_{r}\,R[V]\) closed under specialization. For \(x\in V(R)\), we denote by \(\alpha_{x}:R[V]\to R\), \(f\mapsto f(x)\) the associated ordering of \(R[V]\). A set of the form \[S(f_{1},\dots,f_{k})=\{x\in V(R)\mid f_{1}(x)>0,\dots,f_{k}(x)>0\}\] for \(f_{1},\dots,f_{k}\) some elements of \(R[V]\), is called a basic open subset of \(V(R)\); it is an open subset of \(V(R)\) for the Euclidean topology. A basic open subset of the form \(S(f)\), for \(f\in R[V]\), is called principal. Clearly, the principal open subsets generate the Euclidean topology. A semialgebraic subset of \(V(R)\) is a finite boolean combination of basic open sets. By [4, Prop. 7.2.2 and Thm. 7.2.3], the inclusion \(V(R)\hookrightarrow\text{Spec}_{r}\,R[V]\) induces a one-to-one map between the semialgebraic subsets of \(V(R)\) and the constructible subsets of \(\text{Spec}_{r}\,R[V]\); this map sends a semialgebraic set \(S\) to the constructible set \(\widetilde{S}\) described by the same inequalities as \(S\). For \(f_{1},\dots,f_{k}\) in \(R[V]\) we have \(\widetilde{S(f_{1},\dots,f_{k})}=\mathcal{S}(f_{1},\dots,f_{k})\). One important property of this map is its commutation with the closures for the Euclidean topology and the real spectrum topology [4, Thm. 7.2.3]: namely, for a semialgebraic subset \(S\) of \(V(R)\) we have \[\widetilde{(\overline{S}^{E})}=\overline{(\widetilde{S})}.\] ### Stability index The material of this subsection will be used in section 4, hence the reader may momentarily skip it until reaching that section. **Definition 2.12**.: The stability index of \(A\) is the infimum of the numbers \(k\in\mathbb{N}\) such that for any basic open subset \(S\) of \(\text{Spec}_{r}\,A\) there exist \(f_{1},\dots,f_{k}\) in \(A\) such that \(S=\mathcal{S}(f_{1},\dots,f_{k})\). Similarly, the stability index \(\operatorname{s}(U)\) of an open subset \(U\) of \(\text{Spec}_{r}\,A\) is the infimum of the numbers \(k\in\mathbb{N}\) such that for any basic open subset \(S\) of \(\text{Spec}_{r}\,A\) with \(S\subset U\) there exist \(f_{1},\dots,f_{k}\) in \(A\) such that \(S=\mathcal{S}(f_{1},\dots,f_{k})\). 
If \(U=\emptyset\) then we set \(\operatorname{s}(U)=0\). When \(V=\text{Spec}\,R[V]\) is an affine algebraic variety over \(R\) with coordinate ring \(R[V]\), it is clear from the properties of the map \(S\mapsto\widetilde{S}\) recalled previously that the stability index of \(R[V]\) is also the infimum of the numbers \(k\in\mathbb{N}\) such that for any basic open subset \(S\) of \(V(R)\) there exist \(f_{1},\dots,f_{k}\) in \(R[V]\) such that \(S=S(f_{1},\dots,f_{k})\). In that case, the stability index of \(R[V]\) is also called the stability index of \(V(R)\). We recall the famous theorem of Brocker [5] and Scheiderer [16]: **Theorem 2.13**.: _(Brocker-Scheiderer) Let \(V=\operatorname{Spec}R[V]\) be an affine algebraic variety over \(R\) with coordinate ring \(R[V]\). Then, the stability index of \(R[V]\) coincides with the stability index of \(V(R)\) and is equal to the dimension of \(V(R)\) (as a semialgebraic set)._ Note that in the case of finitely generated algebras over a non-real-closed field, the formula is not as simple. Concerning the stability index of abstract rings, we refer to [1]. ## 3. Central algebra From now on, the ring \(A\) is assumed to be a domain with fraction field \(\mathcal{K}(A)\). Classical real algebra is developed around the structural cone \(\sum A^{2}\). It is a fruitful tool to make a link between algebra and the geometry of the real points of a variety. In this section, we develop a notion of central algebra in order to take into account the central points of a real variety, i.e. the Euclidean closure of the nonsingular real closed points. The central locus of a real algebraic variety has been defined in [4], inspired by the work of Dubois [6]. This central algebra is built around the cone \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\) of the sums of squares of elements of the fraction field that belong to the ring \(A\). Note that the word _central_ already appeared in the literature in an algebraic context: a notion of central ideal is introduced in [15], and a definition of central ordering is given in [2]. Our goal is to show that the central algebra unifies these notions, and is a good framework to state abstract central Positivstellensatze. ### Cones and orderings with support the null ideal In this section we are interested in describing the inverse image of the null ideal under the support maps \(\operatorname{Cone}(A)\to\operatorname{Ideal}(A)\) and \(\operatorname{Spec}_{r}A\to\operatorname{Spec}A\). Remark that the cones (resp. orderings) of \(\mathcal{K}(A)\) with support the null ideal are exactly the proper cones (resp. the orderings) of \(\mathcal{K}(A)\). Now we aim to compare the proper cones of \(\mathcal{K}(A)\) with the cones of \(A\) with support the null ideal. **Proposition 3.1**.: _The map \(P\mapsto P\cap A\) sends injectively the proper cones (resp. the orderings) of \(\mathcal{K}(A)\) into the cones (resp. the orderings) of the ring \(A\) with support the null ideal._ _The map between \(\operatorname{Spec}_{r}\mathcal{K}(A)\) and the set of orderings of \(A\) with support \((0)\) given by \(P\mapsto P\cap A\) is bijective and the inverse map is given by_ \[Q\mapsto Q_{\mathcal{K}(A)}:=\{a/b\in\mathcal{K}(A)\mid ab\in Q\}.\] Proof.: The first point comes from the properties of the support map, cf. Proposition 2.2 and Proposition 2.10. The second point is a consequence of (3) of Proposition 2.9. 
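As a concrete illustration (an editorial addition) of the gap between \(\sum A^{2}\) and the cone \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\) around which this central algebra is built, one may take the Motzkin polynomial in \(\mathbb{R}[x,y]\), which is invoked again later in this section. The sympy snippet below checks a classical decomposition showing that it is a sum of squares of rational functions; the particular certificate is a standard one, recalled here as an assumption to be verified, and the classical fact that the Motzkin polynomial is not a sum of squares of polynomials is not checked by the snippet.

```python
# The Motzkin polynomial M lies in R[x,y] ∩ Σ R(x,y)^2: multiplying by
# (x^2+y^2)^2 turns it into a sum of four polynomial squares.
import sympy as sp

x, y = sp.symbols('x y')
M = x**4*y**2 + x**2*y**4 - 3*x**2*y**2 + 1       # Motzkin polynomial
s = x**2 + y**2
sos = ((x**2*y*(s - 2))**2 + (x*y**2*(s - 2))**2
       + (x*y*(s - 2))**2 + (x**2 - y**2)**2)
print(sp.expand(s**2*M - sos))                    # prints 0
```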
In the sequel, we identify the proper cones of \(\mathcal{K}(A)\) with a subset of the cones of \(A\) with support the null ideal, and \(\operatorname{Spec}_{r}\mathcal{K}(A)\) with the subset of \(\operatorname{Spec}_{r}A\) of orderings with support the null ideal. We illustrate in the following example the non-surjectivity in the case of cones. **Example 3.2**.: Let \(A=\mathbb{R}[V]\) be the coordinate ring of the cubic curve \(V\) with an isolated point, namely \(A=\mathbb{R}[x,y]/(y^{2}-x^{2}(x-1))\). It is easy to see that the cone \(\sum A^{2}\) has support the null ideal; however, it does not come from a cone of \(\mathcal{K}(A)\). Indeed, assume that \(\sum A^{2}=P\cap A\) for a cone \(P\) of \(\mathcal{K}(A)\). We have \(x-1=(y/x)^{2}\in\mathcal{K}(A)^{2}\) and thus \(x-1\in P\cap A=\sum A^{2}\). It follows that \(x-1\) must be nonnegative on \(V(\mathbb{R})\) and, evaluating at the isolated point, we get a contradiction. ### \(\mathcal{C}\)-convex and central ideals We also recall the definition of central ideal introduced in [15]. **Definition 3.3**.: Let \(I\) be an ideal of \(A\). Then \(I\) is called central if \(I\) is \(\mathcal{C}\)-radical. The central radical of \(I\) is defined as \[\sqrt[C]{I}=\{a\in A\mid\exists m\in\mathbb{N}\ \exists b\in\mathcal{C}\text{ such that }a^{2m}+b\in I\}.\] We denote by \(\operatorname{C-Spec}A\) (resp. \(\operatorname{C-Max}A\)) the subset of \(\operatorname{Spec}A\) of central prime (resp. central maximal) ideals. From [4, Prop. 4.2.5], we know that an ideal is central if and only if it is radical and \(\mathcal{C}\)-convex. A \(\mathcal{C}\)-convex (resp. central) ideal is \(\sum A^{2}\)-convex (resp. real) but the converse is not true, as illustrated by the ideal \(I=(x,y)\) in Example 3.2. Indeed, we have \(1+(y/x)^{2}=x\in I\), \(1\in\mathcal{C}\) and \((y/x)^{2}\in\mathcal{C}\) but \(1\not\in I\). So \(I\) is a real ideal which is not \(\mathcal{C}\)-convex. By [15, Prop. 3.14], \(\sqrt[C]{I}\) is the intersection of the central prime ideals containing \(I\); moreover, \(I\) is central if and only if \(I=\sqrt[C]{I}\). We give several characterizations of the existence of a central ideal in a domain. **Proposition 3.4**.: _The following properties are equivalent:_ 1. \(\mathcal{K}(A)\) _is a formally real field._ 2. \(\operatorname{C-Spec}A\neq\emptyset\)_._ 3. \(A\) _has a proper_ \(\mathcal{C}\)_-convex ideal._ 4. \(A\) _has a proper central ideal._ 5. \((0)\) _is a central ideal of_ \(A\)_._ Proof.: The equivalence between (1), (2), (4), (5) follows from [15, Prop. 3.16]. Clearly (4) implies (3). Let \(I\) be a proper \(\mathcal{C}\)-convex ideal. We claim that \(\sqrt[C]{I}\) is also proper, which will complete the proof. Assume \(1\in\sqrt[C]{I}\). Then there exists \(b\in\mathcal{C}\) such that \(1+b\in I\) and, since \(I\) is \(\mathcal{C}\)-convex, \(1\in I\), a contradiction. In the case where \(A\) is the coordinate ring of an irreducible affine algebraic variety \(V\) over \(R\), the existence of a central ideal is equivalent to the existence of a so-called central point. From [4, Defn. 7.6.3], the central locus of \(V(R)\) (or of \(V\)), denoted by \(\operatorname{Cent}V(R)\), is defined to be the closure for the Euclidean topology of the set of nonsingular real closed points, i.e. \(\operatorname{Cent}V(R)=\overline{V_{reg}(R)}^{E}\). In the sequel, we say that \(V\) is a central variety if \(\operatorname{Cent}V(R)=V(R)\). It follows from the definition that a nonsingular variety is central. 
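As a small computational complement (again an editorial addition), the identities used in Example 3.2 and in the discussion of the ideal \(I=(x,y)\) above can be checked with sympy: on the cubic \(y^{2}=x^{2}(x-1)\) one has \((y/x)^{2}=x-1\), hence \(1+(y/x)^{2}=x\), while \(x-1\) takes the value \(-1<0\) at the isolated real point \((0,0)\).

```python
# Verify the rational identities on the cubic y^2 = x^2*(x-1) of Example 3.2.
import sympy as sp

x, y = sp.symbols('x y')
relation = y**2 - x**2*(x - 1)

diff = sp.together((y/x)**2 - (x - 1))
print(sp.expand(sp.numer(diff) - relation))   # 0: (y/x)^2 = x-1 on the curve
print((x - 1).subs({x: 0, y: 0}))             # -1: negative at the isolated point
```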
On the contrary, the isolated point in the cubic exhibited in Example 3.2 is non-central. Note that \(\operatorname{Cent}V(R)\) is a closed semialgebraic set, since \(V_{reg}(R)\) is semialgebraic and the Euclidean closure of a semialgebraic set remains semialgebraic [4, Prop. 2.2.2]. The definition of central ideals gives a new formulation of the Central Nullstellensatz stated in [4, Cor. 7.6.6]. **Theorem** (Central Nullstellensatz). Let \(V\) be an irreducible affine algebraic variety over \(R\). Then: \[I\subset R[V]\text{ is a central ideal }\Leftrightarrow\ I=\mathcal{I}(\mathcal{Z}(I)\cap\operatorname{Cent}V(R))\ \Leftrightarrow\ I=\mathcal{I}(\mathcal{V}(I)\cap\operatorname{Cent}V(R))\] In particular, we have \(\operatorname{C-Max}R[V]=\operatorname{Cent}V(R)\). This furnishes a tool to decide geometrically whether an ideal is central. **Example 3.5**.: Let \(V\) be the Whitney umbrella given by the equation \(y^{2}=zx^{2}\). Then \(\mathfrak{p}=(x,y)\subset\mathbb{R}[V]\) is a central prime ideal since the stick \(\mathcal{Z}(\mathfrak{p})\) of the umbrella meets \(\operatorname{Cent}V(\mathbb{R})\) in maximal dimension. **Example 3.6**.: Let \(V\) be the Cartan umbrella given by the equation \(x^{3}=z(x^{2}+y^{2})\). Then \(\mathfrak{p}=(x,y)\subset\mathbb{R}[V]\) is a real prime ideal but not a central ideal by the Central Nullstellensatz, since the stick \(\mathcal{Z}(\mathfrak{p})\) of the umbrella meets \(\operatorname{Cent}V(\mathbb{R})\) in a single point. Alternatively, one can show algebraically that \(\mathfrak{p}\) is not central via the identity \[b=x^{2}+y^{2}-z^{2}=x^{2}+y^{2}-\frac{x^{6}}{(x^{2}+y^{2})^{2}}=\frac{3x^{4}y^{2}+3x^{2}y^{4}+y^{6}}{(x^{2}+y^{2})^{2}}\in\mathbb{R}[V]\cap\sum\mathcal{K}(V)^{2}.\] Indeed, \(z^{2}+b=x^{2}+y^{2}\in\mathfrak{p}\) but \(z\not\in\mathfrak{p}\). One goal in the paper is to generalize this Central Nullstellensatz to Central Positivstellensatze in order to get algebraic certificates of positivity on subsets of the central locus. ### Central cones and precentral orderings In this section we introduce the notion of central cones, whose supports give the \(\mathcal{C}\)-convex ideals. **Definition 3.7**.: A cone \(P\subset A\) is called central if there exists a cone \(Q\) of \(\mathcal{K}(A)\) such that \((Q\cap A)\subset P\). We denote by \(\operatorname{Cone}_{c}(A)\) the subset of all central cones in \(\operatorname{Cone}(A)\). An ordering which is a central cone is called a precentral ordering. We denote by \(\operatorname{Spec}_{pc}A\) the subset of all precentral orderings in \(\operatorname{Spec}_{r}A\). We say that \(A\) is precentral if \(\operatorname{Spec}_{pc}A=\operatorname{Spec}_{r}A\). Since any cone of \(\mathcal{K}(A)\) contains \(\sum\mathcal{K}(A)^{2}\), it follows from the definition that a cone \(P\) of \(A\) is central if and only if \(\mathcal{C}\subset P\). This allows one to write \[\operatorname{Spec}_{pc}A=\bigcap_{f\in\mathcal{C}}\{\alpha\in\operatorname{Spec}_{r}A\mid f(\alpha)\geq 0\}.\] This shows that \(\operatorname{Spec}_{pc}A\) is a closed subset of \(\operatorname{Spec}_{r}A\), being an intersection of closed subsets. Beware that it is not necessarily a constructible set, as will be pointed out in the sequel. We now study the restriction of our support map to the set of central cones and show that its image is the set of all \(\mathcal{C}\)-convex ideals. **Proposition 3.8**.: _Let \(\operatorname{supp}:\operatorname{Cone}(A)\to\operatorname{Ideal}(A)\) be the support map. We have_ 1. 
\(\operatorname{supp}(\operatorname{Cone}_{c}(A))\) _is the set of_ \(\mathcal{C}\)_-convex ideals._ 2. \(\operatorname{supp}(\operatorname{Spec}_{pc}A)=\operatorname{C-Spec}A\)_._ Proof.: Let \(P\) be a central cone. Since \(\mathcal{C}\subset P\), \(\operatorname{supp}(P)\) is \(\mathcal{C}\)-convex by Lemma 2.4. Let \(I\) be a \(\mathcal{C}\)-convex ideal. Since \(I\) is \(\mathcal{C}\)-convex, it follows from Lemma 2.5 that \(I+\mathcal{C}\) is a cone with support equal to \(I\). Since \(I+\mathcal{C}\) is clearly a central cone, this proves (1). From (1), \(\operatorname{supp}(\operatorname{Spec}_{pc}A)\subset\operatorname{C-Spec}A\). To show the converse inclusion, assume \(\mathfrak{p}\) is a central prime ideal. Then \(\mathfrak{p}\) is \(\mathcal{C}\)-convex and we conclude by using [4, Prop. 4.3.8]. Looking at Example 3.5 of the Whitney umbrella, it is easy to see that a cone, and even an ordering, with support a \(\mathcal{C}\)-convex ideal is not always central as a cone. **Proposition 3.9**.: _An ordering of \(A\) with support the null ideal is precentral, i.e. \(\operatorname{Spec}_{r}\mathcal{K}(A)\subset\operatorname{Spec}_{pc}A\)._ Proof.: Let \(P\subset A\) be an ordering such that \(\operatorname{supp}(P)=(0)\). By Proposition 3.1, there exists \(Q\in\operatorname{Spec}_{r}\mathcal{K}(A)\) such that \(P=Q\cap A\). Since \(\sum\mathcal{K}(A)^{2}\subset Q\), we get \(\mathcal{C}\subset P\). From Example 3.2 of the cubic, we know that a cone with support the null ideal is not always central, i.e. the statement of Proposition 3.9 cannot be relaxed to cones. One may give equivalent conditions for the existence of a precentral ordering: **Proposition 3.10**.: _The following properties are equivalent:_ 1. \(\mathcal{K}(A)\) _is a formally real field._ 2. _There is a proper cone in_ \(\operatorname{Cone}_{c}(A)\)_._ 3. _There exists a proper_ \(\mathcal{C}\)_-convex ideal in_ \(A\)_._ 4. \(\operatorname{Spec}_{pc}A\neq\emptyset\)_._ 5. \(\operatorname{C-Spec}A\neq\emptyset\)_._ Proof.: One may use Propositions 3.4, 3.8 and 3.9. Let us end this section by considering the ring \(A=\mathbb{R}[x,y]\) which is clearly precentral, namely \(\operatorname{Spec}_{pc}A=\operatorname{Spec}_{r}A\). However, \(\sum A^{2}\neq\mathcal{C}\) (for instance consider the Motzkin polynomial) and hence \(\operatorname{Cone}_{c}(A)\neq\operatorname{Cone}(A)\). ### Central orderings We begin by recalling the definition of central orderings from [2], a definition which has inspired Definition 3.7 of a central cone in the preceding section. **Definition 3.11**.: An ordering \(P\in\operatorname{Spec}_{r}A\) is called central if there exists an ordering \(Q\in\operatorname{Spec}_{r}\mathcal{K}(A)\) such that \((Q\cap A)\to P\). We denote by \(\operatorname{Spec}_{c}A\) the subset of \(\operatorname{Spec}_{r}A\) of central orderings. We say that \(A\) is central if \(\operatorname{Spec}_{c}A=\operatorname{Spec}_{r}A\). The notion of central ordering is a priori different from that of precentral ordering introduced in the preceding section, and studying the difference is a crucial issue in this paper. In the geometric setting, let us see that the central spectrum is compatible with the notion of central points previously recalled. Again, this comes from results in [4]: **Proposition 3.12**.: _Let \(V\) be an irreducible affine algebraic variety over \(R\). 
Then,_ \[\operatorname{Spec}_{c}R[V]=\widetilde{\operatorname{Cent}V(R)}.\] _Moreover, \(V\) is central if and only if \(R[V]\) is central._ Proof.: To show the first statement, one knows from [4, Prop. 7.6.2] that if \(x\in\operatorname{Cent}V(R)\), then \(x\) is the specialization of an ordering in \(\operatorname{Spec}_{r}\mathcal{K}(V)\); in other words, \(\widetilde{\operatorname{Cent}V(R)}\subset\overline{\operatorname{Spec}_{r}\mathcal{K}(V)}\). The converse inclusion comes from [4, Prop. 7.6.4], whose argument we recall since we will need it in an abstract setting afterwards. For dimensional reasons, \(\operatorname{Spec}_{r}\mathcal{K}(V)\subset\widetilde{V_{reg}(R)}\) and thus, using that the tilde map commutes with the closures for the Euclidean topology and the real spectrum topology, we get \[\operatorname{Spec}_{c}R[V]=\overline{\operatorname{Spec}_{r}\mathcal{K}(V)}\subset\overline{\widetilde{V_{reg}(R)}}=\widetilde{\operatorname{Cent}V(R)}.\] It follows that \(\operatorname{Spec}_{c}R[V]=\widetilde{\operatorname{Cent}V(R)}\). Now, let us deduce the second statement. Assuming that \(R[V]\) is central, then \(V(R)=\operatorname{Spec}_{r}R[V]\cap V(R)=\operatorname{Spec}_{c}R[V]\cap V(R)=\operatorname{Cent}V(R)\). Conversely, assuming that \(V\) is central, namely \(V(R)=\operatorname{Cent}V(R)\), one gets \((\operatorname{Spec}_{r}R[V]\setminus\operatorname{Spec}_{c}R[V])=(\widetilde{V(R)}\setminus\widetilde{\operatorname{Cent}V(R)})=\widetilde{(V(R)\setminus\operatorname{Cent}V(R))}=\emptyset\) and hence \(R[V]\) is a central domain. An alternative way of saying that an ordering \(P\) is central is to say that there is \(Q\in\operatorname{Spec}_{r}A\) such that \(\operatorname{supp}(Q)=(0)\) and \(Q\to P\). Of course, any central ordering is central as a cone and thus it is a precentral ordering. Moreover, \(\operatorname{Spec}_{c}A=\overline{\operatorname{Spec}_{r}\mathcal{K}(A)}\) is naturally a closed subset of \(\operatorname{Spec}_{r}A\). Let us see now how to mimic the geometric argument motivating our definition in the abstract case. As usual, if \(A\) is noetherian then set \(\operatorname{Reg}A\) to be the set of all prime ideals \(\mathfrak{p}\) in \(A\) such that \(A_{\mathfrak{p}}\) is a regular local ring. The complement \(\operatorname{Sing}A\) of \(\operatorname{Reg}A\) in \(\operatorname{Spec}A\) is a closed subset for the Zariski topology whenever the ring \(A\) satisfies the so-called property (\(J1\)) [13, §32 B]. Note that excellent rings satisfy this condition. If \(A\) is excellent then \(\operatorname{Reg}A\) and \(\operatorname{Sing}A\) are Zariski constructible subsets of \(\operatorname{Spec}A\). One may derive the associated constructible subsets \(\widetilde{\operatorname{Reg}A}\) and \(\widetilde{\operatorname{Sing}A}\) in \(\operatorname{Spec}_{r}A\). Namely \(\widetilde{\operatorname{Sing}A}=\{\alpha\in\operatorname{Spec}_{r}A\mid\operatorname{supp}(\alpha)\in\operatorname{Sing}A\}\) and \(\widetilde{\operatorname{Reg}A}=\operatorname{Spec}_{r}A\setminus\widetilde{\operatorname{Sing}A}\). We give an abstract version of [4, Prop. 7.6.4]. **Proposition 3.13**.: _Let \(A\) be an excellent domain. Considering the closure in \(\operatorname{Spec}_{r}A\), one has_ \[\overline{\operatorname{Spec}_{r}\mathcal{K}(A)}=\overline{\widetilde{\operatorname{Reg}(A)}}.\] Proof.: We start by showing that \(\operatorname{Spec}_{r}\mathcal{K}(A)\subset\widetilde{\operatorname{Reg}(A)}\). 
Let us assume that \(\alpha\in\operatorname{Spec}_{r}\mathcal{K}(A)\setminus\widetilde{\operatorname{Reg}(A)}\). Then \(\operatorname{supp}(\alpha)\in\operatorname{Sing}(A)\), i.e. \(I\subset\operatorname{supp}(\alpha)\), where \(\operatorname{Sing}A=\mathcal{V}(I)\). This is impossible since \(\operatorname{supp}(\alpha)=(0)\). It remains to show that \[\widetilde{\operatorname{Reg}(A)}\subset\overline{\operatorname{Spec}_{r}\mathcal{K}(A)}.\] For this, we use an analogue of \((i)\implies(iii)\) from [4, Prop. 7.6.2]. Let \(\alpha\) be an ordering in \(A\) whose support \(\mathfrak{p}\) is in \(\operatorname{Reg}(A)\). Since \(A\) has finite dimension and \(A_{\mathfrak{p}}\) has the same fraction field as \(A\), we deduce the existence of \(\beta\in\operatorname{Spec}_{r}\mathcal{K}(A)\) which specializes to \(\alpha\), by using [1, Lem. 3.4] which says that, given any regular local ring \(A\) of dimension \(d\), with residue field \(k\) and fraction field \(K\), any ordering on \(k\) admits \(2^{d}\) generalizations in \(\operatorname{Spec}_{r}A\) which are orderings in \(K\). With this framework in an abstract setting, we recover the usual geometric properties. The end of the section will consist in studying the restriction of the support mapping to central orderings. Let us start with the following: **Lemma 3.14**.: _Let \(\beta\in\operatorname{Spec}_{c}A\) with support the null ideal. Let \(\mathfrak{q}\) be a prime ideal of \(A\) which is \(P_{\beta}\)-convex. Then_ \[P_{\alpha}=\mathfrak{q}+P_{\beta}\] _is a central ordering of \(A\) with support \(\mathfrak{q}\) which is a specialization of \(P_{\beta}\)._ Proof.: From Lemma 2.5, \(P_{\alpha}\) is a cone with support \(\mathfrak{q}\) which is a specialization of \(P_{\beta}\). It follows that \(P_{\alpha}\) is proper. Since \(\mathfrak{q}\) is \(P_{\beta}\)-convex, one may easily show that \(P_{\alpha}=\mathfrak{q}\cup P_{\beta}\). If \(ab\in P_{\alpha}\) then it follows from the fact that \(\mathfrak{q}\) is prime and \(P_{\beta}\) is an ordering that \(a\in P_{\alpha}\) or \(-b\in P_{\alpha}\). This completes the proof. It is not possible to differentiate the supports of precentral orderings from those of central ones: **Proposition 3.15**.: _Let \(\operatorname{supp}:\operatorname{Spec}_{r}A\to\operatorname{Spec}A\) be the support map. We have_ \[\operatorname{supp}(\operatorname{Spec}_{c}A)=\operatorname{C-Spec}A.\] Proof.: From Proposition 3.8 one has \(\operatorname{supp}(\operatorname{Spec}_{c}A)\subset\operatorname{C-Spec}A\). Assume \(\mathfrak{p}\in\operatorname{C-Spec}A\). By [4, Prop. 4.2.9] there exists an ordering \(P^{\prime}\in\operatorname{Spec}_{r}\mathcal{K}(A)\) such that \(\mathfrak{p}\) is \((P^{\prime}\cap A)\)-convex. To end the proof, use Proposition 3.1 and Lemma 3.14, where \(\beta\) is the ordering associated to \((P^{\prime}\cap A)\) and \(\mathfrak{q}=\mathfrak{p}\). This allows us to say that the existence of a central ordering, or in other words the fact that \(\operatorname{Spec}_{c}A\neq\emptyset\), is equivalent to the conditions given in Proposition 3.10. ## 4. Central versus precentral This section is the heart of the paper. We aim to compare central and precentral orderings, orderings which have the same supports by Propositions 3.8 and 3.15. Since a precentral ordering of \(A\) contains a proper cone of \(\mathcal{K}(A)\) and a central ordering contains an ordering of \(\mathcal{K}(A)\), it follows that a central ordering is precentral, as previously noted. 
However, the converse implication does not hold, and the goal of this section is to study the difference. We assume in the sequel that \(\mathcal{K}(A)\) is a formally real field, since otherwise there are neither precentral nor central orderings. First note that for closed points of varieties, both notions coincide. **Proposition 4.1**.: _Let \(V\) be an irreducible affine algebraic variety over \(R\) and let \(x\in V(R)\). Then \(\alpha_{x}\) is central if and only if \(\alpha_{x}\) is precentral._ Proof.: Assume \(\alpha_{x}\) is precentral. By Proposition 3.8, \(\mathfrak{m}_{x}=\operatorname{supp}(\alpha_{x})\in\operatorname{C-Spec}R[V]\). Since \(R[V]/\mathfrak{m}_{x}=R\) is real closed, \(\alpha_{x}\) is the unique ordering of \(\operatorname{Spec}_{r}R[V]\) with support \(\mathfrak{m}_{x}\), and it then follows from Proposition 3.15 that \(\alpha_{x}\) is central. One may readily generalize to an abstract setting: **Proposition 4.2**.: _Let \(\alpha\in\operatorname{Spec}_{r}A\) be such that the residue field \(k(\operatorname{supp}(\alpha))\) admits a unique ordering. Then, \(\alpha\) is central if and only if \(\alpha\) is precentral._ Since \(\operatorname{Spec}_{c}A\subset\operatorname{Spec}_{pc}A\subset\operatorname{Spec}_{r}A\), we already know that \(A\) is precentral whenever \(A\) is central; the latter condition is satisfied for instance when \(A\) is the coordinate ring of an irreducible affine algebraic variety over \(R\) which is central (see Proposition 3.12). Our aim is to give characterizations of central and precentral orderings. To start, let us note that, since \(\operatorname{Spec}_{c}A\) and \(\operatorname{Spec}_{pc}A\) are closed subsets, a point \(\alpha\) belongs to \(\operatorname{Spec}_{c}A\) (resp. \(\operatorname{Spec}_{pc}A\)) if and only if \(U\cap\operatorname{Spec}_{c}A\neq\emptyset\) (resp. \(U\cap\operatorname{Spec}_{pc}A\neq\emptyset\)) for any open subset \(U\) containing \(\alpha\). One may replace \(U\) in that statement by basic open subsets \(\mathcal{S}(f_{1},\dots,f_{k})\), which form a basis of neighbourhoods. **Lemma 4.3**.: _Let \(\alpha\in\operatorname{Spec}_{r}A\) and \(f_{1},\dots,f_{k}\) in \(A\setminus\{0\}\) such that \(\alpha\in\mathcal{S}(f_{1},\dots,f_{k})\). Let us consider the following properties:_ 1. \(A\cap\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\subset P_{\alpha}\)_._ 2. \(\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\) _is proper in_ \(\mathcal{K}(A)\)_._ 3. \(A\cap\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\) _is proper in_ \(A\)_._ 4. \(\mathcal{S}(f_{1},\dots,f_{k})\cap\operatorname{Spec}_{c}A\neq\emptyset\)_._ 5. \(\mathcal{S}(f_{1},\dots,f_{k})\cap\operatorname{Spec}_{pc}A\neq\emptyset\)_._ _One has \((1)\implies(2)\iff(3)\iff(4)\implies(5)\)._ Proof.: The equivalence between (2) and (3) is clear. We prove (2) implies (4). Assume that \(\mathcal{S}(f_{1},\dots,f_{k})\subset(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)\). It follows that \(\{\beta\in\operatorname{Spec}_{r}\mathcal{K}(A)\mid f_{1}(\beta)>0,\dots,f_{k}(\beta)>0\}=\emptyset\). Since the \(f_{i}\) are non-zero, one may equivalently say that \(\{\beta\in\operatorname{Spec}_{r}\mathcal{K}(A)\mid f_{1}(\beta)\geq 0,\dots,f_{k}(\beta)\geq 0\}=\emptyset\). By the Positivstellensatz recalled in Theorem 2.11, one gets an identity \(1+p=0\) in \(\mathcal{K}(A)\) with \(p\in\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\). 
It follows that \(-1\in A\cap\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\), which proves that (2) implies (4) by contraposition. We prove (4) implies (2). Let \(\alpha\in\mathcal{S}(f_{1},\dots,f_{k})\cap\operatorname{Spec}_{c}A\). There exists \(\beta\in\operatorname{Spec}_{r}A\) such that \(\operatorname{supp}(\beta)=(0)\) and \(\beta\to\alpha\). We have \(\beta\in\operatorname{Spec}_{r}\mathcal{K}(A)\) by Proposition 3.1. For \(i=1,\dots,k\), we have \(f_{i}\in P_{\alpha}\setminus\operatorname{supp}(\alpha)\) and thus \(f_{i}\in P_{\beta}\setminus\{0\}\). It follows that \(\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\subset P_{\beta}\) (viewed in \(\mathcal{K}(A)\)) and thus \(\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\) is proper. We prove (1) implies (2). Assume that \(\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\) is not proper. It follows that \(-1\in A\cap\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\) and since \(-1\not\in P_{\alpha}\) we get that \(A\cap\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\not\subset P_{\alpha}\). Since \(\operatorname{Spec}_{c}A\subset\operatorname{Spec}_{pc}A\), (4) implies (5). Point (2) does not necessarily imply (1), as one can see by looking for example at a point distinct from the origin on the stick of the Cartan umbrella (Example 3.6), with \(k=0\). Likewise, (3) implies (1) does not hold in general; nevertheless, it becomes true after quantification over the family \(f_{1},\dots,f_{k}\), and we derive the following characterizations of central orderings: **Proposition 4.4**.: _Let \(\alpha\in\operatorname{Spec}_{r}A\). The following properties are equivalent:_ 1. \(\alpha\in\operatorname{Spec}_{c}A\)_._ 2. _For any_ \(f_{1},\dots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\dots,f_{k})\)_, the cone_ \(\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\) _is proper in_ \(\mathcal{K}(A)\)_._ 3. _For any_ \(f_{1},\dots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\dots,f_{k})\)_, the cone_ \(A\cap\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\) _is proper in_ \(A\)_._ 4. _For any_ \(f_{1},\dots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\dots,f_{k})\)_, the intersection_ \(\mathcal{S}(f_{1},\dots,f_{k})\cap\operatorname{Spec}_{c}A\) _is non-empty._ 5. _For any_ \(f_{1},\dots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\dots,f_{k})\)_, the cone_ \(A\cap\sum\mathcal{K}(A)^{2}[f_{1},\dots,f_{k}]\subset P_{\alpha}\)_._ Proof.: The equivalence between (2), (3) and (4) is given by Lemma 4.3. Let us prove that (1) implies (5). Let \(\alpha\in\operatorname{Spec}_{c}A\). There exists \(\beta\in\operatorname{Spec}_{r}A\) such that \(\operatorname{supp}(\beta)=(0)\) and \(\beta\to\alpha\). We have \(\beta\in\operatorname{Spec}_{r}\mathcal{K}(A)\) by Proposition 3.1. Let \(f_{1},\dots,f_{k}\in P_{\alpha}\setminus\operatorname{supp}(\alpha)\); then, for all \(i\), \(f_{i}\in P_{\beta}\setminus\{0\}\)
Namely, the stability index \(\operatorname{s}(U)\) of an open subset \(U\) of \(\operatorname{Spec}_{r}A\) is the infimum of the numbers \(k\in\mathbb{N}\) such that for any basic open subset \(S\) of \(\operatorname{Spec}_{r}A\) satisfying \(S\subset U\) there exist \(f_{1},\ldots,f_{k}\) in \(A\) with \(S=\mathcal{S}(f_{1},\ldots,f_{k})\). It leads to new characterizations for central orderings: **Theorem 4.5**.: _Let \(\alpha\in\operatorname{Spec}_{r}A\). The following properties are equivalent:_ 1. \(\alpha\in\operatorname{Spec}_{c}A\)_._ 2. _For any_ \(f_{1},\ldots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\ldots,f_{k})\) _and_ \(k\leq\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c }A)\)_, the cone_ \(\sum\mathcal{K}(A)^{2}[f_{1},\ldots,f_{k}]\) _is proper in_ \(\mathcal{K}(A)\)_._ 3. _For any_ \(f_{1},\ldots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\ldots,f_{k})\) _and_ \(k\leq\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c }A)\)_, the cone_ \(A\cap\sum\mathcal{K}(A)^{2}[f_{1},\ldots,f_{k}]\) _is proper in_ \(A\)_._ 4. _For any_ \(f_{1},\ldots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\ldots,f_{k})\) _and_ \(k\leq\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c }A)\)_, the intersection_ \(\mathcal{S}(f_{1},\ldots,f_{k})\cap\operatorname{Spec}_{c}A\) _is non-empty._ 5. _For any_ \(f_{1},\ldots,f_{k}\in A\) _such that_ \(\alpha\in\mathcal{S}(f_{1},\ldots,f_{k})\) _and_ \(k\leq\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c }A)\)_, the cone_ \(A\cap\sum\mathcal{K}(A)^{2}[f_{1},\ldots,f_{k}]\subset P_{\alpha}\)_._ Proof.: We know that (1) is equivalent to (4) of Proposition 4.4. By definition of the stability index, (4) of Proposition 4.4 is equivalent to (4): indeed, if \(\alpha\in S\) with \(S\) a basic open subset of \(\operatorname{Spec}_{r}A\) which cannot be described by less than \(\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)+1\) inequalities, then we must have \(S\cap\operatorname{Spec}_{c}A\neq\emptyset\). By Proposition 4.4, (1) implies (5). By Lemma 4.3, (5) implies (2) and (2), (3), (4) are equivalent. Note that if the stability index \(\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)\) happens to be zero and more generally if \(k=0\) in Theorem 4.5, then \(\sum\mathcal{K}(A)^{2}[f_{1},\ldots,f_{k}]\) reduces to \(\sum\mathcal{K}(A)^{2}\) as the cone in \(\mathcal{K}(A)\) generated by the empty family. In this case, assertion (5) of Theorem 4.5 is equivalent to say that \(P_{\alpha}\) is precentral. We also notice that being a precentral ordering is equivalent to satisfy condition (5) of Theorem 4.5 only for \(k=0\). Then, we give similar characterizations for precentral orderings, namely, one recover conditions (2), (3) and (4) of Theorem 4.5 for \(k\leq 1\). **Theorem 4.6**.: _Let \(\alpha\in\operatorname{Spec}_{r}A\). The following properties are equivalent:_ 1. \(\alpha\in\operatorname{Spec}_{pc}A\)_._ 2. _For any_ \(f\in A\) _such that_ \(\alpha\in\mathcal{S}(f)\)_, the cone_ \(\sum\mathcal{K}(A)^{2}[f]\) _is proper._ 3. _For any_ \(f\in A\) _such that_ \(\alpha\in\mathcal{S}(f)\)_, the cone_ \(A\cap\sum\mathcal{K}(A)^{2}[f]\) _is proper and precentral._ 4. _For any_ \(f\in A\) _such that_ \(\alpha\in\mathcal{S}(f)\)_, the intersection_ \(\mathcal{S}(f)\cap\operatorname{Spec}_{c}A\) _is non-empty._ Proof.: The equivalence between (2), (3) and (4) are clear from Lemma 4.3. 
Assume \(\alpha\not\in\operatorname{Spec}_{pc}A\). There exists \(g\in A\cap\sum\mathcal{K}(A)^{2}\) such that \(g\not\in P_{\alpha}\). Remark that \(g\neq 0\). We have \(-g\in(P_{\alpha}\setminus\operatorname{supp}(\alpha))\) and thus \(\alpha\in\mathcal{S}(-g)\). Suppose there exists \(\beta\in(\operatorname{Spec}_{c}A\cap\mathcal{S}(-g))\). By definition of a central ordering and by Proposition 3.1, there is \(\gamma\in\operatorname{Spec}_{r}A\) such that \(\operatorname{supp}(\gamma)=(0)\) and \(\gamma\to\beta\). Since \((P_{\beta}\setminus\operatorname{supp}(\beta))\subset(P_{\gamma}\setminus\{0\})\), we get \(-g\in(P_{\gamma}\setminus\{0\})\), which is impossible because \(g\in A\cap\sum\mathcal{K}(A)^{2}\subset P_{\gamma}\). We get \(\mathcal{S}(-g)\subset(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)\), which proves that (4) implies (1). Assume there exists \(g\in A\) such that \(\alpha\in\mathcal{S}(g)\) and \(\mathcal{S}(g)\subset(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)\). We have \(-g\not\in P_{\alpha}\). Moreover \(\forall\beta\in\operatorname{Spec}_{c}A\), \(-g\in P_{\beta}\) and thus \(\forall\beta\in\operatorname{Spec}_{c}A\) such that \(\operatorname{supp}(\beta)=(0)\), \(-g\in P_{\beta}\). From Proposition 3.1 and the Positivstellensatz, cf. Theorem 2.11, we get \(-g\in\sum\mathcal{K}(A)^{2}\). This shows that \(\alpha\not\in\operatorname{Spec}_{pc}A\), which proves that (1) implies (4) by contraposition. With this characterization and the one in Theorem 4.5, one may view precentral orderings as central orderings of "level \(1\)", and going further in that direction would lead to the consideration of central orderings of "level \(k\)". We decide not to develop such a formalism until we find some relevant applications. The value of the stability index of the non-central locus appears to be related to the existence of a precentral ordering that is not central. Namely, one has \(\operatorname{Spec}_{pc}A=\operatorname{Spec}_{c}A\) whenever \(\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)\leq 1\). In the geometric case, using the Brocker-Scheiderer Theorem and Theorem 4.5, one gets a family of geometric rings for which any precentral ordering is central: **Corollary 4.7**.: _Let \(V\) be an irreducible affine algebraic variety over \(R\) such that \(\dim V\leq 2\). Then, \(\operatorname{Spec}_{pc}R[V]=\operatorname{Spec}_{c}R[V]\)._ Proof.: One has \(\operatorname{s}(\operatorname{Spec}_{r}R[V]\setminus\operatorname{Spec}_{c}R[V])\leq\operatorname{s}(V(R)\setminus V_{reg}(R))\). And from the Brocker-Scheiderer Theorem one gets that the stability index of \(V(R)\setminus V_{reg}(R)\) is at most \(1\). Let us now give an example of a precentral ordering which is not central. **Example 4.8**.: Let \(V\) be the irreducible affine algebraic variety over \(\mathbb{R}\) with coordinate ring \(A=\mathbb{R}[V]=\mathbb{R}[x,y,z,t_{1},t_{2}]/(z^{2}+t_{1}x^{2}+t_{2}y^{2})\). The real part of the singular locus of \(V\) is contained in the real plane of coordinates \(t_{1}\) and \(t_{2}\), and we are going to describe \(S=V(\mathbb{R})\setminus\operatorname{Cent}V(\mathbb{R})\) in this plane. By [4, Prop. 7.6.2], \(S\) is the locus of points of \(V(\mathbb{R})\) where the local semi-algebraic dimension is \(<4\). Seeing \(V(\mathbb{R})\) as a variety with parameters \(t_{1}\) and \(t_{2}\), we can show that \(S=S(t_{1},t_{2})\), the open upper right quadrant of the plane, and that the local dimension at points of \(S\) in \(V(\mathbb{R})\) is equal to two. 
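As a quick sympy check (an editorial addition), the identity \(-1=t_{1}(x/z)^{2}+t_{2}(y/z)^{2}\) in \(\mathcal{K}(V)\), used in the next paragraph, follows from the defining relation \(z^{2}+t_{1}x^{2}+t_{2}y^{2}=0\):

```python
# The numerator of t1*(x/z)^2 + t2*(y/z)^2 + 1 is the defining relation of V,
# so the expression vanishes in K(V).
import sympy as sp

x, y, z, t1, t2 = sp.symbols('x y z t1 t2')
relation = z**2 + t1*x**2 + t2*y**2

expr = sp.together(t1*(x/z)**2 + t2*(y/z)**2 + 1)
print(sp.expand(sp.numer(expr) - relation))   # prints 0
```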
Let us consider the four elements of the real spectrum \(-1_{+\uparrow},-1_{+\downarrow},1_{+\uparrow},1_{+\downarrow}\) which, as described in [4, Ex. 10.4.3], have support the ideal \((x,y,z)\) (which defines the plane in \(t_{1},t_{2}\)), the first two specializing to the point \((-1,0)\) and the second two specializing to the point \((1,0)\). These four orderings define a fan \(F\) (see [4, Defn. 10.4.2]). It is easy to check that \(F\cap\widetilde{S}=\{1_{+\uparrow}\}\). Since \(1_{+\uparrow}\in\widetilde{S}\) then it cannot be central. This can also be seen algebraically by considering the property (5) of Theorem 4.5 since \(t_{i}>0\) on \(1_{+\uparrow}\) for \(i=1,2\) and since, dividing the defining relation \(z^{2}+t_{1}x^{2}+t_{2}y^{2}=0\) by \(z^{2}\), one gets \(-1=t_{1}(x/z)^{2}+t_{2}(y/z)^{2}\in\sum\mathcal{K}(V)^{2}[t_{1},t_{2}]\). Assume now that there exists \(f\in\mathbb{R}[V]\) such that \(1_{+\uparrow}\in\mathcal{S}(f)\). If we assume \(\mathcal{S}(f)\subset\widetilde{S}\), then \(\#(F\cap\mathcal{S}(f))=1\) and by [1, V Cor. 1.9] we get a contradiction. It follows that \(\#(F\cap\mathcal{S}(f))>1\). Since \(F\setminus\{1_{+\uparrow}\}\subset\operatorname{Spec}_{c}A\) then using (4) of Theorem 4.6 we get that \(1_{+\uparrow}\in\operatorname{Spec}_{pc}A\). To end, let us note that it is possible to give a similar example in dimension \(3\) by intersecting our variety with the hypersurface with equation \(z-xy=0\). This example shows also that the precentral spectrum is not necessarily constructible. Indeed, if \(\operatorname{Spec}_{pc}\mathbb{R}[V]\) were a constructible subset, then, by the correspondence between semialgebraic subsets \(S\) and constructible subsets \(\widetilde{S}\) and using Proposition 4.1, one would get \(\operatorname{Spec}_{pc}\mathbb{R}[V]=\widetilde{\operatorname{Spec}_{pc}\mathbb{R}[V]\cap V(\mathbb{R})}=\widetilde{\operatorname{Spec}_{c}\mathbb{R}[V]\cap V(\mathbb{R})}=\operatorname{Spec}_{c}\mathbb{R}[V]\), a contradiction. Although the central spectrum and the precentral spectrum seem to be close, it is not true that for any \(\alpha\in\operatorname{Spec}_{pc}A\), there exists \(\gamma\in\operatorname{Spec}_{c}A\) such that \(\alpha\to\gamma\), as one can see using again the previous Example 4.8. Indeed, take \(\alpha\in\operatorname{Spec}_{pc}A\) to be the ordering of all polynomial functions which are nonnegative on a \(+\infty\) neighbourhood of the transcendental curve of equation \(t_{2}=e^{-t_{1}}\) in the plane \((t_{1},t_{2})\). This ordering does not admit any strict specialization and it is not central since the non-central locus is \(\widetilde{S}(t_{1},t_{2})\). Moreover, arguing again with a \(4\)-element fan (which cannot intersect any given principal open subset at a single element), one shows that \(\alpha\) is precentral. From (4) of Theorem 4.6, a precentral ordering of \(\operatorname{Spec}_{r}A\) is an ordering that cannot be separated from the central locus \(\operatorname{Spec}_{c}A\) by principal open subsets. In the spirit of Example 4.8, it is possible to create precentral but non-central orderings with a higher level of non-separation from the central locus. Namely, let \(k\) be an integer \(\geq 2\) and let \(V\) be the irreducible affine algebraic variety over \(\mathbb{R}\) with coordinate ring \(\mathbb{R}[V]=\mathbb{R}[x_{1},\ldots,x_{k},z,t_{1},\ldots,t_{k}]/(z^{2}+t_{1}x_{1}^{2}+\ldots+t_{k}x_{k}^{2})\).
There exists \(\alpha\in(\operatorname{Spec}_{pc}A\setminus\operatorname{Spec}_{c}A)\) such that for any basic open subset \(S\) of \(\operatorname{Spec}_{r}A\) containing \(\alpha\) and given by \(t\leq k-1\) strict inequalities, one has \(S\cap\operatorname{Spec}_{c}A\neq\emptyset\). Moreover we have \(\alpha\in\mathcal{S}(t_{1},\ldots,t_{k})\subset(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)\). In the equivalent properties of Theorem 4.6, recall that we got rid of condition (5) of Theorem 4.5, which here reads: (5) \(\forall f\in A\) such that \(\alpha\in\mathcal{S}(f)\), \(A\cap\sum\mathcal{K}(A)^{2}[f]\subset P_{\alpha}\). Moreover, there is another condition which arises naturally as the following one: (6) \(\forall f\in A\) such that \(\alpha\in\mathcal{S}(f)\), \(\exists\beta\in\operatorname{Spec}_{c}A\) with \(\operatorname{supp}(\alpha)=\operatorname{supp}(\beta)\) such that \(\beta\in\mathcal{S}(f)\). Indeed, by Theorem 4.6 we know that \[\alpha\in\operatorname{Spec}_{pc}A\iff\forall f\in A\text{ such that }\alpha\in\mathcal{S}(f),\exists\beta\in\operatorname{Spec}_{c}A\text{ such that }\beta\in\mathcal{S}(f),\] and by Propositions 3.8 and 3.15 we also know that \[\alpha\in\operatorname{Spec}_{pc}A\implies\exists\beta\in\operatorname{Spec}_{c}A\text{ such that }\operatorname{supp}(\alpha)=\operatorname{supp}(\beta).\] So, a natural question is to study how \(\alpha\in\operatorname{Spec}_{pc}A\) is related to (6). **Proposition 4.9**.: _Let \(\alpha\in\operatorname{Spec}_{r}A\). Then, condition (6) is equivalent to the following_ \[\forall f\in A\text{ such that }\alpha\in\mathcal{S}(f)\text{, }\operatorname{supp}(\alpha)\text{ is }A\cap\sum\mathcal{K}(A)^{2}[f]\text{-convex}.\] _Moreover, one has the following implications_ \[\alpha\in\operatorname{Spec}_{c}A\implies(5)\implies(6)\implies\alpha\in\operatorname{Spec}_{pc}A.\] Proof.: Let us show that condition (6) implies the one of the proposition. Let \(f\in A\) and \(\mathfrak{p}\in\operatorname{Spec}A\). Assume there exists \(\beta\in\operatorname{Spec}_{c}A\) with \(\mathfrak{p}=\operatorname{supp}(\beta)\) such that \(\beta\in\mathcal{S}(f)\). By Proposition 3.15 then \(\mathfrak{p}\in\operatorname{C-Spec}A\). There exists \(\gamma\) of support \((0)\) such that \(\gamma\to\beta\). We have \(f(\gamma)>0\) and thus \(\sum\mathcal{K}(A)^{2}[f]\subset P_{\gamma}\) in \(\mathcal{K}(A)\). Since \(\mathfrak{p}\) is \(P_{\beta}\)-convex then \(\mathfrak{p}\) is convex for \(P_{\gamma}\) and hence also for \(A\cap\sum\mathcal{K}(A)^{2}[f]\). Conversely, take \(\alpha\in\operatorname{Spec}_{r}A\) such that \(\operatorname{supp}(\alpha)=\mathfrak{p}\), \(\alpha\in\mathcal{S}(f)\) and \(\mathfrak{p}\) is \(A\cap\sum\mathcal{K}(A)^{2}[f]\)-convex. By [4, Prop. 4.2.9] there exists an ordering \(\gamma\in\operatorname{Spec}_{r}\mathcal{K}(A)\) such that \(\mathfrak{p}\) is \(P_{\gamma}\cap A\)-convex. By Propositions 3.1 and 3.14 there exists \(\beta\in\operatorname{Spec}_{r}A\) such that \(P_{\gamma}\cap A\to P_{\beta}\) and \(\operatorname{supp}(\beta)=\mathfrak{p}\). Clearly \(\beta\in\operatorname{Spec}_{c}A\) and since \(A\cap\sum\mathcal{K}(A)^{2}[f]\subset P_{\beta}\) then \(f(\beta)\geq 0\). Since \(\alpha\in\mathcal{S}(f)\) and \(\operatorname{supp}(\beta)=\operatorname{supp}(\alpha)=\mathfrak{p}\) then \(\beta\in\mathcal{S}(f)\). We have shown the first assertion. Let us now prove the implications. The first one comes directly from the characterizations of Proposition 4.4.
The second one relies on the fact that \(\operatorname{supp}(\alpha)\) is \(P_{\alpha}\)-convex: if \(A\cap\sum\mathcal{K}(A)^{2}[f]\subset P_{\alpha}\), then \(\operatorname{supp}(\alpha)\) is \(A\cap\sum\mathcal{K}(A)^{2}[f]\)-convex. And the last implication is also immediate using Theorem 4.6. Example 4.8 allows us to study the converse implications. Namely, the precentral ordering \(\alpha=1_{+\uparrow}\) considered there satisfies condition (6) (since \(1_{+\downarrow}\in\operatorname{Spec}_{c}\mathbb{R}[V]\)) but not condition (5). Indeed, one has the identity \(-1=t_{1}(x/z)^{2}+t_{2}(y/z)^{2}\in\sum\mathcal{K}(V)^{2}[t_{1},t_{2}]\). Multiplying by \(t_{1}\), one gets \(-t_{1}\in\mathbb{R}[V]\cap\sum\mathcal{K}(V)^{2}[t_{1}t_{2}]\) but \(\alpha\in\mathcal{S}(t_{1}t_{2})\) and \(-t_{1}\not\in P_{\alpha}\). It happens also that the converse of the first implication does not hold. Indeed, consider the slightly modified example: \(A=\mathbb{R}[V]=\mathbb{R}[x_{1},x_{2},x_{3},z,t_{1},t_{2},t_{3}]/(z^{2}+t_{1}x_{1}^{2}+t_{2}x_{2}^{2}+t_{3}x_{3}^{2})\). Let us take \(\beta\in\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A=\mathcal{S}(t_{1},t_{2},t_{3})\) such that \(\beta\in\operatorname{Spec}_{pc}A\). Then, any basic open subset defined by two or fewer inequalities which contains \(\beta\) has to intersect \(\operatorname{Spec}_{c}A\). Let us argue by contradiction and assume that \(\beta\) does not satisfy property (5). Then, there is \(f\in A\) such that \(\beta\in\mathcal{S}(f)\) and \(A\cap\sum\mathcal{K}(A)^{2}[f]\not\subset P_{\beta}\). One has an identity \(a=s+ft\notin P_{\beta}\) with \(s,t\in\sum\mathcal{K}(A)^{2}\), namely \(\beta\in\mathcal{S}(f,-a)\) and hence \(\mathcal{S}(f,-a)\) intersects \(\operatorname{Spec}_{c}A\). Let \(\beta^{\prime}\in\mathcal{S}(f,-a)\cap\operatorname{Spec}_{c}A\); there exists \(\gamma\) of support \((0)\) such that \(\gamma\to\beta^{\prime}\). Relative to the ordering \(\gamma\), one gets \(s+ft>0\) whereas \(a<0\), a contradiction. Hence \(\beta\) satisfies (5) and gives a counterexample to the first implication. And concerning the converse of the last implication, we did not succeed in proving or disproving it. To end this section whose aim was to compare central and precentral orderings, we recall that a central domain is obviously precentral and we prove that the converse is also true in the geometric case. **Proposition 4.10**.: _Let \(V\) be an irreducible affine algebraic variety over \(R\). The following properties are equivalent:_ 1. \(V\) _is central._ 2. \(R[V]\) _is central._ 3. \(R[V]\) _is precentral._ Proof.: By Proposition 3.12 we are left to prove (3) implies (1). Assume \(R[V]\) is precentral. We have \(\operatorname{Spec}_{r}R[V]=\operatorname{Spec}_{pc}R[V]=\widetilde{V(R)}\) and thus \(\operatorname{Spec}_{pc}R[V]\cap V(R)=V(R)\). Using Proposition 4.1 then \(\operatorname{Spec}_{pc}R[V]\cap V(R)=\operatorname{Spec}_{c}R[V]\cap V(R)\) and the proof is done. ## 5. Applications: Central and precentral Positivstellensatze Unless otherwise stated, \(A\) is a domain with fraction field \(\mathcal{K}(A)\) which is formally real. ### Around the Hilbert 17th problem Real geometers have always tried to give certificates of positivity for different types of functions. The most famous of these certificates is undoubtedly that of the 17th problem. Hilbert thus wanted to characterize the polynomials of \(A=R[x_{1},\dots,x_{n}]\) which are nonnegative on \(R^{n}\).
Such a polynomial is not necessarily a sum of squares in \(R[x_{1},\dots,x_{n}]\), which one may reformulate by saying that, if \(n\geq 2\), the intersection of all cones in \(A\) is strictly contained in the intersection of all orderings in \(A\), namely \[\bigcap_{P\in\operatorname{Cone}(A)}P\subsetneq\bigcap_{P\in\operatorname{Spec}_{r}A}P.\] Artin's answer to the Hilbert 17th problem says that a nonnegative polynomial is a sum of squares in \(\mathcal{K}(A)=R(x_{1},\dots,x_{n})\): the trace on \(A\) of the intersection of all cones of \(\mathcal{K}(A)\) coincides with the intersection of all orderings in \(A\), namely: \[A\cap\bigcap_{P\in\operatorname{Cone}(\mathcal{K}(A))}P=A\cap\bigcap_{P\in\operatorname{Spec}_{r}\mathcal{K}(A)}P=\bigcap_{P\in\operatorname{Spec}_{r}A}P.\] We are interested in getting central Hilbert 17th properties for a general domain \(A\), namely finding what kind of positivity is given by the algebraic certificate of an \(f\) which belongs to the cone \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\). In this direction, the classical Hilbert 17th property can be reformulated with the language of central cones and orderings of \(A\). Since \(A=R[x_{1},\dots,x_{n}]\) is a central and precentral ring, the following sequence of inclusions \[\mathcal{C}=\bigcap_{P\in\operatorname{Cone}_{c}(A)}P\subset\bigcap_{P\in\operatorname{Spec}_{pc}A}P\subset\bigcap_{P\in\operatorname{Spec}_{c}A}P,\] is in fact a sequence of equalities. In the sequel we give some central or precentral certificates of vanishing (Nullstellensatze) and of positivity (Positivstellensatze) on several subsets of \(\operatorname{Spec}_{r}A\). ### Central and precentral Nullstellensatze We start by studying certificates of vanishing. Let \(I\) be an ideal of \(A\). Denote by \(\mathcal{Z}^{r}(I)\) the set of all \(\alpha\in\operatorname{Spec}_{r}A\) such that \(I\subset\operatorname{supp}(\alpha)\). Then we write \(\mathcal{Z}^{c}(I)=\mathcal{Z}^{r}(I)\cap\operatorname{Spec}_{c}A\) and \(\mathcal{Z}^{pc}(I)=\mathcal{Z}^{r}(I)\cap\operatorname{Spec}_{pc}A\). For \(W\subset\operatorname{Spec}_{r}A\), we denote by \(\mathcal{I}(W)\) the set of \(f\in A\) such that \(W\subset\mathcal{Z}^{r}(f)\); it is clearly an ideal of \(A\). Note that the inclusion \(\mathcal{Z}^{c}(I)\subset\mathcal{Z}^{pc}(I)\) can be strict: take \(I=(0)\) in Example 4.8. One may nevertheless go further to get abstract central and precentral Nullstellensatze: **Proposition 5.1**.: _(Central and precentral Nullstellensatze): Let \(I\) be an ideal of \(A\). One has_ \[\mathcal{I}(\mathcal{Z}^{c}(I))=\mathcal{I}(\mathcal{Z}^{pc}(I))=\sqrt[C]{I}.\] Proof.: Since \(\operatorname{Spec}_{c}A\subset\operatorname{Spec}_{pc}A\) then \(\mathcal{Z}^{c}(I)\subset\mathcal{Z}^{pc}(I)\) and thus \(\mathcal{I}(\mathcal{Z}^{c}(I))\supset\mathcal{I}(\mathcal{Z}^{pc}(I))\). Clearly \(\sqrt[C]{I}\subset\mathcal{I}(\mathcal{Z}^{pc}(I))\). Indeed, let \(a\in\sqrt[C]{I}\). We have \(a^{2m}+b\in I\) with \(b\in\mathcal{C}\). Let \(\alpha\in\mathcal{Z}^{pc}(I)\): one has \(\operatorname{supp}(\alpha)\supset I\), and hence \(a^{2m}+b\in\operatorname{supp}(\alpha)\). By Proposition 3.8 we have \(\operatorname{supp}(\alpha)\in\operatorname{C-Spec}A\). A central ideal is \(\mathcal{C}\)-convex and we get \(a^{2m}\in\operatorname{supp}(\alpha)\). By radicality of \(\operatorname{supp}(\alpha)\) then \(a\in\operatorname{supp}(\alpha)\). Let \(f\in\mathcal{I}(\mathcal{Z}^{c}(I))\).
Let us show that, if \(\mathfrak{p}\) is a central prime ideal containing \(I\), then \(f\in\mathfrak{p}\). By [15, Prop. 3.14] we will then get \(f\in\sqrt[C]{I}\). Using Proposition 3.15 there exists \(\alpha\in\operatorname{Spec}_{c}A\) such that \(\operatorname{supp}(\alpha)=\mathfrak{p}\). Clearly \(\alpha\in\mathcal{Z}^{c}(I)\) and thus \(f(\alpha)=0\), i.e. \(f\in\operatorname{supp}(\alpha)=\mathfrak{p}\). We get \(\mathcal{I}(\mathcal{Z}^{c}(I))\subset\sqrt[C]{I}\) and it ends the proof. ### Precentral Positivstellensatze One may carry on further to get central and precentral Positivstellensatze, having in mind that the algebraic nature of precentrality seems much more convenient than the geometric nature of centrality. To get geometric central Positivstellensatze, our strategy is to establish first abstract precentral Positivstellensatze, then derive some abstract central ones and finally get geometric central ones. A set of the form \(\overline{\mathcal{S}}(f_{1},\ldots,f_{k})=\{\alpha\in\operatorname{Spec}_{r}A\mid f_{1}(\alpha)\geq 0,\ldots,f_{k}(\alpha)\geq 0\}\), for \(f_{1},\ldots,f_{k}\) some elements of \(A\), is called a basic closed subset of \(\operatorname{Spec}_{r}A\). We denote by \(\mathcal{S}^{c}(f_{1},\ldots,f_{k})\), \(\overline{\mathcal{S}}^{c}(f_{1},\ldots,f_{k})\), \(\mathcal{S}^{pc}(f_{1},\ldots,f_{k})\) and \(\overline{\mathcal{S}}^{pc}(f_{1},\ldots,f_{k})\) the sets \(\mathcal{S}(f_{1},\ldots,f_{k})\), \(\overline{\mathcal{S}}(f_{1},\ldots,f_{k})\) intersected respectively with \(\operatorname{Spec}_{c}A\) and \(\operatorname{Spec}_{pc}A\). With these notations, one gets: **Theorem 5.2**.: _(Precentral Positivstellensatze): Let \(f_{1},\ldots,f_{r}\) in \(A\) and \(f\in A\). One has:_ 1. \(f\geq 0\) _on_ \(\overline{\mathcal{S}}^{pc}(f_{1},\ldots,f_{r})\) _if and only if_ \[fq=p+f^{2m}\] _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 2. \(f>0\) _on_ \(\overline{\mathcal{S}}^{pc}(f_{1},\ldots,f_{r})\) _if and only if_ \[fq=1+p\] _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 3. \(f=0\) _on_ \(\overline{\mathcal{S}}^{pc}(f_{1},\ldots,f_{r})\) _if and only if_ \[f^{2m}+p=0\] _where_ \(p\) _is in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ Proof.: 1. Let \(H\) be the set \(\mathcal{C}\cup\{f_{1},\ldots,f_{r},-f\}\) and \(M\) the monoid generated by \(f\). Then, there is no \(\alpha\in\operatorname{Spec}_{pc}A\) such that \(-f(\alpha)\geq 0\), \(f_{1}(\alpha)\geq 0,\ldots,f_{r}(\alpha)\geq 0\) and \(f(\alpha)\neq 0\) if and only if there is no \(\alpha\in\operatorname{Spec}_{r}A\) such that \(H\subset\alpha\) and \(-f(\alpha)\neq 0\) (an ordering whose positive cone contains \(\mathcal{C}\) being precentral). Then, from the formal Positivstellensatz recalled in Theorem 2.11, it is equivalent to have an identity of the form \(p-fq+f^{2m}=0\) where \(p,q\in\mathcal{C}[f_{1},\ldots,f_{r}]\). 2. Let \(H\) be the set \(\mathcal{C}\cup\{f_{1},\ldots,f_{r},-f\}\) and \(M\) the monoid generated by \(1\). There is no \(\alpha\in\operatorname{Spec}_{pc}A\) such that \(f_{1}(\alpha)\geq 0,\ldots,f_{r}(\alpha)\geq 0\), \(-f(\alpha)\geq 0\) and \(1(\alpha)\neq 0\) if and only if there is no \(\alpha\in\operatorname{Spec}_{r}A\) such that \(H\subset\alpha\) and \(1(\alpha)\neq 0\), and we conclude using the formal Positivstellensatz, cf. Theorem 2.11. 3. Let \(H\) be the set \(\mathcal{C}\cup\{f_{1},\ldots,f_{r}\}\) and \(M\) the monoid generated by \(f\).
Since there is no \(\alpha\in\operatorname{Spec}_{pc}A\) such that \(f_{1}(\alpha)\geq 0,\ldots,f_{r}(\alpha)\geq 0\) and \(f(\alpha)\neq 0\) if and only if there is no \(\alpha\in\operatorname{Spec}_{r}A\) such that \(H\subset\alpha\) and \(f(\alpha)\neq 0\), we get the proof using again the formal Positivstellensatz, cf. Theorem 2.11. As a particular case, one gets an abstract precentral Hilbert 17th property: **Theorem 5.3**.: _(Precentral Hilbert 17th property): Let \(f\in A\). The following properties are equivalent:_ 1. \(f\geq 0\) _on_ \(\operatorname{Spec}_{pc}A\)_._ 2. _There exist_ \(p,q\) _in_ \(\mathcal{C}\) _such that_ \(fq=p+f^{2m}\)_._ 3. _There exist_ \(p,q\in\mathcal{C}\) _such that_ \(q^{2}f=p\) _and_ \(\mathcal{Z}^{pc}(q)\subset\mathcal{Z}^{pc}(f)\)_._ 4. \(f\in\mathcal{C}\)_._ Proof.: Let us show (1) \(\Rightarrow\) (2). Assume \(f\geq 0\) on \(\operatorname{Spec}_{pc}A\). Since \(f\geq 0\) on \(\overline{\mathcal{S}}^{pc}(1)\) then by Theorem 5.2 one gets (2). Let us show (2) implies (3). Assume \(fq=p+f^{2m}\) with \(p,q\in\mathcal{C}\). One may assume that \(q\neq 0\) or equivalently that \(p+f^{2m}\neq 0\) since otherwise \(f=0\) (by hypothesis \(\mathcal{K}(A)\) is formally real and hence the null ideal is real in \(A\)). One gets \(f(p+f^{2m})=f^{2}q\) which gives \(f=\frac{(f^{2}q)(p+f^{2m})}{(p+f^{2m})^{2}}\in\mathcal{C}\). Set \(s=(f^{2}q)(p+f^{2m})\) and \(t=p+f^{2m}\). We have \(t^{2}f=s\) and \(s,t\in\mathcal{C}\). We have \(\mathcal{Z}^{pc}(t)\subset\mathcal{Z}^{pc}(f)\): if \(\alpha\in\mathcal{Z}^{pc}(p+f^{2m})\) then \(p+f^{2m}\in\operatorname{supp}(\alpha)\) which is a central ideal and by definition we get \(f\in\operatorname{supp}(\alpha)\). This shows the desired implication. Trivially (3) implies (4). To end, let us show that (4) implies (1). If \(f\in\mathcal{C}\), then it is clear from the definition of a precentral cone that \(f\geq 0\) on \(\operatorname{Spec}_{pc}A\). Hence, in any domain \(A\) with formally real fraction field, the intersection of all central cones coincides with the intersection of all precentral orderings. ### Central Positivstellensatze From the precentral Positivstellensatze, one may deduce some central ones. Let us first recall that Proposition 5.1 gives that \(f=0\) on \(\mathcal{Z}^{pc}(I)\) if and only if \(f=0\) on \(\mathcal{Z}^{c}(I)\). One also has: **Lemma 5.4**.: 1. \(f>0\) _on_ \(\operatorname{Spec}_{pc}A\) _if and only if_ \(f>0\) _on_ \(\operatorname{Spec}_{c}A\)_._ 2. \(f\geq 0\) _on_ \(\operatorname{Spec}_{pc}A\) _if and only if_ \(f\geq 0\) _on_ \(\operatorname{Spec}_{c}A\)_._ Proof.: The direct implications are clear since \(\operatorname{Spec}_{c}A\subset\operatorname{Spec}_{pc}A\). Let \(f>0\) on \(\operatorname{Spec}_{c}A\). Then, \(f>0\) on \(\operatorname{Spec}_{r}\mathcal{K}(A)\) and hence \(f\in\sum\mathcal{K}(A)^{2}\). Hence, \(f\geq 0\) on \(\operatorname{Spec}_{pc}A\). If there exists \(\alpha\in\operatorname{Spec}_{pc}A\) such that \(f(\alpha)=0\) then \(f\in\operatorname{supp}(\alpha)\) and by Proposition 3.8 then \(\operatorname{supp}(\alpha)\) is a central ideal. By Proposition 3.15, there is \(\beta\in\operatorname{Spec}_{c}A\) such that \(\operatorname{supp}(\alpha)=\operatorname{supp}(\beta)\) and thus \(f(\beta)=0\), a contradiction. Let \(f\geq 0\) on \(\operatorname{Spec}_{c}A\). Assume that \(f(\beta)<0\) with \(\beta\) a precentral ordering. By the characterization of Theorem 4.6, one gets the existence of \(\gamma\) central such that \(f(\gamma)<0\), a contradiction.
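As a simple illustration of the certificate in (2) of Theorem 5.3, added here only for concreteness (it is not used in the sequel and anticipates the Whitney umbrella of Example 6.2 below), take \(A=\mathbb{R}[x,y,z]/(y^{2}-zx^{2})\) and \(f=z\). Since \(zx^{2}=y^{2}\) in \(A\) and \(z=(y/x)^{2}\in\sum\mathcal{K}(A)^{2}\), one may choose \(q=x^{2}+z\in\mathcal{C}\), \(p=y^{2}\in\mathcal{C}\) and \(m=1\): \[fq=z(x^{2}+z)=zx^{2}+z^{2}=y^{2}+z^{2}=p+f^{2},\] so that, by Theorem 5.3, \(f\geq 0\) on \(\operatorname{Spec}_{pc}A\).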
On the other hand, beware that in general \(\mathcal{S}^{pc}(f)\neq\mathcal{S}^{c}(f)\) as one can see using Example 4.8 with \(f=t_{1}\). Likewise, beware that in general \(\overline{\mathcal{S}}^{pc}(f)\neq\overline{\mathcal{S}}^{c}(f)\). Indeed, if we work now in the domain \(B=A[t]/(tt_{1}-1)\) where \(A\) is the domain of Example 4.8, one has also an element \(g=-t_{2}\) such that \(g\geq 0\) on \(\overline{\mathcal{S}}^{c}(f)\) but not on \(\overline{\mathcal{S}}^{pc}(f)\). These observations show that it is possible to derive a Hilbert 17th property from the precentral one, although it is not possible to derive central Positivstellensatze from Theorem 5.2. **Proposition 5.5**.: _(Hilbert 17th Property): Let \(f\in A\). The following properties are equivalent:_ 1. \(f\geq 0\) _on_ \(\operatorname{Spec}_{c}A\)_._ 2. _There exist_ \(p,q\) _in_ \(\mathcal{C}\) _such that_ \(fq=p+f^{2m}\)_._ 3. _There exist_ \(p,q\in\mathcal{C}\) _such that_ \(q^{2}f=p\) _and_ \(\mathcal{Z}^{c}(q)\subset\mathcal{Z}^{c}(f)\)_._ 4. \(f\in\mathcal{C}\)_._ Proof.: Using Lemma 5.4 and Theorem 5.3, we see that (1), (2) and (4) are equivalent. Clearly (3) implies (4). Let us show that (2) implies (3). Assume \(fq=p+f^{2m}\) with \(p,q\in\mathcal{C}\). Repeating the arguments used in the proof of (2) implies (3) of Theorem 5.3 we get an identity \(t^{2}f=s\) with \(s,t\in\mathcal{C}\) and \(\mathcal{Z}^{pc}(t)\subset\mathcal{Z}^{pc}(f)\), and it is easy to see that we also have \(\mathcal{Z}^{c}(t)\subset\mathcal{Z}^{c}(f)\). Note that \(\operatorname{Spec}_{c}A\) is the smallest closed subset of \(\operatorname{Spec}_{r}A\) such that nonnegativity on this subset is equivalent to being in \(\mathcal{C}\). One may also give an interpretation with cones, namely: in any domain with formally real fraction field the intersection of all central cones coincides with the intersection of all central orderings. Although we do not obtain central Positivstellensatze in the general case, under a condition on the stability index, the central and precentral spectra coincide and one gets from Theorem 5.2: **Proposition 5.6**.: _(Central Positivstellensatze in low dimension): Assume that \(\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)\leq 1\). Let \(f_{1},\dots,f_{r}\) in \(A\) and \(f\in A\). One has:_ 1. \(f\geq 0\) _on_ \(\overline{\mathcal{S}}^{c}(f_{1},\dots,f_{r})\) _if and only if_ \[fq=p+f^{2m}\] _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\dots,f_{r}]\)_._ 2. \(f>0\) _on_ \(\overline{\mathcal{S}}^{c}(f_{1},\dots,f_{r})\) _if and only if_ \[fq=1+p\] _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\dots,f_{r}]\)_._ 3. \(f=0\) _on_ \(\overline{\mathcal{S}}^{c}(f_{1},\dots,f_{r})\) _if and only if_ \[f^{2m}+p=0\] _where_ \(p\) _is in_ \(\mathcal{C}[f_{1},\dots,f_{r}]\)_._ **Remark** 1. Assertion (1) is false in the case \(\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)=2\). Consider the domain \(B=A[t]/(tt_{1}-1)\) where \(A\) is the ring in Example 4.8. As previously noticed, working in \(\operatorname{Spec}_{r}B\), \(-t_{2}\geq 0\) on \(\overline{\mathcal{S}}^{c}(t_{1})\) but not on \(\overline{\mathcal{S}}^{pc}(t_{1})\). By Theorem 5.2, we cannot have an identity of the form \(-t_{2}q=p+(t_{2})^{2m}\) where \(p,q\) are in \(\mathcal{C}[t_{1}]\). 2. Likewise, using Example 4.8, assertion (3) is false in the case \(\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)=2\).
Indeed, \(t_{1}t_{2}=0\) on \(\overline{\mathcal{S}}^{c}(t_{1},t_{2})\) but not on \(\overline{\mathcal{S}}^{pc}(t_{1},t_{2})\). 3. One may give a counterexample to (2) in the case \(\operatorname{s}(\operatorname{Spec}_{r}A\setminus\operatorname{Spec}_{c}A)=2\). Indeed, let us consider the domain \(B=A[t,s]/(tt_{1}-1,st_{2}-1)\) where \(A\) is the ring in Example 4.8. It can be shown that \(-t_{2}>0\) on \(\overline{\mathcal{S}}^{c}(t_{1})\) but not on \(\overline{\mathcal{S}}^{pc}(t_{1})\). 4. In the full case, i.e. if \(\overline{\mathcal{S}}^{c}(f_{1},\dots,f_{r})=\operatorname{Spec}_{c}A\), then the assertions (1) and (2) of the proposition are valid without assumption on the stability index by Proposition 5.5. Besides, without assumption on the stability index, assertion (3) is always valid whenever we consider a basic closed subset with a single inequality. **Proposition 5.7**.: _Let \(f,g\) in \(A\). Then,_ \[f=0\text{ on }\overline{\mathcal{S}}^{c}(g)\text{ if and only if }f^{2m}+p=0\text{ where }p\text{ is in }\mathcal{C}[g].\] Proof.: Assume that \(f=0\) on \(\overline{\mathcal{S}}^{c}(g)\). Let \(\alpha\in\overline{\mathcal{S}}^{pc}(g)\). If \(g(\alpha)=0\), then \(g\) belongs to the support of \(\alpha\) which is known to be also the support of a central ordering \(\beta\). By assumption \(f(\beta)=0\) and hence \(f(\alpha)=0\). If \(g(\alpha)>0\), then there is a central ordering \(\beta^{\prime}\) such that \(g(\beta^{\prime})>0\) by (4) of Theorem 4.6. Hence, there is an ordering \(\gamma\) of support \((0)\) such that \(\gamma\to\beta^{\prime}\) and thus \(g(\gamma)>0\). By assumption \(f(\gamma)=0\) and hence \(f=0\), in particular \(f(\alpha)=0\). It follows that \(f=0\) on \(\overline{\mathcal{S}}^{pc}(g)\) and we get the proof by (3) of Theorem 5.2. ### Geometric central Positivstellensatze Let us now write down the cases of geometric rings. These are obtained from the central abstract results of the previous subsection together with the so-called Artin-Lang property (cf. [4, Thm. 4.1.2]). We first give a slightly more detailed version of [4, Thm. 6.1.9]. **Proposition 5.8**.: _(Geometric Hilbert 17th Property): Let \(V\) be an irreducible algebraic variety over \(R\). Let \(f\in R[V]\) and \(\mathcal{C}=R[V]\cap\sum\mathcal{K}(V)^{2}\). The following properties are equivalent:_ 1. \(f\geq 0\) _on_ \(\operatorname{Cent}V(R)\)_._ 2. _There exist_ \(p,q\) _in_ \(\mathcal{C}\) _such that_ \(fq=p+f^{2m}\)_._ 3. _There exist_ \(p,q\in\mathcal{C}\) _such that_ \(q^{2}f=p\) _and_ \(\mathcal{Z}(q)\cap\operatorname{Cent}V(R)\subset\mathcal{Z}(f)\cap\operatorname{Cent}V(R)\)_._ 4. \(f\in\mathcal{C}\)_._ Proof.: By the Artin-Lang property (cf. [4, Thm. 4.1.2]), or alternatively by the Tarski-Seidenberg property, if \(f\geq 0\) on \(\operatorname{Cent}V(R)\), then \(f\geq 0\) on \(\operatorname{Spec}_{c}R[V]\) since \(\widetilde{\operatorname{Cent}V(R)}=\operatorname{Spec}_{c}R[V]\). Remark also that for \(g\in R[V]\) we have \(\widetilde{\mathcal{Z}(g)\cap\operatorname{Cent}V(R)}=\mathcal{Z}^{c}(g)\). One may then use Proposition 5.5. One also has: **Proposition 5.9**.: _(Geometric central Positivstellensatze for surfaces): Let \(V\) be an irreducible algebraic variety over \(R\) such that \(\dim(V(R))\leq 2\). Let \(f,f_{1},\ldots,f_{r}\) in \(R[V]\) and \(\mathcal{C}=R[V]\cap\sum\mathcal{K}(V)^{2}\). One has:_ 1.
\(f\geq 0\) _on_ \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}V(R)\) _if and only if_ \[fq=p+f^{2m}\] _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 2. \(f>0\) _on_ \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}V(R)\) _if and only if_ \[fq=1+p\] _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 3. \(f=0\) _on_ \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}V(R)\) _if and only if_ \[f^{2m}+p=0\] _where_ \(p\) _is in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ Proof.: Since \(\dim(V(R))\leq 2\), one has \(\operatorname{s}(\operatorname{Spec}_{r}R[V]\setminus\operatorname{Spec}_{c}R[V])\leq 1\). Let us show just (1) since one proceeds likewise for the other properties. We show the non-obvious implication. Let us assume that \(f\geq 0\) on \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}V(R)\). By the Artin-Lang property [4, Thm. 4.1.2], one deduces that \(f\geq 0\) on \(\overline{\mathcal{S}}^{c}(f_{1},\ldots,f_{r})\). One then concludes by applying (1) of Proposition 5.6. #### 5.5.1. In the Nash setting Recall that a Nash function on \(R^{n}\) is a semialgebraic function of class \(C^{\infty}\) (typically \(\sqrt{1+x^{2}}\) is a Nash function on \(R\)). Let us denote by \(\mathcal{N}(R^{n})\) the ring of all Nash functions on \(R^{n}\). Let us consider an irreducible Nash set \(V=\mathcal{Z}(I)\) given by a prime ideal \(I\subset\mathcal{N}(R^{n})\). Let us denote by \(A\) or \(\mathcal{N}(V)\) the quotient ring \(\mathcal{N}(R^{n})/I\) which can be seen as the ring of Nash functions over \(V\). This ring is an excellent ring as one can see using the same argument as in the proof of [1, VIII Prop. 8.4], namely the criterion stated in [1, VII Prop. 2.4]. As for the polynomials, one may define the central locus \(\operatorname{Cent}(V)\) of \(V\) as the Euclidean closure of the set \(\operatorname{Reg}(V)\) of Nash regular points of \(V\) (and it coincides with the algebraic central locus when \(V\) is algebraic). Namely, a point \(x\in V\) associated to the maximal ideal \(m_{x}\) is said to be regular if the local ring \(A_{m_{x}}\) is regular. Note that since a Nash function is semialgebraic, it gives a meaning to \(\widetilde{\operatorname{Cent}(V)}\subset\widetilde{R^{n}}\). Moreover, recall from [4, Prop. 8.8.1] that the canonical morphism \(\operatorname{Spec}_{r}\mathcal{N}(R^{n})\to\operatorname{Spec}_{r}R[x_{1},\ldots,x_{n}]=\widetilde{R^{n}}\) is a homeomorphism and induces another homeomorphism \[\operatorname{Spec}_{r}\mathcal{N}(V)\simeq\widetilde{V}.\] One key tool of the polynomial case to relate geometry to algebra is the tilde operator. For instance, we have already seen that it commutes with the topological closure. Namely, for any semialgebraic subset \(S\) of \(R^{n}\), by [4, Thm. 7.2.3] one has \(\widetilde{\overline{S}^{E}}=\overline{\widetilde{S}}\). Roughly speaking, this commutation is still valid in the Nash case: **Lemma 5.10**.: _We have \(\widetilde{\operatorname{Cent}(V)}=\operatorname{Spec}_{c}\mathcal{N}(V)\)._ Proof.: As previously recalled, we see both quantities \(\widetilde{\operatorname{Cent}(V)}\) and \(\operatorname{Spec}_{c}\mathcal{N}(V)\) in \(\widetilde{R^{n}}=\operatorname{Spec}_{r}R[x_{1},\ldots,x_{n}]\).
At the Zariski spectrum level, one has \(\operatorname{Sing}(\mathcal{N}(V))=\mathcal{V}(J)\), whereas at the geometrical level one has \(\operatorname{Sing}(V)=\mathcal{Z}(J)\), where \(J\) is an ideal of \(\mathcal{N}(R^{n})\) containing \(I\). Hence, at the real spectrum level, one gets \(\widetilde{\operatorname{Sing}(\mathcal{N}(V))}=\widetilde{\operatorname{Sing}(V)}\) where the first tilde sends a Zariski constructible subset of \(\operatorname{Spec}\mathcal{N}(V)\) to a constructible subset of \(\widetilde{R^{n}}\) (see just before Proposition 3.13) and the second tilde is the usual one from \(R^{n}\) to \(\widetilde{R^{n}}\). Then, one derives \(\widetilde{\operatorname{Reg}(\mathcal{N}(V))}=\widetilde{\operatorname{Reg}(V)}\) and hence \(\overline{\widetilde{\operatorname{Reg}(\mathcal{N}(V))}}=\overline{\widetilde{\operatorname{Reg}(V)}}=\widetilde{\overline{\operatorname{Reg}(V)}^{E}}=\widetilde{\operatorname{Cent}(V)}\). Using Proposition 3.13 we get \(\operatorname{Spec}_{c}\mathcal{N}(V)=\widetilde{\operatorname{Cent}(V)}\). Then, one gets a statement similar to the one in the algebraic case. **Proposition 5.11**.: _(Central Nash Hilbert 17th Property): Let \(V\) be an irreducible Nash set. Let \(f\in A=\mathcal{N}(V)\) and \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\). The following properties are equivalent:_ 1. \(f\geq 0\) _on_ \(\operatorname{Cent}V\)_._ 2. _There exist_ \(p,q\) _in_ \(\mathcal{C}\) _such that_ \(fq=p+f^{2m}\)_._ 3. _There exist_ \(p,q\in\mathcal{C}\) _such that_ \(q^{2}f=p\) _and_ \((\mathcal{Z}(q)\cap\operatorname{Cent}V)\subset(\mathcal{Z}(f)\cap\operatorname{Cent}V)\)_._ 4. \(f\in\mathcal{C}\)_._ Proof.: Saying \(f\geq 0\) on \(\operatorname{Cent}V\) is equivalent to saying that the semialgebraic subset \(S=\{f<0\}\cap\operatorname{Cent}V\) of \(R^{n}\) is empty. By the Artin-Lang property (cf. [4, Thm. 4.1.2]) and Lemma 5.10, it is equivalent to the emptiness of \(\widetilde{S}=\mathcal{S}(-f)\cap\operatorname{Spec}_{c}A=\mathcal{S}^{c}(-f)\subset\operatorname{Spec}_{r}A\), i.e. \(f\geq 0\) on \(\operatorname{Spec}_{c}A\). One may conclude using Proposition 5.5. **Proposition 5.12**.: _(Nash Central Positivstellensatze for surfaces): Let \(f,f_{1},\ldots,f_{r}\) in \(A=\mathcal{N}(V)\) the ring of Nash functions on an irreducible Nash set \(V\) of dimension \(\leq 2\). Let \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\). Then,_ 1. \(f\geq 0\) _on_ \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}(V)\) _if and only if_ \(fq=p+f^{2m}\) _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 2. \(f>0\) _on_ \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}(V)\) _if and only if_ \(fq=1+p\) _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 3. \(f=0\) _on_ \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}(V)\) _if and only if_ \(f^{2m}+p=0\) _where_ \(p\) _is in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ Proof.: To use the general framework we have introduced, we first have to show that the stability index corresponds to the dimension also in the Nash setting. We use the Artin-Mazur description of Nash functions (cf. [4, Thm. 8.4.4]) which states that for any Nash functions \(f_{1},\ldots,f_{r}:R^{n}\to R\) there is a nonsingular algebraic set \(X\subset R^{q}\) of dimension \(n\), an open semi-algebraic subset \(W\) of \(X\), a Nash diffeomorphism \(\sigma:R^{n}\to W\) and some polynomial functions \(g_{1},\ldots,g_{r}\) on \(W\) such that \(f_{i}=g_{i}\circ\sigma\).
Hence, any description of a basic open subset \(\{f_{1}>0,\ldots,f_{r}>0\}\) in \(V\subset R^{n}\) where the \(f_{i}\)'s are Nash functions can be translated via \(\sigma\) into a basic open subset \(\{g_{1}>0,\ldots,g_{r}>0\}\) in \(W\) where the \(g_{i}\)'s are polynomial functions. Then, one may apply the Brocker-Scheiderer theorem for the stability index of algebraic varieties. Let us now show the first assertion; one proceeds likewise for the others. Saying that \(f\geq 0\) on \(\overline{\mathcal{S}}(f_{1},\ldots,f_{r})\cap\operatorname{Cent}(V)\) means that the semialgebraic subset \(S=\{f<0,f_{1}\geq 0,\ldots,f_{r}\geq 0\}\cap\operatorname{Cent}(V)\) of \(V\) is empty. By the Artin-Lang property (cf. [4, Thm. 4.1.2]) and Lemma 5.10, it is equivalent to the emptiness of \(\widetilde{S}=\mathcal{S}^{c}(-f)\cap\overline{\mathcal{S}}^{c}(f_{1},\ldots,f_{r})\subset\operatorname{Spec}_{r}A\). It means \(f\geq 0\) on \(\overline{\mathcal{S}}^{c}(f_{1},\ldots,f_{r})\). We may then use Proposition 5.6 to conclude. #### 5.5.2. In the analytic setting Although less classical than in the real algebraic setting, one may define the central locus of an irreducible real analytic set as the Euclidean closure of the set of all regular points. Let \(\Omega\) be a real analytic variety and \(C\) be a compact global semianalytic subset of \(\Omega\). Let \(\mathcal{O}(\Omega_{C})\) be the ring of all germs of all real analytic functions at \(C\); it is a noetherian ring. For \(X_{C}\) a semianalytic germ of \(\Omega_{C}\), we set \(A=\mathcal{O}(X_{C})=\mathcal{O}(\Omega_{C})/\mathcal{I}(X_{C})\) to be the ring of analytic function germs of \(X_{C}\), where \(\mathcal{I}(X_{C})\) is the ideal of function germs vanishing on \(X_{C}\). In all the following, we consider \(X_{C}\) irreducible, meaning that \(A\) is a domain and \(\mathcal{K}(A)\) is the field of germs of meromorphic functions. We are mainly interested in germs of analytic functions at a point and also in analytic functions on a compact subset, both cases being covered by our framework. Note that without the compactness assumption, the ring \(A\) would not be so nice, for instance not noetherian. From now on, we consider that \(X_{C}\) is a subanalytic set germ of \(\Omega_{C}\). For \(x\in X_{C}\), we say that \(x\) is regular if the ring \(\mathcal{O}(X_{C})_{m_{x}}\) is regular, where \(m_{x}\) is the maximal ideal associated to \(x\). One may then define \(\operatorname{Cent}(X_{C})\), the analytic central locus germ of \(X_{C}\), to be the Euclidean closure of the set of regular points in \(X_{C}\). Since \(C\) is compact, by [1, VIII Thm. 7.2], \(\operatorname{Cent}(X_{C})\) is a closed semianalytic subset germ of \(X_{C}\) and it can be defined by a _finite_ union of conjunctions of inequalities. Again since \(C\) is compact, one may use the key tool taken from [1, VIII Prop. 8.2] that we recall for the convenience of the reader: **Proposition:** The tilde correspondence \[\cup_{i}\{f_{i1}>0,\ldots,f_{ir_{i}}>0,g_{i1}=0,\ldots,g_{is_{i}}=0\}\mapsto\cup_{i}\{\alpha\in\widetilde{X_{C}}\,|\,f_{i1}(\alpha)>0,\ldots,f_{ir_{i}}(\alpha)>0,g_{i1}(\alpha)=0,\ldots,g_{is_{i}}(\alpha)=0\}\] induces an isomorphism from the Boolean algebra of semianalytic subset germs of \(X_{C}\) onto that of constructible subsets of \(\widetilde{X_{C}}=\operatorname{Spec}_{r}\mathcal{O}(X_{C})\). Beware that without the compactness hypothesis, the tilde operation is no longer well defined and we only have a weak Artin-Lang property.
From this, one may derive the counterpart of the classical properties on the tilde operator between semialgebraic subsets of \(\mathbb{R}^{n}\) and constructible subsets of \(\widetilde{\mathbb{R}^{n}}\). One also gets an Artin-Lang property, namely \(S=\emptyset\) if and only if \(\widetilde{S}=\emptyset\) for sets \(S\) as described in the proposition. Let us write down the compatibility of the tilde operator with closure: **Lemma 5.13**.: _Let \(S\) be a semianalytic subset germ of \(X_{C}\). Then, \(\widetilde{\overline{S}}=\overline{\widetilde{S}}\)._ Proof.: One has obviously \(\widetilde{S}\subset\widetilde{\overline{S}}\). By compactness of \(C\), we get from [1, VIII Thm. 7.2] that \(\overline{S}\) is a compact subanalytic subset germ and it can be written as a _finite_ union of sets of the form \(\{f_{1}\geq 0,\ldots,f_{r}\geq 0\}\), hence \(\widetilde{\overline{S}}\) is closed and \(\overline{\widetilde{S}}\subset\widetilde{\overline{S}}\). Let us show the converse inclusion. Let us consider \(\alpha\in\widetilde{\overline{S}}\). Let \(V\) be an open subset containing \(\alpha\); one may assume that \(V=\widetilde{U}\) for some open semianalytic subset germ \(U\). Hence, \(\alpha\in\widetilde{\overline{S}}\cap\widetilde{U}=\widetilde{\overline{S}\cap U}\). Since \(\widetilde{\overline{S}\cap U}\neq\emptyset\), one has also \(\overline{S}\cap U\neq\emptyset\). Since \(U\) is open, one gets \(S\cap U\neq\emptyset\) and hence \(\widetilde{S}\cap\widetilde{U}\neq\emptyset\). We have shown that \(\widetilde{\overline{S}}\subset\overline{\widetilde{S}}\). We recall from [1, VIII Thm. 8.4] that \(A=\mathcal{O}(X_{C})\) is an excellent ring. From Proposition 3.13 and Lemma 5.13, one gets that \[\widetilde{\mathrm{Cent}(X_{C})}=\mathrm{Spec}_{c}\,\mathcal{O}(X_{C}).\] To conclude, we proceed as in the proof of Proposition 5.12. Note again that the stability index coincides with the dimension by [1, VIII Thm. 6.3] in the analytic setting, and we get: **Proposition 5.14**.: _(Analytic Central Positivstellensatze for surfaces): Let \(f,f_{1},\ldots,f_{r}\) in \(A=\mathcal{O}(X_{C})\) and assume \(\dim X_{C}\leq 2\). Let \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\). Then,_ 1. \(f\geq 0\) _on_ \(\{f_{1}\geq 0,\ldots,f_{r}\geq 0\}\cap\mathrm{Cent}(X_{C})\) _if and only if_ \(fq=p+f^{2m}\) _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 2. \(f>0\) _on_ \(\{f_{1}\geq 0,\ldots,f_{r}\geq 0\}\cap\mathrm{Cent}(X_{C})\) _if and only if_ \(fq=1+p\) _where_ \(p,q\) _are in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ 3. \(f=0\) _on_ \(\{f_{1}\geq 0,\ldots,f_{r}\geq 0\}\cap\mathrm{Cent}(X_{C})\) _if and only if_ \(f^{2m}+p=0\) _where_ \(p\) _is in_ \(\mathcal{C}[f_{1},\ldots,f_{r}]\)_._ ## 6. Sums of squares of rational continuous functions on the central spectrum We have already mentioned that, by Artin's solution of the 17th problem of Hilbert, if \(p\in R[x_{1},\ldots,x_{n}]\) is non-negative on \(R^{n}\) then \(p\) is a sum of squares of rational functions. From [11], it is possible to find such a sum of squares such that the rational functions appearing can be extended continuously to \(R^{n}\) for the Euclidean topology, i.e. are rational continuous functions on \(R^{n}\). We denote by \(\mathcal{K}^{0}(R^{n})\) the ring of rational continuous functions on \(R^{n}\). Rational continuous functions are called regulous when they are still rational continuous by restriction to any subvariety. Rational continuous and regulous functions are introduced and studied in [10], [7], [14] and [3].
The above result involving sums of squares of rational continuous functions can be generalized as it is done in [8, Thm. 6.1], namely \(f\in\mathcal{K}^{0}(R^{n})\) is nonnegative on \(R^{n}\) if and only if \(f\in\sum\mathcal{K}^{0}(R^{n})^{2}\). The proof of this result in [8] is over \(\mathbb{R}\) but it is also valid over any real closed field in place of \(\mathbb{R}\). We may wonder if it is possible to get a continuity property in Theorem 5.3 and Proposition 5.5. As continuity is a topological notion, we choose here to only look at a continuous version of Proposition 5.5, the one associated with central orderings of a topological nature. To do that, we recall some material about abstract continuity on the real spectrum which is mainly taken from [1]. Any \(f\in A\) may be associated to a function defined on \(\mathrm{Spec}_{r}\,A\), assigning \(\alpha\mapsto f(\alpha)\in R_{\alpha}\) where \(R_{\alpha}\) is a real closure of \(k(\mathrm{supp}(\alpha))\). It does not give functions in the usual sense since the \(R_{\alpha}\)'s vary. One may then define abstract semialgebraic functions on \(\mathrm{Spec}_{r}\,A\) given by a first order formula with parameters in \(A\) (see [1, II 5]). For instance, for \(p\in A\) and \(q\in A\setminus\{0\}\), one may define the abstract semialgebraic function \(f\) by setting \(f(\alpha)=\frac{p(\alpha)}{q(\alpha)}\) whenever \(q(\alpha)\neq 0\) and \(f(\alpha)=0\) otherwise. One may also define functions on a proconstructible subset \(Y\) of \(\mathrm{Spec}_{r}\,A\). To end, one says that an abstract semialgebraic function \(f\) is continuous on \(Y\) if, for any specialization \(\beta\to\alpha\) in \(Y\), one has \(f(\beta)\in W_{\beta\alpha}\) and \(\lambda_{\beta\alpha}(f(\beta))=f(\alpha)\) where \(W_{\beta\alpha}\) and \(\lambda_{\beta\alpha}\) are respectively the valuation ring and the place associated to the specialization \(\beta\to\alpha\) (see [1, II 3.10]). Let \(f\) be an abstract semialgebraic function on \(\mathrm{Spec}_{c}\,A\), we say that \(f\) is rational continuous on \(\mathrm{Spec}_{c}\,A\) if \(f\) is continuous on \(\mathrm{Spec}_{c}\,A\) and if there exist \(p\in A\) and \(q\in A\setminus\{0\}\), such that \(f(\alpha)=\frac{p(\alpha)}{q(\alpha)}\) whenever \(\alpha\in\mathrm{Spec}_{c}\,A\setminus\mathcal{Z}^{c}(q)\). We denote by \(\mathcal{K}^{0}(\mathrm{Spec}_{c}\,A)\) the ring of rational continuous functions on \(\mathrm{Spec}_{c}\,A\). **Proposition 6.1**.: _The ring \(\mathcal{K}^{0}(\mathrm{Spec}_{c}\,A)\) is a domain whose fraction field is \(\mathcal{K}(A)\)._ Proof.: Consider the map \(\mathcal{K}^{0}(\operatorname{Spec}_{c}A)\to\mathcal{K}(A)\) which send \(f\in\mathcal{K}^{0}(\operatorname{Spec}_{c}A)\) to the class of \(p/q\) in \(\mathcal{K}(A)\) with \(p\in A\), \(0\neq q\in A\), and \(f(\alpha)=\frac{p(\alpha)}{q(\alpha)}\) whenever \(\alpha\in\operatorname{Spec}_{c}A\setminus\mathcal{Z}^{c}(q)\). We have to prove this map is injective and thus we assume the class of \(p/q\) in \(\mathcal{K}(A)\) is \(0\). It follows that \(pq\) vanishes on \(\operatorname{Spec}_{c}A\) i.e \(pq\in\mathcal{I}(\mathcal{Z}^{c}((0)))\). Since \(\mathcal{K}(A)\) is formally real then \((0)\) is a central ideal of \(A\) (Proposition 3.4) and thus, using the central Nullstellensatz (Proposition 5.1) we get \(pq=0\) in \(A\). Since \(A\) is a domain and \(q\neq 0\) then it follows that \(p=0\). Let \(\alpha\in\operatorname{Spec}_{c}A\). If \(\alpha\not\in\mathcal{Z}^{c}(q)\) then \(f(\alpha)=\frac{p(\alpha)}{q(\alpha)}=0\). 
Assume now \(\alpha\in\mathcal{Z}^{c}(q)\). By definition of a central ordering and by Proposition 3.1, there is \(\beta\in\operatorname{Spec}_{c}A\) such that \(\operatorname{supp}(\beta)=(0)\) and \(\beta\to\alpha\). Clearly \(\beta\not\in\mathcal{Z}^{c}(q)\) and thus \(f(\beta)=\frac{p(\beta)}{q(\beta)}=0\in W_{\beta\alpha}\) and \(\lambda_{\beta\alpha}(f(\beta))=f(\alpha)=0\). It follows that \(f=0\) in \(\mathcal{K}^{0}(\operatorname{Spec}_{c}A)\) and the proof is done. Since \(A\subset\mathcal{K}^{0}(\operatorname{Spec}_{c}A)\subset\mathcal{K}(A)\) then it follows that \(\mathcal{K}(\mathcal{K}^{0}(\operatorname{Spec}_{c}A))=\mathcal{K}(A)\). If \(V\) is an irreducible algebraic variety over \(R\), we denote by \(\mathcal{K}^{0}(\operatorname{Cent}V(R))\) the ring of rational continuous functions on \(\operatorname{Cent}V(R)\). These functions are defined in the same way as for \(R^{n}\) (see [9]). It is then clear that the restriction to \(\operatorname{Cent}V(R)\) of an element of \(\mathcal{K}^{0}(\operatorname{Spec}_{c}R[V])\) is in \(\mathcal{K}^{0}(\operatorname{Cent}V(R))\). Let us define the following subcone of \(\mathcal{C}=A\cap\sum\mathcal{K}(A)^{2}\) which is convenient for dealing with continuity: \[\mathcal{C}^{0}=A\cap\sum\mathcal{K}^{0}(\operatorname{Spec}_{c}A)^{2}.\] Adding a continuity property in Theorem 5.3 and Proposition 5.5 would suggest, for a given \(f\in A\), that \(f\geq 0\) on \(\operatorname{Spec}_{c}A\) if and only if \(f\in\mathcal{C}^{0}\). Unfortunately, this equivalence is false, i.e., in general, an element of \(A\) nonnegative on \(\operatorname{Spec}_{c}A\) is not necessarily a sum of squares in \(\mathcal{K}^{0}(\operatorname{Spec}_{c}A)\), as shown by the following: **Example 6.2**.: Let \(V\) be the Whitney umbrella with coordinate ring \(A=\mathbb{R}[V]=\mathbb{R}[x,y,z]/(y^{2}-zx^{2})\). Its normalization \(V^{\prime}\) is smooth and we have \(B=\mathbb{R}[V^{\prime}]=\mathbb{R}[x,Y,z]/(Y^{2}-z)\). The ring morphism \(A\to B\) associated to the normalization map \(\pi^{\prime}:V^{\prime}\to V\) is given by \(x\mapsto x\), \(y\mapsto Yx\) and \(z\mapsto z\). Let \(f=z\in A\). Since \(f=(y/x)^{2}\in\mathcal{K}(A)^{2}\) then it follows from Proposition 5.5 that \(f\geq 0\) on \(\operatorname{Spec}_{c}A\). We prove now that \(f\not\in\mathcal{C}^{0}\). Assume \(f\in\mathcal{C}^{0}\); then we get, by restriction to \(\operatorname{Cent}(V(\mathbb{R}))\), \[f=\sum_{i=1}^{n}f_{i}^{2}\] with \(f_{i}\in\mathcal{K}^{0}(\operatorname{Cent}V(\mathbb{R}))\). By composition with \(\pi^{\prime}_{|\mathbb{R}}:V^{\prime}(\mathbb{R})\to V(\mathbb{R})\) then we get \[g=f\circ\pi^{\prime}=\sum_{i=1}^{n}g_{i}^{2}\] with \(g_{i}=f_{i}\circ\pi^{\prime}_{|\mathbb{R}}\in\mathcal{K}^{0}(V^{\prime}(\mathbb{R}))\) (note that \(\pi^{\prime}_{|\mathbb{R}}\) is surjective onto \(\operatorname{Cent}V(\mathbb{R})\)). Since \(V^{\prime}\) is smooth then the \(g_{i}\) are regulous functions and thus are still rational continuous by restriction to a subvariety [10]. So we restrict our identity \(g=\sum_{i=1}^{n}g_{i}^{2}\) to the curve \(C\subset V^{\prime}(\mathbb{R})\) with equations \(x=0\) and \(Y^{2}=z\) and, since the curve is smooth, the restrictions of the \(g_{i}\) to \(C\) are regular functions [14]. On \(C\) we get \[g=z=\sum_{i=1}^{n}\left(\frac{a_{i,1}(z)+a_{i,2}(z)Y}{b_{i,1}(z)+b_{i,2}(z)Y}\right)^{2}\] with the \(a_{i,j}\) and \(b_{i,j}\) polynomials in \(z\).
But the fractions \(\frac{a_{i,1}(z)+a_{i,2}(z)Y}{b_{i,1}(z)+b_{i,2}(z)Y}\) are obtained by composing with \(\pi^{\prime}_{|\mathbb{R}}\) continuous functions defined on the upper half of the \(z\)-axis in \(V(\mathbb{R})\), and thus it follows from an easy computation that \(a_{i,2}=b_{i,2}=0\) for \(i=1,\dots,n\). The previous identity becomes impossible and the proof is done. Let us see now that we get this continuity property if and only if some \(p\) and \(q\) appearing in the identities of the second and the third statements of Proposition 5.5 belong to \(\mathcal{C}^{0}\). **Proposition 6.3**.: _Let \(f\in A\). The following properties are equivalent:_ 1. _There exist_ \(p,q\) _in_ \(\mathcal{C}^{0}\) _such that_ \(fq=p+f^{2m}\)_._ 2. _There exist_ \(p,q\in\mathcal{C}^{0}\) _such that_ \(q^{2}f=p\) _and_ \(\mathcal{Z}^{c}(q)\subset\mathcal{Z}^{c}(f)\)_._ 3. \(f\in\mathcal{C}^{0}\)_._ Proof.: Let us show (1) implies (2). Suppose there exist \(p,q\) in \(\mathcal{C}^{0}\) such that \(fq=p+f^{2m}\). One may assume that \(f\neq 0\) and then \(p+f^{2m}\neq 0\) since \(\mathcal{K}(A)\) is formally real. We set \(P=(f^{2}q)(p+f^{2m})\) and \(Q=p+f^{2m}\). Clearly, \(P,Q\in\mathcal{C}^{0}\). Following the proof of Theorem 5.3 and Proposition 5.5 we get \(Q^{2}f=P\) and \(\mathcal{Z}^{c}(Q)\subset\mathcal{Z}^{c}(f)\). We prove (2) implies (3). Assume there exist \(P,Q\in\mathcal{C}^{0}\) such that \(Q^{2}f=P\) and \(\mathcal{Z}^{c}(Q)\subset\mathcal{Z}^{c}(f)\). Hence, one may write \(f=\sum f_{i}^{2}\) where \(f_{i}=\frac{g_{i}}{Q}\) with \(g_{i}\in A\). Clearly \(f_{i}\in\mathcal{K}(A)\). Setting \(f_{i}(\alpha)=0\) whenever \(Q(\alpha)=0\), and \(f_{i}(\alpha)=\frac{g_{i}(\alpha)}{Q(\alpha)}\) whenever \(Q(\alpha)\neq 0\), the function \(f_{i}\) is now defined on \(\operatorname{Spec}_{c}A\). We show now that \(f_{i}\in\mathcal{K}^{0}(\operatorname{Spec}_{c}A)\) and we are left to prove it is continuous on \(\operatorname{Spec}_{c}A\). Let \(\beta\to\alpha\) be a specialization in \(\operatorname{Spec}_{c}(A)\). The case \(Q(\beta)=0\) is trivial. Indeed, \(Q(\alpha)=0\) since \(\beta\) specializes to \(\alpha\) and in that case we have set \(f_{i}(\beta)=0\) and \(f_{i}(\alpha)=0\). Then, obviously \(f_{i}(\beta)\in W_{\beta\alpha}\) and \(\lambda_{\beta\alpha}(f_{i}(\beta))=f_{i}(\alpha)\). Let us assume in the following that \(Q(\beta)\neq 0\). In that case, we have set \(f_{i}(\beta)=\frac{g_{i}(\beta)}{Q(\beta)}\). Since \(W_{\beta\alpha}\) is \(\beta\)-convex and \(f(\beta)=\sum f_{i}^{2}(\beta)\) is in \(W_{\beta\alpha}\), one gets that \(f_{i}^{2}(\beta)\in W_{\beta\alpha}\). Using that a valuation ring is integrally closed, one has \(f_{i}(\beta)\in W_{\beta\alpha}\), the first desired condition. Let us now show the second condition: \(\lambda_{\beta\alpha}(f_{i}(\beta))=f_{i}(\alpha)\). First case: we assume \(Q(\alpha)\neq 0\). Since \(Qf_{i}\in A\), one has \(Q(\beta)f_{i}(\beta)\in W_{\beta\alpha}\), and \(\lambda_{\beta\alpha}(Q(\beta)f_{i}(\beta))=Q(\alpha)f_{i}(\alpha)\) and hence \[\lambda_{\beta\alpha}(Q(\beta))\lambda_{\beta\alpha}(f_{i}(\beta))=Q(\alpha)f_{i}(\alpha).\] Since \(\lambda_{\beta\alpha}(Q(\beta))=Q(\alpha)\neq 0\), one gets the desired condition \(\lambda_{\beta\alpha}(f_{i}(\beta))=f_{i}(\alpha)\). Second case: assume that \(Q(\alpha)=0\). Since \(\mathcal{Z}^{c}(Q)\subset\mathcal{Z}^{c}(f)\), we get \(f(\alpha)=0\).
Since \(f=\sum f_{i}^{2}\), one gets \[0=f(\alpha)=\lambda_{\beta\alpha}(f(\beta))=\sum\lambda_{\beta\alpha}(f_{i}(\beta))^{2}.\] This shows that, for any \(i\), \(\lambda_{\beta\alpha}(f_{i}(\beta))=0\), which gives \(\lambda_{\beta\alpha}(f_{i}(\beta))=f_{i}(\alpha)\) and we have proved that (2) implies (3). Finally, let us show that (3) implies (1). Assume \(f\in\mathcal{C}^{0}\). We set \(q=f\in\mathcal{C}^{0}\) and \(p=0\) and we get the identity of (1), namely \(fq=p+f^{2m}\) for \(m=1\). A sufficient condition to fit into the hypotheses of this proposition is to assume nonnegativity of our element \(f\) on the whole real spectrum (not only on the central spectrum). Namely, one gets the following version of a central continuous Hilbert 17th property: **Theorem 6.4**.: _Let \(f\in A\). If \(f\geq 0\) on \(\operatorname{Spec}_{r}A\), then \(f\in\mathcal{C}^{0}\)._ Proof.: Assume \(f\geq 0\) on \(\operatorname{Spec}_{r}A\). By the formal Positivstellensatz we get an identity \(fq=p+f^{2m}\) with \(p,q\in\sum A^{2}\subset\mathcal{C}^{0}\). We conclude using Proposition 6.3. Using the Artin-Lang property, one may derive a geometric version of Theorem 6.4 (left to the reader) and of Proposition 6.3, namely: **Proposition 6.5**.: _Let \(V\) be an irreducible affine algebraic variety over \(R\) with \(V_{reg}(R)\neq\emptyset\). Let \(f\in R[V]\). The following properties are equivalent:_ 1. _There exist_ \(p,q\) _in_ \(R[V]\cap\sum\mathcal{K}^{0}(\operatorname{Cent}V(R))^{2}\) _such that_ \(fq=p+f^{2m}\)_._ 2. _There exist_ \(P,Q\in R[V]\cap\sum\mathcal{K}^{0}(\operatorname{Cent}V(R))^{2}\) _such that_ \(Q^{2}f=P\) _and_ \(\mathcal{Z}(Q)\cap\operatorname{Cent}V(R)\subset\mathcal{Z}(f)\cap\operatorname{Cent}V(R)\)_._ 3. \(f\in R[V]\cap\sum\mathcal{K}^{0}(\operatorname{Cent}V(R))^{2}\)_._ Let \(V\) be the Cartan umbrella with coordinate ring \(\mathbb{R}[V]=\mathbb{R}[x,y,z]/(x^{3}-z(x^{2}+y^{2}))\) and \(f=x^{2}+y^{2}-z^{2}\). As already discussed in Example 3.6, there are \(Q=x^{2}+y^{2}\in\mathcal{C}^{0}\) and \(P=3x^{4}y^{2}+3x^{2}y^{4}+y^{6}\in\mathcal{C}^{0}\) such that \(Q^{2}f=P\). Since \(\mathcal{Z}(Q)\cap\operatorname{Cent}V(\mathbb{R})\subset\mathcal{Z}(f)\cap\operatorname{Cent}V(\mathbb{R})\), one gets that \(f\in\sum\mathcal{K}^{0}(\operatorname{Cent}V(\mathbb{R}))^{2}\). Note that \(f\) is not nonnegative on all of \(V(\mathbb{R})\), which shows that the version of the central Hilbert 17th property given in Theorem 6.4 should be refined.
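For the reader's convenience, we add here the routine verification of the identity \(Q^{2}f=P\) used in this last example; only the defining relation \(x^{3}=z(x^{2}+y^{2})\) of \(\mathbb{R}[V]\) is needed: \[Q^{2}f=(x^{2}+y^{2})^{2}(x^{2}+y^{2}-z^{2})=(x^{2}+y^{2})^{3}-\left(z(x^{2}+y^{2})\right)^{2}=(x^{2}+y^{2})^{3}-x^{6}=3x^{4}y^{2}+3x^{2}y^{4}+y^{6}=P.\]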
2307.07139
Phase-field simulations for dripping-to-jetting transitions: Effects of low interfacial tension and bulk diffusion
The dripping-to-jetting transitions in coaxial flows have been experimentally well studied for systems of high interfacial tension, where the capillary number of the outer fluid and the Weber number of the inner fluid are in control. Recent experiments have shown that in systems of low interfacial tension, the transitions driven by the inner flow are no longer dominated by the inertial force alone, and the viscous drag force due to the inner flow is also quantitatively important. In the present work, we carry out numerical simulations based on the Cahn-Hilliard-Navier-Stokes model, aiming for a more complete and quantitative study that is needed for understanding the effects of interfacial tension when it becomes sufficiently low. The Cahn-Hilliard-Navier-Stokes model is solved by using an accurate and efficient spectral method in a cylindrical domain with axisymmetry, and numerical results obtained for jet and drop radii demonstrate the accuracy of our computation. Plenty of numerical examples are systematically presented to show the dripping-to-jetting transitions driven by the outer flow and inner flow respectively. In particular, for transitions dominated by inner flow, detailed results reveal how the magnitude of interfacial tension quantitatively determines the relative importance of the inertial and viscous forces due to the inner flow at the transition point. Our numerical results are found to be consistent with the experimental observation. Finally, the degree of bulk diffusion is varied to investigate its quantitative effect on the condition for the occurrence of transition. Such effect is expected for systems of ultralow interfacial tension where interfacial motion is more likely to be driven by bulk diffusion.
Fukeng Huang, Weizhu Bao, Tiezheng Qian
2023-07-14T03:35:47Z
http://arxiv.org/abs/2307.07139v1
Phase-Field Simulations for Dripping-to-Jetting Transitions: Effects of Low Interfacial Tension and Bulk Diffusion ###### Abstract The dripping-to-jetting transitions in coaxial flows have been experimentally well studied for systems of high interfacial tension, where the capillary number of the outer fluid and the Weber number of the inner fluid are in control. Recent experiments have shown that in systems of low interfacial tension, the transitions driven by the inner flow are no longer dominated by the inertial force alone, and the viscous drag force due to the inner flow is also quantitatively important. In the present work, we carry out numerical simulations based on the Cahn-Hilliard-Navier-Stokes model, aiming for a more complete and quantitative study to understand the effects of interfacial tension when it becomes sufficiently low. The Cahn-Hilliard-Navier-Stokes model is solved by using an accurate and efficient spectral method in a cylindrical domain with axisymmetry. Plenty of numerical examples are systematically presented to show the dripping-to-jetting transitions driven by the outer flow and inner flow respectively. In particular, for transitions dominated by inner flow, detailed results reveal how the magnitude of interfacial tension quantitatively determines the relative importance of the inertial and viscous forces due to the inner flow at the transition point. Our numerical results are found to be consistent with the experimental observation. Finally, the degree of bulk diffusion is varied to investigate its quantitative effect on the condition for the occurrence of transition. Such effect is expected for systems of ultralow interfacial tension where interfacial motion is more likely to be driven by bulk diffusion. + Footnote †: Corresponding author: [email protected] ## I Introduction Dripping and jetting in coaxial flows of two immiscible fluids refer to the phenomena in which two fluids are forced to flow through a cylindrical conduit, with one fluid, namely the inner fluid, flowing at the center and the other, namely the outer fluid, flowing around it in a coaxial manner. When the flow rates of both fluids are low, dripping occurs due to the capillary instability, with the inner fluid forming discrete drops close to the orifice. On the other hand, when the flow rates are high enough, jetting occurs, with the inner fluid forming a continuous jet that extends out of the orifice and breaks into drops further downstream. The dripping and jetting of coaxial flows of two immiscible fluids have many applications in ink jet printing, biomedical engineering, and materials engineering [1; 2], and the transition from dripping to jetting is of fundamental importance in these applications involving drop formation [3; 4]. Extensive research efforts have been made to study the dripping-to-jetting transition in coaxial geometry [5; 6; 7]. Among these works, Utada _et al._[5] demonstrated that the transitions in coflowing streams can be characterized by the capillary number of the outer fluid and the Weber number of the inner fluid. The dripping-to-jetting transitions have also been investigated in other geometries such as flow-focusing [8; 9] and T-junction [10; 11]. For a review summarizing the main observations and understandings for common device geometries, we refer to [12] and the references therein. To understand the hydrodynamics of the dripping-to-jetting transitions, most of the previous studies have focused on systems, e.g., oil-water ones, of high interfacial tension. 
For these systems, as demonstrated in [5], the dripping-to-jetting transitions can be described by a state diagram that is controlled by the capillary number of the outer fluid and the Weber number of the inner fluid. This means that for transitions driven by the inner fluid, the effect of the viscous force due to the inner flow is negligible. However, it has been shown experimentally in [13] that when the interfacial tension is sufficiently low, the dripping-to-jetting transitions driven by the inner fluid are no longer dominated by the inertial force alone, and the viscous force due to the inner flow also plays a quantitatively important role. Therefore, for a comprehensive understanding of the dripping-to-jetting transitions, a more complete and quantitative study is needed to investigate the effect of interfacial tension when it becomes sufficiently low. This will help clarify the relative importance of the inertial and viscous forces due to the inner flow in inducing the transitions. In addition, recent observations in aqueous two-phase systems with ultra-low surface tension have revealed novel and interesting pinch-off dynamics dominated by bulk diffusion [14]. To the best of our knowledge, the effect of bulk diffusion on the dripping-to-jetting transitions in systems of ultra-low interfacial tension has never been investigated before. The main purpose of the present work is to investigate the dripping-to-jetting transitions in coaxial flows over a wide range of interfacial tension and with variable bulk diffusion. Firstly, we aim to numerically observe and examine the dripping-to-jetting transitions driven by the outer and inner fluids, respectively. This is to provide a crucial indicator to distinguish jetting from dripping and hence locate the point of transition and the critical flow rate. Secondly, regarding the contributions of inertial and viscous forces due to the inner flow, we aim to establish a quantitative relationship between them at the point of transition over a wide range of interfacial tension. Last but not least, we will examine the quantitative effect of bulk diffusion on the critical flow rate at the point of transition. This will also show whether bulk diffusion can change the relative importance of the inertial and viscous forces due to the inner flow at the transition point. While numerous experimental studies have been carried out on the dripping-to-jetting transitions in coaxial flows, there have been very few works focusing on numerical simulations. Guillaument _et al._[15] utilized the one-fluid model and the volume of fluid method to simulate segmented micro coflows of CO\({}_{2}\) and water in two dimensions. Lei _et al._[16] employed the phase-field model to investigate two types of transitions driven by the outer flow and the inner flow in two dimensions. Shahin _et al._[17] simulated dripping and jetting in a coflowing system using a one-fluid model in three dimensions and developed a novel algorithm to handle the topological change of the interface mesh. To investigate the dripping-to-jetting transitions in immiscible two-phase flows, we will employ the Cahn-Hilliard-Navier-Stokes (CHNS) model and carry out numerical computation in a cylindrical domain with axisymmetry. 
Phase-field methods have been widely used in the simulations of interfacial motion in multiphase flows [18; 19; 20; 21; 22; 23] as they avoid the need for interface tracking and can easily and efficiently accommodate topological changes such as pinch-off, a key feature of the dripping and jetting phenomena. To the best of our knowledge, there has been no prior work that investigates the dripping-to-jetting transitions in three dimensions using the phase-field method. For the CHNS model employed here, a characteristic length scale has been introduced in [24] to measure the competition between diffusion and viscous flow in interfacial motion. Parameters involved in defining this length scale can be adjusted to tune the effect of bulk diffusion in the simulated system. Numerically, we solve the CHNS model by using the spectral method [25] for the spatial discretization and the pressure-correction method [26; 27] for the temporal discretization. These methods have been demonstrated to be accurate and efficient in treating the phase-field models in cylindrical domains [23; 27; 28]. This paper is organized as follows. In Sec. II.1, the CHNS model is derived by applying Onsager's variational principle [29; 30; 31]. In Sec. II.2, the dimensionless equation system is presented with important dimensionless parameters associated with the dripping-to-jetting transitions, and the simulated systems are described in a cylindrical domain with necessary boundary conditions for the inner and outer flows with adjustable flow rates. In Sec. III, numerical results are presented to show the distinction between dripping and jetting in the regime dominated by the outer flow and that dominated by the inner flow, respectively. Furthermore, in the regime dominated by the inner flow, the relative importance of the inertial and viscous forces at the transition point is investigated over a wide range of interfacial tension, with numerical results showing agreement with recent experiments. Finally, the quantitative effect of bulk diffusion on the critical flow rates at the transition point is also measured. In Sec. IV, the paper is concluded with a few remarks.

## II Modeling and Simulation for Immiscible Two-Phase Flows

### The Cahn-Hilliard-Navier-Stokes model

Consider a multi-component fluid with two co-existing immiscible phases. A diffuse-interface model uses a Ginzburg-Landau-type free energy functional to describe the thermodynamic properties of the fluid. Here we use the Cahn-Hilliard (CH) free energy functional [32] \[F_{\rm CH}[\phi]=\int\left[\frac{K}{2}\left(\nabla\phi\right)^{2}+f(\phi)\right]d{\bf r}, \tag{1}\] in which \(\phi:=\phi({\bf r})\) is the phase-field variable to measure the local relative concentration, \(f(\phi)\) is the Helmholtz free energy density for a homogeneous phase, and \(K\) is a positive material parameter. The free energy density \(f\) is given by \(f(\phi)=-\frac{\alpha}{2}\phi^{2}+\frac{\beta}{4}\phi^{4}\), which has a double-well structure to stabilize the fluid-fluid interface between the two co-existing phases around \(\phi_{\pm}=\pm\phi_{0}=\pm\sqrt{\frac{\alpha}{\beta}}\), where \(\alpha\) and \(\beta\) are two positive parameters. Subject to appropriate boundary conditions, \(F_{\rm CH}[\phi]\) can be minimized to stabilize a flat interface between the two equilibrium phases of \(\phi=\pm\phi_{0}\). 
The interfacial structure gives the interfacial tension \(\gamma=\frac{2\sqrt{2}\alpha^{2}\xi}{3\beta}\) and the characteristic length scale \(\xi=\sqrt{\frac{K}{\alpha}}\) for the interfacial thickness [29]. Note that in much of the literature, \(\phi_{0}\) is made equal to 1 through a rescaling. Here \(\phi_{0}=\sqrt{\frac{\alpha}{\beta}}\) is purposely retained to measure the distance away from the critical point where \(\phi_{0}\) vanishes. For an incompressible fluid, the velocity field \({\bf v}\) is subject to the incompressibility condition \(\nabla\cdot{\bf v}=0\), and the phase field \(\phi\) satisfies the continuity equation \[\frac{\partial\phi}{\partial t}=-\nabla\cdot{\bf J}=-\nabla\cdot\left(\phi{\bf v}+{\bf j}\right), \tag{2}\] where \({\bf J}=\phi{\bf v}+{\bf j}\) is the total current density, in which \(\phi{\bf v}\) is contributed by the flow and \({\bf j}\) is the diffusive current density contributed by the bulk diffusion. Hydrodynamic equations for immiscible two-phase flows can be derived by applying Onsager's variational principle (cf. appendix A in [24]) as follows. The Rayleighian \(\mathcal{R}\) is given by \(\mathcal{R}=\dot{F}_{\rm CH}+\Phi\) in the bulk region. Here \(\dot{F}_{\rm CH}\) is the rate of change of \(F_{\rm CH}[\phi]\), given by \[\dot{F}_{\rm CH}[\phi]=\int\mu\frac{\partial\phi}{\partial t}d{\bf r}=\int\nabla\mu\cdot\left(\phi{\bf v}+{\bf j}\right)d{\bf r}, \tag{3}\] in which \(\mu=\frac{\delta F_{\rm CH}}{\delta\phi}\) is the chemical potential, given by \(\mu=-K\nabla^{2}\phi+f^{\prime}(\phi)\), and the continuity equation (2) has been used with the impermeability conditions for \({\bf v}\) and \({\bf j}\) at the solid boundary. The other part in \(\mathcal{R}\) is the dissipation functional \(\Phi\), which is half the rate of free energy dissipation and is given by \[\Phi=\int\frac{\eta}{4}\left[\nabla{\bf v}+\left(\nabla{\bf v}\right)^{T}\right]^{2}d{\bf r}+\int\frac{{\bf j}^{2}}{2M}d{\bf r}, \tag{4}\] which is contributed by the viscous dissipation, with \(\eta\) being the shear viscosity, and the diffusive dissipation, with \(M\) being the mobility coefficient. Subject to the incompressibility condition, the Rayleighian can be minimized with respect to the rates \(\mathbf{v}\) and \(\mathbf{j}\). This gives the force balance equation \[-\nabla p+\nabla\cdot\mathbf{\sigma}_{\rm visc}-\phi\nabla\mu=0 \tag{5}\] for \(\mathbf{v}\), and the constitutive equation \[\mathbf{j}=-M\nabla\mu \tag{6}\] for \(\mathbf{j}\). Here \(p\) is the pressure, which is the Lagrange multiplier to locally impose \(\nabla\cdot\mathbf{v}=0\), and \(\mathbf{\sigma}_{\rm visc}\) is the Newtonian stress tensor given by \(\mathbf{\sigma}_{\rm visc}=\eta\left[\nabla\mathbf{v}+\left(\nabla\mathbf{v}\right)^{T}\right]\). Equation (5) is the Stokes equation with the capillary force density, and it can be readily generalized to the Navier-Stokes equation \[\rho\left[\frac{\partial\mathbf{v}}{\partial t}+\left(\mathbf{v}\cdot\nabla\right)\mathbf{v}\right]=-\nabla p+\nabla\cdot\mathbf{\sigma}_{\rm visc}-\phi\nabla\mu. \tag{7}\] Combining equations (2) and (6) gives the advection-diffusion equation for the phase field \(\phi\): \[\frac{\partial\phi}{\partial t}+\mathbf{v}\cdot\nabla\phi=-\nabla\cdot\mathbf{j}=M\nabla^{2}\mu, \tag{8}\] which is the CH equation for a constant mobility \(M\). Equations (7) and (8) govern the hydrodynamics of immiscible two-phase flows. In the present work, the simplest situation is treated, with the two fluids having equal density, equal viscosity and equal mobility. 
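As a quick numerical illustration of these relations (a minimal standalone Python sketch with arbitrarily chosen values of \(\alpha\), \(\beta\) and \(K\), not the parameters used later in the paper), one can verify that the one-dimensional equilibrium profile \(\phi(x)=\phi_{0}\tanh\big(x/(\sqrt{2}\xi)\big)\) makes the chemical potential vanish and that its excess free energy per unit area reproduces \(\gamma=\frac{2\sqrt{2}}{3}\alpha\phi_{0}^{2}\xi\):

```python
import numpy as np

# Illustrative material parameters (assumed values, not taken from the paper)
alpha, beta, K = 1.0, 1.0, 1e-4

phi0 = np.sqrt(alpha / beta)      # equilibrium phase values +/- phi0
xi = np.sqrt(K / alpha)           # interfacial thickness
gamma_formula = 2.0 * np.sqrt(2.0) / 3.0 * alpha * phi0**2 * xi

# 1D equilibrium profile phi(x) = phi0 * tanh(x / (sqrt(2) * xi))
x = np.linspace(-20 * xi, 20 * xi, 20001)
phi = phi0 * np.tanh(x / (np.sqrt(2.0) * xi))

# Chemical potential mu = -K phi'' - alpha phi + beta phi^3 should vanish
dphi = np.gradient(phi, x)
d2phi = np.gradient(dphi, x)
mu = -K * d2phi - alpha * phi + beta * phi**3
print("max |mu| over the profile:", np.max(np.abs(mu)))   # ~0 up to finite-difference error

# Excess free energy per unit area of the flat interface = interfacial tension
f = lambda p: -0.5 * alpha * p**2 + 0.25 * beta * p**4
excess = 0.5 * K * dphi**2 + f(phi) - f(phi0)
gamma_numeric = float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(x)))  # trapezoidal rule
print("gamma (numerical):", gamma_numeric)
print("gamma (formula)  :", gamma_formula)
```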
### Dimensionless equations and simulated systems

Numerical simulations are carried out by solving the CHNS system: \[\frac{\partial\phi}{\partial t}+\mathbf{v}\cdot\nabla\phi=M\nabla^{2}\mu, \tag{9a}\] \[\mu=-K\nabla^{2}\phi-\alpha\phi+\beta\phi^{3}, \tag{9b}\] \[\rho\big{(}\frac{\partial\mathbf{v}}{\partial t}+\mathbf{v}\cdot\nabla\mathbf{v}\big{)}=-\nabla p+\eta\nabla^{2}\mathbf{v}+\mu\nabla\phi, \tag{9c}\] \[\nabla\cdot\mathbf{v}=0, \tag{9d}\] in a cylindrical domain \(\Omega=\{\mathbf{r}=(x,y,z):x^{2}+y^{2}<L^{2},z\in(0,H)\}\). Here \(M\), \(K\), \(\alpha\), \(\beta\), \(\rho\), and \(\eta\) are material parameters introduced in Sec. II.1. Note that the pressure \(p\) in equation (9c) is different from that in equation (7), with \(-\phi\nabla\mu\) there being replaced by \(\mu\nabla\phi\) here. The boundary conditions on \(x^{2}+y^{2}=L^{2}\) are \[\frac{\partial\phi}{\partial\mathbf{n}}=0,\quad\frac{\partial\mu}{\partial\mathbf{n}}=0,\quad\mathbf{v}=0. \tag{10}\] In our simulations, two immiscible phases flow into the cylinder on the boundary \(z=0\) and out of the cylinder on the boundary \(z=H\). The boundary conditions there for \(\phi\) and \(\mu\) are given by \[\phi=\phi_{0}\tanh\Big{(}\frac{r-R}{\sqrt{2}\xi}\Big{)},\quad\mu=0, \tag{11}\] on \(z=0\), with \(r=\sqrt{x^{2}+y^{2}}\), \(R\) being the radius of the inner tube, and \[\frac{\partial\phi}{\partial\mathbf{n}}=0,\quad\frac{\partial\mu}{\partial\mathbf{n}}=0, \tag{12}\] on \(z=H\). Finally, the boundary conditions for \(\mathbf{v}:=(v_{x},v_{y},v_{z})\) on \(z=0\) and \(z=H\) are given by \[v_{x}=v_{y}=0,\quad v_{z}=\left\{\begin{array}{ll}a(R^{2}-r^{2}),&0<r<R,\\ -b(r^{2}-R^{2})+\frac{b(L^{2}-R^{2})}{\ln\frac{L}{R}}\ln\frac{r}{R},&R\leq r<L,\end{array}\right. \tag{13}\] on \(z=0\), where \(a\) and \(b\) are the parameters determining the mean velocities (i.e., flow rates) of the inner and outer phases, respectively, and \[v_{x}=v_{y}=0,\quad v_{z}=c(L^{2}-r^{2}), \tag{14}\] on \(z=H\), where \(c\) is the parameter determining the mean velocity of the flow out of the cylinder. Note that these flow profiles are based on the Poiseuille profile, and the parameters \(a\), \(b\) and \(c\) satisfy \[aR^{4}+b(L^{4}-R^{4})-b\frac{(L^{2}-R^{2})^{2}}{\ln\frac{L}{R}}=cL^{4}, \tag{15}\] for the volume conservation. To nondimensionalize the above system, we use the radius \(L\) of the computational domain \(\Omega\) as the length unit, \(u=\frac{\gamma}{\eta}\) as the velocity unit, \(\tau=\frac{L}{u}\) as the time unit, and \(p_{0}=\frac{\eta}{\tau}\) as the pressure unit. 
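As a consistency check on the inflow and outflow profiles (13)-(15), the short sketch below (with illustrative values of \(a\), \(b\), \(R\) and \(L\); the quadrature is used only for verification and is unrelated to the spectral solver employed in the paper) integrates the prescribed velocity profiles over the inlet and outlet cross-sections and confirms that the value of \(c\) given by equation (15) balances the two fluxes:

```python
import numpy as np
from scipy.integrate import quad

# Assumed illustrative values; only the relation between a, b and c matters here.
R, L = 0.1, 1.0
a, b = 15.0, 0.5

# c determined from the volume-conservation relation (15)
c = (a * R**4 + b * (L**4 - R**4) - b * (L**2 - R**2)**2 / np.log(L / R)) / L**4

def vz_inlet(r):
    """Axial velocity prescribed at z = 0 (inner Poiseuille + outer annular profile)."""
    if r < R:
        return a * (R**2 - r**2)
    return -b * (r**2 - R**2) + b * (L**2 - R**2) / np.log(L / R) * np.log(r / R)

def vz_outlet(r):
    """Axial velocity prescribed at z = H (single Poiseuille profile)."""
    return c * (L**2 - r**2)

flux_in, _ = quad(lambda r: 2.0 * np.pi * r * vz_inlet(r), 0.0, L, points=[R])
flux_out, _ = quad(lambda r: 2.0 * np.pi * r * vz_outlet(r), 0.0, L)
print(flux_in, flux_out)   # the two fluxes agree, verifying Eq. (15)
```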
We also define the following quantities:

* \(\bar{H}=\frac{H}{L}\) as the dimensionless length of the computational domain,
* \(\bar{R}=\frac{R}{L}\) as the dimensionless radius of the inner tube,
* \(\phi_{0}=\sqrt{\frac{\alpha}{\beta}}\), with the two equilibrium phases separated by a flat interface being \(\phi=\pm\phi_{0}\),
* \(\epsilon=\frac{\xi}{L}=\frac{1}{L}\sqrt{\frac{K}{\alpha}}\) as the dimensionless interfacial thickness of the diffuse interface,
* \(D=2M\alpha\) as the diffusion coefficient for \(\phi\) close to \(\pm\phi_{0}\) far away from the interface,
* \(l_{c}=\frac{\sqrt{M\eta}}{\phi_{0}}\) as the characteristic length scale, determined from the competition between diffusion and viscous flow [24],
* \(\gamma=\frac{2\sqrt{2}}{3}\alpha\phi_{0}^{2}\xi\) as the interfacial tension,
* \(Re_{\gamma}=\frac{\rho uL}{\eta}\) as the Reynolds number defined from the velocity unit \(u=\frac{\gamma}{\eta}\) and the length unit \(L\),
* \(B=\frac{\eta D}{\alpha\phi_{0}^{2}L^{2}}=\frac{2l_{c}^{2}}{L^{2}}\) as the dimensionless parameter measuring the characteristic length scale \(l_{c}\) with respect to \(L\).

Dimensionless variables, denoted by an overbar, are defined as \(\bar{\phi}=\frac{\phi}{\phi_{0}}\), \(\bar{\mathbf{v}}=\frac{\mathbf{v}}{u}\), \(\bar{\mu}=\frac{\mu}{\alpha\phi_{0}\epsilon}\), and \(\bar{p}=\frac{p}{p_{0}}\), together with the dimensionless operators \(\frac{\partial}{\partial\bar{t}}=\tau\frac{\partial}{\partial t}\) and \(\bar{\nabla}=L\nabla\). Using the above definitions, we obtain the dimensionless CHNS system in the cylindrical domain \(\bar{\Omega}=\{(\bar{x},\bar{y},\bar{z}):\bar{x}^{2}+\bar{y}^{2}<1,\bar{z}\in(0,\bar{H})\}\) as \[\frac{\partial\bar{\phi}}{\partial\bar{t}}+\bar{\mathbf{v}}\cdot\bar{\nabla}\bar{\phi}=\frac{3}{4\sqrt{2}}B\bar{\nabla}^{2}\bar{\mu}, \tag{16a}\] \[\bar{\mu}=-\epsilon\bar{\nabla}^{2}\bar{\phi}+\frac{1}{\epsilon}(-\bar{\phi}+\bar{\phi}^{3}), \tag{16b}\] \[Re_{\gamma}\Big{(}\frac{\partial\bar{\mathbf{v}}}{\partial\bar{t}}+\bar{\mathbf{v}}\cdot\bar{\nabla}\bar{\mathbf{v}}\Big{)}=-\bar{\nabla}\bar{p}+\bar{\nabla}^{2}\bar{\mathbf{v}}+\frac{3}{2\sqrt{2}}\bar{\mu}\bar{\nabla}\bar{\phi}, \tag{16c}\] \[\bar{\nabla}\cdot\bar{\mathbf{v}}=0. \tag{16d}\] The boundary conditions are \[\frac{\partial\bar{\phi}}{\partial\mathbf{n}}=0,\quad\frac{\partial\bar{\mu}}{\partial\mathbf{n}}=0,\quad\bar{\mathbf{v}}=0, \tag{17}\] on \(\bar{x}^{2}+\bar{y}^{2}=1\), \[\bar{\mu}=0,\quad\bar{\phi}=\tanh\Big{(}\frac{\bar{r}-\bar{R}}{\sqrt{2}\epsilon}\Big{)},\quad\bar{v}_{x}=\bar{v}_{y}=0,\quad\bar{v}_{z}=\left\{\begin{array}{ll}\bar{a}(\bar{R}^{2}-\bar{r}^{2}),&0<\bar{r}<\bar{R},\\ -\bar{b}(\bar{r}^{2}-\bar{R}^{2})+\frac{\bar{b}(1-\bar{R}^{2})}{\ln\frac{1}{\bar{R}}}\ln\frac{\bar{r}}{\bar{R}},&\bar{R}\leq\bar{r}<1,\end{array}\right. \tag{18}\] on \(\bar{z}=0\) with \(\bar{r}=\sqrt{\bar{x}^{2}+\bar{y}^{2}}\), and \[\frac{\partial\bar{\phi}}{\partial\mathbf{n}}=0,\quad\frac{\partial\bar{\mu}}{\partial\mathbf{n}}=0,\quad\bar{v}_{x}=\bar{v}_{y}=0,\quad\bar{v}_{z}=\bar{c}(1-\bar{r}^{2}), \tag{19}\] on \(\bar{z}=\bar{H}\), with the dimensionless parameters \(\bar{a}\), \(\bar{b}\) and \(\bar{c}\) satisfying \[\bar{a}\bar{R}^{4}+\bar{b}(1-\bar{R}^{4})-\bar{b}\frac{(1-\bar{R}^{2})^{2}}{\ln\frac{1}{\bar{R}}}=\bar{c}. \tag{20}\] Here the dimensionless \(\bar{a}\), \(\bar{b}\) and \(\bar{c}\) are obtained by multiplying the dimensional ones by \(\frac{L^{2}}{u}\). 
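The dimensionless groups defined above follow directly from the material parameters. The helper below is only an illustration of the definitions; the input values are placeholders rather than those used in the simulations:

```python
import numpy as np

def dimensionless_groups(alpha, beta, K, M, eta, rho, L):
    """Evaluate the groups defined in Sec. II.2 from the material parameters."""
    phi0 = np.sqrt(alpha / beta)              # equilibrium order parameter
    xi = np.sqrt(K / alpha)                   # interfacial thickness
    gamma = 2.0 * np.sqrt(2.0) / 3.0 * alpha * phi0**2 * xi   # interfacial tension
    u = gamma / eta                           # velocity unit
    eps = xi / L                              # dimensionless interfacial thickness
    D = 2.0 * M * alpha                       # bulk diffusion coefficient
    l_c = np.sqrt(M * eta) / phi0             # diffusion length scale
    Re_gamma = rho * u * L / eta              # Reynolds number based on u = gamma/eta
    B = 2.0 * l_c**2 / L**2                   # = eta*D/(alpha*phi0^2*L^2)
    return dict(phi0=phi0, xi=xi, gamma=gamma, u=u, eps=eps,
                D=D, l_c=l_c, Re_gamma=Re_gamma, B=B)

# Placeholder values chosen only so that eps and B come out small, as in the text.
print(dimensionless_groups(alpha=1.0, beta=1.0, K=1e-4, M=1e-4, eta=1.0, rho=500.0, L=1.0))
```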
The above dimensionless CHNS system involves the dimensionless parameters \(\epsilon\), \(Re_{\gamma}\), \(B\), \(\bar{R}\), \(\bar{H}\), \(\bar{a}\), and \(\bar{b}\). Here \(\epsilon\) is the dimensionless interfacial thickness, which is the smallest length to be resolved, \(Re_{\gamma}\) is the Reynolds number defined from the velocity unit \(u=\frac{\gamma}{\eta}\), \(B\) controls the competition between bulk diffusion and viscous flow, \(\bar{R}\) measures the size of the orifice (i.e., the radius of the inner tube), \(\bar{H}\) measures the length of the computational domain, and \(\bar{a}\) and \(\bar{b}\) control the flow rates of the inner and outer fluids. From our three-dimensional (3D) simulations, it is verified that given an axisymmetric initial condition in the cylindrical domain, the axisymmetry can be accurately preserved during the whole dynamic process. Therefore, in the absence of any evidence for non-axisymmetric modes, we treat the axisymmetric 3D problem as a reduced two-dimensional (2D) problem by making use of the cylindrical coordinates to improve the computational efficiency [24]. Technically, we first transform the 3D problem into a 2D problem using the cylindrical coordinates [25]. We then adopt the usual semi-implicit scheme to solve the phase-field variable and the spectral-projection method to solve the velocity and pressure fields for the Navier-Stokes equation in cylindrical geometry [27]. At each time step, we can efficiently solve a series of Poisson-type equations with constant coefficients.

## III Results and discussion

With the dimensionless parameters introduced in the previous section, the average velocity of the inner flow \(\bar{v}_{\text{in}}\) and that of the outer flow \(\bar{v}_{\text{out}}\) are given by \[\bar{v}_{\text{in}}=\frac{1}{2}\bar{a}\bar{R}^{2},\quad\bar{v}_{\text{out}}=\frac{\bar{b}}{2}(1+\bar{R}^{2})+\frac{\bar{b}}{2}\frac{1-\bar{R}^{2}}{\ln\bar{R}}. \tag{21}\] Using \(\bar{v}_{\text{in}}\) and \(\bar{v}_{\text{out}}\), the capillary number of the outer flow \(\mathcal{C}_{\text{out}}\) and the Weber number of the inner flow \(\mathcal{W}_{\text{in}}\) can be expressed as \[\mathcal{C}_{\text{out}}=\bar{v}_{\text{out}},\quad\mathcal{W}_{\text{in}}=\bar{v}_{\text{in}}^{2}Re_{\gamma}\bar{R}. \tag{22}\] Here \(\mathcal{C}_{\text{out}}\) is defined by \(\mathcal{C}_{\text{out}}=\frac{\eta(\bar{v}_{\text{out}}u)}{\gamma}\), and \(\mathcal{W}_{\text{in}}\) is defined by \(\mathcal{W}_{\text{in}}=\frac{\rho(\bar{v}_{\text{in}}u)^{2}R}{\gamma}\), where \(\bar{v}_{\text{in}}u\) and \(\bar{v}_{\text{out}}u\) are the dimensional average velocities with \(u\) being the velocity unit. Physically, the capillary number measures the viscous drag force, and the Weber number measures the inertial force relative to the interfacial tension force. It has been well established that there are two classes of dripping-to-jetting transitions in coflowing streams [5]. The first one is driven by strong outer flows and will be numerically investigated in Section III.1 by fixing a small \(\bar{v}_{\rm in}\) and varying \(\bar{v}_{\rm out}\). The second one is driven by strong inner flows, and will be numerically investigated in Section III.2 by fixing a small \(\bar{v}_{\rm out}\) and varying \(\bar{v}_{\rm in}\). 
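Equations (21) and (22) make it straightforward to translate the control parameters \(\bar{a}\), \(\bar{b}\), \(\bar{R}\) and \(Re_{\gamma}\) into the capillary and Weber numbers. The sketch below (an independent illustration, not part of the simulation code) reproduces, for example, the values \(\mathcal{C}_{\text{out}}\approx 0.029\) and \(\mathcal{W}_{\text{in}}=0.72\) quoted in Sec. III for \(\bar{R}=0.1\), \(\bar{b}=0.1\), \(\bar{a}=24\) and \(Re_{\gamma}=500\):

```python
import numpy as np

def mean_velocities(a_bar, b_bar, R_bar):
    """Average inner/outer velocities from Eq. (21)."""
    v_in = 0.5 * a_bar * R_bar**2
    v_out = 0.5 * b_bar * (1.0 + R_bar**2) + 0.5 * b_bar * (1.0 - R_bar**2) / np.log(R_bar)
    return v_in, v_out

def control_numbers(a_bar, b_bar, R_bar, Re_gamma):
    """Capillary and Weber numbers from Eq. (22)."""
    v_in, v_out = mean_velocities(a_bar, b_bar, R_bar)
    C_out = v_out                      # capillary number of the outer flow
    C_in = v_in                        # capillary number of the inner flow
    W_in = v_in**2 * Re_gamma * R_bar  # Weber number of the inner flow
    return C_out, C_in, W_in

print(control_numbers(a_bar=24.0, b_bar=0.1, R_bar=0.1, Re_gamma=500.0))
# -> C_out ~ 0.029, C_in = 0.12, W_in = 0.72
```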
In this regime, our numerical results show that in addition to the inertial force measured by \(\mathcal{W}_{\text{in}}\), the viscous force due to the inner flow, measured by the capillary number \(\mathcal{C}_{\text{in}}=\bar{v}_{\text{in}}\), also contributes to the occurrence of dripping-to-jetting transition when the interfacial tension is sufficiently low. This numerical observation is in agreement with recent experiments [13]. Finally, Section III.3 demonstrates the quantitative effect of bulk diffusion on the critical flow rates at the transition point. Such effect is expected for systems of ultralow interfacial tension where interfacial motion is more likely to be driven by bulk diffusion [14]. ### Transitions dominated by outer flows In this subsection, we investigate the first class of dripping-to-jetting transitions driven by strong outer flows. For this purpose, the value of \(\bar{v}_{\rm in}\) is fixed to be small, and the value of \(\bar{v}_{\rm out}\) is increased to induce the transition. We start by demonstrating the dripping-to-jetting transitions in the regime dominated by strong outer flows. Let \(Z_{p}\) denote the distance between the pinch-off position and the boundary of \(\bar{z}=0\) (the orifice). For a slow inner flow with \(\bar{a}=15\) being fixed, \(Z_{p}\) is expected to increase with the increasing outer flow rate, i.e., the increasing \(\bar{b}\). Figure 1 shows two different pinch-off positions for two different outer flow rates. It is clearly observed from figure 2 that \(Z_{p}\) exhibits a sharp increase from \(\bar{b}=0.575\) (figure 1(a)) to \(\bar{b}=0.6\) (figure 1(b)), indicating a transition from a dripping state to a jetting state as \(\mathscr{C}_{\rm out}\) is increased from 0.1450 to 0.2030. This critical magnitude of \(\mathscr{C}_{\rm out}\) is in agreement with the experimental results in [5]. According to the state diagram reported in [5], for the dripping-to-jetting transitions dominated by outer flows, the critical values of \(\mathscr{C}_{\rm out}\) are typically distributed between 0.2 and 0.4. A jetting state maintained by a strong outer flow is characterized by a long, narrow jet and small drops [5]. In fact, a stronger outer flow results in a narrower jet and smaller drops. Here we present some quantitative results on the relationship between the outer flow rate and the size of the corresponding jet, i.e., the radius of the jet. To obtain reliable data, we have ensured that the jets are long and wide enough by using values of \(\bar{b}\) and \(\bar{R}\) that are sufficiently large. Figure 3(a) presents a jetting state obtained from our simulations, and figure 3(b) shows the dependence of the jet radius \(r_{j}\) on the outer flow rate (\(\bar{v}_{\rm out}\propto\bar{b}\)), with the jet radius \(r_{j}\) being measured at the plane of \(\bar{z}=\frac{\bar{a}}{2}\). When \(\bar{a}\) and \(\bar{R}\) are both fixed, the total flux of inner fluid is given, and a faster outer flow (with a larger \(\bar{b}\)) leads to a thinner jet in which the inner fluid flows with a larger average velocity (\(\propto\bar{b}\)). According to mass conservation, \(\bar{b}r_{j}^{2}\) must be a constant in order to maintain the total flux of inner fluid, as shown in figure 3(b). We can measure both the jet diameter \(d_{\rm jet}\) and the drop diameter \(d_{\rm drop}\) in the jetting regime. 
From these two diameters, we obtain \(\lambda_{f}\), the wavelength of the fastest growing mode of the Rayleigh-Plateau instability, through the relation \(\frac{\pi}{4}d_{\rm jet}^{2}\lambda_{f}=\frac{\pi}{6}d_{\rm drop}^{3}\) for the drop volume. Using the simulation results shown in figure 3(a), we obtain \(d_{\rm drop}\approx 2d_{\rm jet}\) and hence \(\lambda_{f}\approx 5.3d_{\rm jet}\), which is in the physically reasonable range. It is noted that for dripping-to-jetting transitions dominated by outer flows, \(d_{\rm drop}\approx 2d_{\rm jet}\) has been experimentally observed [5; 13].

Figure 1: Two different pinch-off positions for two different outer flow rates. (a) A dripping state for \(\bar{b}=0.575\). (b) A jetting state for \(\bar{b}=0.6\). Other parameter values used in simulations are \(\epsilon=0.01\), \(Re_{\gamma}=500\), \(B=0.0002\), \(\bar{R}=0.1\), \(\bar{H}=6\), and \(\bar{a}=15\).

Figure 2: Variation of the pinch-off position \(Z_{p}\) with the parameter \(\bar{b}\) which controls the outer flow rate. A transition is noted to occur between \(\bar{b}=0.575\) (dripping in figure 1(a)) and \(\bar{b}=0.6\) (jetting in figure 1(b)). Other parameter values used in simulations are \(\epsilon=0.01\), \(Re_{\gamma}=500\), \(B=0.0002\), \(\bar{R}=0.1\), \(\bar{H}=6\), and \(\bar{a}=15\).

### Transitions dominated by inner flows

In this subsection, we investigate the second class of dripping-to-jetting transitions driven by strong inner flows. For this purpose, the value of \(\bar{v}_{\rm out}\) is fixed to be small, and the value of \(\bar{v}_{\rm in}\) is increased to induce the transition. For \(\bar{R}=0.1\), we have \(\bar{v}_{\rm out}=0.29\bar{b}\) and \(\mathcal{C}_{\rm out}=0.029\) for the typical value 0.1 used for \(\bar{b}\). We start with the drop size in the dripping regime. When \(\bar{v}_{\rm in}\) is not large enough, the system is in a dripping state in which drops of the same size are periodically generated at the same pinch-off position. From the periodic dynamics and mass conservation, we obtain \(\frac{4}{3}\pi(\frac{d_{e}}{2})^{3}=\pi\bar{R}^{2}\bar{v}_{\rm in}t_{p}\), where \(\bar{R}\) is the radius of the orifice, \(\bar{v}_{\rm in}\) is the average velocity of the inner fluid, \(t_{p}\) is the time period of the periodic generation of drops, and \(d_{e}\) is the diameter of the drops expected from mass conservation. Figure 4(a) shows a comparison between the expected diameter \(d_{e}\) and the diameter \(d_{m}\) which is measured in our numerical simulations. It is noted that in each simulation, \(d_{m}\) is slightly smaller than \(d_{e}\) expected from mass conservation. This is attributed to the bulk diffusion which continuously reduces the size of drops. To understand how the drop size is controlled by the inner and outer flows, we show that the time period \(t_{p}\) can be related to the drop diameter \(d\) as follows: \[t_{p}\approx\frac{\kappa d}{\bar{v}_{\text{out}}+\nu\frac{\bar{R}^{2}}{d^{2}}\bar{v}_{\text{in}}}, \tag{23}\] where \(\kappa\) and \(\nu\) are two adjustable parameters of the order of magnitude of 1, and \(\bar{v}_{\text{in}}\) and \(\bar{v}_{\text{out}}\) have been defined in (21). For \(\bar{R}=0.1\), we have \[t_{p}\approx\frac{\kappa d}{(0.29\bar{b}+\nu\frac{10^{-4}}{2d^{2}}\bar{a})}, \tag{24}\] which has been numerically verified by figure 4(b) in which the measured diameter \(d_{m}\) is used for the drop diameter \(d\). 
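Equation (24) and the mass-conservation relation can be combined into a small self-consistent estimate of the drop diameter and generation period in the dripping regime. The sketch below uses the fitted values \(\kappa=3.1\) and \(\nu=3\) quoted in the caption of figure 4 and solves the two relations by fixed-point iteration; the chosen \(\bar{a}\) and \(\bar{b}\) are illustrative dripping-regime values, and the procedure is an interpretation of the text rather than the authors' data analysis:

```python
import numpy as np

def drop_size_and_period(a_bar, b_bar, R_bar=0.1, kappa=3.1, nu=3.0, n_iter=200):
    """Solve Eq. (23) together with (4/3)*pi*(d/2)^3 = pi*R^2*v_in*t_p by fixed-point iteration."""
    v_in = 0.5 * a_bar * R_bar**2
    v_out = 0.5 * b_bar * (1.0 + R_bar**2) + 0.5 * b_bar * (1.0 - R_bar**2) / np.log(R_bar)
    d = 2.0 * R_bar                                               # initial guess for the drop diameter
    for _ in range(n_iter):
        t_p = kappa * d / (v_out + nu * R_bar**2 * v_in / d**2)   # Eq. (23)
        d = (6.0 * R_bar**2 * v_in * t_p) ** (1.0 / 3.0)          # mass conservation
    return d, t_p

d, t_p = drop_size_and_period(a_bar=15.0, b_bar=0.1)
print(f"expected drop diameter d_e ~ {d:.3f}, generation period t_p ~ {t_p:.1f}")
```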
Physically, equation (23) describes the advection of a growing drop, with the advected distance being \(\sim d\) and the velocity being \(\sim\bar{v}_{\text{out}}+\nu\frac{R^{2}}{d^{2}}\bar{v}_{\text{in}}\), in which the contribution of \(\bar{v}_{\text{in}}\) is rescaled by a factor \(\sim\frac{R^{2}}{d^{2}}\). Equation (24) is then obtained by using equation (21) to express \(\bar{v}_{\text{in}}\) and \(\bar{v}_{\text{out}}\) for \(\bar{R}=0.1\). From our simulation results, the data points in figure 4(b) are produced by using optimal values for the adjustable parameters \(\kappa\) and \(\nu\) to best fit the solid line representing equation (24). Furthermore, it is seen from the inset to figure 4(b) that the contribution of \(\bar{v}_{\text{out}}\) is much larger than that of \(\nu\frac{R^{2}}{d^{2}}\bar{v}_{\text{in}}\) in equation (23), i.e., the contribution of \(0.29\bar{b}\) is much larger than that of \(\nu\frac{10^{-4}}{2d^{2}}\bar{a}\) in equation (24) for \(\bar{R}=0.1\). This means that for the advection of a growing drop, the distance is typically \(\sim d\), and the velocity is predominantly \(\sim\bar{v}_{\text{out}}\). It follows that the time period \(t_{p}\) of drop generation is \(\sim\frac{d}{\bar{v}_{\text{out}}}\). Combining \(t_{p}\sim\frac{d}{\bar{v}_{\text{out}}}\) and \(\frac{4}{3}\pi(\frac{d}{2})^{3}=\pi\bar{R}^{2}\bar{v}_{\text{in}}t_{p}\) from mass conservation, we have \(d\sim\bar{R}\sqrt{\frac{\bar{v}_{\text{in}}}{\bar{v}_{\text{out}}}}\), which has been experimentally verified [5]. Now we focus on the dripping-to-jetting transitions dominated by inner flows. Same as done in the previous subsection, we use \(Z_{p}\) to denote the distance between the pinch-off position and the boundary of \(\bar{z}=0\) (the orifice). For a slow outer flow fixed at \(\bar{b}=0.1,Z_{p}\) is expected to increase with the increasing inner flow rate, i.e., the increasing \(\bar{a}\). Figure 5(a) shows the pinch-off position in a dripping state for \(\bar{a}=24\) just before the transition, and figure 5(b) shows the pinch-off position in a jetting state for \(\bar{a}=25\) just after the transition. It is clearly observed that there is a sharp increase of \(Z_{p}\) from figure 5(a) to 5(b), indicating the occurrence of a dripping-to-jetting transition. Here the value of \(Re_{\gamma}\) is 500, and we have \(\mathcal{W}_{\text{in}}=0.72\) and \(\mathcal{C}_{\text{in}}=0.12\) for \(\bar{a}=24\), and \(\mathcal{W}_{\text{in}}=0.781\) and \(\mathcal{C}_{\text{in}}=0.125\) for \(\bar{a}=25\). It is noted that the value of \(Re_{\gamma}\) used here is large enough to let \(\mathcal{W}_{\text{in}}\) be in control, with \(\mathcal{C}_{\text{in}}\) being less important. It is also noted that the critical magnitude of \(\mathcal{W}_{\text{in}}\) is in agreement with the experimental results in [5]. According to the state diagram reported in [5], for the dripping-to-jetting transitions dominated by inner flows, the critical values of \(\mathcal{W}_{\text{in}}\) are typically distributed around 1. To understand the underlying mechanism of the dripping-to-jetting transitions dominated by inner flows, we use figures 5(c) and 5(d) to show the variation of the neck radius \(r_{n}\) with the neck position \(z_{n}\) as time goes on. Note that as the neck radius approaches 0, i.e., \(r_{n}\to 0\), pinch-off occurs with the neck position approaching the pinch-off position, i.e., \(z_{n}\to Z_{p}\). 
In the dripping regime, it is observed that \(r_{n}\) decreases monotonically to 0 (as shown in figure 5(c)), while in the jetting regime, \(r_{n}\) exhibits a transient increase before it eventually decreases to 0 (as shown in figure 5(d)). It is this transient increase of \(r_{n}\) that leads to a visible jump in the value of \(Z_{p}\) that marks the transition from dripping to jetting. Figure 6 shows the variation of the pinch-off position \(Z_{p}\) with the parameter \(\bar{a}\) which controls the inner flow rate. For each value of \(Re_{\gamma}\), a transition is noted around a critical value of \(\bar{a}\). Furthermore, this critical value of \(\bar{a}\) increases with the decreasing \(Re_{\gamma}\). The jump in the pinch-off position is a clear indicator that can be used to locate the dripping-to-jetting transition. In the following, we focus on the critical velocity of the inner flow that is needed to induce the transition, with the interfacial tension \(\gamma\) being varied for nearly two orders of magnitude. The dripping-to-jetting transitions in systems of high interfacial tension have been extensively studied [5]. In particular, when the transition is dominated by the inner flow (with the outer flow rate measured by \(\mathcal{C}_{\text{out}}\) being negligible), the inertial force due to the inner flow, measured by \(\mathcal{W}_{\text{in}}\), plays a dominant role in the transition in systems of high interfacial tension.

Figure 3: (a) A jetting state obtained from our simulations. Note that the radius of the jet \(r_{j}\) is measured at the plane of \(\bar{z}=\frac{\bar{H}}{2}\). (b) Log-log plot of the jet radius \(r_{j}\) versus \(\bar{b}\) which controls the outer flow rate. Here the mass conservation of the inner fluid is ensured by \(\bar{b}r_{j}^{2}\approx 0.011\). The jetting state in (a) is obtained for \(\bar{b}=0.5\). Other parameter values used in simulations are \(\epsilon=0.01\), \(Re_{\gamma}=10\), \(B=0.0002\), \(\bar{R}=0.2\), \(\bar{H}=8\), and \(\bar{a}=6\).

Figure 4: (a) A comparison between the diameter \(d_{e}\) expected from mass conservation and the diameter \(d_{m}\) measured in our simulations. Note that \(d_{m}\) is always slightly smaller than \(d_{e}\) due to the bulk diffusion. (b) The relation between the drop diameter \(d_{m}\) and the time period \(t_{p}\) of drop generation. Here the parameters \(\bar{a}\) and \(\bar{b}\), which control \(\bar{v}_{\text{in}}\) and \(\bar{v}_{\text{out}}\), are also involved according to equations (23) and (24), with \(\kappa=3.1\) and \(\nu=3\). The inset shows that the contribution of \(0.29\bar{b}\) is much larger than that of \(\nu\frac{10^{-4}}{2d^{2}}\bar{a}\) in equation (24), indicating that the growing drop is mainly advected by the outer flow. The data are obtained by using \(\epsilon=0.01\), \(B=0.0002\), \(\bar{R}=0.1\), \(\bar{H}=4\), and different combinations of \(Re_{\gamma}\), \(\bar{a}\) and \(\bar{b}\).

Figure 5: (a)-(b) Two different pinch-off positions for two different inner flow rates, with \(\bar{a}=24\) for a dripping state in (a) and \(\bar{a}=25\) for a jetting state in (b). (c)-(d) Variation of the neck radius \(r_{n}\) with the neck position \(z_{n}\) as time goes on. Here the third inset to (c) corresponds to (a) for dripping, and the third inset to (d) corresponds to (b) for jetting. It is noted in (d) that before \(r_{n}\) eventually decreases to \(0\), it exhibits a transient increase that leads to a visible jump in the value of \(Z_{p}\).
The data are obtained by using \(\epsilon=0.01\), \(Re_{\gamma}=500\), \(B=0.0002\), \(\bar{R}=0.1\), \(\bar{H}=4\), and \(\bar{b}=0.1\).

However, when the interfacial tension is continuously lowered, the viscous force due to the inner flow, measured by \(\mathcal{C}_{\text{in}}\), becomes more and more important in driving the transition. This trend has been reported experimentally [13], and a theoretical understanding can be described as follows. The Weber number of the inner flow is given by \(\mathcal{W}_{\text{in}}=\bar{v}_{\text{in}}^{2}Re_{\gamma}\bar{R}\), where the interfacial tension \(\gamma\) is involved in the Reynolds number \(Re_{\gamma}\) defined by \(Re_{\gamma}=\frac{\rho uL}{\eta}\), with \(u=\frac{\gamma}{\eta}\) being the velocity unit. Let us suppose that the transition occurs at \(\mathcal{W}_{\text{in}}\approx 1\), with the interfacial tension force being balanced by the inertial force due to the inner flow. If \(Re_{\gamma}\) is made sufficiently small by a sufficiently low interfacial tension, then the value of \(\bar{v}_{\text{in}}\) corresponding to \(\mathcal{W}_{\text{in}}=\bar{v}_{\text{in}}^{2}Re_{\gamma}\bar{R}\approx 1\) can be made large enough to be comparable to \(\mathcal{W}_{\text{in}}\). Note that the capillary number of the inner flow is given by \(\mathcal{C}_{\text{in}}=\bar{v}_{\text{in}}\). With \(\mathcal{C}_{\text{in}}\) being comparable to \(\mathcal{W}_{\text{in}}\approx 1\) for sufficiently low interfacial tension, it is deduced that the viscous force due to the inner flow is no longer negligible compared to the inertial force in driving the transition in systems of low interfacial tension. Let \(\tilde{V}_{\text{in}}\) denote the critical velocity of the inner flow. In systems of high interfacial tension, the inertial force due to the inner flow is dominant, and hence the transition occurs at \(\tilde{V}_{\text{in}}^{2}Re_{\gamma}\bar{R}\approx 1\) for the critical Weber number \(\mathcal{W}_{\text{in}}\approx 1\). As a result, \(\tilde{V}_{\text{in}}^{2}Re_{\gamma}=\text{const.}\) is expected for large \(Re_{\gamma}\). This is indeed observed in figure 7(a). When \(Re_{\gamma}\) is no longer large enough, deviation from \(\tilde{V}_{\text{in}}^{2}Re_{\gamma}=\text{const.}\) does show up. From figure 7(a), it is seen that toward the low end of the range of \(Re_{\gamma}\), the critical \(\tilde{V}_{\text{in}}\) is actually below that predicted by \(\tilde{V}_{\text{in}}^{2}Re_{\gamma}=\text{const.}\), which only considers the inertial force due to the inner flow. As explained above, when the interfacial tension is low and hence \(Re_{\gamma}\) is small, the value of \(\tilde{V}_{\text{in}}\) predicted by \(\tilde{V}_{\text{in}}^{2}Re_{\gamma}=\text{const.}\) is large. This means a large viscous force due to the inner flow. As a result, the viscous force and inertial force due to the inner flow are added up to jointly balance the interfacial tension force. Consequently, the critical \(\tilde{V}_{\text{in}}\) becomes smaller than that predicted by \(\tilde{V}_{\text{in}}^{2}Re_{\gamma}=\text{const.}\), which only considers the inertial force due to the inner flow. For \(Re_{\gamma}\) being varied between 100 and 4000, numerical simulations have been carried out to determine the critical velocity of the inner flow \(\tilde{V}_{\text{in}}\) at which the transition occurs. 
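The scaling argument can be made concrete with a few lines of arithmetic: if only the inertial force balanced the interfacial tension, the critical inner velocity would be \(\bar{v}_{\text{in}}=1/\sqrt{Re_{\gamma}\bar{R}}\) from \(\mathcal{W}_{\text{in}}=1\), and since \(\mathcal{C}_{\text{in}}=\bar{v}_{\text{in}}\), the capillary number at this would-be transition grows as \(Re_{\gamma}\) is lowered. The snippet below simply tabulates this trend (an illustration of the argument, not a reproduction of the simulation data):

```python
import numpy as np

R_bar = 0.1
for Re_gamma in [4000.0, 1000.0, 500.0, 100.0]:
    # Critical inner velocity if only the inertial force balanced interfacial tension
    v_crit_inertial = 1.0 / np.sqrt(Re_gamma * R_bar)   # from W_in = v^2 * Re_gamma * R = 1
    C_in = v_crit_inertial                               # capillary number of the inner flow
    print(f"Re_gamma = {Re_gamma:6.0f}:  v_in* = {v_crit_inertial:.3f},  C_in = {C_in:.3f}")
# As Re_gamma decreases, C_in at the would-be transition approaches O(1),
# so the viscous force due to the inner flow can no longer be neglected.
```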
The data obtained for \(\tilde{V}_{\text{in}}\) are used to produce a formula that describes the contributions of the Weber number of the inner flow \(\mathcal{W}_{\text{in}}\) and the capillary number of the inner flow \(\mathcal{C}_{\text{in}}\) at the transition. Figure 7(b) shows that \(\mathcal{W}_{\text{in}}\) and \(\mathcal{C}_{\text{in}}\) at the transition satisfy a linear relation given approximately by \(\mathcal{W}_{\text{in}}+2.6\mathcal{C}_{\text{in}}=1.13\). It is worth emphasizing that this equation holds for the interfacial tension \(\gamma\) being varied for nearly two orders of magnitude. Note that from the upper left to the lower right, the value of \(Re_{\gamma}\) decreases and consequently the relative importance of \(\mathcal{C}_{\text{in}}\) increases. Therefore, it is numerically verified that the viscous force due to the inner flow plays a quantitatively important role in driving the dripping-to-jetting transitions in systems of low interfacial tension.

### Effect of bulk diffusion

In this subsection, we investigate the effect of bulk diffusion on the condition for the occurrence of transition. Physically, bulk diffusion is a dissipative process that can lower the interfacial energy and lead to the breakup of a liquid thread [24; 14]. Therefore, adding bulk diffusion to the system will facilitate the pinch-off dynamics and hence hinder the development of the jetting state. As a result, a larger critical velocity \(\tilde{V}_{\text{in}}\) is needed to induce the dripping-to-jetting transition. In an earlier work [24], we demonstrated that the effect of bulk diffusion can be enhanced by increasing the characteristic length scale \(l_{c}\), which enters into the dimensionless system through the parameter \(B=\frac{2l_{c}^{2}}{L^{2}}\). Figure 8 shows that at different levels of bulk diffusion controlled by \(B\), \(\mathcal{W}_{\text{in}}\) and \(\mathcal{C}_{\text{in}}\) at the transition always satisfy a linear relation for transitions dominated by inner flows. Two important observations are made from Figure 8. (i) Stronger diffusion indeed necessitates a larger critical velocity \(\tilde{V}_{\text{in}}\) to induce the transition. (ii) The three fitting lines are parallel, showing that the relative contributions of the inertial force and viscous force due to the inner flow remain the same regardless of the variation of bulk diffusion.

Figure 6: Variation of the pinch-off position \(Z_{p}\) with the parameter \(\bar{a}\) which controls the inner flow rate. For each value of \(Re_{\gamma}\), a transition is noted around a critical value of \(\bar{a}\). The data are obtained by using \(\epsilon=0.01\), \(B=0.0002\), \(\bar{R}=0.1\), \(\bar{H}=4\), and \(\bar{b}=0.1\).

## IV Concluding remarks

The CHNS model has been solved in a cylindrical domain with axisymmetry to investigate the dripping-to-jetting transitions in coaxial flows of two immiscible fluids. Numerous numerical examples are presented to demonstrate that the distance between the orifice and pinch-off position increases when either the outer or the inner flow rate is enhanced. It is observed that there is an apparent jump in this distance when the outer or the inner flow rate reaches the critical value for the dripping-to-jetting transition to occur. The critical flow rates numerically obtained for both the outer and inner flows
For transitions dominated by outer flows, a thin and long jet is generated when jetting occurs, and our numerical results for the jet radius are validated by its dependence on the outer flow rate according to the mass conservation. For transitions dominated by inner flows, the interfacial tension is varied for nearly two orders of magnitude, and a quantitative relation is established between the contributions of the inertial and viscous forces due to the inner flow at the transition point. Finally, the degree of bulk diffusion is varied to show its quantitative effect on the critical flow rate at the transition point. To the best of our knowledge, there has been no prior work that employs a phase-field model to investigate the dripping-to-jetting transitions in three dimensions, with a focus on the effects of low interfacial tension and bulk diffusion. In the present work, we have considered the simplest situation in which the two fluids have equal density, equal viscosity and equal diffusion coefficient. Actually, these restrictions can be lifted in both experiments [5; 13; 14] and numerical simulations [33]. Although the dripping-to-jetting transitions for high interfacial tension have been extensively studied in the past two decades, low interfacial tension and bulk diffusion may inject new ingredients into this classical problem. In this regard, quantitative effects of density ratio, viscosity ratio and diffusivity ratio largely remain to be explored in both experiments and numerical simulations. ###### Acknowledgements. The work of F. Huang and W. Bao was supported by the Ministry of Education of Singapore under its AcRF Tier 2 funding MOE-T2EP20122-0002(A-8000962-00-00), and the work of T. Qian was supported by the Hong Kong RGC grants CRF No. C1006-20WF and GRF No. 16306121. T. Qian was also supported by the Key Project of the National Natural Science Foundation of China (No. 12131010). Part of this work was done when the first two authors were visiting the Institute for Mathematics Sciences at the National University of Singapore in February 2023. Figure 8: The Weber number of the inner flow \(\mathcal{W}_{\text{in}}\) and the capillary number of the inner flow \(\mathcal{C}_{\text{in}}\) at the transition always satisfy a linear relation for transitions dominated by inner flows. In addition, the three fitting lines for three different values of \(B\) are parallel. The data are obtained by using \(\epsilon=0.01\), \(\bar{R}=0.1\), \(\bar{H}=4\), \(\bar{b}=0.1\), with \(Re_{\gamma}\) being varied between 100 and 2000, and \(B=0.0002\), \(0.0005\), and \(0.00075\). The value of \(Re_{\gamma}\) decreases from the upper left to the lower right along each line. Note that \(\bar{b}=0.1\) used here gives \(\mathcal{C}_{\text{out}}=0.029\), which is much smaller than the typical values of \(\mathcal{W}_{\text{in}}\) and \(\mathcal{C}_{\text{in}}\) at the transition. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2306.08123
A Travelling Salesman Paths within nxn (n = 3, 4, 5) Magic Squares
Intriguing symmetries are uncovered regarding all magic squares of orders 3, 4, and 5, with 1, 880, and 275,305,224 distinct configurations, respectively. In analogy with the travelling salesman problem, the distributions of the total topological distances of the paths travelled by passing through all the vertices (matrix elements) only once and spanning all elements of the matrix are analyzed. Symmetries are found to characterise the distributions of the total topological distances in these instances. These results raise open questions about the symmetries found in higher-order magic squares and the formulation of their minimum and maximum total path lengths.
Peyman Fahimi, Walter Trump, Cherif F. Matta, Alireza Ahmadi Baneh
2023-06-13T20:35:20Z
http://arxiv.org/abs/2306.08123v3
**4\(\times\)4 Magic Path**

## Abstract

In this study, we embark on a mathematical exploration of 4\(\times\)4 magic squares, delving into the intriguing symmetries hidden within these enigmatic structures. Our focus lies on a traveler's quest to visit cities positioned at the center of each cell in a 4\(\times\)4 square grid, following the sequence dictated by a magic square arrangement. However, the distances between consecutive cities lack the magical properties associated with magic squares. Through meticulous calculations, we analyze the distances between cities, unraveling fascinating symmetrical patterns along the traveler's journey. Our findings shed light on the remarkable symmetries intrinsic to magic squares, transcending disciplines such as mathematics, physics, and computer science. This exploration opens avenues for further investigations into the captivating realm of symmetry, inspiring new discoveries and deepening our understanding of these intriguing mathematical constructs.

## Introduction

Magic squares have captivated mathematicians, historians, and enthusiasts for centuries with their intricate patterns and inherent mathematical properties [1, 2, 3, 4, 5, 6, 7]. These intriguing structures, consisting of a square grid filled with distinct integers, possess a remarkable property: the sum of numbers in each row, column, and diagonal is equal, creating an enchanting symmetrical balance. The concept of magic squares dates back thousands of years, with traces found in ancient civilizations such as China [8], India [9], and the Middle East [9]. Over the centuries, magic squares have captured the imagination of mathematicians and inspired their quest for understanding. Scholars from different cultures, including the Persian mathematician Abu al-Wafa' al-Buzjani [9] and the European mathematician Leonhard Euler [10], delved into the properties and intricacies of magic squares, contributing to their development and popularization. One fascinating aspect of magic squares is the exploration of their numerous configurations [11, 12, 13, 14]. While a 3\(\times\)3 magic square has only one possible arrangement, the number of magic squares increases exponentially with the size of the grid. For example, a 4\(\times\)4 magic square offers 880 distinct arrangements, each with its own symmetrical properties and number patterns. Magic squares, with their aesthetic appeal and mathematical curiosities, have applications beyond their traditional realm. They find utility in diverse scientific fields, including physics. Magic squares have been employed in classical mechanics [15, 16], electrostatics [17, 18, 19, 20], and even quantum mechanics [21, 22]. In a recent development, Fahimi [17] proposed a method to levitate the electric charge representation of order 4 magic squares quasi-statically, showcasing their potential for practical applications. Moreover, the representation of magic squares in binary format unveils captivating patterns [23], which hold potential implications in binary systems within the realm of physics, such as the 2D Ising model [24]. In this study, we consider the numbers within a magic square as city numbers, and our objective is to investigate the symmetries in the trajectories of a traveler moving from city 1 to city 16 across the 880 magic squares of order 4. By examining the city trajectories, we aim to uncover and analyze the inherent symmetrical patterns present in these magical arrangements. 
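The defining property described above is straightforward to verify computationally. The snippet below is a minimal sketch; the example square is the classical order-4 arrangement from Dürer's Melencolia I, used here only as an illustration and not singled out in this paper:

```python
def is_magic(square):
    """Return True if all rows, columns, and both main diagonals share one sum."""
    n = len(square)
    target = sum(square[0])
    rows = all(sum(row) == target for row in square)
    cols = all(sum(square[i][j] for i in range(n)) == target for j in range(n))
    diag1 = sum(square[i][i] for i in range(n)) == target
    diag2 = sum(square[i][n - 1 - i] for i in range(n)) == target
    return rows and cols and diag1 and diag2

# Duerer's order-4 magic square (magic constant 34)
durer = [[16,  3,  2, 13],
         [ 5, 10, 11,  8],
         [ 9,  6,  7, 12],
         [ 4, 15, 14,  1]]
print(is_magic(durer))   # True
```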
#### Distance Measurement and Symmetry Analysis of City Trajectories

We begin by considering a 3\(\times\)3 magic square. As depicted in Figure 1 (left), we envision a distribution of 9 cities on a 3\(\times\)3 square lattice. Each neighboring pair of cities is separated by a unit horizontal or vertical distance. The arrangement of the cities precisely mirrors the arrangement of the numbers within a 3\({}^{\text{rd}}\) order magic square, resulting in a magical path for the traveler. However, it is important to note that the distances between each pair of consecutive cities do not exhibit the magical properties typically associated with magic squares. This introduces a deliberate irregularity, disrupting the balanced nature of traditional magic squares. We calculate the distances between each pair of consecutive cities and analyze the resulting pattern. For instance, the distance between city 1 and city 2 is \(\sqrt{1^{2}+2^{2}}=\sqrt{5}\), while the distance between city 4 and city 5 is \(\sqrt{2}\), and so on. As depicted in Figure 1 (right), the distance pattern exhibits reflective symmetry around a hypothetical vertical center line. The total distance of the sum of all paths is approximately 13.77.

Figure 1: 3\(\times\)3 magic trajectory of the traveler.

Next, we focus our attention on analyzing the paths of 4\(\times\)4 magic squares, which present a greater challenge in discerning the symmetries within their distance patterns. Among the 880 order 4 magic squares, we observe that the maximum and minimum values of the total distance summed over all paths are approximately 42.76 and 20.31, respectively. Furthermore, the average total distance across all 880 magic squares is approximately 33.94. Figures 2(a) and 2(b) illustrate the magic squares with the minimum and maximum total distances, respectively, while Figure 2(c) displays the distribution of total distances across all 880 magic squares of order 4. Figure 2(a) represents the least exhaustive pathway for the traveler, while Figure 2(b) depicts the most exhaustive pathway. Additionally, the average trajectory length per individual city for Figures 2(a) and 2(b) indicates how efficiently the distances are distributed in different magic squares, making the trip more appealing. This indicator for the shortest total trajectory is 1.35, while for the longest total trajectory it is 2.85.

Figure 2: Panel (a) illustrates the magic square with the shortest total trajectory, panel (b) showcases the magic square with the longest total trajectory, and panel (c) displays the histogram depicting the distribution of total distances across all 880 magic squares of order 4.

Among the 880 magic squares of order 4, we observe that 414 magic squares exhibit a reflexive geometrical symmetry in their distance patterns. To uncover the logic behind these reflexive symmetries, we conducted our analysis based on the classification of 4\(\times\)4 magic squares into 12 groups according to Dudeney groups [3, 25]. Our findings reveal that all magic squares belonging to group 3, which consists of 48 associative magic squares with symmetrical number placement around the center point, as well as group 6, comprising 304 semi-pandiagonal and simple magic squares with symmetrical number placement across the center line, demonstrate reflexive symmetry in their distance patterns. Figure 3 showcases a selection of examples from these groups. 
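As a brief aside, the 3\(\times\)3 example discussed above can be reproduced in a few lines. The sketch below (an independent illustration, not the authors' code) places the nine cities on a unit lattice according to the Lo Shu arrangement, which is the unique 3\(\times\)3 magic square up to rotation and reflection, computes the consecutive distances of the traveler's path from city 1 to city 9, and checks both the total length of about 13.77 and the reflective symmetry of the distance pattern:

```python
import math

# Lo Shu magic square; entry (row, col) holds the city number at that lattice site.
lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]

# Map each city number to its (row, col) coordinate on the unit lattice.
position = {lo_shu[r][c]: (r, c) for r in range(3) for c in range(3)}

# Consecutive distances travelled when visiting the cities in numerical order 1 -> 9.
path = [position[k] for k in range(1, 10)]
dists = [math.dist(p, q) for p, q in zip(path, path[1:])]

print([round(d, 3) for d in dists])  # [2.236, 2.236, 1.0, 1.414, 1.414, 1.0, 2.236, 2.236]
print(round(sum(dists), 2))          # 13.77
# Reflective symmetry: the sequence of distances reads the same forwards and backwards.
print(all(math.isclose(a, b) for a, b in zip(dists, reversed(dists))))  # True
```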
Figure 3: Illustration of symmetrical distance patterns in the traveler's path, showcasing examples from Dudeney group 3 (panel a) and group 6 (panel b). The purple plots (left plot in each panel) were obtained from the website of Harvey Heinz [25].

For a comprehensive collection of all the examples, please refer to the supplementary material accompanying this paper. Note that not all of the 880 observed distance patterns are distinct: 112 patterns are repetitions of others, resulting in a total of 768 unique distance patterns. The remaining distance patterns, comprising 466 cases (\(=880-414\)), exhibit various types of symmetries, including local symmetry, periodicity (translational symmetry), and partial symmetry. Local symmetry (exhibited by 252 patterns) refers to the presence of symmetric patterns or characteristics within a specific region or subset of the distance pattern. It implies that certain portions of the pattern exhibit mirror-like or rotational symmetry within themselves. Periodicity or translational symmetry refers to the recurrence or repetition of certain patterns or motifs at regular intervals within the distance pattern. It indicates the existence of a periodic structure or arrangement in the pattern. Partial symmetry suggests that the distance pattern possesses symmetrical elements or features, but not all aspects of the pattern. It indicates the presence of symmetry in some parts or components of the pattern while other parts may lack symmetry. In the majority of cases, achieving reflexive symmetry, local symmetry, or periodicity can be accomplished by repositioning a few points (1 to 3 points) in a distance pattern with partial symmetry. These classifications are primarily based on qualitative assessment and may exhibit overlap in certain cases. Figure 4 provides visual examples showcasing these different types. For a comprehensive list of these categories, please refer to the supplementary material.

Figure 4: Examples of distance patterns: local symmetry (a), periodicity (b), partial symmetry (c, d).

The presence of partial symmetry in distance patterns, despite the overall symmetry of the distribution of city numbers following the arrangement of numbers in a magic square, can be attributed to the specific arrangement and configuration of the cities within the square. While the city numbers themselves exhibit complete symmetry, the distances between consecutive cities introduce variations that disrupt the overall symmetry. The partial symmetry in distance patterns arises due to the intricate relationships between the positions of the cities and the resulting distances. The arrangement of cities within the magic square creates complex interdependencies, leading to partially symmetric patterns in the distances between certain pairs of cities. These patterns may arise due to the specific positioning of cities along diagonal lines, within certain quadrants of the square, or other geometric relationships. It is important to note that while the city numbers in a magic square exhibit complete symmetry, the distances between cities are determined by geometric considerations and may not adhere to the same level of symmetry. This creates an interesting interplay between the overall symmetric structure of the magic square and the partial symmetries observed in the distance patterns. The presence of partial symmetry adds depth and
complexity to the study of magic square trajectories, revealing intricate relationships between the arrangement of cities and the resulting distance patterns. By analyzing and understanding these partial symmetries, we can gain further insights into the underlying principles and properties of magic squares and their associated distance patterns.

## Conclusion

In this study, we embarked on a mathematical exploration of 4\(\times\)4 magic squares, specifically focusing on the symmetries hidden within their distance patterns. By considering the numbers within a magic square as city numbers, we investigated the trajectories of a traveler across the 880 magic squares of order 4. Our analysis revealed intriguing symmetrical patterns and shed light on the captivating symmetries intrinsic to magic squares. We started by examining the city trajectories in a 3\(\times\)3 magic square, where the distances between consecutive cities lacked the magical properties associated with magic squares. Despite this deliberate irregularity, we found reflective symmetry in the resulting distance pattern. This irregularity disrupted the balanced nature of traditional magic squares, but it did not render the patterns random or devoid of meaning. Next, we turned our attention to analyzing the paths of 4\(\times\)4 magic squares, which posed a greater challenge in discerning the symmetries within their distance patterns. Among the 880 order 4 magic squares, we identified the magic squares with the maximum and minimum total distances for the sum of all paths. Figures 2(a) and 2(b) showcased these magic squares, representing the least exhaustive and most exhaustive pathways for the traveler, respectively. Additionally, we examined the distribution of total distances across all 880 magic squares, revealing fascinating insights into the efficiency of distance distribution in different magic squares. Furthermore, based on the classification of 4\(\times\)4 magic squares into 12 groups according to Dudeney groups, we found that a significant portion of the magic squares exhibited reflexive geometrical symmetry in their distance patterns. Specifically, all magic squares belonging to group 3 and group 6 demonstrated reflexive symmetry. Figure 3 presented selected examples from these groups, highlighting the symmetrical distance patterns observed in the traveler's path. Lastly, we discussed the remaining distance patterns, which comprised cases exhibiting local symmetry, periodicity, and partial symmetry. In conclusion, our study has focused on the symmetries of 4\(\times\)4 magic squares and has provided valuable insights into the distance patterns along a traveler's path. The exploration of these symmetries goes beyond disciplinary boundaries and paves the way for further investigations into the fascinating realm of symmetry. By deepening our understanding of these intriguing mathematical constructs, we have the potential to inspire new discoveries and expand our knowledge in various fields, including mathematics, physics, and computer science. Furthermore, the trajectory of magic squares holds promise for applications in future studies, such as describing random-type paths or goal-directed paths of microorganisms in search of food [26, 27, 28]. It can be likened to the traveling salesman solution of amoeboid locomotion [29]. 
By studying magic square trajectories, we may gain insights into the movement patterns and behaviors of organisms, leading to advancements in fields related to biological processes and behavioral analysis. ## Acknowledgment I am grateful to Professor Cherif F. Matta from Mount Saint Vincent University and Professor Thanh-Tung Nguyen-Dang from Universite Laval for their valuable support during the course of this research. ## Conflict of Interest The author has no conflicts of interest to disclose. ## Supplementary Material The supplementary material accompanying this paper provides a comprehensive collection of the trajectory and distance patterns for all 880 magic squares of order 4. It offers a detailed exploration of the symmetries, patterns, and characteristics exhibited by each magic square in terms of the traveler's path.
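As a complement to the qualitative classification discussed above, the following is a minimal Python sketch of how the distance pattern of a traveler's path through an order-4 magic square, and its reflective symmetry, could be computed. It is an illustration only: the example input (Dürer's order-4 square) and the comparison tolerance are our own choices and are not taken from the paper or its supplementary material.

```python
import math

def distance_pattern(square):
    """Euclidean distances between consecutive city numbers 1, 2, ..., n*n,
    where city k sits at the (row, col) cell holding the value k."""
    n = len(square)
    pos = {square[r][c]: (r, c) for r in range(n) for c in range(n)}
    return [math.dist(pos[k], pos[k + 1]) for k in range(1, n * n)]

def is_reflexive(pattern, tol=1e-9):
    """True if the distance sequence reads the same forwards and backwards."""
    return all(abs(a - b) <= tol for a, b in zip(pattern, reversed(pattern)))

# Duerer's magic square of order 4 (magic constant 34), used here only as an
# illustrative input; it is not claimed to belong to any particular Dudeney
# group discussed in the paper.
duerer = [[16, 3, 2, 13],
          [5, 10, 11, 8],
          [9, 6, 7, 12],
          [4, 15, 14, 1]]

pattern = distance_pattern(duerer)
print([round(d, 3) for d in pattern])
print("reflexive symmetry:", is_reflexive(pattern))
```

The same two functions can be run over all 880 order-4 squares to reproduce the kind of symmetry bookkeeping described in the text, with additional checks for periodicity or local symmetry added as needed.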
2304.06777
Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots
Online recognition of gestures is critical for intuitive human-robot interaction (HRI) and for further pushing collaborative robotics into the market, making robots accessible to more people. The problem is that it is difficult to achieve accurate gesture recognition in real unstructured environments, often using distorted and incomplete multisensory data. This paper introduces an HRI framework to classify large vocabularies of interwoven static gestures (SGs) and dynamic gestures (DGs) captured with wearable sensors. DG features are obtained by applying data dimensionality reduction to raw sensor data (resampling with cubic interpolation and principal component analysis). Experimental tests were conducted using the UC2017 hand gesture dataset with samples from eight different subjects. The classification models show an accuracy of 95.6% for a library of 24 SGs with a random forest and 99.3% for 10 DGs using artificial neural networks. These results compare equally or favorably with different commonly used classifiers. Long short-term memory deep networks achieved similar performance in online frame-by-frame classification using raw incomplete data, performing better in terms of accuracy than static models with specially crafted features, but worse in training and inference time. The recognized gestures are used to teleoperate a robot in a collaborative process that consists of preparing a breakfast meal.
M. A. Simão, O. Gibaru, P. Neto
2023-04-13T18:49:08Z
http://arxiv.org/abs/2304.06777v1
# Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots ###### Abstract Online recognition of gestures is critical for intuitive human-robot interaction (HRI) and further push collaborative robotics into the market, making robots accessible to more people. The problem is that it is difficult to achieve accurate gesture recognition in real unstructured environments, often using distorted and incomplete multi-sensory data. This paper introduces a HRI framework to classify large vocabularies of interwoven static gestures (SGs) and dynamic gestures (DGs) captured with wearable sensors. DG features are obtained by applying data dimensionality reduction (DDR) to raw data from sensors (resampling with cubic interpolation and principal component analysis (PCA)). Experimental tests were conducted using the UC2017 hand gesture dataset with samples from eight different subjects. The classification models show an accuracy of 95.6% for a library of 24 SGs with a random forest (RF) and 99.3% for 10 DGs using artificial neural networks (ANNs). These results compare equally or favourably with different commonly used classifiers. Long Short-Term Memory (LSTM) deep networks achieved similar performance in online frame-by-frame classification using raw incomplete data, performing better in terms of accuracy than static models with specially crafted features, but worse in training and inference time. The recognized gestures are used to teleoperate a robot in a collaborative process that consists in preparing a breakfast meal. Keywords: Human-robot interaction, collaborative robotics, online gesture recognition, neural networks. ## 1 Introduction The paradigm for robot usage has changed in the last few years, from an idea in which robots work with complete autonomy to a scenario in where robots cognitively collaborate with human beings. This brings together the best of each partner, robot and human, by combining the coordination and cognitive capabilities of humans with the robots' accuracy and ability to perform monotonous tasks. Robots and humans have to understand each other and interact in a natural way (using gestures, speech and physical interaction), creating a co-working partnership. This will allow a greater presence of robots in all domains of our society. The problem is that the existing interaction modalities are neither intuitive nor reliable. Instructing and programming an industrial robot by the traditional teaching method is a tedious and time-consuming task that requires technical expertise in robot programming. The collaborative robotics market is rapidly growing and HRI interfaces have a main role in the acceptance of robots as partners. Gestures are an intuitive interface to teleoperate a robot since they intuitive to use and do not require technical skills in robot programming to be used [14]. For instance, a human co-worker can use a DG to indicate a grasping position and use a SG to stop the robot [15]. In this scenario, the human has little or nothing to learn about the interface, focusing instead on the task being performed. The robot assists the human when necessary, thus reducing the exposition to poor ergonomic conditions and injury. This paper proposes an integrated modular gesture-based HRI framework, Fig. 1. Static and dynamic gesture segments, composed of data captured by a data glove and magnetic tracker, are created automatically with a motion detection algorithm applied to a sliding window. Static segments are used as input for SG classifiers. 
Dynamic segments, which are discriminated by DG classifiers, are subject to data dimensionality reduction (DDR) with resampling based on cubic interpolation (CI) or principal component analysis (PCA). Traditional probabilistic latent variable models, such as PCA, are static linear approaches in which dynamics and nonlinearities are not properly considered. In [23], the authors propose a weighted linear dynamic system (WLDS) for nonlinear dynamic feature extraction, which showed superior prediction accuracy. Nevertheless, its real-time performance is worse than that of static approaches because it requires more computational time. This is undesirable and limits the online performance of the proposed gesture-based HRI system. The proposed CI and PCA approaches are demonstrated to be computationally inexpensive without sacrificing classification accuracy, even when used with incomplete data. Figure 1: Overview of the proposed gesture-based HRI framework: data acquisition, segmentation, features, classification and the robot interface. The meaning of incomplete data for DG classification is explained at the bottom: for example, a DG can be classified with the initial 50% (\(J=0.5\)) of the data representing that gesture, i.e., DGs can be classified in anticipation, before the user finishes the gesture in the real world. We use large vocabularies of gestures (a total of 34) and a relatively low number of training samples to simplify and expedite the training process. Experiments demonstrated that standard classifiers, such as artificial neural networks (ANNs), are reliable in both SG and DG classification. Furthermore, they compare equally or favorably to deep learning classifiers (LSTM and Convolutional Neural Networks (CNNs)) in both inference time and accuracy. Finally, a robot task manager maps the classified gestures to robot commands. ### Motivation, Challenges and Contributions A major challenge is the continuous, online and reliable recognition of gestures from real-time data streams. Continuous gesture recognition is the natural way used by humans to communicate with each other, in which communicative gestures (the effective SGs and DGs with an explicit meaning) appear intermittently with non-communicative gestures (pauses and movement epenthesis (ME) - inter-gesture transition periods) in a random order [21]. Many studies do not approach gesture classification in this continuous manner, nor address the negative effect of ME. It is also a challenge to recognize gesture patterns from incomplete data, as well as to intuitively map the recognized gestures into robot commands for natural and safe HRI. In this context, the motivations behind this study are: 1. Combine and fuse sensor data from multiple wearable devices in order to capture a person's gestures (hand and arms) accurately, without occlusions; 2. Application of proper DDR methods to increase classification accuracy, reduce the training time and the number of samples required to train the classifiers, while allowing online implementation; 3. Achieve high recognition rates (close to 100%) and ensure generalization capability with respect to untrained samples and new users; 4. Classification of DGs from incomplete data, allowing the classification of a gesture while it is being performed by the human; 5. Intuitive and online interfacing with a robot using gestures.
The proposed system was evaluated by conducting several experiments using wearable/body-worn sensors (a data glove and a magnetic tracker), resulting in the following contributions: 1. The combination of DDR (CI and PCA) and ANNs for DG classification from wearable sensor data resulted in high classification accuracy that compares favorably with standard classifiers, including deep learning LSTM and CNNs. This method is computationally inexpensive, allowing the gesture-based online interaction with the robot. Gestures are recognized with an accuracy of 95.6% for a library of 24 SGs and 99.3% for 10 DGs. 2. The above results were obtained in continuous data, with multiple subjects (user independence) and applied in an unstructured environment; 3. Sequential classification of DGs showed an accuracy that is higher with incomplete data (50% or 75% of initial frames of data that represent a DG) than with 100% of DG data across different classifiers and users. In this context, DGs can be classified in anticipation, before the user finishes a gesture; 4. Framework tested in real unstructured environment where the recognized gestures serve as an intuitive interface to manage an online collaborative process, in which a robot assists the human in the preparation of a breakfast meal. ### Related Work Gesture-based HRI for collaborative robotics is an emerging and multidisciplinary research field. Communicative gestures provide information that is difficult to convey in speech, i.e., command gestures, pointing, gestures addressed to objects or actions, and mimicking gestures [3, 19]. Gestures have been proven to be one of the most effective and natural mechanisms for reliable HRI [24]. They have been used for robot teleoperation and to coordinate the interactive process of cooperation between human and robot. As stated in [5], an interactive robotic task generally consists of individual actions, operations and motions that are arranged in a hierarchical order so that the process can be managed by simple gestures [9]. An inefficient segmentation process (determining when a gesture starts and ends) results in a classification model that is more likely to fail [17]. The analysis of continuous data streams to solve spatial and temporal segmentation is challenging [1]. The problem is that it is difficult to automate the segmentation process, making gesture recognition in real world scenarios a difficult task [16]. The input features for gesture recognition are normally the hand/arm/body position, orientation and motion [21], often captured from vision sensors. Owing to its naturalness, in opposition to wearable sensors that need to be attached to the human body, vision sensing is the most common interaction technology. Gesture classification from video stream requires large amounts of training data, especially for state-of-the-art deep learning classifiers. Moreover, it is difficult to construct reliable features from only vision sensing due to occlusions, varying light conditions and free movement of the user in the scene [2, 19]. To improve classification reliability, a significant number of studies combine data from vision and wearable sensors. Taking this into account, several approaches to gesture recognition rely on wearable sensors such as data gloves, magnetic tracking sensors, inertial measurement units (IMUs) and electromyography (EMGs), among others. In fact, these interaction technologies have been proven to provide reliable features in unstructured environments. 
Nevertheless, they also place an added burden on the user since they are worn on the body. Some gestures, although not all, can be defined by their spatial trajectory, e.g., a circle. Burke and Lasenby succeeded on using PCA and Bayesian filtering for the classification of time series gestures [3]. Hidden Markov Models (HMMs) can be used to find time dependencies in skeletal features extracted from image and depth data (RGB-D) with a combination of Deep Belief Networks (DBNs) and 3D CNNs [20]. Deep learning combined with recurrent LSTM networks demonstrated state-of-the-art performance in the classification of human activities from wearable sensors [12]. Features are automatically extracted from raw sensor data, avoiding the need for expert knowledge in feature design. The reported results show that this framework outperforms competing deep non-recurrent networks. Various ANNs in series demonstrated superior performance in the classification of a high number of gesture classes [11]. Field et al. used a Gaussian Mixture Model (GMM) to classify human body postures with previous unsupervised temporal clustering [6]. Switching Gaussian Process Dynamic Models (SGPDM) are proposed to capture motion dynamics and to identify motion classes such as walk or run, and smile or angry [4]. Recognition performance on real videos (comparatively low quality, low frame rates and with pose changes) demonstrated that the SGPDM model can efficiently track composite motions with various dynamics. A framework for dynamic hand gesture recognition using Generalized Time Warping (GTW) for alignment of time series is proposed in [18]. Features are extracted from the aligned sequences of hand gestures based on texture descriptors, and the hand motion recognition is performed by CNNs. Autoencoders and stacked autoencoders (SAE) have been successfully used for feature representation in various applications [22]. Recent studies report state-of-the-art methods for hand detection and gesture classification from RGB-D video using deep learning [10]. Generally speaking, deep learning requires large amounts of training data, the models are computationally expensive to train, and it is challenging to determine good hyperparameters, since deep networks are essentially black boxes (it is difficult to know exactly how and why they output certain values). Boosting methods, based on ensembles of weak classifiers, allow multi-class hand detection [8]. Despite all the proposed solutions, it is still challenging to use of gestures as a reliable interaction modality to control a robot/machine in real-time. The evolution of pattern recognition has been enormous in the last few years. However, many of existing solutions address object or SG classification which is less challenging than sequential classification. Results obtained in well-established datasets have good accuracy in offline classification but are seldom tested online and the processing time is not mentioned. The ability to classify a gesture online is critical to interface with a machine/robot. Finally, no studies approach gesture classification from incomplete data, being normally assumed that more data results in better accuracy. ## 2 Gesture Classification ### Problem Formulation Within a continuous data stream, there may be a sequence of SG and DG with no specific order. As new frames are acquired, they are segmented into static or dynamic frames with a motion detection algorithm [17]. 
This algorithm identifies motion, or lack thereof, including sudden inversions of movement direction which are common in DGs. This is achieved by the analysis of velocities and accelerations numerically derived from positional data. A genetic algorithm is used to compute motion thresholds from calibration data. As a result, we have static and dynamic blocks of frames contiguous in time. Static blocks are SG candidates and dynamic blocks are DG candidates. Therefore, we propose two independent classifiers, one for the classification of SGs and the other for the classification of DGs. The segmentation function \(\Gamma\) based on a motion-threshold algorithm is applied to a window of a stream of data \(\boldsymbol{S}\), of dimensionality \(d\) and length \(n\): \(\left\{\left(\boldsymbol{S},\Gamma(\boldsymbol{S})\right):\ \boldsymbol{S}\in \mathbb{R}^{d\times n}\text{ and }\Gamma(\boldsymbol{S})\in\left\{0,1\right\}^{n}\right\}\). The static frames indicating no motion (input data for the SG classifier) are defined by \(m_{i}=0\) and the dynamic frames indicating motion (input data for the DG classifier) by \(m_{i}=1\). The dynamic segments are extracted by a search function that finds transitions in \(m\) (from 0 to 1 and 1 to 0). Given two consecutive transitions in the frames \(i\) and \(i+k\) so that \(m_{i-1}=0\), \(m_{i}=1\), \(m_{i+k-1}=1\) and \(m_{i+k}=0\), a DG sample is defined by: \[\mathbf{X}^{D}=\left[S_{\bullet i}\ S_{\bullet i+1}\...\ S_{\bullet i+k-1} \right],\quad\mathbf{X}^{D}\in\mathbb{R}^{d\times k} \tag{1}\] where the \(S_{\bullet i}\) vector is the \(i\)-th column (frame) of the data stream. In terms of matrix notation, being \(\mathbf{A}\in\mathbb{M}^{p\times q}\), \(\mathbf{A}_{ij}\) represents the element of the array \(\mathbf{A}\) with row \(i\) and column \(j\), \(\mathbf{A}_{i\bullet}\equiv\left[\mathbf{A}_{i1}\cdots\mathbf{A}_{in}\right]\) and \(\mathbf{A}_{\bullet j}\equiv\left[\mathbf{A}_{1j}\cdots\mathbf{A}_{nj}\right]^ {T}\), and \(\mathbb{M}\) is the notation for a real-valued matrix. The static gesture samples are considered the first frame after a transition from \(m_{i+k-1}=1\) to \(m_{i+k}=0\): \[\mathbf{X}^{S}=S_{\bullet i+k},\quad\mathbf{X}^{S}\in\mathbb{R}^{d} \tag{2}\] Hereinafter, the notation for a sample independently of its nature (static or dynamic) is \(\mathbf{X}\). Static and dynamic samples are differentiated by their dimensionality. The \(i\)-th sample of a dataset is represented by \(\mathbf{X}^{(i)}\). We represent the feature extraction pipeline by \(\Pi\), which is used to transform the raw data into the predictors \(\mathbf{z}\) that feed the classifiers: \(\left\{\left(\boldsymbol{X},\mathbf{z}=\Pi\left(\boldsymbol{X}\right)\right): \ X\in\mathbb{R}^{d\times n}\text{ and }\mathbf{z}\in\mathbb{R}^{b}\right\}\), where \(d\) is the number of channels of the sample and \(n\) its length. The target vectors are one-hot encoded class indexes. For any given sample, the target class has the index \(o\) and the target vector \(\mathbf{t}^{(o)}\) is defined by \(\mathbf{t}_{j}^{(o)}=\delta_{oj},\ j=1,...,n_{classes}\). Therefore \(\mathbf{t}\in\{0,1\}^{n_{classes}}\), \(\delta\) is the Kronecker delta and \(\mathbf{t}_{j}\) is the \(j\)-th element of \(\mathbf{t}\). For DG, the transformation \(\varPi\) could yield a long vector, which often makes training the classifier more difficult. Therefore, we introduce DDR at the end of the pipeline \(\varPi\), such as PCA and CI. The feature vectors are fed into the respective classifiers. 
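To make the segmentation step above concrete, the following is a minimal Python/NumPy sketch of how dynamic and static candidates could be extracted from a stream once the binary motion vector \(m\) is available. The motion-threshold computation and its genetic-algorithm calibration are not reproduced here, and the function and variable names are ours, not the authors'.

```python
import numpy as np

def segment_stream(S, m):
    """Split a data stream into gesture candidates.

    S : (d, n) array, one column per frame.
    m : (n,) array of 0/1 motion flags produced by the motion detector.
    Returns (dynamic_segments, static_frames): each dynamic segment is a (d, k)
    block of contiguous motion frames (Eq. 1); each static frame is the first
    column after a dynamic block ends (Eq. 2).
    """
    m = np.asarray(m).astype(int)
    # Pad so transitions at the very start/end of the stream are detected too.
    diff = np.diff(np.concatenate(([0], m, [0])))
    starts = np.where(diff == 1)[0]    # 0 -> 1 transitions (motion begins)
    ends = np.where(diff == -1)[0]     # 1 -> 0 transitions (exclusive index)

    dynamic_segments = [S[:, i:j] for i, j in zip(starts, ends)]
    static_frames = [S[:, j] for j in ends if j < S.shape[1]]
    return dynamic_segments, static_frames

# Toy example: 6 channels, 12 frames, two dynamic blocks.
S = np.random.randn(6, 12)
m = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0])
dgs, sgs = segment_stream(S, m)
print([seg.shape for seg in dgs], len(sgs))   # [(6, 4), (6, 2)] 2
```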
In this study, our aim is to map the classified gestures into robot actions, such as moving to a target or halting movement. However, before issuing an action command, we must exclude poorly classified patterns from the stream. For example, we can exclude classifications by context and by applying a threshold to the classification score: \[\mathbf{o}_{i}=\begin{cases}\text{SG}_{t},&\text{if }p(\mathbf{y}=\mathbf{t}| \mathbf{z})\geq\tau^{S}\wedge m_{i}=0\\ \text{DG}_{t},&\text{if }p(\mathbf{y}=\mathbf{t}|\mathbf{z})\geq\tau^{D} \wedge m_{i}=1\\ 0,&\text{otherwise}\end{cases} \tag{3}\] where \(\mathbf{o}_{i}\) is the output gesture class number, \(p(\mathbf{y}=\mathbf{t}|\mathbf{z})\) is the likelihood of the classifier's output \(\mathbf{y}\) being in the class \(\mathbf{t}\) given the \(\mathbf{z}\) input, \(\tau\) is the likelihood threshold and \(m_{i}\) is the motion variable associated to \(\mathbf{z}\). ### Feature Dimensionality Reduction For SG no DDR is proposed since the feature space is relatively small. On the contrary, DDR is beneficial for DGs feature extraction due to the relatively large feature space and to standardize the variability of DG length (reducing the gesture length to a small fixed size - resampling). We propose two forms of dimensionality reduction, using CI and PCA. CI is used to transform any variable-length DG sample \(\mathbf{X}^{(i)}\in\mathbb{M}^{d\times n}\) into a fixed-dimension sample \(\mathbf{X}^{\prime}\in\mathbb{M}^{d\times k}\), effectively reducing the sample dimensionality if \(k<n\). PCA performs an orthogonal linear transformation of a set of \(n\)\(d\)-dimensional observations, \(\mathbf{X}\in\mathbb{M}^{d\times n}\), into a subspace defined by the principal components (PC). The PC have necessarily a length smaller than or equal to the number of original dimensions, \(d\). The first PC has the largest variance observed in the data. Each of the following PC is orthogonal to the preceding component and has the highest variance possible under this orthogonality constraint. The PC are the eigenvectors of the covariance matrix and its eigenvalues are a measure of the variance in each of the PC. Therefore, PCA can be used for reducing the dimensionality of gesture data by projecting such data into the PC space and truncating the lowest-ranked dimensions. These dimensions have the lowest eigenvalues, so that truncating them retains most of the variance present in the data, i.e., most of the information of the original data is kept in the reduced space. In this study, the singular vectors of the samples across time are calculated and used as features. The first singular vector determines the direction in the PC-space in which there is the most significative variance along a DG. This means that the singular vector is a measure of the relative variance of each variable along time and good features for DG classification are expected. Another advantage is that the PC can be calculated even before a DG is finished (incomplete data), remaining good predictors. As a result, these features are actually time series since we can calculate them on any window of data starting with the first frame and ending with any arbitrary frame of the sample. ### UC2017 Hand Gesture Dataset We introduce the UC2017 static and dynamic gesture dataset. Most researchers use vision-based systems to acquire hand gesture data. Despite that, we believe that more reliable results from more complex gestures can be obtained with wearable sensor systems. 
There are not many datasets using wearable systems due to the plethora of data gloves in the market and their relative high cost. For these reasons, we opted by creating a new dataset to present and evaluate our gesture recognition framework. The objectives of the dataset are: (1) provide a superset of hand gestures for HRI, (2) have user variability, (3) to be representative of the actual gestures performed in a real-world interaction. We divide the dataset in two types of gestures: SGs and DGs. SGs are described by a single timestep of data that represents a single hand pose and orientation. DGs are variable-length time series of poses and orientations with particular meanings. Some of the gestures of the dataset are correlated with specific interactions in the context of HRI, while the others were arbitrary selected to enrich the dataset and add complexity to the classification problem. The library is composed of 24 SGs and 10 DGs, Fig. 2. The dataset includes SG data from eight subjects with a total of 100 repetitions for each of the 24 classes (2400 samples in total). The DG samples were obtained from six subjects and has cumulatively 131 repetitions of each class (1310 samples in total). All of the subjects are right-handed and performed the Figure 2: Representations of the library of 24 static gestures and 10 dynamic gestures of the UC2017 library. gestures with their left hand. A data glove (CyberGlove II) and a magnetic tracker (Polhemus Liberty) are used to capture the hand shape, position and orientation over time. The glove provides digital signals \(g_{i}\) proportional to the bending angle of each one of the 22 sensors elastically attached to a subset of the hand's joints: 3 flexion sensors per finger, 4 abduction sensors, a palm-arch sensor, and 2 sensors to measure wrist flexion and abduction. These 22 sensors provide a good approximation of the hand's shape. The tracker's sensor is rigidly attached to the glove on the wrist and measures its position in Cartesian space and orientation in respect to a ground-fixed frame (a magnetic source cube defines the reference coordinate system frame). The orientation is the rotation between the fixed frame and the frame of the tracker sensor, given as the intrinsic Euler angles yaw, pitch and roll (ZYX). The Cartesian position \((x,y,z)\) is denominated by \((l_{1},l_{2},l_{3})\) and in terms of orientation the roll angle is denominated by \(l_{4}\), the pitch by \(l_{5}\) and the yaw by \(l_{6}\). Sensor data are fused together online since the sensors have slightly different acquisition rates - 100Hz for the glove and 120Hz for the tracker. Tracker data are under-sampled by gathering only the closest tracker frame in time. A goal was to obtain multiple repetitions of each gesture in the library to build the dataset. We also want the dataset to be representative of real-world conditions, so we must guarantee that the samples are independent. The magnetic tracker reference was fixed in a location free of magnetic interference. The users are then asked to put on the data glove on their own on their left hand, even though all of our test subjects were right-handed. As a result, the sensors are not carefully placed, which should yield a dataset with larger variance. There is no calibration done in this setup. The subjects follow a graphical interface that shows the representation of the gesture to be performed and press a button to save a sample. The order of the gestures was randomized to prevent order dependencies. 
Furthermore, the subjects were requested to repeat the sampling for two to three different sessions. We have also implemented an online movement detection algorithm to facilitate the labeling of DG, namely the starting and ending frames. A final point should be made about the random sampling of the DG. We have included in the samples the transition between the ending pose of the previous sample and the starting pose of a sample (movement epenthesis). Owing to random sampling, there is a high likelihood that all of the possible transitions were recorded. The dataset and accompanying code are publicly available at GitHub [https://github.com/MiguelSimao/UC2017_](https://github.com/MiguelSimao/UC2017_) Classification. Gesture classification must be independent of the subject's position and orientation in space. Since the user is free to move around in the world reference frame \(\{W\}\), we need to make sure that every gesture sample has its feature data reported to their local reference frame \(\{L\}\). Origin and orientation of \(\{L\}\) are defined in relation to \(\{W\}\) at the instant a gesture begins. The proposed transformation is composed of a 3D translation and a rotation around the vertical axis \(z\). We denote \(l_{i}\) and \(g_{i}\) as the \(i\)-th DOF of the tracker and glove, respectively. At the beginning of a DG sample \(\mathbf{X}^{(i)}\), the yaw \(\psi_{0}\) (\(l_{6}\)) and position \(\mathbf{p}_{0}^{W}\) (\(l_{1}\) through \(l_{3}\)) of the first frame are stored. The yaw angle is used to calculate the rotation matrix for each sample. It allows to consistently distinguish directions, such as right, left, front and back, in respect to the user. The rotation is applied to every frame of a sample so that we can translate the coordinate system to frame \(\{L\}\), Fig. 3: \[\mathbf{p}_{i}^{L}=\,\mathbf{R}_{L}^{W\,T}\left(\psi_{0}\right)\cdot\left( \mathbf{p}_{i}^{W}-\mathbf{p}_{0}^{W}\right) \tag{4}\] where \(\mathbf{p}_{i}^{L}\) is the position of the \(i\)th frame in respect to the local reference frame \(\{L\}\), and \(\mathbf{R}_{L}^{W}(\psi_{0})\) is the rotation matrix that represents the rotation around \(z\) of the world reference frame \(\{W\}\) to \(\{L\}\) by \(\psi_{0}\) degrees. The rotation matrix is given by: \[\mathbf{R}_{z}(\psi)=\left(\begin{array}{ccc}\cos\psi&-\sin\psi&0\\ \sin\psi&\cos\psi&0\\ 0&0&1\end{array}\right) \tag{5}\] After the transformation, the yaw angle is \(\psi_{i}^{L}=\psi_{i}^{W}-\psi_{0}^{W}\). In summary, we have a transformation function \(\Psi\) applied to a DG sample \(\mathbf{X}^{(k)}\in\mathbb{M}^{d\times n}\): \[\begin{array}{ll}\Psi:&\mathbb{R}^{6\times\bullet}&\rightarrow\mathbb{R}^{6 \times\bullet}\\ &\mathbf{X}_{ij}^{(k)}&\rightarrow\left(\mathbf{p}_{1}^{L}\ \mathbf{p}_{2}^{L}\ \mathbf{p}_{3}^{L}\ \mathbf{l}_{4}\ \mathbf{l}_{5}\ \psi^{L}\right)^{(k)\,T}_{\bullet j},&\quad j=1,2,...,n\end{array} \tag{6}\] where \((...)^{(k)}_{\bullet j}\) corresponds to the \(j\)-th timestep of sample \(k\). The dataset is split before feature extraction. It is shuffled and split in three subsets: training (70%), validation (15%) and test (15%). The training set was used to train the classifiers and to obtain feature scaling parameters, such as mean and standard deviation. The classifiers' hyperparameters were optimized for accuracy on the validation set. The generalization capability Figure 3: Representation of the transformation of the world coordinates \(\{W\}\) of a gesture to a local coordinate frame \(\{L\}\). 
of the model is measured by the accuracy on the test set. The samples of one subject were held out from the training set to ascertain the performance on new users. Users that trained the system are designated as "trained users" and users that did not train the system as "untrained users". ### Feature Extraction For SGs, the features are all the angles provided by the glove, \(g_{1},g_{2},\ldots,g_{22}\), and the pitch angle, \(l_{5}\) (to differentiate gestures with similar hand shapes and distinct orientations). Thus, the chosen features are simply a subset of the available raw data. Finally, the features are standardized by \(x_{i}^{\prime}=\left(x_{i}-\bar{x}_{i}\right)/s_{i}\), where \(x_{i}^{\prime}\) is the standardized value of feature \(i\), \(x_{i}\) is the value of the feature, and \(\bar{x}_{i}\) and \(s_{i}\) are the mean and standard deviation of the feature in the training set. The validation and test sets are standardized with these same means and standard deviations. For DGs we propose three different sets of features. For all sets, data samples are preprocessed according to \(\Psi\), defined in (6): \[\mathbf{X}^{\prime(i)}=\Psi(\mathbf{X}^{(i)}),\quad\mathbf{X}^{(i)}\in\mathbb{R}^{28\times n} \tag{7}\] where \(n\) is the length of the DG sample. Starting from \(\mathbf{X}^{\prime}\), the first proposed set, DG-CI, uses the full DG data resized to a fixed length by applying CI. The second set, DG-PV, is based on PCA and represents the extraction of the first principal vector (PV) from DG data. The third set is the preprocessed data, which we call RAW. For DG-CI, given a DG sample \(\mathbf{X}^{(i)}\) with \(n\) frames (\(\mathbf{X}^{(i)}\in\mathbb{R}^{28\times n}\)), the goal is to resample it to a fixed size \(n^{\prime}\). The value of \(n^{\prime}\) can be chosen arbitrarily, but higher values have a detrimental effect on the classification accuracy, and training the classifier is faster and often better with fewer features. For all experiments, the value \(n^{\prime}=20\) was used because the gesture lengths in the dataset vary between 20 and 224 frames, and the lowest length was selected. Applying CI, the result is a matrix \(\mathbf{Z}\in\mathbb{R}^{28\times 20}\). By concatenating every frame vertically, \(\mathbf{Z}\) is transformed into a vector \(\mathbf{z}\in\mathbb{R}^{560}\): \[\mathbf{z}^{(i)}=\left(\mathbf{Z}_{\bullet 1}^{(i)\,T},\mathbf{Z}_{\bullet 2}^{(i)\,T},\ldots,\mathbf{Z}_{\bullet 20}^{(i)\,T}\right)^{T} \tag{8}\] In DG-CI the feature extraction involves the whole DG data, so there is a prediction only after the gesture is complete. However, it is beneficial to have an early classification from incomplete data, i.e., before the full gesture data are available. For DG-PV, PCA allows features to be obtained from incomplete gesture data that are still time-coherent. We apply this methodology at each timestep of the gesture. The feature vector for sample \(i\) at timestep \(j\) is calculated by: \[\boldsymbol{z}_{j}^{(i)}=\textit{pv}\left(\left[\mathbf{X}_{\bullet 1}^{\prime(i)}\ \mathbf{X}_{\bullet 2}^{\prime(i)}\ \ldots\ \mathbf{X}_{\bullet j}^{\prime(i)}\right]\right),\quad j>1 \tag{9}\] where \(\textit{pv}\) is a function that extracts the principal vector from its argument and \(\mathbf{X}^{\prime(i)}\) is the standardized sample \(\mathbf{X}^{(i)}\), i.e., with zero mean and unit variance. A single sample may originate multiple feature vectors depending on the timestep \(j\).
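The two DDR pipelines just described can be sketched as follows. This is an illustrative Python implementation of Eqs. (8) and (9) using NumPy and SciPy, not the authors' code; in particular, the per-sample standardization used here is a simplification of the training-set scaling described in the text.

```python
import numpy as np
from scipy.interpolate import interp1d

def ci_features(X, target_len=20):
    """DG-CI: cubic-interpolation resampling of a (d, n) sample to (d, target_len),
    with the resampled frames concatenated into a single feature vector (Eq. 8)."""
    d, n = X.shape
    t_old = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, target_len)
    kind = "cubic" if n >= 4 else "linear"   # cubic interpolation needs >= 4 frames
    Z = interp1d(t_old, X, kind=kind, axis=1)(t_new)
    return Z.T.reshape(-1)                   # frames stacked column by column

def pv_features(X, j):
    """DG-PV: first principal (singular) vector of the standardized frames
    X[:, :j], computable before the gesture is complete (Eq. 9)."""
    W = X[:, :j]
    W = (W - W.mean(axis=1, keepdims=True)) / (W.std(axis=1, keepdims=True) + 1e-8)
    # The first left singular vector is the direction of largest variance
    # across the part of the gesture observed so far.
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, 0]

X = np.random.randn(28, 57)                     # one preprocessed DG sample
z_ci = ci_features(X)                           # shape (560,)
z_pv = pv_features(X, j=int(0.5 * X.shape[1]))  # features at 50% completion
print(z_ci.shape, z_pv.shape)
```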
We used the set \(J^{(i)}=\left\{x:\ 2\leq x\leq n^{(i)}\wedge x\in\mathbb{N}\right\}\) for training and validation of the classifiers, where \(n^{(i)}\) is the sample length. To simplify the display of results, we tested with a subset \(J^{(i)}=\left\{\left\lceil 0.25n\right\rceil,\left\lceil 0.5n\right\rceil, \left\lceil 0.75n\right\rceil,\left\lceil 1.0n\right\rceil\right\}\), where \(\left\lceil\right\rceil\) represents the ceiling function. Simply put, this means that we are testing the features sets extracted from the samples starting at the first timestep and ending at 25%, 50%, 75% and 100% of gesture length. The final step of feature extraction for all features sets is feature scaling, i.e., the standardization of the features as described for SGs. ## 3 Results and Discussion The accuracy of the classifier models on both SG and DG data was obtained considering a segmentation accuracy estimated to be about 98%. The error is mostly oversegmentation, i.e., pauses in the middle of a DG where the subject slows down or hesitates. In this scenario, the classification of the DGs is more likely to fail due to lack of gesture data. ### Static Gestures The accuracy of several classifiers was evaluated on the UC2017 dataset. The objective is to compare the performance of ANNs with other machine learning models: K-Nearest Neighbors (KNN), Support Vector Machines with a Radial Basis Function kernel (RBF SVM), Gaussian Processes (GP), Random Forests (RF), Gaussian Naive Bayes (NB) and Quadratic Discriminant Analysis (QDA). The training time, inference time and accuracy of the models are measured and evaluated as performance parameters. The ANN used, implemented with Keras, is a feed-forward neural network (FFNN). Its architecture and hyperparameters were optimized on the validation dataset using random grid search. Grid search and manual search are the most common techniques for hyperparameter optimization [7]. Grid search generates candidates from a grid of parameter values in which every possible combination of values is tested to optimize hyperparameters (detailed parameters and Python code available in supplementary material). The optimal network has two dense hidden layers of 200 neurons each. Between these layers, there is a Gaussian noise layer with \(\sigma=0.6\). The transfer functions of the dense layers are linear and rectified. A final layer implements the _softmax_ function to obtain the probability distribution over the target classes. For weight regularization, we used the L2 distance with a factor of 0.005 and a weight decay coefficient of \(10^{-7}\). The optimization was done using Stochastic Gradient Descent (SGD) with batches of 32 and a learning rate of 0.001. Furthermore, in order to prevent overfitting, we used early stopping when there is a minimum on the validation loss with a tolerance of 10 epochs. The hyperparameters of the remaining classifiers were optimized using manual search (detailed parameters and Python code available in supplementary material). The performance of the trained classifiers is shown in Table 1. The best performance on the test set was 95.6% on the trained users (92.4% on the untrained), obtained with the RF. 
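For reference, a Keras sketch of the SG network described above is given below. The layer sizes, noise level, regularization factor and optimizer settings follow the text, but the exact layer ordering, activation placement and the omitted weight-decay term reflect our reading of the description rather than the authors' released configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers, optimizers, callbacks

def build_sg_model(n_features=23, n_classes=24):
    """Feed-forward SG classifier sketched from the description above:
    two 200-unit dense layers with Gaussian noise (sigma = 0.6) in between,
    L2 weight regularization (0.005) and a softmax output."""
    reg = regularizers.l2(0.005)
    model = models.Sequential([
        layers.Input(shape=(n_features,)),          # 22 glove angles + pitch
        layers.Dense(200, activation="linear", kernel_regularizer=reg),
        layers.GaussianNoise(0.6),
        layers.Dense(200, activation="relu", kernel_regularizer=reg),
        layers.Dense(n_classes, activation="softmax"),
    ])
    # The paper also uses a small weight-decay coefficient (1e-7); it is left
    # out here to keep the sketch compatible across Keras versions.
    model.compile(optimizer=optimizers.SGD(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True)
model = build_sg_model()
model.summary()
# model.fit(x_train, y_train, batch_size=32, epochs=500,
#           validation_data=(x_val, y_val), callbacks=[early_stop])
```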
The ANN was slightly worse, with 94.6% and 87.9% accuracy on the trained and untrained users, respectively, leading to the conclusion that, in this case, the RF generalizes better to new users than the ANN. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{2}{c}{Time (s)} & \multicolumn{3}{c}{Accuracy (\%)} \\ \cline{2-6} Model & Train & Test & Train & Validation & Test (other) \\ \hline \hline ANN & 127.0 & 0.1 & 97.9 & 94.2 & 94.6 (87.9) \\ QDA & 0.0 & 0.0 & 99.6 & 91.7 & 94.9 (66.7) \\ RBF SVM & 0.1 & 0.2 & 98.0 & 94.2 & 95.2 (83.3) \\ Gaussian Process & 23.8 & 13.1 & 99.8 & 99.1 & 94.9 (69.7) \\ KNN & 0.0 & 4.2 & 96.0 & 89.2 & 93.9 (53.0) \\ Naive Bayes & 0.0 & 0.0 & 93.4 & 88.3 & 91.2 (69.7) \\ Random Forest & 0.1 & 0.0 & 99.8 & 94.7 & 95.6 (92.4) \\ \hline \hline \end{tabular} \end{table} Table 1: Training and inference times of several classifiers and accuracy on the train, validation and test data subsets for SGs. The test scores are divided into the scores of the trained and untrained users (other). The next best performance was the SVM, with 95.2% accuracy. The other models performed very well on the trained users, but clearly overfitted the dataset, since their accuracy on untrained users is below 70%. The accuracy of the ANN model for each individual subject varies between 87.9% (untrained user) and 100.0%, while one of the trained users reached only 88.2%. This subject was one of the authors and was involved in the definition of the gesture library, which may have originated samples that are significantly different from those of the other users. The distribution of errors per class was nearly random, with no gestures being mixed consistently. We also present test results on the feasibility of removing poorly classified gestures by their score, Fig. 4. This analysis was done with the ANN classifier, since it outputs a probability distribution over the classes. We present the score of the winning class \(p(\mathbf{y}^{i}=\mathbf{t}^{i}\mid\mathbf{z}^{i})\), i.e., the probability of the ANN output \(\mathbf{y}^{i}\) being the expected target \(\mathbf{t}^{i}\) given the feature vector \(\mathbf{z}^{i}\). It is impossible to define a score threshold that excludes misclassifications without also excluding some good classifications. Figure 4: On the left, sorted activation values of the winning class for each sample of the SG test set. The red crosses correspond to classification errors. On the right, the true and false negative ratios (TNR/FNR) when we apply a threshold to discard errors. The horizontal dotted lines correspond to the 0.696 and 0.923 thresholds. The trade-off is demonstrated in Fig. 4 on the right, via the true and false negative ratios. If we agree that a 5% False Negative Ratio (FNR) is acceptable, the threshold 0.696 reduces misclassifications by 71%. On the other hand, if we want 95% of misclassifications to be discarded, we lose 22% of valid classifications with a score threshold of 0.923. The latter is not a particularly good trade-off. In this context, another solution should be found, such as generating false samples so that the classifier learns how to better separate them. ### Dynamic Gestures From the UC2017 dataset, four sets of data with different features were considered for the experiments, Table 2. CI-FULL and PV-FULL output a single classification per DG, while PV-TS, RAW-LSTM and RAW-CNN output a classification for each timestep of a DG. The results for the experiments using CI-FULL features are shown in Table 3.
Multiple classifiers were tested and the results report the accuracy achieved by the best ones. The hyperparameters were chosen by manual search for all the classifiers (detailed parameters and Python code available in supplementary material). As an example, the ANN has two hidden layers of 100 and 200 nodes each, their activation function is linear and rectified, and the output is the _softmax_ function. The weights were regularized using the L2 distance. For optimization, SGD was used with a batch size of 128 and a learning rate of 0.01. \begin{table} \begin{tabular}{c c} \hline Feature set & Description \\ \hline \hline CI-FULL & CI applied to raw preprocessed data describing a full \\ & DG sample \\ PV-FULL & PVs applied to raw preprocessed data describing a full \\ & DG sample \\ PV-TS & PVs applied sequentially to a DG sample, starting \\ & from its first frame to an arbitrary timestep \\ RAW-LSTM & Raw preprocessed data classified by LSTMs \\ RAW-CNN & Raw preprocessed data classified by CNNs \\ \hline \end{tabular} \end{table} Table 2: Feature sets considered for the experiments. CI refers to cubic interpolation, PV to principal vectors and RAW to no feature extraction. The accuracy of the classifiers is generally excellent, around 97.0%, in the test set for trained users and up to 96.2% for untrained users. The KNN and RF classifiers did not generalize as well to new users, reaching an accuracy of just 86.8%. This is most likely explained by the size of the feature vector (560) and comparatively low number of samples. On the other hand, the SVM performed nearly as well as the ANN on untrained users (96.2%), but worse on trained users (99.3% vs 97.9%). The results for the experiments using PV-FULL features are shown in Table 3. Fig. 5 shows the ten DGs projected on the plane defined by the first two principal components of the data. We have tested the same classifiers as in CI-FULL, so that we can establish a comparison between the two feature sets. The accuracy is generally and markedly below of those obtained by CI-FULL. The ANN accuracy decreased by 4.9% for trained users, even with an updated architecture composed by two hidden layers with 300 units each. The generalization to untrained users was poor, with just 66.0% accuracy. The RF achieved slightly better results on the training and validation sets than the ANN, but the test accuracy was lower (88.9%). All of the other models performed significantly worse. As a conclusion, the PV features lose more information about the DGs than the CI features, making classification harder. However, the PV features can be calculated with an arbitrary number of frames of data, without the full gesture data, therefore allowing sequential (online) classification. The results for the experiments using PV-TS features are shown in Table 4. For this case we present the results for the best performing models, a feed-forward ANN, KNN, RF and SVM. 
The classifiers' hyperparameters were \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{3}{c}{CI-FULL} & \multicolumn{3}{c}{PV-FULL} \\ \cline{2-6} Model & Train & Val & Test (other) & Train & Val & Test (other) \\ \hline \hline ANN & 100.0 & 98.5 & 99.3 (96.2) & 99.7 & 91.3 & 94.4 (66.0) \\ LDA & 100.0 & 98.0 & 97.2 (92.5) & 67.2 & 60.7 & 68.8 (50.9) \\ KNN & 97.9 & 94.4 & 96.5 (86.8) & 90.0 & 81.1 & 84.7 (62.3) \\ RF & 100.0 & 98.0 & 97.2 (86.8) & 100.0 & 92.3 & 88.9 (67.9) \\ SVM & 99.8 & 98.5 & 97.9 (96.2) & 87.9 & 82.1 & 79.2 (60.4) \\ \hline \hline \end{tabular} \end{table} Table 3: Classification accuracy for the full DG experiments. The test scores are divided into the scores of the trained and untrained (other) users. Figure 5: DGs projected on the plane defined by the first two principal components of the entire dataset - training, validation and test data). The ten DG classes are represented by different colours and markers. From DG1 to DG6 we can observe a cluster and then for DG7, DG8, DG9 and DG10 we have other clusters. This is because the gestures from DG1 to DG6 are all performed with the hand open (no variations in finger angle data). \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Time (s)} & \multicolumn{3}{c}{Train accuracy (\%)} & \multicolumn{3}{c}{Validation accuracy (\%)} \\ \cline{3-11} & Train & Test & 0.25 & 0.50 & 0.75 & 1.00 & 0.25 & 0.50 & 0.75 & 1.00 \\ \hline \hline ANN & 94.7 & 0.2 & 95.3 & 96.9 & 95.4 & 91.1 & 74.5 & 86.7 & 88.3 & 85.2 \\ KNN & 25.6 & 3.4 & 99.9 & 99.9 & 100.0 & 99.5 & 67.9 & 82.1 & 83.2 & 78.6 \\ RF & 54.6 & 0.2 & 100 & 100.0 & 100.0 & 100.0 & 74.5 & 88.8 & 91.3 & 86.7 \\ SVM & 102.4 & 8.8 & 97.8 & 97.8 & 97.1 & 93.1 & 73.0 & 83.7 & 87.2 & 83.2 \\ \hline LSTM & 390.5 & 69.5 & 87.9 & 96.8 & 99.8 & 99.8 & 78.1 & 92.9 & 97.4 & 98.5 \\ CNN & 66.4 & 2.7 & 86.6 & 96.5 & 97.1 & 88.7 & 81.1 & 92.9 & 97.1 & 88.7 \\ \hline \hline \end{tabular} \end{table} Table 4: DG classification sequential accuracy for the time-series based experiments PV-TS, RAW-CNN and RAW-LSTM at 25, 50, 75 and 100% of DG completion. The test scores are divided into the scores of the trained and untrained (other) users. Figure 6: Plots of the features of the DG training set, with 25%, 50%, 75% and 100% of the data, in a reduced 2D principal component space. Each color represents a different class. Results for each classifier are detailed in Table 4. optimized again by manual search, but they remained nearly the same as for PV-FULL, since the problem is similar. However, the ANN architecture was updated to two hidden layers of 512 and 256 nodes. There is also a 0.1\(\sigma\) Gaussian noise layer after each hidden layer and a dropout layer (50% rate) in between. These noise layers help the network generalize better at all timesteps, since there is now a feature vector for each timestep of each DG sample, which in turn originates much more variability in the features. We present the training time for each model and the total inference time for the whole dataset. The accuracy results for each dataset split are presented and the test split was divided into trained and untrained users. According to our designation for incomplete data, the columns "0.25" correspond to the PV feature vectors calculated with the first 25% of timesteps of each DG sample. The same logic applies to "0.50", "0.75" and "1.00". 
The column "1.00" corresponds to all of the DG timesteps being used, which is the same case as PV-FULL. However, in PV-TS we use precisely the same classification model for all timesteps. The best classification accuracy on the test set was obtained with the ANN at all timesteps of the test set. The RF and SVM models reached nearly the same accuracy as the ANN, with 90% at the middle of the DGs and 89% accuracy at 75% of DG completion. At the end of the DGs (100% completion), the accuracy is lower for these models than for the ANN (81% vs 85%). The KNN model performed worse at most timesteps, reaching only 83.3%, 81.2% and 81.9% accuracy at 50%, 75% and 100% of DG completion, respectively. Interestingly, for all of the classifiers, the accuracy is better at 50% and 75% of DG completion than at 100%. This is likely due to the way most of the gestures of the library have a motion in one direction and then move back to the initial position. This would mean that the first half of the gesture is a better predictor of the whole gesture than the second half. For example, the second half of DG1 is very similar to the first half of DG2. To aid in visualization, the features are shown in a 2D PC space (for the test set) in Fig. 6. There is considerable noise at \(J_{0.25}\), which is unfavorable for classification. However, after that, we see stable clusters, such as classes 8 and 10. On the other hand, class 9 has many samples flipping their position at \(J_{1.00}\), which may help explain the drop in accuracy. RAW-LSTM experiments use a LSTM recurrent neural network that is composed of cells that have memory which can be kept or forgotten over-time, depending on the sequence of input data. The LSTM structure and hyperparameters were obtained by manual search. There is a densely connected layer with 512 units and a \(0.4\sigma\) Gaussian noise layer, which is followed by a LSTM layer of 256 cells. After that we implement a 50% dropout layer. The output layer has a _softmax_ activation function. All the other layers have the hyperbolic tangent as transfer function. Additionally, we increase the weight of the timesteps after 50% of sample completion, so that the model optimizes accuracy at later stages of the gesture. Detailed parameters and Python code is available in supplementary material. In terms of accuracy on the trained users of the test split, the LSTM outperforms all the other models at 50%, 75% and 100% of gesture completion, with 95.1%, 97.2% and 96.5% accuracy, respectively. In respect to generalization to new users, the LSTM is significantly better than all other models when 75% or more data is available. At 75% and 100%, the accuracy on new users is 87.3% and 89.1%, respectively. It compares favorably to the second best performer (ANN) at the cost of a significant increase in training and inference times, due to the large number of parameters of the LSTM model. We performed the RAW-CNN experiment in the same conditions as the previous RAW-LSTM. The CNN used had an initial dense hidden layer with 512 nodes and a _tanh_ transfer function. Afterwards, there are two convolutional (1D) layers with 100 filters each, windows of 5 timesteps and rectified linear units as transfer function. Each convolutional layer is followed by Gaussian noise layers of strength \(0.2\sigma\). The model is trained by SGD with a learning rate of 0.001. 
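A minimal Keras sketch of the RAW-LSTM architecture described above is shown below. The layer sizes follow the text, while the optimizer choice, sequence padding/masking strategy and the per-timestep weighting of later frames are either omitted or replaced with assumptions of ours.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_raw_lstm(n_channels=28, n_classes=10):
    """Frame-by-frame DG classifier sketched from the description above.
    Variable-length sequences of scaled raw sensor frames go in; a class
    distribution is emitted at every timestep (return_sequences=True), which
    is what allows classification from incomplete gestures."""
    model = models.Sequential([
        layers.Input(shape=(None, n_channels)),   # variable-length sequences
        layers.Dense(512, activation="tanh"),
        layers.GaussianNoise(0.4),
        layers.LSTM(256, return_sequences=True),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    # The optimizer is not stated for the LSTM in the text; Adam is our
    # assumption. Per-timestep sample weights can be passed to fit() to
    # emphasize timesteps after 50% completion, as the paper does.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_raw_lstm()
model.summary()
```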
While oftentimes a CNN can be used to model sequential data with performance similar to that of an LSTM, in this case it was worse in accuracy at the later stages of the DGs, and also in terms of generalization to new users. Considering 50% of gesture data, the accuracy on trained users (94.4%) is close to the LSTM's 95.1%. The CNN also generalizes better at that timestep than the LSTM (84.9% vs 74.5%). However, at 75% of gesture completion it is significantly worse than the LSTM for all users, and at 100% it is worse than the feed-forward ANN, with just 81.9% and 54.7% accuracy for trained and untrained users, respectively. A decrease of this magnitude was unexpected and is possibly due to the data padding that occurs during the convolution operations. The last timestep of a DG is evaluated by a convolution operation on a window centered on that timestep; however, since the window has length 5, two of those timesteps correspond to CNN padding, which is generated, unreliable data, leading to the low score. The LSTM and CNN approaches have the advantage that the raw sensor data can be fed directly into the model after scaling, unlike all of the others, which require carefully chosen feature extraction methods. For LSTMs, the disadvantage is that the training and inference times are one to two orders of magnitude higher than those of the other classifiers, due to model complexity. The latter is more concerning because classification must be done online for an efficient human-robot interactive process. The experimental setup demonstrated that the inference time per frame for the LSTM model is about 0.16 ms (about 6300 Hz) on a GPU, so it could be an issue for implementation in embedded systems. ### Robot Interface We have implemented a gesture-based human-robot interface composed of gestures from the UC2017 dataset. Since there are delays in data acquisition, data stream segmentation, candidate sample preprocessing, classification, decision making and robot communication, we estimate that the total delay between the end of a gesture performed by the human and the robot reaction is about 300 ms. The collaborative robot is a 7 DOF KUKA iiwa equipped with the Sunrise controller and interfaced using the KUKA Sunrise Toolbox for MATLAB [13]. The attempted collaborative robotic task consists of preparing a breakfast meal, composed of subtasks such as grasping a cereal box and a yogurt bottle and pouring the contents into a bowl, Fig. 7. Figure 7: The human collaborates with the robot to prepare the breakfast meal. The video is available in supplementary material. These tasks were performed by direct robot teleoperation, with the robot actions controlled online by the human gestures and the collaborative process managed by a collaborative robot task manager [9]. The task manager can be set up with a number of required validations so that, when a gesture is wrongly classified, the system acts to avoid any potential danger for the human and/or the equipment. From the library of 24 SGs and 10 DGs, three subjects were taught the mapping between gestures and specific robot commands, such as: stop motion, move along X, Y or Z in Cartesian space, rotate the robot end-effector about X, Y or Z, open/close the gripper, and teleoperate the robot in joystick mode. Whenever the user is not performing a given gesture, the system is paused. All users indicated that the interaction process is very natural, since they can easily select the desired operation modes and the system is intuitive to use.
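To illustrate how recognized gestures could be turned into robot commands behind such a task manager, the sketch below combines the score threshold of Eq. (3) with a required number of consecutive validations before a command is issued. The gesture-to-command table, thresholds and validation count are illustrative placeholders, not the mapping used in the paper, and the actual robot side in the paper is driven through the KUKA Sunrise Toolbox for MATLAB rather than this Python stub.

```python
# Illustrative gesture-to-command mapping; class indices and command names
# are placeholders, not the mapping taught to the subjects in the paper.
COMMANDS = {("SG", 1): "stop_motion",
            ("SG", 2): "open_gripper",
            ("SG", 3): "close_gripper",
            ("DG", 1): "move_x",
            ("DG", 2): "move_y",
            ("DG", 3): "move_z"}

class TaskManager:
    """Issues a robot command only after the same gesture has been recognized
    with sufficient confidence for `n_validations` consecutive decisions,
    mirroring Eq. (3) plus the validation mechanism described above."""

    def __init__(self, tau_static=0.7, tau_dynamic=0.7, n_validations=3):
        self.tau = {"SG": tau_static, "DG": tau_dynamic}
        self.n_validations = n_validations
        self.last, self.count = None, 0

    def update(self, kind, class_id, score):
        # Reject low-confidence or unmapped classifications outright.
        if score < self.tau[kind] or (kind, class_id) not in COMMANDS:
            self.last, self.count = None, 0
            return None
        # Count consecutive agreements before acting.
        if (kind, class_id) == self.last:
            self.count += 1
        else:
            self.last, self.count = (kind, class_id), 1
        if self.count >= self.n_validations:
            self.count = 0
            return COMMANDS[(kind, class_id)]   # hand over to the robot driver
        return None

tm = TaskManager()
for s in [0.90, 0.95, 0.92]:        # three consistent, confident SG decisions
    cmd = tm.update("SG", 1, s)
print(cmd)                          # -> "stop_motion"
```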
During the interactive process, the reached target points can be saved and used in future robot operations. The impedance controlled robot compensates positioning inaccuracies, i.e., the users can physically interact with the robot to adjust positioning. Concerning safety, the subjects indicated that they feel safe in interacting with this robot due to the fact that the KUKA iiwa is a sensitive robot that is able to stop its motion when a pre-defined contact force is reached. ## 4 Conclusion and Future Work This paper presented an online static and dynamic gesture recognition framework for HRI. Experimental results using the UC2017 dataset showed a relatively high classification accuracy on SGs without feature extraction. For DGs, the use of CI features resulted in high offline classification accuracy using a regular ANN model. The achieved accuracy compares favourably with standard classifiers and with deep learning LSTM on online classification with PCA features. The LSTM uses scaled raw data, therefore being more easily extendable to new datasets. Nevertheless, The LSTM degrades when we compare the training and inference time, which is critical for online implementation. The sequential classification of DGs with either PCA features or raw data showed an accuracy that is higher with partial gesture data (50% or 75% of the initial frames of a DG sample) than with the full DG data. In this context, DGs can be accurately classified in anticipation, even before the user finishes the gesture in real world, thus allowing faster and more efficient gesture-based control of a robot by cutting the processing time overheads. The human-robot interactive process demonstrated that is feasible to associate the recognized gesture patterns to robot commands and by this way teleoperate the collaborative robot in an intuitive fashion. Future work will be dedicated to testing the proposed solution with other interaction technologies (vision, IMUs, and EMG). The promising results obtained with the classification from incomplete data will be explored as a way to anticipate robot reaction to human commands.
2308.09949
Scene-Aware Feature Matching
Current feature matching methods focus on point-level matching, pursuing better representation learning of individual features, but lacking further understanding of the scene. This results in significant performance degradation when handling challenging scenes with large viewpoint and illumination changes. To tackle this problem, we propose a novel model named SAM, which applies attentional grouping to guide Scene-Aware feature Matching. SAM handles multi-level features, i.e., image tokens and group tokens, with attention layers, and groups the image tokens with the proposed token grouping module. Our model can be trained with ground-truth matches only and produces reasonable grouping results. With the scene-aware grouping guidance, SAM is not only more accurate and robust but also more interpretable than conventional feature matching models. Extensive experiments on various applications, including homography estimation, pose estimation, and image matching, demonstrate that our model achieves state-of-the-art performance.
Xiaoyong Lu, Yaping Yan, Tong Wei, Songlin Du
2023-08-19T08:56:35Z
http://arxiv.org/abs/2308.09949v2
# Scene-Aware Feature Matching ###### Abstract Current feature matching methods focus on point-level matching, pursuing better representation learning of individual features, but lacking further understanding of the scene. This results in significant performance degradation when handling challenging scenes such as scenes with large viewpoint and illumination changes. To tackle this problem, we propose a novel model named SAM, which applies attentional grouping to guide Scene-Aware feature Matching. SAM handles multi-level features, i.e., image tokens and group tokens, with attention layers, and groups the image tokens with the proposed token grouping module. Our model can be trained by ground-truth matches only and produce reasonable grouping results. With the scene-aware grouping guidance, SAM is not only more accurate and robust but also more interpretable than conventional feature matching models. Sufficient experiments on various applications, including homography estimation, pose estimation, and image matching, demonstrate that our model achieves state-of-the-art performance. ## 1 Introduction Feature matching, which refers to finding the correct correspondence between two sets of features, is a fundamental problem for many computer vision tasks, such as object recognition [17], structure from motion (SFM) [28], and simultaneous localization and mapping (SLAM) [9]. However, with illumination changes, viewpoint changes, motion blur and occlusion, it is challenging to find invariance and obtain robust matches from two images. The classic image matching pipeline generally consists of four parts: (1) feature detection, (2) feature description, (3) feature matching, and (4) outlier filtering. For feature detection, keypoints that have distinguishable features to facilitate matching are detected. For feature description, descriptors are extracted based on keypoints and their neighborhoods. The keypoint positions and the corresponding descriptors are employed as features of the image. Then the feature matching algorithm is applied to find the correct correspondence between the two sets of extracted features. Finally, the outlier filtering algorithm is applied to identify and reject outlier matches based on the obtained matches. The current dominant approaches for image matching combine learning-based descriptors with attention-based feature matching models. Learning-based descriptors extract local features with better representation capabilities by convolutional neural networks (CNN). Attention-based networks can enhance local features by perceiving global information and modeling contextual relationships between local features. While the above feature matching pipeline is the dominant method, the model performance still degrades significantly when dealing with extreme cases, such as scenes with large viewpoint changes and illumination changes, because current methods only find correspondences at the low level, i.e., point-level textures, and do not incorporate scene-aware information, such as grouping information, semantic information, etc. Therefore, intra- and inter-image grouping is introduced to SAM to guide point-level attention-based matching. Figure 1: An illustration of the grouping (top) and matching (bottom) result of our proposed method. Points in corresponding regions in the two images are correctly assigned to the same group, and the grouping information provides scene-aware guidance for point-level feature matching. In this work, we take point-level descriptors as image
tokens and additionally introduce the concept of group tokens, which are selected from image tokens by the proposed group token selection module. Each group token represents a group of features shared by two images, and we intend to assign the corresponding points in both images to the same group, while the points that do not correspond to each other are assigned to different groups. We apply Transformer to model the relationship between image tokens and group tokens for intra- and inter-images, and propose a token grouping module to assign image tokens to different groups based on similarity. A novel multi-level score strategy is proposed to utilize the scene-aware grouping information to give guidance on point-level features, and to obtain reasonable grouping results relying only on ground-truth match supervision. In summary, the contributions of this paper include: * We propose a novel feature matching model SAM, which allows feature matching to rely on more than point-level textures by introducing group tokens to construct scene-aware features. * The multi-level feature attention encoder and token grouping module are proposed to enable image tokens and group tokens to perceive global context information and assign image tokens to different groups. * We are the first to utilize only ground-truth match supervision to enable the feature matching model to perform the scene-aware grouping and matching through the proposed multi-level score. * SAM achieves state-of-the-art performance in multiple experiments while demonstrating outstanding robustness and interpretability. ## 2 Related Work **Local Feature Matching.** For feature detection and description, there are many well-known works on handcrafted methods, such as SIFT [18], SURF [2], BRIEF [3] and ORB [26], which have good performance and are widely used in 3D computer vision tasks. With the rise of deep learning, many learning-based detectors have been proposed to further improve the robustness of descriptors under illumination changes and viewpoint changes, such as R2D2 [24], SuperPoint [7], D2-Net [8] and LF-Net [21]. In addition to detectors, other works have focused on how to get better matches with the obtained local features. SuperGlue [27] is the pioneering work to propose an attention-based feature matching network, where the self-attention layer utilizes global contextual information in one image to enhance features and the cross-attention layer finds potential matches in two images and performs information interaction on potential matching features. OETR [4] further detects the commonly visible region with an object detection algorithm to limit the attention-based feature matching to the overlap region. Besides matching the sparse descriptors generated by the detector, LoFTR [30] applies self- and cross-attention directly to the feature maps extracted by the convolutional neural network and generates matches in a coarse to fine manner. MatchFormer [33] further abandons the CNN backbone network and adopts a pure attention architecture, which can perform both feature extraction and feature matching. There are also methods named cost-volume-based methods [25, 15, 6], which find matches through 4D convolutional neural networks. The above methods model the relationship between features at the point level and do not utilize higher-level scene information, resulting in non-robustness and significant performance degradation when handling large viewpoint changes and illumination changes. 
By introducing the concept of group tokens, we group the two image features based on the scene-aware information and construct multi-level features to make the model more robust when dealing with challenging scenes. **Vision Transformer.** Inspired by the great success of Transformers in the field of Natural Language Processing, researchers have attempted to apply Transformer to Computer Vision. A. Dosovitskiy et al. [32] first proposed a pure Transformer named ViT directly to sequences of image patches for image classification. Many variants are subsequently proposed to solve various tasks. For feature matching, SuperGlue [27] and LoFTR [30] are the first sparse and dense matching methods to apply Transformer. SuperGlue applies classical scaled dot product attention, and LoFTR applies linear attention [29] to reduce runtime and memory consumption. Our model is also an application of Transformer for the feature matching task. The global perceptual field of the attention mechanism facilitates the local features within and between images to perceive the global contextual information, making the matching more robust. And attention operations between image tokens and group tokens enable features to learn scene-aware grouping information. **Grouping.** Most learning-based grouping models follow a pipeline of first learning representations with deep neural networks and then applying the grouping algorithms. For representation learning networks, common types of networks include multi-layer perceptron (MLP) [11], CNN [20] and Variational Autoencoder (VAE) [13]. Our model, on the other hand, applies Transformer as the representation learning network, globally perceiving image tokens and group tokens to learn deep representations. For supervision, there are several commonly used grouping losses, which are K-means loss [37], Cluster assignment hardening [35], Cluster classification loss [10] and Locality preserving loss [12]. Our grouping algorithm relies only on ground-truth matches to encourage the corresponding points of two images to be assigned to the same group. ## 3 Methodology Assume that two sets of features \(d_{s}\in\mathbb{R}^{M\times C}\), \(d_{t}\in\mathbb{R}^{N\times C}\) need to be matched which are descriptors extracted from two images. Subscripts \(s\) and \(t\) stand for the source image and target image, respectively. \(M\), \(N\) are the number of descriptors, and \(C\) is the channels of descriptors. The keypoint positions are denoted as \(p_{s}\in\mathbb{R}^{M\times 3}\), \(p_{t}\in\mathbb{R}^{N\times 3}\), which consist of two coordinates and detection confidence. Our objective is to find the correct correspondence between the two images utilizing descriptors and position information. An overview of the network architecture is shown in Figure 2. The position information is first embedded into descriptors by the wave position encoder [19]. The position-aware descriptors are named image tokens, which are concatenated with the selected group tokens to form multi-level features. The self-attention layer and cross-attention layer are applied for \(L\) times to utilize the global context to enhance multi-level features, which are then re-split into image tokens and group tokens. The token grouping module is applied to assign image tokens to different groups based on similarity. Finally, the multi-level score is constructed from point-level features and group-level features to perform scene-aware matching. 
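To make the data flow of Figure 2 concrete, below is a minimal PyTorch-style skeleton of the forward pass. It is an illustrative sketch rather than the released implementation: a linear layer stands in for the wave position encoder, standard Transformer encoder layers stand in for the paper's interleaved self- and cross-attention, and the token grouping module and Sinkhorn matching are omitted.

```python
import torch
import torch.nn as nn

class SAMSketch(nn.Module):
    """Illustrative skeleton of the SAM forward pass (not the official code)."""
    def __init__(self, dim=256, num_groups=2, num_layers=9):
        super().__init__()
        self.pos_enc = nn.Linear(3, dim)                 # stand-in for the wave position encoder
        self.group_score = nn.Linear(dim, 1)             # group token selection (Eq. 1)
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
             for _ in range(num_layers)]                 # stand-in for the attention layers
        )
        self.num_groups = num_groups

    def select_group_tokens(self, f):
        s = self.group_score(f).squeeze(-1)              # (B, N) group scores
        idx = s.topk(self.num_groups, dim=1).indices     # top-k image tokens become group tokens
        gate = torch.sigmoid(s.gather(1, idx)).unsqueeze(-1)
        g = f.gather(1, idx.unsqueeze(-1).expand(-1, -1, f.size(-1))) * gate
        return g

    def forward(self, desc_s, kpts_s, desc_t, kpts_t):
        f_s = desc_s + self.pos_enc(kpts_s)              # image tokens (position-aware descriptors)
        f_t = desc_t + self.pos_enc(kpts_t)
        g_s, g_t = self.select_group_tokens(f_s), self.select_group_tokens(f_t)
        x_s = torch.cat([f_s, g_s], dim=1)               # multi-level features
        x_t = torch.cat([f_t, g_t], dim=1)
        for layer in self.layers:                        # self-attention only, for brevity
            x_s, x_t = layer(x_s), layer(x_t)
        f_s, g_s = x_s[:, :-self.num_groups], x_s[:, -self.num_groups:]
        f_t, g_t = x_t[:, :-self.num_groups], x_t[:, -self.num_groups:]
        point_score = torch.einsum('bmc,bnc->bmn', f_s, f_t)   # S^f
        group_score = torch.einsum('bkc,blc->bkl', g_s, g_t)   # S^g
        return point_score, group_score

# Example: 512 keypoints per image with 256-d descriptors and (x, y, confidence) positions.
model = SAMSketch()
d_s, d_t = torch.randn(1, 512, 256), torch.randn(1, 512, 256)
p_s, p_t = torch.rand(1, 512, 3), torch.rand(1, 512, 3)
point_score, group_score = model(d_s, p_s, d_t, p_t)
```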
### Group Token Selection The importance of clustering centers has been demonstrated by many clustering algorithms, without which the performance of the algorithm degrades or even appears as a trivial solution. To achieve the best scene-aware grouping effect, the group token selection module is proposed to select the proper group token among the image tokens. As shown in Figure 3, we first apply a linear layer to compute the image tokens \(f\) as group scores \(s\), which measure how effective each image token is as a group token. Then \(k\) image tokens with the highest group scores are selected as the group tokens \(\tilde{g}\). The number of groups \(k\) is set to \(2\), representing overlapping and non-overlapping regions. To enable end-to-end training of the group token selection module, we apply the \(\mathrm{sigmoid}\) function to calculate Figure 3: **Group token selection module. The proper group tokens are chosen from image tokens based on the score projected by image tokens. And the gate signal computed from the score enables the block to be trained end-to-end.** Figure 2: **Proposed architecture. SAM consists of three parts, namely multi-level feature initialization, attention layers, and multi-level score construction. SAM first applies the position encoder to obtain position-aware descriptors, i.e. image tokens, and then selects group tokens to form multi-level features with the image tokens. The multi-level features are processed through attention layers, then the image tokens are assigned to different groups through the token grouping module, and the grouping information is employed to guide point-level matching by constructing multi-level scores.** the group score \(s\) as a gating signal and multiply it with the selected image tokens to get the final group tokens \(g\). The group token selection module is denoted as \[\begin{split}& s=\mathrm{Linear}(f),\\ & idx=\mathrm{rank}(s,k),\\ &\tilde{g}=f(idx,:),\\ & gate=\mathrm{sigmoid}(s(idx)),\\ & g=\tilde{g}\odot gate,\end{split} \tag{1}\] where \(\odot\) represents the element-wise matrix multiplication. ### Multi-level Feature Attention To perform information propagation between image tokens and group tokens, we concatenate them, i.e., \(f_{s}\) and \(g_{s}\), \(f_{t}\) and \(g_{t}\), to form the multi-level features \(\hat{f}_{s}\), \(\hat{f}_{t}\), which are processed by the self-attention layer and the cross-attention layer. Specifically, the two sets of multi-level features are projected into two sets of \(Q,K,V\) by three linear layers, and we compute attention as \[\begin{split}&\mathcal{SA}_{s,t}=\mathrm{Softmax}(Q_{s,t}K_{s,t}^{T }/\sqrt{d})V_{s,t},\\ &\mathcal{CA}_{s}=\mathrm{Softmax}(Q_{s}K_{t}^{T}/\sqrt{d})V_{t},\\ &\mathcal{CA}_{t}=\mathrm{Softmax}((Q_{s}K_{t}^{T})^{T}/\sqrt{d})V_ {s},\end{split} \tag{2}\] where \(\mathcal{SA}\) and \(\mathcal{CA}\) denote self-attention and cross-attention output, and \(d\) is the feature dimension. The inputs of self-attention come from the feature of one image, such as \((Q_{s},K_{s},V_{s})\) or \((Q_{t},K_{t},V_{t})\), while the inputs of the cross-attention come from features of different images. To save computational costs and memory consumption, \(Q_{t}K_{s}^{T}\) is replaced by \((Q_{s}K_{t}^{T})^{T}\), since \(Q_{t}K_{s}^{T}\) and \(Q_{s}K_{t}^{T}\) are highly correlated. Two MLPs are applied to adaptively fuse the self- and cross-attention outputs, and the fusion outputs are used to update the features as the input of the next attention layer. 
\[\begin{split}\hat{f}_{s}^{l+1}&=\hat{f}_{s}^{l}+ \mathrm{MLP}_{s}([\hat{f}_{s}^{l}|\mathcal{SA}_{s}^{l}|\mathcal{CA}_{s}^{l}]), \\ \hat{f}_{t}^{l+1}&=\hat{f}_{t}^{l}+\mathrm{MLP}_{t}([ \hat{f}_{t}^{l}|\mathcal{SA}_{t}^{l}|\mathcal{CA}_{t}^{l}]).\end{split} \tag{3}\] We stack \(L=9\) multi-level feature attention layers, which model the relationship between image tokens, and between image tokens and group tokens of two sets of features. Attention between image tokens finds potential matches at the point level between features, while attention between group tokens and image tokens allows group tokens to perceive global context and facilitates subsequent token grouping module. ### Token Grouping Module As shown in Figure 4, to assign image tokens to different groups based on similarity with group tokens in the embedding space, four parts are employed to form the token grouping module, namely spatial MLP, pre-attention layer, assign-attention layer and channel MLP. Spatial MLP and channel MLP are introduced to the token grouping module to enhance the module capacity, which contain two fully-connected layers and an element-wise nonlinearity. Specifically, spatial MLP is applied to the transposed input \(g^{T}\in\mathbb{R}^{C\times 2}\), mapping \(\mathbb{R}_{2}\mapsto\mathbb{R}_{2}\) to interact between group tokens. And channel MLP is applied to \(g\in\mathbb{R}^{2\times C}\), mapping \(\mathbb{R}_{C}\mapsto\mathbb{R}_{C}\) to interact between channels. \[\begin{split}\mathrm{spatial\ MLP}:& O_{\star,i}=I_{ \star,i}+W_{2}^{s}\sigma(W_{1}^{s}I_{\star,i}),\\ \mathrm{channel\ MLP}:& O_{j,\star}=I_{j,\star}+W_{2 }^{c}\sigma(W_{1}^{c}I_{j,\star}),\end{split} \tag{4}\] where \(W_{1}^{s}\), \(W_{2}^{s},W_{1}^{c}\), \(W_{2}^{c}\) are learnable weight matrices, and \(\sigma\) is the \(\mathrm{GELU}\) function. As core parts of the token grouping module, the pre-attention layer and assign-attention layer perform the information propagation between image tokens and group tokens, and assign image tokens based on similarity, respectively. Both pre-attention layer and assign-attention layer apply three linear layers projecting group tokens as \(Q\) and image tokens as \(K,V\). The difference between the two attention is that \(\mathrm{Softmax}\) is applied for pre-attention layer, and \(\mathrm{Gumbel-Softmax}\) is applied for assign-attention layer. Figure 4: **Token Grouping Module.** The pre-attention layer is employed to establish the relationship between image tokens and group tokens before assignment. The assign-attention layer is employed to assign image tokens to different groups based on the similarity between image tokens and group tokens. The pre-attention and the assign-attention apply \(\mathrm{Softmax}\) and \(\mathrm{Gumbel-Softmax}\) functions respectively. Soft attention weight \(A\) for pre-attention layer and assign attention weight \(\tilde{A}\) for assign-attention layer are denoted as \[\begin{split}& A=\mathrm{Softmax}(QK^{T}),\\ &\tilde{A}_{i,j}=\frac{\exp(Q_{i}K_{j}^{T}+G_{i})}{\sum_{k=1}^{2} \exp(Q_{k}K_{j}^{T}+G_{k})}.\end{split} \tag{5}\] \(G\) are i.i.d. samples drawn from \(\mathrm{Gumbel}(0,1)\) distribution. After getting the assign attention weight, we decide the group to which the image tokens are assigned by the \(\mathrm{argmax}\) over all group tokens. 
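A small sketch of the two attention variants, assuming group tokens of shape (B, 2, C) and image tokens of shape (B, N, C); the linear projections to Q, K, V are omitted and the tokens are used directly, and the temperature tau is an assumed hyperparameter.

```python
import torch

def pre_attention(g, f):
    """Soft attention (Eq. 5, top): group tokens attend over all image tokens."""
    a = torch.softmax(torch.einsum('bkc,bnc->bkn', g, f), dim=-1)   # weights over image tokens
    return torch.einsum('bkn,bnc->bkc', a, f), a                    # updated group tokens, A

def assign_attention(g, f, tau=1.0):
    """Gumbel-Softmax attention (Eq. 5, bottom): normalise over the k groups per image token."""
    logits = torch.einsum('bkc,bnc->bkn', g, f)                     # (B, k, N) similarities
    gumbel = -torch.log(-torch.log(torch.rand_like(logits[..., :1]) + 1e-9) + 1e-9)  # one G_i per group
    return torch.softmax((logits + gumbel) / tau, dim=1)            # assign weights A~

# toy shapes: 2 group tokens, 512 image tokens, 256 channels
g, f = torch.randn(1, 2, 256), torch.randn(1, 512, 256)
g_updated, A = pre_attention(g, f)
A_tilde = assign_attention(g, f)   # each image token receives a distribution over the 2 groups
```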
Since the \(\mathrm{argmax}\) operation is not differentiable, the straight-through trick in [36] is applied to compute the assignment matrix as \[\begin{split}\hat{A}=\tilde{A}_{argmax}+\tilde{A}-\mathrm{sg}( \tilde{A}),\end{split} \tag{6}\] where \(\mathrm{sg}(\cdot)\) is the stop gradient operation. By the straight-through trick, assignment matrix \(\hat{A}\) is numerically equal to the one-hot matrix \(\tilde{A}_{argmax}\), and the gradient of \(\hat{A}\) is equal to the gradient of assign attention weight \(\tilde{A}\). Based on the assignment matrix \(\tilde{A}\), all the image tokens of the same group are weighted summed to update the group tokens. ### Multi-level Score Conventional feature matching methods compute the dot product of two sets of features as the point-level score matrix, and then select matches based on it. We compute both point-level score matrix \(S^{f}\in\mathbb{R}^{M\times N}\) and group-level score matrix \(S^{g}\in\mathbb{R}^{2\times 2}\) for the two sets of features based on image tokens \(f\) and group tokens \(g\). \[\begin{split} S^{f}_{i,j}=<f^{s}_{i},f^{t}_{j}>,\\ S^{g}_{i,j}=<g^{s}_{i},g^{t}_{j}>.\end{split} \tag{7}\] To utilize the group information to guide the point-level matching, the group-level score matrix is expanded to point-level \(\hat{S}^{g}\in\mathbb{R}^{M\times N}\) with the soft attention weights \(A_{s}\in\mathbb{R}^{M\times 2},A_{t}\in\mathbb{R}^{N\times 2}\) of the two sets of features. The point-level score and the group-level score are weighted summed to obtain the multi-level score matrix \(S\), \[\begin{split}\hat{S}^{g}=A_{s}S^{g}A_{t}^{T},\\ S=\alpha S^{f}+\beta\hat{S}^{g},\end{split} \tag{8}\] where \(\alpha\) and \(\beta\) are learnable parameters. The multi-level score matrix \(S\) is taken as the cost matrix of the optimal transport problem. The Sinkhorn algorithm [5] is applied to iteratively obtain the optimal partial assignment matrix \(P\). Based on \(P\), matches with \(P_{ij}\) less than the matching threshold \(\theta\) are filtered first, then the mutual nearest neighbor criterion is employed to select the final matches \(M\). For supervision, ground-truth matches \(M_{gt}\) and non-matching point sets \(I,J\) are computed from homography or camera pose and depth map. The first loss is the negative log-likelihood loss \(Loss_{m}\) over the optimal partial assignment matrix \(P\), which is denoted as \[\begin{split} Loss_{m}=&-\frac{1}{|M_{gt}|}\sum_{(i,j)\in M_{gt}}\log P_{i,j}\\ &-\frac{1}{|I|}\sum_{i\in I}\log P_{i,M+1}-\frac{1}{|J|}\sum_{j \in J}\log P_{N+1,j}.\end{split} \tag{9}\] \(P_{\mathrm{s},M+1}\) and \(P_{N+1,*}\) are learnable dustbins to which non-matching points are assigned. \(Loss_{m}\) provides supervision for matching and implicit supervision for grouping, as it encourages the assignment of corresponding points in two images to the same group, and the non-corresponding points to a different group. To further enhance the ability to group precisely, we apply explicit supervision to assign attention weight \(\tilde{A}\) with ground-truth point sets \(M_{gt}\), \(I\) and \(J\). 
\[\begin{split} Loss_{g}=&-\frac{1}{|M_{gt}|}\sum_{(i,j)\in M_{gt}}(\log\tilde{A}^{s}_{i,0}+\log\tilde{A}^{t}_{j,0})\\ &-\frac{1}{|I|}\sum_{i\in I}\log\tilde{A}^{s}_{i,1}-\frac{1}{|J|} \sum_{j\in J}\log\tilde{A}^{t}_{j,1}.\end{split} \tag{10}\] ## 4 Experiments ### Implementation Details SAM is trained on the Oxford100k dataset [22] for homography estimation experiments and on the MegaDepth dataset [16] for pose estimation and image matching experiments. Our PyTorch implementation of SAM involves \(L=9\) attention layers, and all intermediate features have the same dimension \(C=256\). The matching threshold \(\theta\) is set to 0.2. For the homography estimation experiment, we employ the AdamW [14] optimizer for 10 epochs using the cosine decay learning rate scheduler and 1 epoch of linear warm-up. A batch size of 8 and an initial learning rate of 0.0001 are used. For the outdoor pose estimation experiment, we use the same AdamW optimizer for 30 epochs using the same learning rate scheduler and linear warm-up. A batch size of 2 and an initial learning rate of 0.0001 are used. All experiments are conducted on a single NVIDIA GeForce RTX 2060 SUPER GPU, 16GB RAM and Intel Core i7-10700K CPU. ### Homography Estimation **Dataset.** For the homography estimation, SAM is trained on the Oxford100k dataset [22] and evaluated on the \(\mathcal{R}\)1M dataset [23]. To perform self-supervised training, we randomly sample ground-truth homography by limited parameters to generate image pairs. We resize images to 640\(\times\)480 pixels and detect 512 keypoints in the image. When the detected keypoints are not enough, random keypoints are added for efficient batching. **Baselines.** We employ the Nearest Neighbor (NN) matcher, NN matcher with outlier filtering methods [38, 39], and attention-based matcher SuperGlue [27] as baseline matchers. All matchers in Table 1 apply SuperPoint [7] as input descriptors for a fair comparison. The results of SuperGlue are from our implementation. **Metrics.** The ground-truth matches are computed from the generated homography and the keypoint coordinates of the two images. A match is considered correct if the reprojection error is less than \(3\) pixels. We evaluate the precision and recall based on the ground-truth matches and compute the F1-score. We calculate reprojection error with the estimated homography and report the area under the cumulative error curve (AUC) up to 10 pixels. **Results.** As shown in Table 1, SAM achieved the best performance in the homography estimation experiment. Benefiting from the powerful modeling capability of Transformer, the attention-based matcher is significantly ahead of other methods. Compared to the state-of-the-art outlier filtering method OANet, SAM achieves a \(+21\%\) lead on F1-score. Both as attention-based methods, SAM is ahead of SuperGlue in both precision and recall because grouping information is introduced in addition to point-level matching, eliminating unreasonable matches and strengthening matches in the same group based on the grouping information. It ends up with a \(+1.92\%\) advantage over SuperGlue on the F1-score. Qualitative results of matching and grouping are shown in Figure 6. ### Outdoor Pose Estimation **Dataset.** For the outdoor pose estimation experiment, the model is trained on the MegaDepth dataset [16] and evaluated on the YFCC100M dataset [31]. For training, 200 pairs of images in each scene are randomly sampled for each epoch. 
For evaluation, the YFCC100M image pairs and ground truth poses provided by SuperGlue are used. For training on the MegaDepth dataset, we resize the images to 960\(\times\)720 pixels and detect 1024 keypoints. **Baselines.** The baseline method contains NN matchers with outlier filtering method [39] and attention-based matcher SuperGlue [27]. All matchers in Table 2 apply SuperPoint [7] as input descriptors for a fair comparison. The results of SuperGlue are from our implementation. **Metrics.** The AUC of pose errors at the thresholds (\(5^{\circ}\), \(10^{\circ}\), \(20^{\circ}\)) are reported. Both approximate AUC [39] and exact AUC [27] are evaluated for a fair comparison. **Results.** As shown in Table 2, for the outdoor pose estimation experiments, SAM achieves the best performance at all thresholds in both approximate AUC and exact AUC, demonstrating the robustness of our models. Compared to the attention-based matcher SuperGlue, which only considers point-level matching, our model can bring \((+3.00\%,+3.67\%,+3.12\%)\) improvement on exact AUC and \((+4.98\%,+4.12\%,+3.45\%)\) improvement on approximate AUC at three thresholds of \((5^{\circ},10^{\circ},20^{\circ})\), respectively. In outdoor scenes where large viewpoint changes and occlusions often occur, SAM provides scene understanding information for feature matching by utilizing grouping information to block out irrelevant interference. ### Image Matching **Dataset.** For the image matching experiment, the same model in the outdoor experiment is used. We follow the evaluation protocol as in D2-Net [8] and evaluate models on 108 HPatches [1] sequences, which contain 52 sequences with illumination changes and 56 sequences with viewpoint changes. **Baselines.** Our model is compared with learning-based descriptors SuperPoint [7], D2Net [8], R2D2 [24] and advanced matchers SuperGlue [27], LoFTR [30], Patch2Pix [40], and CAPS [34]. **Metrics.** We compute the reprojection error from the ground truth homography and vary the matching threshold to plot the mean matching accuracy (MMA) curve, which is the average percentage of correct matches for each image. **Results.** As shown in Figure 5, SAM achieves the best overall performance at reprojection error thresholds of 5 \begin{table} \begin{tabular}{l c c c c} \hline \hline Matcher & AUC & Precision & Recall & F1-score \\ \hline NN & 39.47 & 21.7 & 65.4 & 32.59 \\ NN + mutual & 42.45 & 43.8 & 56.5 & 49.35 \\ NN + PointCN & 43.02 & 76.2 & 64.2 & 69.69 \\ NN + OANet & 44.55 & 82.8 & 64.7 & 72.64 \\ SuperGlue & 51.94 & 86.2 & 98.0 & 91.72 \\ **SAM** & **53.80** & **89.54** & **98.13** & **93.64** \\ \hline \hline \end{tabular} \end{table} Table 1: **Homography estimation on \(\mathcal{R}\)1M dataset. AUC @10 pixels is reported. The best method is highlighted in bold.** \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Matcher} & \multicolumn{3}{c}{Exact AUC} & \multicolumn{3}{c}{Approx. AUC} \\ \cline{2-7} & @\(5^{\circ}\) & @\(10^{\circ}\) & @\(20^{\circ}\) & @\(5^{\circ}\) & @\(10^{\circ}\) & @\(20^{\circ}\) \\ \hline NN + mutual & 16.94 & 30.39 & 45.72 & 35.00 & 43.12 & 54.25 \\ NN + OANet & 26.82 & 45.04 & 62.17 & 50.94 & 61.41 & 71.77 \\ SuperGlue & 28.45 & 48.6 & 67.19 & 55.67 & 66.83 & 74.58 \\ **SAM** & **31.45** & **52.27** & **70.31** & **60.65** & **70.95** & **78.03** \\ \hline \hline \end{tabular} \end{table} Table 2: **Pose estimation on YFCC100M dataset. 
Our model lead other methods at all thresholds.** and above, demonstrating the robustness of our model in response to illumination changes and viewpoint changes. Experiments find that detector-based matching methods such as SuperGlue work well in viewpoint change scenes, while detector-free matching methods such as LoFTR work well in illumination change scenes. SAM performs well as a detector-based method in illumination change scenes, achieving an MMA of \(100\%\) at reprojection error thresholds of 8 and above, ahead of the detector-free matchers LoFTR and Patch2Pix and substantially ahead of the detector-based matcher SuperGlue. For viewpoint change scenes, our model is ahead of LoFTR at error thresholds of 6 and above. ### Ablation Study Comprehensive ablation experiments are conducted on the Oxford100k dataset to prove the validity of our designs. **Group Token Selection.** As shown in Table 3, we conduct ablation experiments on the methods of generating group tokens, containing random selection, learnable parameters as group tokens, and group token selection module. Firstly, as methods that can be learned end-to-end, learnable parameters group tokens and selection module outperform un-learnable random selection method, demonstrating the importance of proper group tokens. The token selection module performs better than the learnable parameter tokens because the token selection module can be adaptively selected based on the input image token whereas the learnable parameter tokens are static for all the images, so learnable parameter tokens have weaker generalizability when dealing with diverse scenes such as outdoor, indoor scenes. **Token Grouping Module.** As shown in Table 4, without the spatial MLP or channel MLP, the model performance degrades because MLP provides non-linearity and higher dimensional mapping enabling larger model capacity. And the performance also degrades without the pre-attention layer because the image tokens and the group tokens lose information propagation before assignment. When replacing hard attention with soft attention in the assign-attention, that is, replacing the one-hot assignment matrix with \(A\) obtained by \(\mathrm{Softmax}\), the performance of the token grouping module degrades because soft attention introduces more ambiguity to the group token while hard assignment makes the group tokens more separated. Stronger explicitness and weaker ambiguity in grouping make each group contain less information about weakly correlated image tokens. **Multi-level Score.** Experiments on the multi-level score are conducted to demonstrate the guidance of group-level \begin{table} \begin{tabular}{l c c} \hline \hline Methods & \#Params (M) & Runtime (ms) \\ \hline SuperGlue \(@2048\) keypoints & 12.0 & 104.37 \\ LoFTR \(@800\times 800\) resolution & 11.6 & 373.50 \\ ASpanFormer \(@800\times 800\) resolution & 15.8 & 500.28 \\ SAM \(@2048\) keypoints & 14.8 & 111.42 \\ \hline \hline \end{tabular} \end{table} Table 6: **Efficiency analysis.** Figure 5: **Image Matching on HPatches dataset. MMA curves are plotted by changing the reprojection error threshold. 
Our model achieves the best overall performance at reprojection error thresholds of 5 and above.** \begin{table} \begin{tabular}{l c c c} \hline \hline Methods & Precision & Recall & F1-score \\ \hline (i) w/o pre-attention & 80.85 & 95.84 & 87.71 \\ (ii) w/o spatial MLP & 81.35 & 96.16 & 88.14 \\ (iii) w soft attention & 81.60 & 96.15 & 88.28 \\ (iv) w/o channel MLP & 81.69 & 96.05 & 88.29 \\ (v) **full** & **81.74** & **96.36** & **88.45** \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study on token grouping module.** \begin{table} \begin{tabular}{l c c c} \hline \hline Methods & Precision & Recall & F1-score \\ \hline (i) random & 80.88 & 92.48 & 86.29 \\ (ii) learnable parameters & 81.22 & 95.10 & 87.61 \\ (iii) **selection** & **81.74** & **96.36** & **88.45** \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation study on group token.** score to point-level matching. When the group-level term in the multi-level score is removed, our matcher degenerates to a conventional point-level matcher, and the performance degrades because the model loses the scene-aware information brought by grouping, demonstrating the effectiveness of our multi-level score design. Without the group-level term in score, the token grouping module also loses the supervision brought by ground-truth matches, so the token grouping module cannot be trained to produce reasonable grouping results. ## 5 Limitation and Discussion Firstly, due to the additional group tokens and token grouping module, the computational complexity of SAM increases compared to point-level matchers, but only two groups do not harm the real-time capability of our model. Since the token grouping module only utilizes ground-truth matching supervision, the trained model explicitly assigns overlapping regions in two images to one group. The model is unable to recognize the semantics of buildings or objects in the two scenes to guide matching. We believe that the model is able to group more complex semantic information if appropriate supervision is provided. ## 6 Conclusion In this paper, we present a novel attention-based matcher SAM, which incorporates scene-aware group guidance. Compared to matching features at the point level, we introduce group tokens on the basis of the image tokens. Group tokens and image tokens are concatenated to model the global relationship through attention layers, and the token grouping module assigns image tokens to scene-aware groups. We build multi-level score which utilizes point-level and group-level information to generate matching. Benefiting from the scene-aware information, our model achieves state-of-the-art performance. SAM is also more interpretable than current feature matching models since grouping information can be visualized. ## 7 Acknowledgments The authors would like to thank Prof. Min-Ling Zhang for insightful suggestions and fruitful discussions. This work was jointly supported by the National Natural Science Foundation of China under grants 62001110 and 62201142, the Natural Science Foundation of Jiangsu Province under grant BK20200353, and the Shenzhen Science and Technology Program under grant JCYJ20220530160415035. Figure 6: **Qualitative results for grouping and matching. SAM can effectively assign the corresponding points of two images into the same group when dealing with indoor and outdoor scenes, thus guiding a more robust and accurate point-level matching.**
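As a compact recap of the matching step in Eqs. (7)-(9), the sketch below builds the multi-level score from point-level and group-level similarities and normalises it with a few Sinkhorn iterations; the learnable dustbin row and column are omitted, and all shapes and the iteration count are illustrative.

```python
import torch

def multi_level_score(f_s, f_t, g_s, g_t, A_s, A_t, alpha=1.0, beta=1.0):
    """Eq. (7)-(8): combine the point-level score with the expanded group-level score."""
    S_f = torch.einsum('bmc,bnc->bmn', f_s, f_t)          # point-level score
    S_g = torch.einsum('bkc,blc->bkl', g_s, g_t)          # group-level score (k = l = 2)
    S_g_hat = A_s @ S_g @ A_t.transpose(1, 2)             # expand group score to point level
    return alpha * S_f + beta * S_g_hat

def sinkhorn(S, iters=20):
    """Simplified Sinkhorn normalisation in log space (dustbins omitted)."""
    log_P = S
    for _ in range(iters):
        log_P = log_P - torch.logsumexp(log_P, dim=2, keepdim=True)   # row normalisation
        log_P = log_P - torch.logsumexp(log_P, dim=1, keepdim=True)   # column normalisation
    return log_P.exp()

# toy example with 512 keypoints per image and 2 groups
f_s, f_t = torch.randn(1, 512, 256), torch.randn(1, 512, 256)
g_s, g_t = torch.randn(1, 2, 256), torch.randn(1, 2, 256)
A_s = torch.softmax(torch.randn(1, 512, 2), dim=-1)
A_t = torch.softmax(torch.randn(1, 512, 2), dim=-1)
S = multi_level_score(f_s, f_t, g_s, g_t, A_s, A_t)
P = sinkhorn(S)            # soft assignment from which mutual-nearest matches are selected
```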
2304.09444
Rank-Based Learning and Local Model Based Evolutionary Algorithm for High-Dimensional Expensive Multi-Objective Problems
Surrogate-assisted evolutionary algorithms have been widely developed to solve complex and computationally expensive multi-objective optimization problems in recent years. However, when dealing with high-dimensional optimization problems, the performance of these surrogate-assisted multi-objective evolutionary algorithms deteriorate drastically. In this work, a novel Classifier-assisted rank-based learning and Local Model based multi-objective Evolutionary Algorithm (CLMEA) is proposed for high-dimensional expensive multi-objective optimization problems. The proposed algorithm consists of three parts: classifier-assisted rank-based learning, hypervolume-based non-dominated search, and local search in the relatively sparse objective space. Specifically, a probabilistic neural network is built as classifier to divide the offspring into a number of ranks. The offspring in different ranks uses rank-based learning strategy to generate more promising and informative candidates for real function evaluations. Then, radial basis function networks are built as surrogates to approximate the objective functions. After searching non-dominated solutions assisted by the surrogate model, the candidates with higher hypervolume improvement are selected for real evaluations. Subsequently, in order to maintain the diversity of solutions, the most uncertain sample point from the non-dominated solutions measured by the crowding distance is selected as the guided parent to further infill in the uncertain region of the front. The experimental results of benchmark problems and a real-world application on geothermal reservoir heat extraction optimization demonstrate that the proposed algorithm shows superior performance compared with the state-of-the-art surrogate-assisted multi-objective evolutionary algorithms. The source code for this work is available at https://github.com/JellyChen7/CLMEA.
Guodong Chen, Jiu Jimmy Jiao, Xiaoming Xue, Zhongzheng Wang
2023-04-19T06:25:04Z
http://arxiv.org/abs/2304.09444v4
Rank-Based Learning and Local Model Based Evolutionary Algorithm for High-Dimensional Expensive Multi-Objective Problems ###### Abstract Surrogate-assisted evolutionary algorithms have been widely developed to solve complex and computationally expensive multi-objective optimization problems in recent years. However, when dealing with high-dimensional optimization problems, the performance of these surrogate-assisted multi-objective evolutionary algorithms deteriorate drastically. In this work, a novel Classifier-assisted rank-based learning and Local Model based multi-objective Evolutionary Algorithm (CLMEA) is proposed for high-dimensional expensive multi-objective optimization problems. The proposed algorithm consists of three parts: classifier-assisted rank-based learning, hypervolume-based non-dominated search, and local search in the relatively sparse objective space. Specifically, a probabilistic neural network is built as classifier to divide the offspring into a number of ranks. The offspring in different ranks uses rank-based learning strategy to generate more promising and informative candidates for real function evaluations. Then, radial basis function networks are built as surrogates to approximate the objective functions. After searching non-dominated solutions assisted by the surrogate model, the candidates with higher hypervolume improvement are selected for real evaluations. Subsequently, in order to maintain the diversity of solutions, the most uncertain sample point from the non-dominated solutions measured by the crowding distance is selected as the guided parent to further infill in the uncertain region of the front. The experimental results of benchmark problems and a real-world application on geothermal reservoir heat extraction optimization demonstrate that the proposed algorithm shows superior performance compared with the state-of-the-art surrogate-assisted multi-objective evolutionary algorithms. The source code for this work is available at [https://github.com/JellyChen7/CLMEA](https://github.com/JellyChen7/CLMEA). Classifier-assisted optimization, expensive optimization, high-dimensional multi-objective optimization, rank-based learning, surrogate-assisted evolutionary algorithm. ## I Introduction Multi-objective optimization has attracted extensive research concerns in recent years since many real-world optimization problems contain several conflicting objectives [1]. Multi-objective evolutionary algorithms (MOEAs), inspired by the evolution mechanism of organisms, provide powerful black-box optimizers to search for the optimal Pareto solutions [2-4], and have been widely applied to deal with many complex practical optimization problems. Most existing frameworks of the MOEAs can mainly be classified into three types [5]: dominance-based MOEAs, indicator-based MOEAs, and decomposition-based MOEAs. Dominance-based MOEAs use Pareto dominance to differentiate and sort solutions, such as NSGA-II [6] and MOPSO [7]. Indicator-based MOEAs utilize the performance indicators, e.g., hypervolume (HV) [8] and inverted generational distance (IGD) [9], as the infill criteria. Decomposition-based MOEAs, such as MOEA/D [10], decompose a MOP into a set of sub-problems and optimize them simultaneously. Nevertheless, many real-world engineering problems involve time- or resource-intensive evaluations, e.g., computational fluid dynamics simulations, thermal-hydraulic-mechanical coupling simulations, and physical experiments. 
Evolutionary algorithms become computationally prohibitive due to the large number of function evaluations (FEs) required before convergence when tackling expensive multi-objective optimization problems (EMOPs). Machine learning techniques have made remarkable progress in solving EMOPs [1, 11, 12], which can be mainly classified into three categories from the purpose of the models: surrogate-assisted methods, estimation of Pareto front distributions, and learning-assisted offspring generation. Table I summarizes representative machine learning models employed in surrogate-assisted or learning-assisted MOEAs and corresponding experimental settings and test suites. Surrogate-assisted evolutionary algorithms (SAEAs) have shown their effectiveness in reducing the number of FEs during the optimization process, and have been extensively developed in addressing EMOPs in the past decade [11]. According to the output of surrogates, SAEAs can be roughly divided into approximation-based surrogates and classification-based surrogates. Approximation-based surrogates, including polynomial response surface [13], Gaussian process (also known as Kriging) [14-16], radial basis function network (RBF) [17-19], artificial neural network [20] and support vector regression (SVR) [21-23], build computationally efficient mathematical models to approximate optimization landscapes of interest. It is not trivial to decide the surrogate type without any prior information since it is problem-dependent [24]. Wang _et al._[24] combined polynomial response surface, Kriging, and RBF as the ensemble surrogate to solve single-objective expensive optimization problems. MOEA/D-EGO [15] decomposed a MOP into several sub-problems, and employed kriging model to maximize the expected improvement metric for each sub-problem. K-RVEA [25] used Kriging to approximate objective functions, and generated reference vectors to guide the evolutionary search. NSGAIII-EHVI [26] combined the framework of NSGA-III and Kriging surrogate model, infilling with expected HV improvement criterion, to solve many-objective problems. KTA2 [27] introduced an adaptive infill criterion to identify the most important requirement on convergence, diversity, or uncertainty to deal with EMOPs. RVMM [28] employed an adaptive model management strategy assisted by two sets of reference vectors to calculate an amplified upper confidence bound. MASTO [29] used an adaptive technique to dynamically establish the most promising RBF and Kriging surrogates, and employed multiple infill criteria to solve MOPs. EMMOEA [30] constructed surrogate model with Kriging model for each objective and developed a new performance indicator that balances the diversity and convergence of the algorithm to select promising solutions for real FEs. Nevertheless, most Kriging-assisted evolutionary algorithms is computationally intensive and can mainly be applied to problems with less than 15 decision variables [31, 32, 33]. END-ARMOEA [20] was proposed using dropout neural network to replace Gaussian process as surrogate towards high-dimensional many-objective optimization problems. Classification-based surrogates, such as support vector classifier [22, 34], gradient boosting classifier [35], artificial neural network [36, 37], and k-nearest neighbor (KNN) [38], learn the dominance relationship between the sample points to select promising solutions. 
If the dominance relationships between the Pareto solutions and the offspring candidates are known, fitness values would be less useful, and environment selection can be performed efficiently. CSEA [36] employed feedforward neural network (FNN) to predict the dominance relationship between the selected reference solutions and candidate solutions. PARETO-SVM [39] used SVM to predict the dominance relationship and tightly characterize the current Pareto set and the dominated region. In contrast, MOEA/D-SVM [40] constructed a classifier according to the value of the scalarization function for each sub-problem, and pre-select new generated solutions for real FEs. CPS-MOEA [38] built a KNN model to classify non-dominated solutions as good solutions, and pre-screened candidate solutions to perform real FEs. MCEA/D [34] employed multiple local classifiers as scalarization function, obtained from decomposition for high-dimensional MOPs with dimensions varying from 50 to 150. Nevertheless, the pre-screening ability of classifiers is relatively poor, since the prediction is less informative in comparison with approximation methods [34]. Estimation of Pareto front distributions is able to mitigate the adverse risk of sparse area in Pareto front and improve the diversity of the Pareto solutions [41, 42]. Commonly used methods to approximate the Pareto set are Bayesian network [43], regression decision tree [44], generative adversarial network [45], and manifold learning [41]. Tian _et al._[46] developed a Pareto front estimation method based on local models to guide the search direction of evolutionary algorithms. Li and Kwong [41] applied a principal curve algorithm to obtain an approximation of the Pareto solutions manifold in solving EMOPs. Li _et al._[42] introduced a Pareto front model-based local search method called SMEA-PF. This method built a Pareto front model with current optimal non-dominated solutions, and selected some sparse points to preform local surrogate assisted search. Learning-assisted offspring generation learns and explores the landscape information of problems in the design space to accelerate evolutionary search and converge into promising area efficiently [47]. Liu _et al._[2] systematically overviewed recent advances of learnable MOEAs and summarized the attractive new directions from a unique perspective. He _et al._[45] developed GMOEA driven by generative adversarial networks to generate high-quality solutions in high-dimensional decision space. Liu _et al._[48] proposed an accelerated evolutionary search algorithm ALMOEA, where a multilayer perceptron is adopted to learn a gradient-descent-like direction vector for each solution to reproduce promising solutions. Zhan _et al._[49] developed a learning-aided evolutionary optimization framework that integrates evolution knowledge learned by neural network from the evolution process to accelerate the convergence. Tian _et al._[50] proposed an operator selection method based on reinforcement learning for evolutionary multi-objective optimization. Zhen _et al._[51] also used reinforcement learning to select surrogate-assisted sampling strategy in solving expensive single-objective optimization problems. Wang _et al._[52] developed deep reinforcement learning for generalizable oil reservoir well-control optimization problems. The aforementioned reinforcement learning algorithms are valuable explorations in learnable optimization framework. 
However, they highly rely on the evolutionary operators or surrogate-assisted sampling, which cannot ideally evolve and generate the solutions independently. How to design a more intelligent and learnable optimization framework still requires more efforts. To further accelerate the convergence of optimizers on high-dimensional EMOPs, a novel classifier-assisted rank-based learning and local model-based multi-objective evolutionary algorithm namely CLMEA is proposed. The proposed algorithm consists of three parts: classifier-assisted rank-based learning pre-screening, HV-based non-dominated search, and local search in the sparse objective space. The main contributions of this paper can be summarized as follows: 1) Three novel infill sampling strategies are developed in CLMEA. In contrast to conventional classification-based algorithms, CLMEA presents a classifier-assisted rank-based learning strategy that generates promising offspring solutions under the guidance of classifier rather than generating offspring and then pre-screening promising offspring solutions with a classifier. Rank-based learning strategy can enhance the generation of elite solutions, and the selection of real FEs used the uncertainty of solutions in decision space, which is able to balance the exploration and exploitation. Besides, HV-based non-dominated search is developed to enhance the efficiency in the early optimization period. Hyper-volume based non-dominated search employs the non-dominated search for the surrogate, and pre-screens final non-dominated front by HV improvement. In addition, local surrogates centered at the sparse point of current front selected by the crowding distance are constructed to infill solutions at the sparse non-dominated front, thus improving the divergence of the solutions. The most uncertain non-dominated solutions are selected to conduct real FEs. The cooperation of the three infill sampling strategies can maintain the diversity and convergence of the obtained non-dominated solutions. 2) To the best of our knowledge, this is the first time to introduce probabilistic neural network model as a classifier to rank the candidate solutions, and guide the further offspring generation. Considering the classifier is not enough to generate strong active offspring evolution pressure, rank-based learning strategy is developed to generate more promising and informative candidates, and the most uncertain first-ranking solution in the decision space is selected to conduct real FEs. 3) CLMEA is compared with five state-of-the-art surrogate-assisted MOEAs on high-dimensional MOPs. Experimental results on DTLZ and ZDT problems with dimensions varying from 30 to 200 and a real-world application on heat extraction optimization of fractured geothermal reservoir demonstrate that the proposed algorithm shows better convergence and diversity performance than other state-of-the-art MOEAs. The rest of this article is organized as follows. Section II presents preliminaries of this article. The proposed classifier and local model based evolutionary algorithm is described in Section III. Experimental results and analysis are illustrated in Section IV. Conclusions and discussions are provided in Section V. ## II Preliminaries This section first presents a brief problem definition, then introduces probabilistic neural network and radial basis function surrogates. Finally, the motivation of this work is briefly described. 
### _Problem Definition_ Without loss of generality, an unconstrained MOP can be mathematically modeled as \[\begin{split}\min_{\mathbf{x}}\,\mathbf{F}(\mathbf{x})=(f_{1}(\mathbf{x}),f_{2}(\mathbf{x}),...,f_{m}(\mathbf{x}))\\ s.t.\;\mathbf{x}\in\Omega\end{split} \tag{1}\] where \(\mathbf{x}=(x_{1},x_{2},...,x_{d})\) is the decision vector with \(d\) variables in a feasible region \(\Omega\), and \(\mathbf{F}\) is a set of \(m\) objective functions. Fig. 1 presents the schematic diagram of MOPs and related concepts, with the following definitions involved: **Definition 1** (Pareto dominance): For any two solutions \(\mathbf{x}_{1},\mathbf{x}_{2}\in\Omega\), \(\mathbf{x}_{1}\) dominates \(\mathbf{x}_{2}\) iff \(f_{i}(\mathbf{x}_{1})\leqslant f_{i}(\mathbf{x}_{2})\) for all \(i\in\{1,2,...,m\}\) and \(f_{j}(\mathbf{x}_{1})<f_{j}(\mathbf{x}_{2})\) for at least one \(j\in\{1,2,...,m\}\), referred to as \(\mathbf{x}_{1}\succ\mathbf{x}_{2}\). For instance, in Fig. 1, \(\mathbf{x}_{B}\) dominates \(\mathbf{x}_{C}\), while \(\mathbf{x}_{A}\) and \(\mathbf{x}_{B}\) are mutually non-dominated. **Definition 2** (Pareto optimal solutions and Pareto front): A solution \(\mathbf{x}\in\Omega\) is said to be Pareto optimal iff there is no other solution dominating it. The set of all Pareto optimal solutions is known as the Pareto set. The projection of the Pareto set into the objective space is known as the Pareto front. When the number of objectives \(m\) is larger than 3, the problems are known as many-objective problems (MaOPs) [53]. The performance of MOEAs deteriorates dramatically when solving MaOPs since the number of non-dominated solutions increases exponentially with the number of objectives, causing dominance-based MOEAs to fail to identify the solutions [36]. Note that in expensive optimization problems the objective calculation involves time-consuming numerical simulations or resource-intensive experiments. In the context of expensive optimization problems, dimensions above 30 are considered high-dimensional expensive problems. Most existing SAEAs were developed to solve EMOPs with dimensions less than 30.
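A minimal NumPy sketch of the dominance test in Definition 1 (minimisation of every objective is assumed, as in Eq. (1)):

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (no objective worse, at least one better)."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

# toy two-objective example
print(dominates([1.0, 2.0], [1.5, 2.0]))   # True: no worse in both objectives, strictly better in f1
print(dominates([1.0, 3.0], [1.5, 2.0]))   # False: the two solutions are mutually non-dominated
```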
### _Probabilistic Neural Network_ Probabilistic neural network (PNN) [54], developed from the Bayesian decision strategy, is a feed-forward neural network that is able to map the input pattern into any number of categories. PNN is composed of four layers: an input layer, a pattern layer, a summation layer and an output layer. The input layer receives the input vector with \(d\) variables. In the case of a Gaussian kernel, the pattern layer computes the probability density estimation from the input data to the kernel center as: \[\varphi_{i}(\mathbf{x})=\frac{1}{(2\pi)^{d/2}\sigma^{d}}\exp\left(-\frac{\left\|\mathbf{x}-\mathbf{x}_{i}\right\|^{2}}{2\sigma^{2}}\right) \tag{2}\] where \(\mathbf{x}_{i}\) is the \(i\)th training sample serving as the kernel center and \(\sigma\) is the smoothing parameter. The summation layer then averages the pattern-layer responses belonging to each class, and the output layer assigns the input to the class with the largest averaged response. ### _Motivation_ Local surrogates centered at the sparse points of the current non-dominated front are further constructed to infill solutions in the sparse region, thus improving the divergence of the non-dominated front. The cooperation of the three infill sampling strategies can maintain the diversity and convergence of the obtained non-dominated solutions.
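Before moving on to the algorithm itself, the following NumPy sketch shows how a PNN of this kind can act as a rank classifier: the pattern layer evaluates a Gaussian kernel on every stored sample, the summation layer averages the responses per class, and the output layer returns the class with the largest response. The constant \((2\pi)^{d/2}\sigma^{d}\) is dropped since it is identical for every kernel; the smoothing parameter and the toy data are assumptions.

```python
import numpy as np

class PNNSketch:
    """Minimal probabilistic neural network: one Gaussian kernel per training sample."""
    def __init__(self, sigma=0.2):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, X_new):
        X_new = np.atleast_2d(X_new)
        d2 = ((X_new[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)   # squared distances
        phi = np.exp(-d2 / (2.0 * self.sigma ** 2))                    # pattern layer
        # summation layer: average kernel response per class; output layer: argmax
        scores = np.stack([phi[:, self.y == c].mean(1) for c in self.classes], axis=1)
        return self.classes[scores.argmax(1)]

# toy usage: 30-dimensional decision vectors labelled by non-domination rank (1..3)
rng = np.random.default_rng(0)
X = rng.random((100, 30))
ranks = rng.integers(1, 4, size=100)
clf = PNNSketch(sigma=0.2).fit(X, ranks)
print(clf.predict(rng.random((5, 30))))
```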
## III Classifier-assisted rank-based learning and local model based evolutionary algorithm In this section, the framework of the proposed CLMEA is introduced in detail. Classifier-assisted rank-based learning pre-screening, HV-based non-dominated search, and local search in the sparse objective space are subsequently described. ### _Framework_ The pseudo-code and framework diagram of the proposed CLMEA are presented in Algorithm 1 and Fig. 2, respectively.

```
Input: the maximum number of function evaluations maxFEs; the number of initial sample
       points N; the population size NP; the number of infill solutions n
Output: non-dominated solutions
1)  Generate N initial sample points {x_1, ..., x_N} with LHS
2)  Evaluate the real function values of {x_1, ..., x_N}
3)  FEs <- N
4)  Initialize A <- {(x_1, y_1), ..., (x_N, y_N)}
5)  For i = 1:m
6)      x_min,i <- argmin f_i(x)
7)      Conduct a real function evaluation on x_min,i
8)      FEs <- FEs + 1;  A <- A ∪ {(x_min,i, y_min,i)}
9)  End for
10) While FEs < maxFEs
11)     {x̂_1, ..., x̂_n} <- Perform classifier-assisted rank-based learning pre-screening  // Algorithm 2
12)     Conduct real function evaluations on {x̂_1, ..., x̂_n}
13)     FEs <- FEs + n;  A <- A ∪ {(x̂_1, ŷ_1), ..., (x̂_n, ŷ_n)}
14)     {x̂_1, ..., x̂_n} <- Execute HV-based non-dominated search  // Algorithm 3
15)     Evaluate the function values of {x̂_1, ..., x̂_n}
16)     FEs <- FEs + n;  A <- A ∪ {(x̂_1, ŷ_1), ..., (x̂_n, ŷ_n)}
17)     {x̂_1, ..., x̂_n} <- Conduct local search in the sparse objective space  // Algorithm 4
18)     Evaluate the function values of {x̂_1, ..., x̂_n}
19)     FEs <- FEs + n;  A <- A ∪ {(x̂_1, ŷ_1), ..., (x̂_n, ŷ_n)}
20) End while
```
**Algorithm 1** Proposed CLMEA

Fig. 2: Framework diagram of the proposed CLMEA consisting of three sub-loops. In the main loop, the candidate sample points are selected to perform real function evaluations, while in the sub-loop, the candidate solutions are selected based on the prediction of the surrogates.

Initially, the sample points are generated using Latin hypercube sampling (LHS) from the decision space, and the real function evaluations are performed. All evaluated sample points are then added to the archive \(\mathbf{A}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}\). Classifier-assisted rank-based learning pre-screening, which uses non-dominated rank and distance information, is first performed to generate more promising and informative candidates for real function evaluations. Subsequently, an HV-based non-dominated search is performed to efficiently speed up the convergence. After searching the non-dominated solutions of the surrogate model, the candidates with higher HV improvement are selected for real FEs. Furthermore, in order to keep the diversity of the solutions, the most uncertain sample point from the non-dominated solutions, measured by the crowding distance, is selected as the guided parent for the generation of promising candidate solutions. Then the neighbor points in the objective space are adopted as the parents for further offspring generation. The non-dominated solutions are pre-screened by the surrogate, and the sparse solutions of the current non-dominated front are selected for real FEs to further infill in the sparse region of the current non-dominated front. The optimization process continues until the maximum number of FEs is reached.
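A runnable Python skeleton of the main loop in Algorithm 1. The three infill strategies are replaced by a single random-perturbation stub, a ZDT1-like function stands in for the expensive objectives, and the per-objective minimisation step (lines 5-9) is omitted; only the structure of the loop is intended to be faithful.

```python
import numpy as np
from scipy.stats import qmc

def _stub_infill(X, Y, n):
    """Placeholder for Algorithms 2-4: here, a random perturbation of archive points."""
    idx = np.random.choice(len(X), size=n)
    return np.clip(X[idx] + 0.05 * np.random.randn(n, X.shape[1]), 0.0, 1.0)

def clmea_sketch(objectives, dim, n_init=100, n_infill=1, max_fes=300):
    """Skeleton of Algorithm 1 on the unit hypercube; all infill strategies are stubs."""
    X = qmc.LatinHypercube(d=dim, seed=0).random(n_init)        # LHS initial design
    Y = np.array([objectives(x) for x in X])                    # expensive evaluations
    fes = len(X)
    while fes < max_fes:
        # one candidate per strategy: rank-based pre-screening, HV-based search, local search
        for strategy in (_stub_infill, _stub_infill, _stub_infill):
            X_new = strategy(X, Y, n_infill)
            Y_new = np.array([objectives(x) for x in X_new])
            X, Y = np.vstack([X, X_new]), np.vstack([Y, Y_new])
            fes += len(X_new)
    return X, Y

# toy bi-objective problem (ZDT1-like shape on [0, 1]^30)
def toy_objectives(x):
    g = 1.0 + 9.0 * x[1:].mean()
    return np.array([x[0], g * (1.0 - np.sqrt(x[0] / g))])

X, Y = clmea_sketch(toy_objectives, dim=30)
```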
### _Classifier-Assisted Rank-Based Learning Pre-screening_

Classifier-assisted rank-based learning pre-screening is mainly used to enhance the exploitation of the Pareto front, and also to promote the exploration of sparse and promising areas. The detailed pseudo-code and the process of the classifier-assisted rank-based learning pre-screening strategy are presented in Algorithm 2 and Fig. 3, respectively.

Fig. 3: The process of classifier-assisted rank-based learning pre-screening

Specifically, non-dominated sorting is executed for the solutions in the archive \(\mathbf{A}\), and the top \(NP\) solutions are selected as the population \(\mathbf{P}\leftarrow\{\mathbf{x}_{1},...,\mathbf{x}_{NP}\}\) based on the non-dominated sorting and crowding distance metrics. Solutions in the first level are labeled 1, solutions in the second level are labeled 2, and so on, until all the solutions in \(\mathbf{P}\) are ranked \(\{l_{1},...,l_{NP}\}\). After that, a PNN is constructed as the classifier to predict the rank of the offspring. A rank-based learning operator is then developed to generate more promising and informative candidates:
\[\mathbf{v}_{i}=\mathbf{x}_{r_{1}}+Mu\times(\mathbf{x}_{r_{2}}-\mathbf{x}_{r_{3}}) \tag{6}\]
where \(\mathbf{x}_{r_{1}}\) and \(\mathbf{x}_{r_{2}}\) are individuals randomly selected from the first level, \(\mathbf{x}_{r_{3}}\) is an individual randomly selected from the first or the second level, and \(Mu\) is a scaling factor. Subsequently, the crossover operation and polynomial mutation are conducted to generate the offspring \(\mathbf{u}\). The PNN is then employed to rank the offspring candidates. The loop continues until the proportion of offspring belonging to the first level is higher than 0.9. After the loop ends, the uncertainty of each offspring candidate is calculated based on the Euclidean distance to the nearest evaluated sample point:
\[g(\mathbf{u}_{i})=\min_{\mathbf{x}\in\mathbf{A}}\left\{dis\left(\mathbf{u}_{i},\mathbf{x}\right)\right\} \tag{7}\]
where \(dis\left(\mathbf{u}_{i},\mathbf{x}\right)\) is the Euclidean distance between the offspring \(\mathbf{u}_{i}\) and the evaluated sample points \(\mathbf{x}\) in \(\mathbf{A}\). The most uncertain offspring is selected as follows:
\[\hat{\mathbf{x}}_{c}=\underset{\mathbf{u}_{i}\in\mathbf{P}}{\arg\max}\ g(\mathbf{u}_{i}) \tag{8}\]
where \(\hat{\mathbf{x}}_{c}\) is the selected offspring candidate to be evaluated.
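A compact sketch of the core of this pre-screening step, namely the rank-based learning operator of Eq. (6) and the distance-based uncertainty selection of Eqs. (7)-(8). The surrounding classifier loop (regenerating offspring until more than 90% are predicted to belong to the first level) is only indicated in the docstring; all names are illustrative.

```python
import numpy as np

def rank_based_offspring(first_level, second_level, mu=0.5, rng=None):
    """Eq. (6): v_i = x_r1 + mu * (x_r2 - x_r3), with r1, r2 drawn from the
    first non-dominated level and r3 from the first or second level.
    In the full strategy this is repeated (with crossover, polynomial mutation
    and the PNN rank prediction) until >90% of offspring fall in the first level."""
    rng = rng or np.random.default_rng()
    first_level = np.asarray(first_level, float)
    second_level = np.asarray(second_level, float)
    pool = np.vstack([first_level, second_level]) if second_level.size else first_level
    r1, r2 = first_level[rng.choice(len(first_level), 2, replace=False)]
    r3 = pool[rng.integers(len(pool))]
    return r1 + mu * (r2 - r3)

def most_uncertain(offspring, evaluated):
    """Eqs. (7)-(8): choose the offspring farthest from all evaluated points."""
    offspring, evaluated = np.asarray(offspring, float), np.asarray(evaluated, float)
    d = np.linalg.norm(offspring[:, None, :] - evaluated[None, :, :], axis=2)
    g = d.min(axis=1)                 # distance to the nearest evaluated sample (Eq. 7)
    return offspring[np.argmax(g)]    # most uncertain offspring (Eq. 8)
```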
### _Hypervolume-Based Non-Dominated Search_

HV-based non-dominated search mainly aims at accelerating the convergence of the optimization process, and it is also able to enhance the diversity of the solutions. The pseudo-code of HV-based non-dominated search is shown in Algorithm 3. Concretely, RBF surrogates are constructed for all the objectives. Non-dominated sorting DE is employed to search for the Pareto front of the surrogates. The polynomial mutation operator and binary crossover operator are used to generate the offspring, and the surrogates are employed to predict the objective values for the environmental selection of the new population \(\mathbf{P}\). The evolution of the population ends after \(max\_gen_{1}\) generations. After that, the HV criterion is adopted to select the most promising solutions. A schematic diagram for the infill criterion of HV-based non-dominated search is shown in Fig. 4. The blue shaded area indicates the HV improvement of candidate solutions predicted by the surrogates. The HV improvement of each candidate in \(\mathbf{P}\) can be calculated using the existing Pareto front as follows:
\[\hat{\mathbf{x}}_{h}=\underset{\mathbf{x}_{h}\in\mathbf{P}}{\arg\max}\left[HV(\mathbf{x}_{\mathbf{A}_{1}}\cup\mathbf{x}_{h})-HV(\mathbf{x}_{\mathbf{A}_{1}})\right] \tag{9}\]
where \(\hat{\mathbf{x}}_{h}\) is the selected offspring candidate to be evaluated for the HV-based non-dominated search strategy, \(HV\) denotes the HV calculation, and \(\mathbf{x}_{\mathbf{A}_{1}}\) denotes the first rank solutions in the archive \(\mathbf{A}\).

Fig. 4: Schematic diagram for the infill criterion of HV-based non-dominated search

### _Local Search in the Sparse Objective Space_

Local search in the sparse objective space aims to identify and refine the sparse region in the current non-dominated front. The sparse point in the non-dominated front can be identified and further refined by generating promising solutions nearby with the help of the local surrogates. Algorithm 4 presents the procedure of the proposed infill criterion. A simple schematic diagram is illustrated in Fig. 5 to facilitate the understanding of the local search in the sparse objective space.

Fig. 5: Schematic diagram for the infill criterion of local search in the sparse objective space

Specifically, the sparsest points in the current Pareto front are selected based on the non-dominated sorting and crowding distance of each solution in the current Pareto front. Subsequently, local surrogates around the sparse points are constructed. The sparse points are adopted as the guided parents to further infill the sparse region. After evolving the populations guided by the local surrogates, the first rank candidate solutions are pre-screened using the prediction of the surrogates. After that, the uncertainty of each candidate solution in the objective space is calculated:
\[l(\hat{\mathbf{x}}_{i})=\min_{\mathbf{x}\in\mathbf{A}}\left\{dis(\mathbf{f}(\hat{\mathbf{x}}_{i}),\mathbf{f}(\mathbf{x}))\right\} \tag{10}\]
where \(\hat{\mathbf{x}}_{i}\) is the \(i\)th candidate solution in the population, and \(\mathbf{f}\) is the objective function vector of a solution. The most uncertain offspring in the objective space is selected as follows:
\[\hat{\mathbf{x}}_{l}=\underset{\hat{\mathbf{x}}_{i}\in\mathbf{P}}{\arg\max}\ l(\hat{\mathbf{x}}_{i}) \tag{11}\]
This sampling strategy can further infill the uncertain region of the non-dominated front and maintain the diversity of the final optimal solutions.
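The two ingredients of this criterion, the crowding distance used to locate the sparsest non-dominated points and the objective-space uncertainty of Eqs. (10)-(11), can be sketched as follows; this is an illustrative re-implementation rather than the authors' code.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each point in a non-dominated objective matrix F (n, m)."""
    n, m = F.shape
    cd = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        f_sorted = F[order, j]
        span = f_sorted[-1] - f_sorted[0] or 1.0
        cd[order[0]] = cd[order[-1]] = np.inf          # boundary points
        cd[order[1:-1]] += (f_sorted[2:] - f_sorted[:-2]) / span
    return cd

def sparsest_points(F, n):
    """Indices of the n points with the largest finite crowding distance
    (the endpoints of each objective are excluded, as in Algorithm 4)."""
    cd = crowding_distance(np.asarray(F, float))
    finite = np.where(np.isfinite(cd))[0]
    return finite[np.argsort(cd[finite])[::-1][:n]]

def most_uncertain_in_objective_space(F_cand, F_archive):
    """Eqs. (10)-(11): candidate whose predicted objectives are farthest from
    the objectives of all evaluated points."""
    F_cand, F_archive = np.asarray(F_cand, float), np.asarray(F_archive, float)
    d = np.linalg.norm(F_cand[:, None, :] - F_archive[None, :, :], axis=2)
    return int(np.argmax(d.min(axis=1)))
```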
```
Input: The maximum number of evolution generations \(max\_gen_{2}\); the number of infill solutions \(n\); the population size \(NP\)
Output: The selected offspring candidates \(\{\hat{\mathbf{x}}_{1},\dots,\hat{\mathbf{x}}_{n}\}\)
1) Conduct non-dominated sorting and calculate the crowding distance of each solution in the current Pareto front
2) Select the top \(n\) sparsest solutions, excluding the endpoints of each objective
3) For i = 1:\(n\)
4)   Choose the \(NP\) nearest points to the \(i\)th sparse point in the objective space
5)   Construct local RBF surrogates for all objectives
6)   For j = 1:\(max\_gen_{2}\)
7)     \(\mathbf{Q}\leftarrow\) Reproduce the \(i\)th sparse point using the polynomial mutation and binary crossover operators
8)     Predict the objective function values with the RBF surrogates
9)     \(\mathbf{P}\leftarrow\) Conduct environmental selection
10)  End for
11)  Perform non-dominated sorting with the current Pareto front and \(\mathbf{P}\)
12)  If \(\exists\,\hat{\mathbf{x}}\in\mathbf{P}\) in the first rank
13)    Calculate the uncertainty of each candidate solution
14)    Choose the most uncertain solution in the objective space as \(\hat{\mathbf{x}}_{i}\)
15)  End if
16) End for
```
**Algorithm 4** Local Search in the Sparse Objective Space

## IV Experimental Studies

In this section, the performance of the proposed CLMEA is first investigated regarding the efficacy of classifier-assisted rank-based learning pre-screening, HV-based non-dominated search, and local search in the sparse objective space. CLMEA is also compared with five state-of-the-art algorithms (CPS-MOEA [38], K-RVEA [42], CSEA [36], END-ARMOEA [20], and MCEA/D [34]) on the DTLZ and ZDT benchmark suites (detailed characteristics of the benchmark problems are given in Table S-I in the supplementary material). The source code of CLMEA is publicly available in MATLAB to help readers reproduce the results1. Besides, all the experiments are conducted on the evolutionary multi-objective optimization platform PlatEMO2 [55]. In addition, the computational complexity of the algorithms is analyzed and compared. Finally, a real-world heat extraction optimization of a geothermal reservoir is also employed to further test the performance of the proposed CLMEA.

Footnote 1: [https://github.com/JellyChen7/CLMEA](https://github.com/JellyChen7/CLMEA)

### _Parameter Settings_

Seven DTLZ (DTLZ 1-7) and five ZDT (ZDT 1-4, 6) benchmark suites are employed in this work. Since this work targets high-dimensional multi-objective expensive problems, the number of objective functions and the dimension of variables are set to \(M=\{2,3\}\) and \(D=\{30,50,100,200\}\), respectively. For all algorithms, the initial number of samples is set to 100 when the dimension is less than 100, and to 200 when the dimension is greater than or equal to 100. The termination criterion is the predefined number of FEs. In the experiments, the maximum number of FEs is set to 300 for all benchmarks. The performance of the algorithms is evaluated by the IGD metric on the benchmark suites, and by the HV metric on the real-world application. The Wilcoxon signed-rank test is used with a significance probability \(\alpha=0.05\) to compare the algorithms.

The inner parameter settings of the compared algorithms are kept unchanged except for the initial sampling number. For the parameters of CLMEA, the population size is set to 50. Offspring are generated by DE mutation, polynomial mutation and binary crossover. The maximum number of evolutionary generations in the HV-based non-dominated search strategy is set to 50. The number of points for building the local surrogates is set to 100 for problems with fewer than 100 variables, and to 200 for problems with 100 or more variables.
The number of infill solutions is set to 1 for each infill strategy. The infill number can be larger if the decision maker wishes to make use of parallel computing power. The maximum number of evolutionary generations for the local search in the sparse objective space is set to 10. Besides, all the experiments are performed on MATLAB R2021a.
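For readers who wish to mirror this setup, the stated settings can be collected into a small configuration object; the dictionary below simply restates the parameters listed above, with field names of our own choosing (it is not a PlatEMO interface).

```python
# Experimental settings as described in Section IV-A (field names are illustrative).
CLMEA_SETTINGS = {
    "max_fes": 300,                                  # total expensive function evaluations
    "n_init": lambda d: 100 if d < 100 else 200,     # initial LHS samples
    "population_size": 50,
    "n_infill_per_strategy": 1,                      # can be raised for parallel evaluation
    "max_gen_hv_search": 50,                         # generations in the HV-based search
    "max_gen_local_search": 10,                      # generations in the sparse local search
    "local_surrogate_points": lambda d: 100 if d < 100 else 200,
    "objectives": (2, 3),
    "dimensions": (30, 50, 100, 200),
    "significance_alpha": 0.05,                      # Wilcoxon signed-rank test
}
```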
### _Performance metrics_

The HV indicator and the inverted generational distance (IGD) are the most frequently applied performance metrics to quantitatively assess the convergence and diversity of the final population. IGD calculates the average distance from each Pareto optimal reference solution to the nearest obtained solution. Suppose \(\boldsymbol{P}^{*}\) is a set of evenly distributed reference solutions on the Pareto front, and \(\boldsymbol{Q}\) is a set of non-dominated solutions. The IGD is mathematically defined as:
\[\mathit{IGD}(\boldsymbol{P}^{*},\boldsymbol{Q})=\frac{\sum_{\boldsymbol{p}\in\boldsymbol{P}^{*}}\mathrm{dis}\left(\boldsymbol{p},\boldsymbol{Q}\right)}{|\boldsymbol{P}^{*}|} \tag{12}\]
where \(\mathrm{dis}\left(\boldsymbol{p},\boldsymbol{Q}\right)\) denotes the minimum Euclidean distance between \(\boldsymbol{p}\) and the solutions in \(\boldsymbol{Q}\) provided by MOEAs, and \(|\boldsymbol{P}^{*}|\) denotes the number of reference points. A smaller IGD value indicates a better approximation to the true Pareto front. The calculation of the IGD value requires the distribution of the Pareto front. Nevertheless, for many real-world scientific and engineering problems, explicit information about the Pareto front is not available. The HV indicator [56] is able to measure the dominated volume of the populations in the objective space without prior knowledge of the Pareto front distribution. The HV improvement can be adopted as the performance metric [57] and the infill criterion [22, 26] of SAEAs for MOPs. Suppose that the reference vector \(\boldsymbol{z}=[z_{1},...,z_{m}]\) is a worst point dominated by all the Pareto optimal objective vectors; the HV can then be calculated as:
\[\mathit{HV}(\boldsymbol{Q},\boldsymbol{z})=L\left([\boldsymbol{f}(\boldsymbol{x}_{1}),\boldsymbol{z}]\cup...\cup[\boldsymbol{f}(\boldsymbol{x}_{|\boldsymbol{Q}|}),\boldsymbol{z}]\right) \tag{13}\]
where \(L\) is the usual Lebesgue measure, and \(\{\boldsymbol{x}_{1},...,\boldsymbol{x}_{|\boldsymbol{Q}|}\}\) are the solutions in \(\boldsymbol{Q}\) provided by MOEAs. A larger HV indicates a better approximation to the true Pareto front. The drawback of the HV indicator is its computational cost, especially on MaOPs [58].
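Both metrics follow directly from their definitions; the snippet below implements Eq. (12) and, for the bi-objective case used in most of the experiments, Eq. (13) via a sweep over the sorted front, together with the HV improvement used in Eq. (9). This is a hedged illustration, not PlatEMO's implementation.

```python
import numpy as np

def igd(P_star, Q):
    """Eq. (12): mean distance from each reference point to its nearest solution in Q."""
    P_star, Q = np.asarray(P_star, float), np.asarray(Q, float)
    d = np.linalg.norm(P_star[:, None, :] - Q[None, :, :], axis=2)
    return d.min(axis=1).mean()

def hv_2d(F, z):
    """Eq. (13) for two objectives (minimisation): area dominated by F and bounded by z."""
    F, z = np.asarray(F, float), np.asarray(z, float)
    F = F[np.all(F <= z, axis=1)]            # keep only points that dominate the reference z
    if len(F) == 0:
        return 0.0
    F = F[np.argsort(F[:, 0])]               # sweep along the first objective
    hv, prev_f2 = 0.0, z[1]
    for f1, f2 in F:
        if f2 < prev_f2:                     # skip points dominated within the sweep
            hv += (z[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hv_improvement(front, cand, z):
    """HV gain of adding one candidate to the current front, as used in Eq. (9)."""
    front, cand = np.asarray(front, float), np.asarray(cand, float)
    return hv_2d(np.vstack([front, cand[None, :]]), z) - hv_2d(front, z)
```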
Fig. 8: Non-dominated front by CLMEA and compared algorithms on bi-objective benchmark problems

Fig. 7: Violin plots for the performance ranking on the benchmark functions

### _Effectiveness of the Three Sampling Strategies_

CLMEA is first compared with its variants CLMEA-s1, CLMEA-s2, and CLMEA-s3, which use classifier-assisted rank-based learning pre-screening, HV-based non-dominated search, and local search in the sparse objective space, respectively. Average IGD values of CLMEA and its variants on the bi-objective DTLZ problems, with dimensions 30, 50, 100 and 200, are shown in Table II. The results on the bi-objective ZDT and 3-objective DTLZ problems are listed in Tables S-II and S-III, respectively. All the results on the benchmark functions are obtained from 20 independent runs for statistical analysis: '+', '-', and '\(\approx\)' indicate that the result is statistically significantly better than, worse than, and comparable to CLMEA, respectively. From Table II, CLMEA outperforms CLMEA-s1 and CLMEA-s3 on all benchmark functions, while it is worse than CLMEA-s2 on 7 out of 28 benchmark problems. HV-based non-dominated search shows a promising convergence speed in the early optimization period. However, it is prone to being trapped in local optima, resulting in unstable optimization performance. Classifier-assisted rank-based learning pre-screening is able to infill the non-dominated front to enhance the diversity of the non-dominated front, while local search in the sparse objective space is able to explore the uncertain but promising front. Although CLMEA-s1 and CLMEA-s3 cannot achieve promising results, the diversity of the non-dominated solutions and the exploration of the uncertain but promising area are significant for the optimization even with limited computational budgets. In comparison with CLMEA-s2, CLMEA allocates more computational resources to exploring the uncertain but promising area and to maintaining the diversity of the solutions, which explains why CLMEA performs better than CLMEA-s2 on most benchmark functions.

### _Experimental Results on Bi-Objective DTLZ Problems_

CLMEA is compared with 5 state-of-the-art surrogate-assisted MOEAs (i.e., CPS-MOEA, K-RVEA, CSEA, END-ARMOEA, and MCEA/D) to test its performance on bi-objective DTLZ problems with dimensions 30, 50, 100, and 200. Table III presents the average IGD results of 20 independent runs on the bi-objective DTLZ problems, with the best results highlighted. Fig. 6 shows the convergence curves of CLMEA and the compared algorithms on the bi-objective 100D DTLZ problems. The convergence curves on the 30D, 50D, and 200D DTLZ problems can be found in the supplementary materials (Fig. S-1, S-2, and S-3). The filled color areas indicate the variance of the algorithms. Fig. 7 (a) presents the algorithm rankings for the bi-objective DTLZ problems with dimensions ranging from 30D to 200D. It can be observed that CLMEA achieves the best optimization result on 25 out of 28 DTLZ problems, the exceptions being the 30-100D DTLZ7 problems (slightly worse than K-RVEA). The PF of DTLZ7 is discontinuous, and it is hard to maintain the diversity of the non-dominated solutions. When dealing with the 200D DTLZ7 problem, the performance of CLMEA becomes better than that of K-RVEA. MCEA/D also shows promising performance except on the DTLZ7 problems, where it is only better than CPS-MOEA due to the discontinuous PF. K-RVEA achieves the best results on the 30D-100D DTLZ7 problems, yet performs worse than the other algorithms on most of the other DTLZ benchmark functions. For a better illustration, the non-dominated fronts obtained by CLMEA and the compared algorithms on the DTLZ2 and DTLZ5 problems are shown in Fig. 8. It can be clearly observed that CLMEA outperforms the other surrogate-assisted MOEAs. Note that CLMEA captures the whole PF of the 30D DTLZ2 and DTLZ5 problems. As the dimension increases, the non-dominated front becomes farther away from the PF. The performance of MCEA/D is also promising in comparison with CPS-MOEA, K-RVEA, CSEA and EDN-ARMOEA, although it has not converged due to the limited computational budget.

### _Experimental Results on Bi-Objective ZDT Problems_

ZDT is then employed to further test the performance of CLMEA and the compared algorithms. Table IV shows the average IGD results of 20 independent runs on the bi-objective ZDT problems, with the best results highlighted. Fig. 9 presents the convergence curves of CLMEA and the compared algorithms on the bi-objective 100D ZDT problems. The convergence curves on the 30D, 50D, and 200D ZDT problems can be found in the supplementary materials (Fig. S-4, S-5, and S-6).

Fig. 9: Convergence curves of the six compared algorithms on 2-objective 100D ZDT problems

CLMEA performs better than the other algorithms on most benchmark functions except ZDT3, since ZDT3 is a multimodal problem and its PF is discrete.
The performance of CLMEA on the high-dimensional ZDT1 and ZDT4 problems is better in comparison with the other algorithms. Despite its poor performance on ZDT4, K-RVEA also shows promising performance on the ZDT problems. To take a further look at the final solution distributions, Fig. 8 illustrates the final non-dominated front of ZDT2 from 30D to 200D. CLMEA captures the entire PF on the 30D ZDT2 problem. The solutions of CLMEA on the PF become sparse when the dimension increases to 50D and 100D. When the dimension increases to 200D, the non-dominated solutions of CLMEA become far from the PF. Fig. 7 (b) presents the violin plot of algorithm rankings for the bi-objective ZDT problems with dimensions ranging from 30D to 200D. From the algorithm rankings, CLMEA achieves the best performance on most ZDT problems. CPS-MOEA and MCEA/D obtain worse performance than the other algorithms on the ZDT problems.

### _Experimental Results on 3-Objective DTLZ Problems_

Results on the 3-objective DTLZ problems are presented in Table V. Fig. 10 illustrates the convergence curves of CLMEA and the compared algorithms on the 3-objective 50D DTLZ problems (the convergence curves on the 30D, 100D, and 200D DTLZ problems can be found in Fig. S-7, S-8, and S-9 of the supplementary materials). CLMEA achieves the best average IGD value on 23 out of 28 problems in comparison with the other 5 algorithms. Since the PF of DTLZ7 is discontinuous, it is hard to maintain the diversity of the non-dominated solutions, which results in CLMEA performing worse than K-RVEA. K-RVEA only shows the best performance on the 30D, 50D, and 100D DTLZ7 problems. Fig. 11 shows the final solution distributions of the algorithms on the 3-objective DTLZ problems. For problems from 30D to 50D, CLMEA provides non-dominated solutions close to the PF with great diversity. When the dimension increases to 100D and 200D, the non-dominated solutions become relatively sparse and far from the PF. MCEA/D also presents a promising non-dominated front that is closer to the PF than those of the other compared algorithms except CLMEA. The violin plot for the algorithm rankings is shown in Fig. 7 (c). It can be observed that K-RVEA and EDN-ARMOEA present the worst algorithm rankings, while CLMEA and MCEA/D show promising search abilities.

Fig. 11: Convergence curves of the six compared algorithms on 3-objective DTLZ problems

### _Computational Complexity Analysis_

The computational time for training surrogates and selecting infill sample candidates varies with the surrogate-assisted method. The entire time for the optimization process includes the computational time for training the surrogate, selecting the infill candidates, and conducting real FEs. The computational complexity for training RBF and PNN is on the order of \(O(mn^{2}D)\), and for each prediction it is \(O(mnD)\), where \(m\) is the number of objectives, \(n\) is the number of training samples, and \(D\) is the dimension of the optimization problem. For training the Kriging model, the computational complexity is \(O(mn^{3}D)\) [25], which is time consuming and sensitive to the number of training sample points. To investigate the computational complexity of CLMEA and the compared algorithms, the computational times of all six algorithms on bi-objective 200D DTLZ1 with 300 FEs are compared in Fig. 12.

Fig. 12: Runtime of CLMEA and compared algorithms for 300 FEs on bi-objective 200D ZDT1 problems

The computational times for CPS-MOEA and MCEA/D are negligible, being 0.05 s and 6.48 s, respectively. CLMEA takes slightly less time to run than CSEA.
K-RVEA and END-ARMOEA consume the most runtime due to the costly training of the Kriging model and the dropout neural network. Since the real-world simulation time is hours or even days, the computational time for training surrogates and selecting infill samples can be regarded as negligible. When each FE is not that expensive, MCEA/D is a good choice due to its promising optimization ability and low computational complexity.

### _Application on Geothermal Development System_

One of the potential application scenarios of the proposed CLMEA algorithm is expected to be the field of complex engineering design that involves a series of objectives and numerical simulations. As a reliable renewable and sustainable energy source, geothermal resource development plays a significant role in the transition from fossil fuels. A heat extraction optimization problem of an enhanced geothermal system (EGS) is employed to further test the efficacy of the proposed algorithm. Geothermal resource development aims to maximize heat extraction by injecting cold water and producing heated water. The decision variables to be optimized are the water-injection rates and water-production rates (or bottom hole pressures) of the wells. Maximizing only the long-term revenue may incur risks due to geological uncertainties. To reduce the risks in reservoir management, the short-term potential should also be considered. For a given heat extraction optimization problem, the goal is to maximize the long- and short-term life-cycle net present value:
\[\max_{\mathbf{x}}\ f_{l}(\mathbf{x},\mathbf{z}_{l})\,,\quad\max_{\mathbf{x}}\ f_{s}(\mathbf{x},\mathbf{z}_{s}) \tag{14}\]
\[\text{s.t.}\quad\mathbf{lb}\leq\mathbf{x}\leq\mathbf{ub},\ \mathbf{x}\in\mathbb{R}^{d} \tag{15}\]
\[f(\mathbf{x},\mathbf{z})=r_{c}\mathit{CTEP}(\mathbf{x},\mathbf{z})-r_{i}\mathit{CWI}(\mathbf{x},\mathbf{z})-r_{p}\mathit{CWP}(\mathbf{x},\mathbf{z}) \tag{16}\]
where \(\mathbf{x}\) is the decision vector to be optimized, \(\mathbf{z}\) is the state vector (i.e., the temperature and pressure of the fractures at each grid block) with subscript \(l\) representing long-term and \(s\) representing short-term, \(r_{c}\), \(r_{i}\) and \(r_{p}\) represent the price of thermal energy, the water-injection cost and the water-production cost, respectively, and _CTEP_, _CWI_ and _CWP_ denote the cumulative thermal energy production, cumulative water injection, and cumulative water production, respectively. More details about the problem can be found in [59, 60]. In this case, the geological model contains a large number of fractures, and the simulation of the corresponding numerical model is time-consuming. The discrete fracture network and well-placement distribution of the field-scale EGS are presented in Fig. 13. This problem contains three injection wells and five production wells, as indicated in Fig. 13. The lifetime of the project is 6000 days, and the time-step length is set to 300 days. Thus, the problem involves 160 decision variables and two objective functions in total. Fig. 14 shows the non-dominated solutions and corresponding HV values obtained by the different algorithms on the real-world heat extraction optimization of the EGS. The gray square points are the evaluated points of CLMEA, which reveal the convergence and exploration properties of the algorithm. Notably, CLMEA achieves the best non-dominated solutions and holds the best HV value after 300 simulation evaluations. K-RVEA also shows a promising HV value. Nevertheless, the diversity of K-RVEA is low, with only one final non-dominated solution. For MCEA/D, since the ideal point for the real-world case is unknown, it is hard to distribute the weight vectors, resulting in poor performance on this case.
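The economic objective in Eq. (16) is a linear combination of simulator outputs; the sketch below shows how the long- and short-term objectives could be assembled around a black-box reservoir simulator call. The `simulate` interface, the price coefficients and the 300-day short-term horizon are assumptions made for illustration, since only Eq. (16) is specified in the text.

```python
import numpy as np

def npv(ctep, cwi, cwp, r_c, r_i, r_p):
    """Eq. (16): revenue from produced thermal energy minus water handling costs."""
    return r_c * ctep - r_i * cwi - r_p * cwp

def egs_objectives(x, simulate, horizons=(6000, 300), r_c=1.0, r_i=0.1, r_p=0.1):
    """Return the two objectives (negated NPVs, so that minimisation applies).

    x:        well control vector (injection/production rates per time step)
    simulate: black-box reservoir simulator returning cumulative CTEP, CWI, CWP
              up to a given horizon in days (assumed interface)
    horizons: project lifetime (long-term) and an assumed short-term horizon
    """
    objs = []
    for horizon in horizons:                       # long-term and short-term NPV
        ctep, cwi, cwp = simulate(x, horizon)
        objs.append(-npv(ctep, cwi, cwp, r_c, r_i, r_p))
    return np.array(objs)
```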
Fig. 13: Discrete fracture network and well-placement distribution of the field-scale EGS.

Fig. 14: Non-dominated solutions and corresponding HV values obtained by different algorithms on real-world heat extraction optimization of EGS.

## V Conclusion

In this paper, a classifier-assisted rank-based learning and local model based multi-objective evolutionary algorithm, called CLMEA, has been proposed to solve high-dimensional expensive multi-objective optimization problems. CLMEA contains three sampling strategies: classifier-assisted rank-based learning pre-screening, HV-based non-dominated search, and local search in the sparse objective space. The classifier-assisted rank-based learning pre-screening strategy adopts a PNN as the classifier to divide the offspring into a number of ranks. The offspring then use the rank-based learning strategy to generate more promising and informative candidates for real FEs. The HV-based non-dominated search strategy employs RBF as the surrogate. After searching the Pareto solutions of the surrogate model with an evolutionary algorithm, the candidates with higher HV improvement are selected for real FEs. To keep the diversity of the solutions, the sparse sample points from the Pareto solutions are selected as the guided parents and centers of the local surrogates to further infill the uncertain region of the Pareto front. CLMEA was compared with five state-of-the-art algorithms, i.e., CPS-MOEA, K-RVEA, END-ARMOEA, CSEA, and MCEA/D. The results show its superiority on various test suites, i.e., the DTLZ and ZDT benchmark problems, in comparison with the state-of-the-art algorithms. In addition, the proposed algorithm also shows promising results in a practical application related to geothermal reservoir heat extraction optimization. In our future work, learning-aided offspring generation will be studied, aiming to construct an effective actor to generate elite offspring and to extend the method to single-objective and multi-modal problems. Since many real-world applications and engineering designs involve computationally expensive simulations, this work will primarily focus on optimization with limited computational budgets.

## References

* [1] Y. Jin and B. Sendhoff, "Pareto-based multiobjective machine learning: An overview and case studies," _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_, vol. 38, no. 3, pp. 397-415, 2008. * [2] S. Liu, Q. Lin, J. Li, and K. C. Tan, "A survey on learnable evolutionary algorithms for scalable multiobjective optimization," _IEEE Transactions on Evolutionary Computation_, 2023. * [3] B. Li, J. Li, K. Tang, and X. Yao, "Many-objective evolutionary algorithms: A survey," _ACM Computing Surveys (CSUR)_, vol. 48, no. 1, pp. 1-35, 2015. * [4] C. A. C. Coello, "Evolutionary multi-objective optimization: a historical view of the field," _IEEE computational intelligence magazine_, vol. 1, no. 1, pp. 28-36, 2006. * [5] H. Wang, L. Jiao, and X. Yao, "Two_Arch2: An improved two-archive algorithm for many-objective optimization," _IEEE transactions on evolutionary computation_, vol. 19, no. 4, pp. 524-541, 2014. * [6] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," _IEEE Transactions on Evolutionary Computation_, vol. 6, no. 2, pp. 182-197, 2002. * [7] M. Reyes-Siera and C. A. C.
Coello, "Multi-objective particle swarm optimizers: A survey of the state-of-the-art," _International journal of computational intelligence research_, vol. 2, no. 3, pp. 287-308, 2006. * [8] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," _IEEE transactions on Evolutionary Computation_, vol. 3, no. 4, pp. 257-271, 1999. * [9] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, "Multiobjective optimization test instances for the CEC 2009 special session and competition," _University of Essex, Colchester, UK and Nanyang technological University, Singapore, special session on performance assessment of multi-objective optimization algorithms, technical report_, vol. 264, pp. 1-30, 2008. * [10] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," _IEEE Transactions on evolutionary computation_, vol. 11, no. 6, pp. 712-731, 2007. * [11] Y. Jin, "Surrogate-assisted evolutionary computation: Recent advances and future challenges," _Swam and Evolutionary Computation_, vol. 1, no. 2, pp. 61-70, 2011. * [12] Y. Jin, H. Wang, T. Chugh, D. Guo, and K. Miettinen, "Data-driven evolutionary optimization: An overview and case studies," _IEEE Transactions on Evolutionary Computation_, vol. 23, no. 3, pp. 442-458, 2018. * [13] M. Babaei and I. Pan, "Performance comparison of several response surface surrogate models and ensemble methods for water injection optimization under uncertainty," _Computers & Geosciences_, vol. 91, pp. 19-32, 2016. * [14] B. Liu, Q. Zhang, and G. G. E. Gielen, "A Gaussian process surrogate model assisted evolutionary algorithm for medium scale expensive optimization problems," _IEEE Transactions on Evolutionary Computation_, vol. 18, no. 2, pp. 180-192, 2013. * [15] Q. Zhang, W. Liu, E. Tsang, and B. Virginas, "Expensive multiobjective optimization by MOEA/D with Gaussian process model," _IEEE Transactions on Evolutionary Computation_, vol. 14, no. 3, pp. 456-474, 2009. * [16] Q. Lin, X. Wu, L. Ma, J. Li, M. Gong, and C. A. C. Coello, "An ensemble surrogate-based framework for expensive multiobjective evolutionary optimization," _IEEE Transactions on Evolutionary Computation_, vol. 26, no. 4, pp. 631-645, 2021. * [17] C. Sun, Y. Jin, R. Cheng, J. Ding, and J. Zeng, "Surrogate-assisted cooperative swarm optimization of high-dimensional expensive problems," _IEEE Transactions on Evolutionary Computation_, vol. 21, no. 4, pp. 644-660, 2017. * [18] G. Chen _et al._, "Efficient hierarchical surrogate-assisted differential evolution for high-dimensional expensive optimization," _Information Sciences_, vol. 542, pp. 228-246, 2021. * [19] X. Zhao _et al._, "Surrogate-assisted differential evolution for production optimization with nonlinear state constraints," _Journal of Petroleum Science and Engineering_, vol. 194, p. 107441, 2020. * [20] D. Guo, X. Wang, K. Gao, Y. Jin, J. Ding, and T. Chai, "Evolutionary optimization of high-dimensional multiobjective and many-objective expensive problems assisted by a dropout neural network," _IEEE transactions on systems, man, and cybernetics: systems_, vol. 52, no. 4, pp. 2084-2097, 2021. * [21] A. Rosales-Perez, C. A. C. Coello, J. A. Gonzalez, C. A. Reyes-Garcia, and H. J. Escalante, "A hybrid surrogate-based approach for evolutionary multi-objective optimization," pp. 2548-2555: IEEE. * [22] M. 
Zhao _et al._, "A classification-based surrogate-assisted multiobjective evolutionary algorithm for production optimization under geological uncertainty," _SPE Journal_, vol. 25, no. 05, pp. 2450-2469, 2020. * [23] K. Zhang _et al._, "A double-model differential evolution for constrained waterfeeding production optimization," _Journal of Petroleum Science and Engineering_, vol. 207, p. 109059, 2021. * [24] H. Wang, Y. Jin, and J. Doherty, "Committee-based active learning for surrogate-assisted particle swarm optimisation of expensive problems," _IEEE transactions on cybernetics_, vol. 47, no. 9, pp. 2664-2677, 2017. * [25] T. Chugh, Y. Jin, K. Miettinen, J. Hakanen, and K. Sindhya, "A surrogate-assisted reference vector guided evolutionary algorithm for computationally expensive many-objective optimization," _IEEE Transactions on Evolutionary Computation_, vol. 22, no. 1, pp. 129-142, 2016. * [26] Y. Pang, Y. Wang, S. Zhang, X. Lai, W. Sun, and X. Song, "An Expensive Many-Objective Optimization Algorithm Based on Efficient Expected Hypervolume Improvement," _IEEE Transactions on Evolutionary Computation_, 2022. * [27] Z. Song, H. Wang, C. He, and Y. Jin, "A Kriging-assisted two-archive evolutionary algorithm for expensive many-objective optimization," _IEEE Transactions on Evolutionary Computation_, vol. 25, no. 6, pp. 1013-1027, 2021. * [28] Q. Lin, R. Cheng, Y. Jin, M. Heiderich, and T. Rodemann, "Reference Vector-Assisted Adaptive Model Management for Surrogate-Assisted Many-Objective Optimization," _IEEE Transactions on Systems, Man, and Cybernetics: Systems_, 2022. * [29] M. Wu, L. Wang, J. Xu, P. Hu, and P. Xu, "Adaptive surrogate-assisted multi-objective evolutionary algorithm using an efficient infill technique," _Swam and Evolutionary Computation_, vol. 75, p. 101170, 2022. * [30] S. Qin, C. Sun, Q. Liu, and Y. Jin, "A Performance Indicator Based Infill Criterion for Expensive Multi-Many-objective Optimization," _IEEE Transactions on Evolutionary Computation_, pp. 1-1, 2023. * [31] D. Guo, Y. Jin, J. Ding, and T. Chai, "Heterogeneous ensemble-based infill criterion for evolutionary multiobjective optimization of expensive problems," _IEEE transactions on cybernetics_, vol. 49, no. 3, pp. 1012-1025, 2018. * [32] R. G. Regis, "Evolutionary programming for high-dimensional constrained expensive black-box optimization using radial basis functions," _IEEE Transactions on Evolutionary Computation_, vol. 18, no. 3, pp. 326-347, 2013. * [33] M. Zhao _et al._, "A surrogate-assisted multi-objective evolutionary algorithm with dimension-reduction for production optimization," _Journal of Petroleum Science and Engineering_, vol. 192, p. 107192, 2020. * [34] T. Sonoda and M. Nakata, "Multiple Classifiers-Assisted Evolutionary Algorithm Based on Decomposition for High-Dimensional Multi-Objective Problems," _IEEE Transactions on Evolutionary Computation_, 2022. * [35] F.-F. Wei _et al._, "A classifier-assisted level-based learning swarm optimizer for expensive optimization," _IEEE Transactions on Evolutionary Computation_, vol. 25, no. 2, pp. 219-233, 2020. * [36] L. Pan, C. He, Y. Tian, H. Wang, X. Zhang, and Y. Jin, "A classification-based surrogate-assisted evolutionary algorithm for expensive many-objective optimization," _IEEE Transactions on Evolutionary Computation_, vol. 23, no. 1, pp. 74-88, 2018. * [37] G. Chen, X. Luo, J. J. Jiao, and X. Xue, "Data-driven evolutionary algorithm for oil reservoir well-placement and control optimization," _Fuel_, vol. 326, p. 125125, 2022. * [38] J. 
Zhang, A. Zhou, and G. Zhang, "A classification and Pareto domination based multiobjective evolutionary algorithm," in _2015 IEEE Congress on Evolutionary Computation (CEC)_, 2015, pp. 2883-2890. * [39] I. Loshchilov, M. Schoenauer, and M. Sebag, "A mono surrogate for multiobjective optimization," pp. 471-478. * [40] X. Lin, Q. Zhang, and S. Kwong, "A decomposition based multiobjective evolutionary algorithm with classification," pp. 3292-3299: IEEE. * [41] K. Li and S. Kwong, "A general framework for evolutionary multiobjective optimization via manifold learning," _Neurocomputing_, vol. 146, pp. 65-74, 2014. * [42] F. Li, L. Gao, and W. Shen, "Surrogate-Assisted Multi-Objective Evolutionary Optimization With Pareto Front Model-Based Local Search Method," _IEEE Transactions on Cybernetics_, 2022. * [43] M. Laumanns and J. Oceansek, "Bayesian optimization algorithms for multi-objective optimization," pp. 298-307: Springer. * [44] X. Zhong and W. Li, "A decision-tree-based multi-objective estimation of distribution algorithm," pp. 114-11: IEEE. * [45] C. He, S. Huang, R. Cheng, K. C. Tan, and Y. Jin, "Evolutionary multiobjective optimization driven by generative adversarial networks (GANs)," _IEEE transactions on cybernetics_, vol. 51, no. 6, pp. 3129-3142, 2020. * [46] Y. Tian, L. Si, X. Zhang, K. C. Tan, and Y. Jin, "Local Model-Based Pareto Front Estimation for Multiobjective Optimization," _IEEE Transactions on Systems, Man, and Cybernetics: Systems_, 2022. * [47] X. Lin, Z. Yang, X. Zhang, and Q. Zhang, "Pareto Set Learning for Expensive Multi-Objective Optimization," _arXiv preprint arXiv:2210.08495_, 2022. * [48] S. Liu, J. Li, Q. Lin, Y. Tian, and K. C. Tan, "Learning to Accelerate Evolutionary Search for Large-Scale Multiobjective Optimization," _IEEE Transactions on Evolutionary Computation_, 2022. * [49] Z. H. Zhan, J. Y. Li, S. Kwong, and J. Zhang, "Learning-aided Evolution for Optimization," _IEEE Transactions on Evolutionary Computation_, pp. 1-1, 2022. * [50] Y. Tian, X. Li, H. Ma, X. Zhang, K. C. Tan, and Y. Jin, "Deep reinforcement learning based adaptive operator selection for evolutionary multi-objective optimization," _IEEE Transactions on Emerging Topics in Computational Intelligence_, 2022. * [51] H. Zhen, W. Gong, and L. Wang, "Evolutionary Sampling Agent for Expensive Problems," _IEEE Transactions on Evolutionary Computation_, 2022. * [52] Z. Wang _et al._, "Deep reinforcement learning and adaptive policy transfer for generalizable well control optimization," _Journal of Petroleum Science and Engineering_, vol. 217, p. 11088, 2022. * [53] K. Li, R. Chen, G. Fu, and X. Yao, "Two-archive evolutionary algorithm for constrained multiobjective optimization," _IEEE Transactions on Evolutionary Computation_, vol. 23, no. 2, pp. 303-315, 2018. * [54] D. F. Specht, "Probabilistic neural networks," _Neural networks_, vol. 3, no. 1, pp. 109-118, 1990. * [55] Y. Tian, R. Cheng, X. Zhang, and Y. Jin, "PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]," _IEEE Computational Intelligence Magazine_, vol. 12, no. 4, pp. 73-87, 2017. * [56] L. While, P. Hingston, L. Barone, and S. Huband, "A faster algorithm for calculating hypervolume," _IEEE transactions on evolutionary computation_, vol. 10, no. 1, pp. 29-38, 2006. * [57] J. Bader and E. Zitzler, "HypE: An algorithm for fast hypervolume-based many-objective optimization," _Evolutionary computation_, vol. 19, no. 1, pp. 45-76, 2011. * [58] K. Shang, H. Ishibuchi, L. He, and L. M. 
Pang, "A survey on the hypervolume indicator in evolutionary multiobjective optimization," _IEEE Transactions on Evolutionary Computation_, vol. 25, no. 1, pp. 1-20, 2020. * [59] G. Chen _et al._, "Global and local surrogate-model-assisted differential evolution for waterfoding production optimization," _SPE Journal_, vol. 25, no. 01, pp. 105-118, 2020. * [60] X. Gao, Y. Zhang, Y. Cheng, Z. Yu, Z. Hu, and Y. Huang, "Heat extraction performance of fractured geothermal reservoirs considering aperture variability," _Energy_, vol. 269, p. 126806, 2023. Supplementary material
2306.07210
Muon accelerators -- Muon lifetime measurements as window to Planck scale physics
A prominent effective description of particles interacting with the quantum properties of gravity is through modifications of the general relativistic dispersion relation. Such modified dispersion relations lead to modifications in the relativistic time dilation. A perfect probe for this effect, which goes with the particle energy cubed $E^3$ over the quantum gravity scale $E_{\text{QG}}$ and the square of the particle mass $M^2$ would be a very light unstable particle for which one can detect the lifetime in the laboratory as a function of its energy to very high precision. In this article we conjecture that a muon collider or accelerator would be a perfect tool to investigate the existence of an anomalous time dilation, and with it the fundamental structure of spacetime at the Planck scale.
Iarley P. Lobo, Christian Pfeifer
2023-06-12T16:14:25Z
http://arxiv.org/abs/2306.07210v2
# Muon accelerators - Muon lifetime measurements as window to Planck scale physics

###### Abstract

A prominent effective description of particles interacting with the quantum properties of gravity is through modifications of the general relativistic dispersion relation. Such modified dispersion relations lead to modifications in the relativistic time dilation. A perfect probe for this effect, which goes with the particle energy cubed \(E^{3}\) over the quantum gravity scale \(E_{\rm QG}\) and the square of the particle mass \(M^{2}\), would be a very light unstable particle for which one can detect the lifetime in the laboratory as a function of its energy to very high precision. In this article we conjecture that a muon collider or accelerator would be a perfect tool to investigate the existence of an anomalous time dilation, and with it the fundamental structure of spacetime at the Planck scale.

## I Introduction

A self consistent theory of quantum gravity is still elusive, and the theoretical predictions, as well as experimental searches for traces of quantum gravity, are part of the fundamental ongoing endeavours in fundamental physics [1]. The main caveat to finding quantum gravity effects is that these, if they become relevant at the Planck scale, are highly suppressed: in terms of length by \(\ell_{\rm Pl}\sim 10^{-35}\,\rm m\) or, in terms of energy, by \(E_{\rm Pl}\approx 1.2\times 10^{19}\,\rm GeV\). Until now, no dedicated quantum gravity effects have been detected, and the chances are low to detect them by accident. Therefore, to increase the chances for a detection, mainly two things are needed: first, a promising, rigorously derived prediction from a fundamental or phenomenological model of quantum gravity, in order to know where to look for the effect; second, an amplification mechanism which brings the predicted effect into the range of today's measurement precision. The most prominent amplifiers in the literature appear in the context of cosmic astrophysical or cosmological systems. In the search for quantum gravity induced time delays of high energetic particles (photons or neutrinos), possible tiny modified dispersion relation (MDR) effects should accumulate over the enormous cosmological travel distance of the particles, and become visible in gamma-ray and neutrino telescopes [2; 3; 4]. For quantum gravity modified particle interactions and threshold effects, the amplifier is the power of the energy of the parent particle. In ultra-high energetic cosmic rays the highest energetic particles could trigger such modified interactions, which then become visible in the observation of the rays and of the products of the interaction of the rays with the particles of the atmosphere [5; 6]. Despite these high-energy, large distance amplifiers in cosmic systems, there also exist, maybe a bit unexpectedly, amplifiers which open up a window to Planck scale physics in Earth bound, local physical systems. Prominent examples here are cold atom experiments [7], or manifestations of generalized uncertainty principles and minimal length scenarios [8; 9]. Recently [10], we found an unexpected amplifier in the relativistic lifetime of elementary particles. We investigated how modified dispersion relations lead to a modified time dilation factor between the laboratory frame and the rest frame of the particle, by applying the clock postulate rigorously to Planck scale modified dispersion relations.
The result was that, for certain modified dispersion relations, the time dilation contains a modification which scales with the energy of the particle to the third power, which would lead to a detectable modification for highly energetic particles, even if the effect is suppressed by the Planck scale. In this letter we outline how such a modified time dilation factor could in principle be detected (or excluded) in dedicated accelerator experiments on Earth. The advantage of Earth, or Earth orbit, based experiments over the observation of cosmic messengers is that the setup and the initial conditions of the system under observation can be highly controlled. Moreover, the measurement precision and the measurement time are usually higher. The downside is that the energies which can be achieved on Earth, or Earth orbit, are not as high as the ones of cosmic messengers. ## II Time dilation from modified dispersion relations The time dilation effect in Special Relativity is a consequence of either of two features: 1. the Lorentz transformation between frames, 2. the so-called clock postulate, which states that the proper time that an observer measures between two events on spacetime is given by the length of its trajectory between the two events. This second approach has a conceptual advantage in comparison to the former, as it does not rely on Lorentz transformations as a symmetry of spacetime; it can thus be applied to curved spacetimes as well and includes gravitational time dilations or redshifts. Moreover, the clock postulate can be extended even to spacetime geometries beyond Riemannian spacetime geometry, which makes this approach most suitable to study time dilations in the absence of a spacetime metric. The only geometric ingredient needed on spacetime to employ the clock postulate is a geometric length measure for curves. The most general spacetimes with this property are so-called Finsler spacetimes [11]. Hence, to determine the time dilation induced by Planck scale modified dispersion relations, we need to derive the corresponding length measure/clock for massive particle trajectories. ### The modified time dilation formula In [10] we performed this derivation in all detail for general modifications of the general relativistic dispersion relation. Assuming isotropy, i.e. that the dispersion relation only depends on the norm of the spatial momentum of the particle, \(p=|\vec{p}|\), the MDR can be displayed as \[M^{2}=E^{2}-p^{2}+\frac{1}{E_{\rm QG}}h(E,p)\,, \tag{1}\] where \(E\) is the energy of the particle and \(E_{\rm QG}\) is the energy scale from which quantum gravity effects become apparent, which is supposedly of the order of the Planck energy and suppresses deviations from special/general relativistic expressions. Introducing the zeroth-order, special relativistic factor \(\gamma=(1-v^{2})^{-1/2}=E/M\), the resulting time dilation becomes \[\tau=\frac{t}{\gamma}\left(1+\frac{1}{E_{QG}}f(\gamma)\right)\,, \tag{2}\] where \(f(\gamma)\) is determined from the function \(h(E,p)\) which defines the MDR, \(\tau\) is the proper time in the particle rest frame and \(t\) is the laboratory time. A prominent model is the \(\kappa\)-Poincare dispersion relation in the bicrossproduct basis in which \(h(E,p)=-Ep^{2}\). 
The resulting time dilation, expressed in terms of the energy \(E\) of the particle, contains a term \(E^{3}\) \[t(E,M)=\gamma\tau\left(1-\frac{M}{2E_{QG}}(\gamma^{2}-1)\right)=t_{\rm SR}\left(1+\frac{M}{2E_{QG}}\left(\frac{M}{E}-2\frac{E}{M}+\frac{E^{3}}{M^{3}}\right)\right)\,, \tag{3}\] where \(t_{\rm SR}=E\tau/M\) is the result from Special Relativity. Thus, even if \(E_{\rm QG}\) is the Planck energy, measuring the lifetime of particles of reasonably high energy \(E\) can lead to an observable deviation from the special relativistic prediction. Such an amplification effect does not only emerge for this example MDR, but for many models with polynomial modifications. Therefore, we propose to devise a dedicated experiment to measure \(t(E,M)\) for unstable particles, since these measurements are a window to new physics and capable of constraining, or finding evidence for, deviations from local Lorentz invariance. The special relativistic term, i.e. the term of \(t(E,M)\) linear in \(E\), might just be the first-order approximation to a more complex dependence of the dilated lifetime of particles on their energy. ### The actual observable In actual experiments like accelerators, one usually does not consider \(t(E,M)\) when one analyses the lifetime of fundamental particles. The quantity considered is the lifetime of the particle at rest \(\tau\) as a function of its mass as defined by the Particle Data Group [12], \(M_{\rm PDG}\), its transverse momentum \(p_{\rm T}\) and its decay distance \(L_{xy}\) in the laboratory [13; 14], \[\tau=\tau(M_{\rm PDG},L_{xy},p_{\rm T})\,. \tag{4}\] We consider a simplified situation of motion in 1+1 dimensions, such that for the modified dispersion relations we identify \(M_{\rm PDG}=M\) and \(p_{\rm T}=p=|\vec{p}|\), so that the only variable left to be identified is \(L_{xy}\). We consider (1) as the Hamilton function which determines the motion of the particles, \(M^{2}=H(x,p)\). The Hamilton equations of motion lead to energy and momentum conservation, since we are considering translation invariant dispersion relations \(H(x,p)=H(p)\), \[\dot{E}=-\partial_{t}H=0\,,\quad\dot{p}=-\partial_{x}H=0\,, \tag{5}\] and determine the worldlines of the particles, here for the \(\kappa\)-Poincare model \(h(E,p)=-Ep^{2}\), \[\dot{t}=\partial_{E}H=2E-\frac{1}{E_{\rm QG}}p^{2}\,,\quad\dot{x}=\partial_{p}H=-2p\left(1+\frac{1}{E_{\rm QG}}E\right)\,. \tag{6}\] Hence for the motion of the particle in the lab frame we find, using the dispersion relation \(M^{2}=E^{2}-p^{2}-\frac{1}{E_{\rm QG}}Ep^{2}\) to replace \(E\) in terms of \(p\) and \(M\), \[v=\frac{dx}{dt}=\frac{\dot{x}}{\dot{t}}=-\frac{p}{\sqrt{M^{2}+p^{2}}}-\frac{p}{E_{\rm QG}}\Rightarrow\quad x(t)=x(0)-t\left(\frac{p}{\sqrt{M^{2}+p^{2}}}+\frac{p}{E_{\rm QG}}\right)\,. \tag{7}\] Fixing the boundary conditions such that the particle gets created at \(x(0)\) and decays at \(x(t)\), we identify its decay length as \(L_{xy}=|x(t)-x(0)|\). Using (3) to express the decay length as a function of the proper lifetime \(\tau\) of the particle and solving for \(\tau\) yields \[\tau=\frac{L_{xy}M}{p}\left(1-\frac{1}{E_{\rm QG}}\frac{\sqrt{M^{2}+p^{2}}(2M^{2}+p^{2})}{2M^{2}}\right)\,. \tag{8}\]
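The algebra leading from the Hamilton equations (6) to the on-shell group velocity in (7) can be cross-checked symbolically. The following minimal SymPy sketch (our own illustration, not code from [10] or from any analysis framework) expands everything to first order in \(1/E_{\rm QG}\) and recovers \(v=-p/\sqrt{M^{2}+p^{2}}-p/E_{\rm QG}\).

```python
import sympy as sp

# E: energy, p: momentum, M: mass; eps is a bookkeeping parameter eps = 1/E_QG.
E, p, M, eps = sp.symbols('E p M epsilon', positive=True)

# kappa-Poincare-type Hamilton function, eq. (1) with h(E, p) = -E p^2.
H = E**2 - p**2 - eps*E*p**2

# First-order solution of the mass-shell condition M^2 = H(E, p) for the energy.
E0 = sp.sqrt(M**2 + p**2)
E_onshell = E0 + eps*p**2/2
assert sp.simplify(sp.series(H.subs(E, E_onshell) - M**2, eps, 0, 2).removeO()) == 0

# Lab-frame velocity from the Hamilton equations (6): v = xdot/tdot = (dH/dp)/(dH/dE).
v = sp.diff(H, p) / sp.diff(H, E)
v_onshell = sp.series(v.subs(E, E_onshell), eps, 0, 2).removeO()

# Compare with eq. (7), v = -p/sqrt(M^2 + p^2) - p/E_QG, written with eps = 1/E_QG.
assert sp.simplify(sp.expand(v_onshell - (-p/E0 - eps*p))) == 0
print("group velocity of eq. (7) reproduced at first order in 1/E_QG")
```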
Our prediction is that if the quantity (8) is calculated using data from particle accelerators with higher and higher energies, discrepancies should eventually emerge when we compare this result with the lower-energy measurements, if the determination of the lifetime in accelerators is done assuming the special relativistic expression \(L_{xy}M/p\). We propose to perform searches for deviations from Lorentz invariance this way. ### Towards detecting anomalous time delays in particle decays Our discussion in the previous section suggests that the special relativistic time dilation might just be the first order of a Taylor series of a more complicated time dilation mechanism, whose coefficients could be determined by measuring \(t(E,M)\) to high precision. Any deviation from the linear relation would indicate a modification of local Lorentz invariance. The question is, how could one measure the time dilation to the required precision? What would be good search strategies? In the following we discuss some ideas, with the aim of starting a discussion about a dedicated measurement of \(t(E,M)\). #### ii.3.1 Measuring \(t(E,M)\) in different experiments working at different energy scales. This strategy is similar, in a certain sense, to the one that has been carried out recently concerning time delays of photons emitted from gamma ray bursts (GRBs) [15; 16; 17], where an ensemble of astrophysical events is considered in the same analysis and deviations from a horizontal line (in which there is no time delay) would be a signature of a deformed kinematics of photons propagating in a quantum spacetime. Unfortunately, there are some difficulties for this approach: although we are capable of reaching controllable energies at the TeV scale with current experiments, like those performed at the LHC (and possibly, in the future, the FCC), this is only valid for the hadron beams. The produced particles have energies (or transverse momenta) distributed over a range of a few GeV [18] (the issue of the necessary energy for reaching Planck scale sensitivity will be discussed below). This is known from dedicated analyses of the distribution of produced particles per energy [19] in such experiments, and this range is considered as an input in the likelihood analysis carried out for the determination of particles' lifetimes. Therefore, we do not have direct information regarding the duo "unstable-particle dilated lifetime" and "unstable-particle energy" and, even if we had it, the energies/momenta involved are some orders of magnitude below the TeV scale of the beam. #### ii.1.2 Search for a momentum dependence of \(L_{xy}M/p\) A closer look at (8) shows that we can reinterpret the corrections to the proper time of the particle as a momentum dependence of the quantity \(L_{xy}M/p\). Special Relativity predicts that the ratio of the particle's travel distance until it decays to its momentum always gives a constant number \(\tau\), as can be seen in (8) for \(E_{\rm QG}\to\infty\). Therefore, if one is able to consider ranges with higher energies and lengths, we should expect a departure from the constancy of this ratio. In fact, if the effect that we are describing in this paper exists, we predict the momentum dependence for the \(\kappa\)-Poincare dispersion relation to be \[\frac{L_{xy}M}{p}=\tau\left(1+\frac{1}{E_{\rm QG}}\frac{\sqrt{M^{2}+p^{2}}(2M^{2}+p^{2})}{2M^{2}}\right)\,. \tag{9}\]
Therefore, one should observe a shift in the likelihood fit of this ratio at higher momentum/length ranges. An actual discrepancy would only become perceivable when the relative uncertainty of measurements of this quantity matches the dimensionless correction of Eq. (9). We would like to point out that for small masses the effect becomes larger, as the correction term diverges for \(M\to 0\). Let us discuss the recent measurement of the lifetime of the \(\Lambda\) hyperon performed at ALICE [14] in this context. From this we can relate the order of magnitude of the momentum of the decaying particle to the magnitude of the uncertainty in the measurement. This particle was chosen not just because its lifetime measurement benefits from a very recent control of uncertainties, but also because it is the lightest hyperon and we see from (9) that the lighter the particle, the stronger is the effect. The \(\Lambda\) hyperon has an average PDG mass of \(M_{\Lambda}=1115.683\,\)MeV [12] and two two-body decay channels \(\Lambda\to p+\pi^{-}\) and \(\bar{\Lambda}\to\bar{p}+\pi^{+}\). Its lifetime has been reconstructed at ALICE as \(\tau_{\Lambda}=[261.07\pm 0.37\,(\text{stat.})\pm 0.72\,(\text{syst.})]\,\)ps, and the particles were produced from a Pb-Pb beam collision with a center of mass energy of \(\sqrt{s_{\rm NN}}=5.02\,\)TeV. This setup gives a relative uncertainty of the order \(\sigma\sim 0.1\%\), as can be seen if we express the result of the measurement as \(\tau_{\Lambda}=261.07[1\pm 0.14\%\,(\text{stat.})\pm 0.28\%\,(\text{syst.})]\,\)ps, and we estimate \(\tau_{\Lambda}\sim 261.07[1\pm\sigma]\,\)ps. In Fig.1, we plotted the dimensionless correction in Eq.(9) as a function of the momentum, assuming \(E_{\rm QG}=E_{\rm Pl}=1.2\times 10^{19}\,\)GeV, which is the Planck energy. We see that in order to achieve Planck scale sensitivity by performing lifetime experiments with today's uncertainty of \(0.1\%\), we would need a momentum range of the order \(300\,\)TeV (blue, dashed line). Furthermore, even if we somehow managed to improve the measurement precision by one order of magnitude, going to \(\sigma\sim 0.01\%\) (red, dashed line), or two orders of magnitude, going to \(\sigma\sim 0.001\%\) (purple, dashed line), we would still need a momentum range of the order \(150\,\)TeV or \(\sim 60\,\)TeV, respectively. This scenario is unachievable in the foreseeable future regarding the energy of the hadron beam (for the first two uncertainties) and even more so regarding the momentum range of the produced \(\Lambda\) hyperon. As can be seen in (9), a way to enhance this effect is to consider light particles; however, they cannot be so light that their dilated propagation distance becomes so long that the products of their decay appear beyond the detector. For example, the experiments carried out at the LHC are incapable of detecting the products of the decay of the muon, which is a light particle with an average PDG mass \(M_{\mu}=105.658\,\mathrm{MeV}\). If the lifetime of the muon could be measured at such accelerators, we would be able to reduce the necessary energy for reaching the Planck scale, as can be seen in Fig.2. This figure shows that for today's control of uncertainties (blue, dashed line), we would need muons with momenta of the order \(65\,\mathrm{TeV}\). 
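These momentum thresholds follow directly from equating the dimensionless correction in Eq. (9) with a given relative uncertainty \(\sigma\). A minimal numerical sketch (assuming NumPy and SciPy are available; the function names are ours, not from any analysis code) reproduces the orders of magnitude quoted above and shown in Figs. 1 and 2:

```python
import numpy as np
from scipy.optimize import brentq

E_QG = 1.2e19            # GeV, quantum-gravity scale taken to be the Planck energy
masses = {"Lambda hyperon": 1.115683, "muon": 0.105658}   # GeV (PDG values)

def correction(p, M):
    """Dimensionless correction to L_xy*M/p in eq. (9) for the kappa-Poincare case."""
    return np.sqrt(M**2 + p**2) * (2*M**2 + p**2) / (2*M**2 * E_QG)

def required_momentum(sigma, M):
    """Momentum (GeV) at which the correction reaches the relative uncertainty sigma."""
    return brentq(lambda p: correction(p, M) - sigma, 1e-3, 1e9)

for name, M in masses.items():
    for sigma in (1e-3, 1e-4, 1e-5):
        p = required_momentum(sigma, M)
        print(f"{name}: sigma = {sigma:.0e}  ->  |p| ~ {p/1e3:.0f} TeV")
# Expected: roughly 300, 150, 60 TeV for the Lambda and 65, 30, 14 TeV for the muon,
# in agreement with Figs. 1 and 2.
```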
Each order of magnitude of improvement in the precision of the measurement reduces the necessary momentum for reaching Planck scale sensitivity roughly by half, and we reach LHC-like energies only for a relative uncertainty of the order \(10^{-6}\). In any of these cases, we still have the problem that at hadron colliders only the hadron beam would achieve such energies, and the energy of the unstable particle itself would be just a fraction of it, which does not help us in scrutinizing the Planck scale. Figure 1: The dimensionless correction of the lifetime as a function of the momentum (solid, black line) for \(E_{\mathrm{QG}}=E_{\mathrm{Pl}}=1.2\times 10^{19}\,\mathrm{GeV}\) (Planck energy). We considered the \(\Lambda\) hyperon mass \(M=1115.7\,\mathrm{MeV}\), whose lifetime has been measured in [14] with relative uncertainty \(\sigma=0.1\%\) (blue, dashed line), where we see the need for \(|p|\sim 300\,\mathrm{TeV}\) to achieve Planck scale sensitivity. We also show hypothetical relative uncertainties of \(\sigma=0.01\%\) (red, dashed line) and \(\sigma=0.001\%\) (purple, dashed line), where we see the need for \(|p|\sim 150\,\mathrm{TeV}\) and \(\sim 60\,\mathrm{TeV}\) to achieve Planck scale sensitivity. We highlighted the precision achieved nowadays for an actual measurement of the \(\Lambda\) hyperon lifetime. Figure 2: The dimensionless correction of the lifetime as a function of the momentum (solid, black line) for \(E_{\mathrm{QG}}=E_{\mathrm{Pl}}=1.2\times 10^{19}\,\mathrm{GeV}\) (Planck energy). We considered the muon mass \(M=105.7\,\mathrm{MeV}\). The horizontal lines represent some possible relative uncertainties: \(\sigma=0.1\%\) (blue, dashed line), where one would need \(|p|\sim 65\,\mathrm{TeV}\); \(\sigma=0.01\%\) (red, dashed line), where one would need \(|p|\sim 30\,\mathrm{TeV}\); and \(\sigma=0.001\%\) (purple, dashed line), where one would need \(|p|\sim 14\,\mathrm{TeV}\) to achieve Planck scale sensitivity. We highlighted the present-day precision of lifetime measurements in accelerators. #### Accelerating light unstable particles: muon accelerators The main lesson from the two previous subsections is that it is difficult to use hadron colliders to measure the lifetime, or travel distance, of unstable particles to the precision needed to detect Planck-scale induced deviations from Lorentz invariance. What is needed are light unstable particles which can be accelerated to energies which are achievable in hadron accelerators like the LHC or FCC. _The most natural candidate for such an undertaking which satisfies this requirement is to study the lifetime of muons and to build a muon accelerator/collider._ A muon collider [20] is a proposal that has recently gained traction in the community, including some Snowmass papers dedicated to its conception, summarized in [21], and the establishment of the International Muon Collider Collaboration [21]. The main objective of these efforts is the construction of a 10+TeV accelerator capable of colliding muons, which would have the advantage of allowing the exploration of a leptonic environment at energies that are higher than those achievable at electron-positron colliders [21]. Besides that, it would provide a cleaner environment for collecting data on such unstable-particle physics in comparison to hadron colliders. 
The cleanness of the environment is also fundamental for comparison with the bounds on deviations from the special relativistic dilated lifetime obtained from cosmic-ray data [22], where the analysis of particles in extensive air showers heavily suffers from uncertainties [1], such that it becomes hard to find smoking guns that could point to new physics through the presence of anomalies in this scenario, like explaining the recently found muon deficit [23]. The idea of having a muon beam produced and accelerated in a ring by a magnetic field and measuring the result of its decay in order to verify its dilated lifetime is not new and has been used in the past to verify the time dilation prediction of Special Relativity [24] at 0.1% relative uncertainty. In this case, the apparatus that served to verify such dilation was the same as the one used to measure the anomalous magnetic moment of the muon at CERN. Although this kind of experiment served well to verify the standard time dilation, it would not be suitable to test Planck scale corrections due to the requirements of muon \(g-2\) measurements, where it is necessary to have a fixed and low (by our standards) "magic" Lorentz factor \(\gamma\approx 29.3\) in order to remove the contribution of the stabilizing quadrupole electrostatic field from the relation between the muon's angular frequency and the electromagnetic field, the so-called Thomas-Bargmann-Michel-Telegdi equation [25; 26]. We imagine that if a similar scenario could be realized, but for a 10+TeV muon accelerator in the future, this would lead to fundamental new insights on the validity of Lorentz invariance as a fundamental symmetry of nature at these high energy scales. A most spectacular outcome of such a measurement would be evidence that Lorentz invariance is indeed just the low-energy approximation of a more fundamental symmetry; the less spectacular, but no less important, outcome would be to constrain models of quantum gravity. ## III Conclusion Recently, it has been suggested that the way accelerated particles' lifetimes are dilated due to relativistic effects could carry signatures of an underlying quantum spacetime structure. Such corrections would come with an amplifier of the order of the square of the ratio between the energy of the particle and its mass, which represents a great opportunity for phenomenological analyses carried out using light and highly energetic unstable particles. Preliminary studies have been performed in the environment of extensive air showers from cosmic rays, but with the drawback that such an environment is heavily polluted by uncertainties, making it hard to find a smoking gun, i.e., an unambiguous signal that could only be explained by quantum gravity. In order to clean up this environment and aid in the search for such an effect, we suggest that particle accelerators could indeed serve to scrutinize this scenario, at least at first order in perturbations suppressed by the supposedly Planckian scale. In this paper, we demonstrated some difficulties that one would face when investigating this effect even at a next-generation hadron collider that could reach energies of 50+TeV, like the FCC. The main difficulty concerns the fact that the produced unstable particles would carry just a fraction of the energy of the primary beam, of the order of tens of GeV, and, as we verified, one would need to reach energies beyond the TeV scale in order to scrutinize this effect with Planck scale sensitivity. 
The most prominent solution to this technological problem is actually the development of an accelerator capable of accelerating a light unstable particle to such a scale. This candidate very naturally turns out to be a _muon collider or accelerator_ capable of reaching 10+TeV, which is currently under discussion for the next twenty years. Therefore, this paper adds an extra brick to the set of possible achievements of such an apparatus, for the research on quantum gravity and fundamental spacetime symmetries. ###### Acknowledgements. I. P. L. was partially supported by the National Council for Scientific and Technological Development - CNPq grant 306414/2020-1 and by the grant 3197/2021, Paraiba State Research Foundation (FAPESQ). The authors would like to acknowledge networking support by the COST Action QGMM (CA18108), supported by COST (European Cooperation in Science and Technology). C.P. was funded by the cluster of excellence Quantum Frontiers funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2123 QuantumFrontiers - 390837967.
2306.17003
A survey on algebraic dilatations
In this text, we wish to provide the reader with a short guide to recent works on the theory of dilatations in Commutative Algebra and Algebraic Geometry. These works fall naturally into two categories: one emphasises foundational and theoretical aspects and the other applications to existing theories.
Adrien Dubouloz, Arnaud Mayeux, João Pedro dos Santos
2023-06-29T15:01:49Z
http://arxiv.org/abs/2306.17003v2
# A survey on algebraic dilatations ###### Abstract. In this text, we wish to provide the reader with a short guide to recent works on the theory of dilatations in Commutative Algebra and Algebraic Geometry. These works fall naturally into two categories: one emphasises _foundational and theoretical_ aspects and the other _applications_ to existing theories. **Key words:** survey, algebraic geometry, commutative algebra, Grothendieck schemes, group schemes, algebraic dilatations, dilatations of schemes, multi-centered dilatations, localizations of rings, mono-centered dilatations, affine modifications, affine blowups, formal blowups, Kaliman-Zaidenberg modifications, Neron blowups, Tannakian groups, differential Galois groups, congruent isomorphisms, Moy-Prasad isomorphism, representations of \(p\)-adic groups, torsors, level structures, shtukas, affine geometry, \(\mathbb{A}^{1}\)-homotopy theory ## Introduction _What is the concept of algebraic dilatations about?_ Dilatation of rings is a basic construction of commutative algebra, like localization or tensor product. It can be globalized so that it also makes sense on schemes or algebraic spaces. In fact dilatations generalize localizations. Let \(A\) be a ring and let \(S\) be a multiplicative subset of \(A\). Recall that the localization \(S^{-1}A\) is an \(A\)-algebra such that any \(A\)-algebra \(A\to B\) for which the image of \(s\) is invertible for every \(s\in S\) factors through \(A\to S^{-1}A\). Intuitively, \(S^{-1}A\) is the \(A\)-algebra obtained from \(A\) by adding all fractions \(\frac{a}{s}\) with \(a\in A\) and \(s\in S\). Formally, \(S^{-1}A\) is made of classes of fractions \(\frac{a}{s}\) where \(a\in A\) and \(s\in S\) (two representatives \(\frac{a}{s}\) and \(\frac{b}{t}\) are identified if \(atr=bsr\) for some \(r\in S\)); addition and multiplication are given by the usual formulas. Now let us give for any element \(s\in S\) an ideal \(M_{s}\) of \(A\) containing \(s\). The dilatation of \(A\) relative to the data \(S,\{M_{s}\}_{s\in S}\) is an \(A\)-algebra \(A^{\prime}\) obtained intuitively by adding to \(A\) only the fractions \(\frac{m}{s}\) with \(s\in S\) and \(m\in M_{s}\). The dilatation \(A^{\prime}\) satisfies that for any \(s\in S\), we have \(sA^{\prime}=M_{s}A^{\prime}\) (intuitively any \(m\in M_{s}\) belongs to \(sA^{\prime}\), i.e. becomes a multiple of \(s\), so that we have an element \(\frac{m}{s}\) such that \(m=s\frac{m}{s}\)). As a consequence of the construction, the elements \(s\in S\) become non-zero-divisors in \(A^{\prime}\), so that \(\frac{m}{s}\) is well-defined (i.e. unique). It turns out that it is convenient, with dilatations of schemes in mind, to make the above framework a bit more flexible, namely to remove the conditions that \(S\) is multiplicative and that \(s\in M_{s}\), so we use the following definition. Definition. Let \(A\) be a ring. Let \(I\) be an index set. A multi-center in \(A\) indexed by \(I\) is a set of pairs \(\{[M_{i},a_{i}]\}_{i\in I}\) where for each \(i\), \(M_{i}\) is an ideal of \(A\) and \(a_{i}\) is an element of \(A\). To each multi-center \(\{[M_{i},a_{i}]\}_{i\in I}\) one associates the dilatation \(A[\{\frac{M_{i}}{a_{i}}\}_{i\in I}]\), which is an \(A\)-algebra. We will define and study dilatations of rings in detail in Section 1; in particular we will state formally the universal property they enjoy. We will also see that \(A[\{\frac{M_{i}}{a_{i}}\}_{i\in I}]\) is generated, as an \(A\)-algebra, by \(\{\frac{M_{i}}{a_{i}}\}_{i\in I}\). 
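As a concrete computational illustration of this intuitive description (anticipating Example 4 and Proposition 1.4 below), a presentation of a mono-centered dilatation of a polynomial ring can be obtained by elimination: the dilatation is the subalgebra of the localization generated by the new fractions, and its relation ideal is the kernel of the map sending a new variable to the corresponding fraction. The following minimal SymPy sketch (our own illustration, working over \(\mathbb{Q}\) rather than \(\mathbb{Z}\)) recovers the relation \(AC-B\) defining \(\mathbb{Q}[A,B][\frac{(A,B)}{A}]\).

```python
import sympy as sp

# Ambient variables A, B, a new variable C for the fraction B/A, and t = 1/A.
A, B, C, t = sp.symbols('A B C t')

# The dilatation Q[A,B][(A,B)/A] sits inside the localization Q[A,B][1/A] and is
# generated over Q[A,B] by C = B/A.  Its relation ideal is the kernel of
# Q[A,B,C] -> Q[A,B][1/A], C |-> B*t, where A*t = 1; eliminating t computes it.
G = sp.groebner([A*t - 1, C - B*t], t, A, B, C, order='lex')

relations = [g for g in G.exprs if not g.has(t)]
print(relations)   # expected: the single relation A*C - B, i.e. Q[A,B,C]/(AC - B)
```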
We will also see that if \(M_{i}=A\) for all \(i\), then \(A[\{\frac{M_{i}}{a_{i}}\}_{i\in I}]=S^{-1}A\) where \(S\) is the multiplicative subset generated by \(\{a_{i}\}_{i\in I}\). Conversely, we will see that any sub-\(A\)-algebra of a localization \(S^{-1}A\) for a certain \(S\) is isomorphic to a dilatation of \(A\). Dilatations of schemes and algebraic spaces are obtained from dilatations of rings via glueing. We introduce the following definition. Definition. Let \(X\) be a scheme. Let \(I\) be an index set. A multi-center in \(X\) indexed by \(I\) is a set of pairs \(\{[Y_{i},D_{i}]\}_{i\in I}\) such that \(Y_{i}\) and \(D_{i}\) are closed subschemes for each \(i\) and such that, locally, all \(D_{i}\) for \(i\in I\) are principal. Associated to each multi-center, one has the dilatation \(\mbox{Bl}\{\begin{subarray}{c}D_{i}\\ Y_{i}\end{subarray}\}_{i\in I}X\), which is a scheme endowed with a canonical affine morphism \(f:\mbox{Bl}\{\begin{subarray}{c}D_{i}\\ Y_{i}\end{subarray}\}_{i\in I}X\to X\). It satisfies, in a universal way, that \(f^{-1}(D_{i})\) is a Cartier divisor (i.e. is locally defined by a non-zero-divisor) and that \(f^{-1}(D_{i})=f^{-1}(Y_{i})\) for all \(i\in I\). If \(\#I=1\), we use the terminology mono-centered dilatation. We will study several facets of this construction and show that it enjoys many wonderful properties in Sections 2 and 3. _Where does this construction come from?_ As we saw in the previous section, dilatations are a basic construction which can be easily encountered in specific situations. As a consequence, they have been used for a very long time. As the reader may well know, the theory of dilatations has deep and distinguished roots, even though not formulated in the language which we use. Right from the start, we warn the reader that we do not mean to, and probably could not, present a comprehensive historical account. As soon as Cremona and Bertini started using quadratic transformations (or blowups) in the framework of algebraic geometry over fields, "substitutions" of the form \(x^{\prime}=x\) and \(y^{\prime}=y/x\) started being made by algebraic geometers, see for example equation (8) in [10, Section 11] and Noether's acknowledgement, at the start of [10, Section 12], that these manipulations come from Cremona's point of view. Examples of dilatations appear frequently in some works of Zariski and Abhyankar, cf. [1, Definition, p. 86] and [24, p. 499, proof of Th. 4, case (b)]. Other forerunner examples of dilatations play a central role in several independent and unrelated works later, cf. [11], [26, Section 25] and [10, Section 4]. As far as we know, the terminology dilatations emerged in [1, §3.2], where a section is devoted to the systematic study of dilatations of schemes over discrete valuation rings. In the context of schemes over a discrete valuation ring, we draw the reader's attention to [1], [20] and [21]. The paper [23] studies dilatations (under the name affine modifications) systematically in the framework of algebraic geometry over fields. Over two-dimensional base schemes dilatations also appear in [22, p. 175]. Setting aside localizations, mono-centered dilatations have been the main focus of mathematicians in the past. However, in the context of group schemes over discrete valuation rings, examples of multi-centered dilatations of rings and schemes that are not localizations or mono-centered dilatations appeared and were used in [10, Exp. VIB Ex. 13.3], [21] and [12]. 
In recent times, the authors of [14], [20] and [23] have set out to accommodate all these constructions in a larger and unified frame, namely for arbitrary schemes and algebraic spaces and arbitrary multi-centers. The paper [20] introduces dilatations of arbitrary schemes in the mono-centered case and provides a systematic treatment of mono-centered dilatations of general schemes. An equivalent definition of mono-centered dilatations of general schemes, under the name affine modifications, was introduced earlier in [15, Definition 2.9] with few assumptions. The paper [23] introduces and studies dilatations of arbitrary rings, schemes and algebraic spaces for arbitrary multi-centers. Allowing multi-centers also leads naturally to the formulation of combinatorial isomorphisms on dilatations and gives birth to refined universal properties. Nevertheless, the mono-centered case remains a fundamental case that is frequently the 'atom' for some aspects of the theory. The first part (Sections 1-2-3) of this survey is devoted to theoretical and formal results on dilatations of rings, schemes and group schemes following [15], [20] and [23]. Sections 4-5-6-7 of the second part will deal with several concrete situations where specific kinds of dilatations play a role, also providing complementary insights on this construction. To finish, beyond rings and schemes, the concept of dilatations makes sense for other structures and geometric settings. Let us indicate some constructions already available. Some dilatation constructions in the framework of complex analytic spaces were introduced in [11]; these are used and discussed in Section 7. Dilatations also make sense for general algebraic spaces [23]. Similarly, dilatations also make sense for many structures other than rings (e.g. categories, monoids and semirings), as noticed in [23]. It is possible that dilatations in other settings will be explored and find a significant role since, in the end, these are a basic mathematical concept. _Terminology_ Recall that dilatations have distinguished roots; as a consequence, several other terminologies are used for certain dilatations in the literature. For example, the constructions named _affine blowups_, _affine modifications_, _automatic blowups_, _formal blowups_, _Kaliman modifications_, _localizations_ and _Neron blowups_ are examples of (possibly multi-centered) dilatations. ### Some simple examples We provided an intuition on dilatations of rings before. Let us now provide some simple examples of dilatations of schemes. If \(S\) is a scheme, we denote by \(e_{S}\) the trivial group scheme over \(S\); as a scheme it is isomorphic to \(S\). If \(G\) is a separated group scheme over \(S\), we denote by \(e_{G}\) the trivial closed group scheme; \(e_{G}\) is isomorphic to \(e_{S}\) as group schemes over \(S\). 1. We consider, once given a prime number \(p\), the multiplicative group scheme \[G=\mathbb{G}_{m,\mathbb{Z}_{p}}\] over the ring \(\mathbb{Z}_{p}\); its Hopf algebra is \(A=\mathbb{Z}_{p}[x,x^{-1}]\) while the morphism \(\Delta:A\to A\otimes_{\mathbb{Z}_{p}}A\) induced from multiplication \(G\times_{\mathbb{Z}_{p}}G\to G\) is defined by \(\Delta(x)=x\otimes x\). Now, consider the couple \(e_{G}\) and \(G\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{r}\) of closed subschemes of \(G\) for any \(r>0\) (\(\mathfrak{p}^{r}\) denotes \(p^{r}\mathbb{Z}_{p}\)). These are cut out, respectively, by the ideals \(I=(x-1)\) and \((p^{r})\) of \(A\). 1. 
For any \(r>0\), the dilatation \(A^{\prime}\) of \(A\) centered at \([e_{G},G\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{r}]\), or at \([I,(p^{r})]\), is the sub-\(A\)-algebra of \(A[1/p]\) generated by all the elements \(p^{-r}f\), where \(f\in I\). The dilatation \(G^{\prime}:=\operatorname{Spec}A^{\prime}\) is a group scheme of finite type over \(\mathbb{Z}_{p}\). The base change \(G^{\prime}\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{r}\) is isomorphic to the _additive_ group \(\mathbb{G}_{a}\) over \(\mathbb{Z}_{p}/\mathfrak{p}^{r}\), while \(G^{\prime}\times_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) is the _multiplicative_ group \(\mathbb{G}_{m}\) over \(\mathbb{Q}_{p}\). Furthermore, on the level of points, \(G^{\prime}(\mathbb{Z}_{p})=1+\mathfrak{p}^{r}\) is a _congruence subgroup_. 2. The dilatation \(A^{\prime}\) of \(A\) centered at \(\{[e_{G},G\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{r}]\}_{\{r>0\}}\), or at \(\{[I,(p^{r})]\}_{\{r>0\}}\), is the sub-\(A\)-algebra of \(A[1/p]\) generated by all the elements \(p^{-r}f\), where \(f\in I\) and \(r>0\). The dilatation \(G^{\prime}:=\operatorname{Spec}A^{\prime}\) is a group scheme over \(\mathbb{Z}_{p}\); it is not of finite type. The base change \(G^{\prime}\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{r}\) is isomorphic to the _trivial_ group scheme over \(\mathbb{Z}_{p}/\mathfrak{p}^{r}\), while \(G^{\prime}\times_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) is the _multiplicative_ group \(\mathbb{G}_{m}\) over \(\mathbb{Q}_{p}\). Furthermore, on the level of points, \(G^{\prime}(\mathbb{Z}_{p})=\{1\}\). 2. Let \(G\) be \(GL_{3}\) over \(\mathbb{Z}_{p}\) and let \(H\) be \(GL_{2}\times e_{\mathbb{Z}_{p}}\) diagonally inside \(G\) and let \(e_{G}\cong(e_{\mathbb{Z}_{p}})^{3}\) be the trivial closed subgroup of \(G\). For any \(r>0\), let \(G\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{r}\) be the base change of \(G\) to \(\mathbb{Z}/p^{r}\mathbb{Z}\). The dilatation \(G^{\prime}=\operatorname{Bl}\bigl{\{}\begin{smallmatrix}G\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{5}&G\times_{\mathbb{Z}_{p}}\mathbb{Z}_{p}/\mathfrak{p}^{2}\\ H&,\,e_{G}\end{smallmatrix}\bigr{\}}G\) is a group scheme over \(\mathbb{Z}_{p}\). On the level of points, we have \[GL_{3}(\mathbb{Z}_{p})\supset G^{\prime}(\mathbb{Z}_{p}) =\begin{pmatrix}1+\mathfrak{p}^{2}&\mathfrak{p}^{2}&\mathfrak{p}^{5}\\ \mathfrak{p}^{2}&1+\mathfrak{p}^{2}&\mathfrak{p}^{5}\\ \mathfrak{p}^{5}&\mathfrak{p}^{5}&1+\mathfrak{p}^{5}\end{pmatrix}\] \[=\left\{\begin{pmatrix}1+a&b&e\\ c&1+d&f\\ g&h&1+k\end{pmatrix}\,\middle|\;a,b,c,d\in\mathfrak{p}^{2},\ \ e,f,g,h,k\in\mathfrak{p}^{5}\right\}.\] 3. Let \(X=\mathbb{A}^{1}=\operatorname{Spec}(\mathbb{Z}[T])\) be the affine line over \(\mathbb{Z}\), let \(0\subset\mathbb{A}^{1}\) be the closed subscheme defined by the ideal \((T)\) and let \(\emptyset\subset\mathbb{A}^{1}\) be the closed subscheme defined by the ideal \(\mathbb{Z}[T]\). Then the dilatation \(\operatorname{Bl}_{\emptyset}^{0}X\) identifies with the open subscheme \(\mathbb{G}_{m}\) of \(\mathbb{A}^{1}\); indeed \(\mathbb{Z}[T][\frac{\mathbb{Z}[T]}{T}]\cong\mathbb{Z}[T,T^{-1}]\). This is an example of a localization. 4. Let \(X=\mathbb{A}^{2}=\operatorname{Spec}(\mathbb{Z}[A,B])\) be the affine plane over \(\mathbb{Z}\), let \(D\subset\mathbb{A}^{2}\) be the line defined by the ideal \((A)\) and let \(0\subset\mathbb{A}^{2}\) be the origin defined by the ideal \((A,B)\). 
Then \(\operatorname{Bl}_{0}^{D}X\) identifies with \(\operatorname{Spec}(\mathbb{Z}[A,B,C]/(AC-B))\). Indeed, one has an isomorphism (e.g. by Proposition 1.4) \[\mathbb{Z}[A,B][\frac{(A,B)}{A}]\cong\mathbb{Z}[A,B][\frac{(B)}{A}]\cong\mathbb{Z}[A,B,C]/(AC-B).\] The morphism \(\operatorname{Bl}_{0}^{D}X\to X\) is given by \(\mathbb{Z}[A,B]\to\mathbb{Z}[A,B,C]/(AC-B)\), \(A,B\mapsto A,B\). At the level of points, \(\big{(}\operatorname{Bl}_{0}^{D}X\big{)}(\mathbb{Z})\) is made of pairs \((a,b)\in\mathbb{Z}^{2}\) such that \(b\) is a multiple of \(a\). 5. More advanced examples of dilatations in contextual situations are available in the second part of this survey. _What is the aim of this survey?_ Recall that we wish to provide the reader with a short guide to recent works on the theory of dilatations. We do not mean to present a comprehensive account. We rather concentrate on the contributions that we ourselves were responsible for [10, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] and those which were our starting points [1, 2, 3, 1, 1, 12, 13, 14, 15, 16]. Part I is devoted to an exposition of general definitions and results around the concept of algebraic dilatations introduced and proved in [14] and [17]. Section 1 discusses dilatations of commutative unital rings. Section 2 summarizes general results on dilatations of schemes. Section 3 focuses on dilatations of group schemes. Part II provides an overview of some applications of dilatations in various mathematical contexts. In Section 4, we explain some recent applications of dilatations to the theory of affine group schemes and their representation categories in the case where objects are defined over a discrete valuation ring \(R\). These were developed mainly in order to advance the study of Tannakian categories defined over \(R\) and appearing in geometry, such as the case of group schemes associated to \(\mathcal{D}\)-modules [1, 2, 3, 5, 6] and principal bundles with finite structure groups [6]. After a brief introduction to Tannakian categories over \(R\) (Section 4.1), we go on to explain how to filter these categories by smaller ones and produce in this way the "Galois-Tannaka" group schemes (Section 4.2). We show why Neron blowups are a fundamental tool for studying these groups and explain what has been done so far in order to exhaust Galois-Tannaka groups by means of Neron blowups, both in the case of mono-centered and multi-centered Neron blowups (cf. Section 4.2 and Section 4.3). In Section 5, _congruent isomorphisms_, formulated and stated using the language of dilatations, are discussed in relation with _the Moy-Prasad isomorphism_, _Bruhat-Tits buildings_ and _representations of \(p\)-adic groups_. Section 6 shows that many level structures on _moduli stacks of \(G\)-bundles_ are encoded in torsors under dilatations and that this can be used to obtain integral models of _shtukas_. Section 7 discusses dilatations in _affine geometry_ and related progress in \(\mathbb{A}^{1}\)_-homotopy theory_. All results stated in this paper are proved in the indicated references. This survey is an expository text and does not contain any new mathematical result. What is perhaps new is to summarize some aspects of several independent works involving dilatations in a single text. We hope this could be a source of inspiration for future works. ### Some conventions and notations 1. All rings are unital and commutative, unless otherwise mentioned. 2. Let \((M,+)\) be a monoid. 
A submonoid \(F\) is a face of \(M\) if whenever \(x+y\in F\), then both \(x\) and \(y\) belong to \(F\). 3. If \(R\) is a discrete valuation ring with field of fractions \(K\), then, for each \(R\)-scheme \(X\), we call \(X\otimes_{R}K\) the generic fibre of \(X\). 4. If \(A\) is a ring, then \(A\)-**mod** is the category of finitely presented \(A\)-modules. 5. If \(G\) is a group scheme over a noetherian ring \(R\), or an abstract group, we denote by \(\operatorname{Rep}_{R}(G)\) the category of all \(R\)-modules of finite type affording a representation of \(G\) as explained in [1]. ## Part I. Algebraic dilatations In this part, we formally introduce dilatations of rings and schemes. Locally, dilatations of schemes will be studied through dilatations of rings. ### 1. Dilatations of rings We summarily present basic results on dilatations of rings following the more general path given in [1]. Let \(A\) be a ring. A _center in \(A\)_ is a pair \([M,a]\) consisting of an ideal \(M\subset A\) and an element \(a\in A\). A _multi-center_ is a family of centers indexed by some set. Let \(I\) be an index set and let \(\{[M_{i},a_{i}]\}_{i\in I}\) be a multi-center. For \(i\in I\), we put \(L_{i}=M_{i}+(a_{i})\), an ideal of \(A\). Let \(\mathbb{N}_{I}\) be the monoid \(\bigoplus_{i\in I}\mathbb{N}\). If \(\nu=(\nu_{1},\ldots,\nu_{i},\ldots)\in\mathbb{N}_{I}\) we put \(L^{\nu}=L_{1}^{\nu_{1}}\cdots L_{i}^{\nu_{i}}\cdots\) (product of ideals of \(A\)) and \(a^{\nu}=a_{1}^{\nu_{1}}\cdots a_{i}^{\nu_{i}}\cdots\) (product of elements of \(A\)). We also put \(a^{\mathbb{N}_{I}}=\{a^{\nu}|\nu\in\mathbb{N}_{I}\}\). Definition and Proposition 1.1 [1]. The dilatation of \(A\) with multi-center \(\{[M_{i},a_{i}]\}_{i\in I}\) is the unital commutative ring \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\) defined as follows: \(\bullet\) The underlying set of \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\) is the set of equivalence classes of symbols \(\frac{m}{a^{\nu}}\) where \(\nu\in\mathbb{N}_{I}\) and \(m\in L^{\nu}\) under the equivalence relation \[\frac{m}{a^{\nu}}\equiv\frac{p}{a^{\lambda}}\Leftrightarrow\exists\beta\in\mathbb{N}_{I}\text{ such that }ma^{\beta+\lambda}=pa^{\beta+\nu}\text{ in }A.\] From now on, we abuse notation and denote a class by any of its representatives \(\frac{m}{a^{\nu}}\) if no confusion is likely. \(\bullet\) The addition law is given by \(\frac{m}{a^{\nu}}+\frac{p}{a^{\beta}}=\frac{ma^{\beta}+pa^{\nu}}{a^{\beta+\nu}}\). \(\bullet\) The multiplication law is given by \(\frac{m}{a^{\nu}}\times\frac{p}{a^{\beta}}=\frac{mp}{a^{\nu+\beta}}\). \(\bullet\) The additive neutral element is \(\frac{0}{1}\) and the multiplicative neutral element is \(\frac{1}{1}\). From now on, we also use the notation \(A[\frac{M}{a}]\) to denote \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\). We have a canonical morphism of rings \(A\to A[\frac{M}{a}]\) given by \(a\mapsto\frac{a}{1}\). The element \(\frac{a}{1}\) of \(A[\frac{M}{a}]\) will be denoted by \(a\) if no confusion is likely. Fact 1.2 [1]. (i) Let \(\{N_{i}\}_{i\in I}\) be ideals in \(A\) such that \(N_{i}+(a_{i})=L_{i}\) for all \(i\in I\). Then we have identifications of \(A\)-algebras \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]=A[\big{\{}\frac{N_{i}}{a_{i}}\big{\}}_{i\in I}]=A[\big{\{}\frac{L_{i}}{a_{i}}\big{\}}_{i\in I}]\). (ii) Dilatations of rings entirely generalize localizations of rings. Indeed, let \(A\) be a ring and let \(S\) be a multiplicative subset of \(A\). Then \(S^{-1}A=A[\big{\{}\frac{A}{s}\big{\}}_{s\in S}]\). 
(iii) Any sub-\(A\)-algebra of a localization \(S^{-1}A\) for a subset \(S\subset A\) can be obtained as a multi-centered dilatation. * Note that we did not used substraction to define dilatations of rings. In fact Definition 1.1 makes sense for arbitrary unital commutative semirings, cf. [10, SS2] or more generally for categories (e.g. monoids) cf. [10]. This construction enjoys the following properties, cf. [10, SS2]. If \(\#I=1\), most results appear in [12, Tag 052P]. **Proposition 1.3** ([10]): _The following assertions hold._ * _As_ \(A\)_-algebra,_ \(A[\frac{M}{a}]\) _is generated by_ \(\big{\{}\frac{L_{i}}{a_{i}}\big{\}}_{i\in I}\)_. Since_ \(L_{i}=M_{i}+(a_{i})\)_, this implies that_ \(A[\frac{M}{a}]\) _is generated by_ \(\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}\)_._ * _If_ \(A\) _is a domain and_ \(a_{i}\neq 0\) _for all_ \(i\)_, then_ \(A[\frac{M}{a}]\) _is a domain._ * _If_ \(A\) _is reduced, then_ \(A[\frac{M}{a}]\) _is reduced._ * _The following assertions are equivalent._ * There exists_ \(\nu\in\mathbb{N}_{I}\) _such that_ \(a^{\nu}=0\) _in_ \(A\)_._ * The ring_ \(A[\frac{M}{a}]\) _equals to the zero ring._ * _Let_ \(\nu\) _be in_ \(\mathbb{N}_{I}\)_. The image of_ \(a^{\nu}\) _in_ \(A[\frac{M}{a}]\) _is a non-zero-divisor._ * _Let_ \(f:A\to B\) _be a morphism of rings. Let_ \(\{[N_{i},b_{i}]\}_{i\in I}\) _be centers of_ \(B\) _such that_ \(f(M_{i})\subset N_{i}\) _and_ \(f(a_{i})=b_{i}\) _for all_ \(i\in I\)_. Then we have a canonical morphism of_ \(A\)_-algebras_ \[\phi:A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\to B[\big{\{}\frac{N_{i}} {b_{i}}\big{\}}_{i\in I}]\.\] * _Let_ \(c\) _be a non-zero-divisor element in_ \(A\)_. Then_ \(\frac{c}{1}\) _is a non-zero-divisor in_ \(A[\frac{M}{a}]\)_._ * _Let_ \(K\subset I\) _put_ \(J=I\setminus K\)_. Then we have a canonical morphism of_ \(A\)_-algebras_ \[\varphi:A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in K}]\longrightarrow A[ \big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}].\] _Moreover_ * _if_ \(M_{i}\subset(a_{i})\) _for all_ \(i\in J\)_, then_ \(\varphi\) _is surjective, and_ * _if_ \(a_{i}\) _is a non-zero-divisor in_ \(A\) _for all_ \(i\in J\)_, then_ \(\varphi\) _is injective._ * _Let_ \(K\subset I\)_. Then we have a canonical isomorphism of_ \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in K}]\)_-algebras_ \[A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]=A[\big{\{}\frac{M_{i}}{a_{i}} \big{\}}_{i\in K}][\big{\{}\frac{A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I }]\frac{M_{i}}{1}}{\frac{a_{i}}{1}}\big{\}}_{j\in I\setminus K}],\] _where_ \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\frac{M_{j}}{1}\) _is the ideal of_ \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\) _generated by_ \(\frac{M_{j}}{1}\subset A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\)_._ * _Assume that_ \(a_{i}=a_{j}=:b\) _for all_ \(i,j\in I\)_, then_ \[A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]=A[\big{\{}\frac{\sum_{i\in I} M_{i}}{b}]\] * _Let_ \(\nu\in\mathbb{N}_{I}\)_. We have_ \(L^{\nu}A[\frac{M}{a}]=a^{\nu}A[\frac{M}{a}]\)_._ * _(Universal property) If_ \(\chi:A\to B\) _is a morphism of rings such that_ \(\chi(a_{i})\) _is a non-zero-divisor and generates_ \(\chi(L_{i})B\) _for all_ \(i\in I\)_, then there exists a unique morphism_ \(\chi^{\prime}\) _of_ \(A\)_-algebras_ \(A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]\to B\)_. 
The morphism_ \(\chi^{\prime}\) _sends_ \(\frac{l}{a^{\nu}}\) _(_\(\nu\in\mathbb{N}_{I},l\in L^{\nu}\)_) to the unique element_ \(b\in B\) _such that_ \(\chi(a^{\nu})b=\chi(l)\)_._ * _Assume that_ \(I=\{1,\ldots,k\}\) _is finite. Then we have a canonical identification of_ \(A\)_-algebras_ \[A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]=A[\big{\{}\frac{\sum_{i\in I }(M_{i}\prod_{j\in I\setminus\{i\}}a_{j})}{a_{1}\cdots a_{k}}].\] * _Write_ \(I=\operatorname{colim}_{J\subset I}J\) _as a filtered colimit of sets. We have a canonical identification of_ \(A\)_-algebras_ \[A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in I}]=\operatorname{colim}_{J\subset I }A[\big{\{}\frac{M_{i}}{a_{i}}\big{\}}_{i\in J}].\] * Let \(f:A\to B\) be an \(A\)-algebra. Put \(N_{i}=f(M_{i})B\) and \(b_{i}=f(a_{i})\) for \(i\in I\). Then \(B[\left\{\frac{N_{i}}{b_{i}}\right\}_{i\in I}]\) is the quotient of \(B\otimes_{A}A[\left\{\frac{M_{i}}{a_{i}}\right\}_{i\in I}]\) by the ideal \(T_{b}\) of elements annihilated by some element in \(b^{N_{I}}:=\{b^{\nu}|\nu\in\mathbb{N}_{I}\}\). If moreover \(f:A\to B\) is flat, then \(T_{b}=0\) and we have a canonical isomorphism \[B[\{\frac{N_{i}}{b_{i}}\}_{i\in I}]=B\otimes_{A}A[[\{\frac{M_{i}}{a_{i}}\}_{i \in I}].\] * Let \(f:R\to A\) be a morphism of rings and let \(\{r_{i}\}_{i\in I}\subset R\). Let \(R^{\prime}=R[\left\{\frac{R}{r_{i}}\right\}_{i\in I}]\); this is a localization of \(R\) and hence \(R\to R^{\prime}\) is a flat morphism. Let \(A^{\prime}=A\otimes_{R}R^{\prime}\), \(M^{\prime}_{i}=M_{i}\otimes_{R}R^{\prime}\subset A^{\prime}\). Then, if \(a_{i}:=f(r_{i})\), the dilatation \(A[\left\{\frac{M_{i}}{a_{i}}\right\}_{i\in I}]\) is isomorphic to the \(A\)-subalgebra of \(A^{\prime}=A\otimes_{R}R^{\prime}\) generated by \(\{M_{i}\otimes r_{i}^{-1}\}_{i\in I}\) and \(A\). We finish with an important description of dilatations in a particular case, cf. [10, Proposition 5.5] and [12, Tag 0BIQ]. Proposition 1.4.: Let \(A\) be a ring. Let \(a,g_{1},\ldots,g_{n}\) be a \(H_{1}\)-regular sequence in \(A\) (cf. [12, Tag 062E] for \(H_{1}\)-regularity). Let \(d_{1},\ldots,d_{n}\) be positive integers. The dilatation algebra identifies with a quotient of a polynomial algebra as follows \[A[\frac{(g_{1})}{a^{d_{1}}},\ldots,\frac{(g_{n})}{a^{d_{n}}}]=A[x_{1},\ldots,x _{n}]/(g_{1}-a^{d_{1}}x_{1},\ldots,g_{n}-a^{d_{n}}x_{n}).\] ## 2 Dilatations of schemes This section is an introduction to dilatations of schemes, the main references are [13] and [10]. Dilatations of schemes involve operations on closed subschemes that we recall at the beginning of this section. We suggest readers to be familiar with SS2.1 before reading other subsections of Section 2. Note that [10] deals with general algebraic spaces, in fact most results of Section 2 extend to this setting. ### Definitions Let \(X\) be a scheme. Let \(Clo(X)\) be the set of closed subschemes of \(X\). Recall that \(Clo(X)\) corresponds to quasi-coherent ideals of \(\mathcal{O}_{X}\). Let \(IQCoh(\mathcal{O}_{X})\) denote the set of quasi-coherent ideals of \(\mathcal{O}_{X}\). It is clear that \((IQCoh(\mathcal{O}_{X}),+,\times,0,\mathcal{O}_{X})\) is a semiring. So we obtain a semiring structure on \(Clo(X)\), usually denoted by \((Clo(X),\cap,+,X,\emptyset)\). For clarity, we now recall directly operations on \(Clo(X)\). Given two closed subschemes \(Y_{1},Y_{2}\) given by ideals \(\mathcal{J}_{1},\mathcal{J}_{2}\), their sum \(Y_{1}+Y_{2}\) is defined as the closed subscheme given by the ideal \(\mathcal{J}_{1}\mathcal{J}_{2}\). 
Moreover, if \(n\in\mathbb{N}\), we denote by \(nY_{1}\) the \(n\)-th multiple of \(Y_{1}\). The set of locally principal closed subschemes of \(X\) (cf. [12, Tag 01WR]), denoted \(Pri(X)\), forms a submonoid of \((Clo(X),+)\). Effective Cartier divisors of \(X\) [12, Tag 01WR], denoted \(Car(X)\), form a submonoid of \((Pri(X),+)\). Note that \(Car(X)\) is a face of \(Pri(X)\). We have an other monoid structure on \(Clo(X)\) given by intersection, this law is denoted \(\cap\). The operation \(\cap\) corresponds to the sum of quasi-coherent sheaves of ideals. The set \(Clo(X)\) endowed with \(\cap,+\) is a semiring whose neutral element for \(+\) is \(\emptyset\) and whose neutral element for \(\cap\) is \(X\). Let \(C\in Car(X)\), a non-zero-divisor (for \(+\)) in the semiring \(Clo(X)\). Let \(Y,Y^{\prime}\in Clo(X)\). If \(C+Y\) is a closed subscheme of \(C+Y^{\prime}\), then \(Y\) is a closed subscheme of \(Y^{\prime}\). Moreover if \(C+Y=C+Y^{\prime}\), then \(Y=Y^{\prime}\). Let \(f:X^{\prime}\to X\) be a morphism of schemes, then \(f\) induces a morphism of semirings \(Clo(f):Clo(X)\to Clo(X^{\prime}),Y\mapsto Y\times_{X}X^{\prime}\), moreover \(Clo(f)\) restricted to \((Pri(X),+)\) factors through \((Pri(X^{\prime}),+)\), this morphism of monoids is denoted \(Pri(f)\). In general the image of the map \(Pri(f)|_{Car(X)}\) is not included in \(Car(X^{\prime})\). Let \(Y_{1},Y_{2}\in Clo(X)\), we write \(Y_{1}\subset Y_{2}\) if \(Y_{1}\) is a closed subscheme of \(Y_{2}\). We obtain a poset \((Clo(X),\subset)\) Let \(Y_{1},Y_{2},Y_{3}\in Clo(X)\), if \(Y_{1}\subset Y_{2}\) and \(Y_{1}\subset Y_{3}\) then \(Y_{1}\subset Y_{2}\cap Y_{3}.\) Let \(Y_{1},Y_{2}\in Clo(X)\), then \((Y_{1}\cap Y_{2})\subset Y_{1}\) and \(Y_{1}\subset(Y_{1}+Y_{2}).\) Finally, if \(Y=\{Y_{e}\}_{e\in E}\) is a subset of \(Clo(X)\) and if \(\nu\in\mathbb{N}^{E},\) we put \(Y^{\nu}=\{\nu_{e}Y_{e}\}_{e\in E}\) and if moreover \(\nu\in\mathbb{N}_{E},\) we put \(\nu Y=\sum_{e\in E}\nu_{e}Y_{e}.\) Definition 2.1 ([32, SS2.3][21]): Let \(D=\{D_{i}\}_{i\in I}\) be a subset of \(Clo(X).\) Let \(\operatorname{Sch}_{X}^{D\text{-reg}}\) be the full subcategory of schemes \(f:T\to X\) over \(X\) such that \(T\times_{X}D_{i}\) is an effective Cartier divisor of \(T\) for each \(i.\) If \(T^{\prime}\to T\) is flat and \(T\to X\) is an object in \(\operatorname{Sch}_{X}^{D\text{-reg}},\) so is the composition \(T^{\prime}\to T\to X.\) In particular, the category \(\operatorname{Sch}_{X}^{D\text{-reg}}\) can be equipped with the fpqc/fppf/etale/Zariski Grothendieck topology so that the notion of sheaves is well-defined. Fact 2.2 ([21]): Let \(D=\{D_{i}\}_{i\in I}\) be a subset of \(Clo(X).\) 1. Let \(f:T\to X\) be an object in \(\operatorname{Sch}_{X}^{D\text{-reg}}.\) Then for any \(\nu\in\mathbb{N}_{I}\), the scheme \(T\times_{X}\nu D\) is an effective Cartier divisor of \(T\), namely \(\nu(T\times_{X}D).\) 2. Assume that \(\#I\) is finite, then \(\operatorname{Sch}_{X}^{D\text{-reg}}\) equals \(\operatorname{Sch}_{X}^{\sum_{i\in I}D_{i}}.\) Definition 2.3 ([21]): A multi-center in \(X\) is a set \(\{[Y_{i},D_{i}]\}_{i\in I}\) such that 1. \(Y_{i}\) and \(D_{i}\) belong to \(Clo(X),\) 2. there exists an affine open covering \(\{U_{\gamma}\to X\}_{\gamma\in\Gamma}\) of \(X\) such that \(D_{i}|_{U_{\gamma}}\) is principal for all \(i\in I\) and \(\gamma\in\Gamma\) (in particular \(D_{i}\) belongs to \(Pri(X)\) for all \(i\)). 
In other words, a multi-center \(\{[Y_{i},D_{i}]\}_{i\in I}\) is a set of pairs of closed subschemes such that locally each \(D_{i}\) is principal. Remark 2.4: Let \(\{Y_{i},D_{i}\}_{i\in I}\) be such that \(Y_{i}\in Clo(X)\) and \(D_{i}\in Pri(X)\) for any \(i\in I.\) Assume that \(I\) is finite; then \(\{[Y_{i},D_{i}]\}_{i\in I}\) is a multi-center in \(X\), i.e. the second condition in Definition 2.3 is satisfied. We now fix a multi-center \(\{[Y_{i},D_{i}]\}_{i\in I}\) in \(X.\) Denote by \(\mathcal{M}_{i},\) respectively \(\mathcal{J}_{i},\) the quasi-coherent sheaf of ideals of \(\mathcal{O}_{X}\) defining \(Y_{i},\) respectively \(D_{i}.\) We put \(Z_{i}=Y_{i}\cap D_{i}\) and \(\mathcal{L}_{i}=\mathcal{M}_{i}+\mathcal{J}_{i}\) so that \(Z_{i}\) is defined by \(\mathcal{L}_{i}.\) We put \(Y=\{Y_{i}\}_{i\in I},\) \(D=\{D_{i}\}_{i\in I}\) and \(Z=\{Z_{i}\}_{i\in I}.\) We now introduce dilatation \(\mathcal{O}_{X}\)-algebras by glueing. Definition and Proposition 2.5: The dilatation of \(\mathcal{O}_{X}\) with multi-center \(\{[\mathcal{M}_{i},\mathcal{J}_{i}]\}_{i\in I}\) is the quasi-coherent \(\mathcal{O}_{X}\)-algebra \(\mathcal{O}_{X}\Big{[}\Big{\{}\frac{\mathcal{M}_{i}}{\mathcal{J}_{i}}\Big{\}}_{i\in I}\Big{]}\) obtained by glueing as follows. The quasi-coherent \(\mathcal{O}_{X}\)-algebra \(\mathcal{O}_{X}\Big{[}\Big{\{}\frac{\mathcal{M}_{i}}{\mathcal{J}_{i}}\Big{\}}_{i\in I}\Big{]}\) is characterized by the fact that its restriction to any open subscheme \(U\subset X\), such that \(U\) is an affine scheme and each \(D_{i}\) is principal on \(U\) and generated by \(a_{iU}\), is given by \[\Big{(}\mathcal{O}_{X}\Big{[}\Big{\{}\frac{\mathcal{M}_{i}}{\mathcal{J}_{i}}\Big{\}}_{i\in I}\Big{]}\Big{)}\Big{|}_{U}=\widetilde{\Gamma(U,\mathcal{O}_{X})\Big{[}\Big{\{}\frac{\Gamma(U,\mathcal{M}_{i})}{a_{iU}}\Big{\}}_{i\in I}\Big{]}}\] where \(\widetilde{(\,\cdot\,)}\) denotes the associated quasi-coherent sheaf of algebras on \(U\). Definition 2.6 ([21]): The _dilatation_ of \(X\) with multi-center \(\{[Y_{i},D_{i}]\}_{i\in I}\) is the \(X\)-affine scheme \[\operatorname{Bl}_{Y}^{D}X\ \overset{\text{\tiny def}}{=}\ \operatorname{Spec}_{X}\bigl{(}\mathcal{O}_{X}\Big{[}\Big{\{}\frac{\mathcal{M}_{i}}{\mathcal{J}_{i}}\Big{\}}_{i\in I}\Big{]}\bigr{)}.\] The terminologies _affine blowups_ and _affine modifications_ are also used. 
In other words, if \(Y_{i}\) is the empty closed subscheme defined by the ideal \(\mathcal{O}_{X}\) for all \(i\in I\), then \(\mathrm{Bl}^{D}_{Y}X\) identifies with an open subscheme of \(X\). In this case, we say that \(\mathrm{Bl}\big{\{}^{D_{i}}_{\emptyset}\big{\}}_{i\in I}X\to X\) is a localization. ### Exceptional divisors We proceed with the notation from SS 2.1. Proposition 2.12 ([14, Ma23d]).: As closed subschemes of \(\mathrm{Bl}^{D}_{Y}X\), one has, for all \(\nu\in\mathbb{N}_{I}\), \[\mathrm{Bl}^{D}_{Y}X\times_{X}\nu Z=\mathrm{Bl}^{D}_{Y}X\times_{X}\nu D,\] which is an effective Cartier divisor on \(\mathrm{Bl}^{D}_{Y}X\). ### Relation to affine projecting cone We proceed with the notation from SS 2.1 and assume that \(\{D_{i}\}_{i\in I}\) belong to \(Car(X)\). In this case, we can also realize \(\mathrm{Bl}^{D}_{Y}X\) as a closed subscheme of the multi-centered affine projecting cone associated to \(X,Z\) and \(D\). Definition 2.13.: The _affine projecting cone \(\mathcal{O}_{X}\)-algebra_ with multi-center \(\{[Z_{i}=V(\mathcal{L}_{i}),D_{i}=V(\mathcal{J}_{i})]\}_{i\in I}\) is \[\mathrm{C}^{\mathcal{J}}_{\mathcal{L}}\mathcal{O}_{X}\;\stackrel{{ \mathrm{\tiny def}}}{{=}}\;\bigoplus_{\nu\in\mathbb{N}_{I}} \mathcal{L}^{\nu}\otimes\mathcal{J}^{-\nu}.\] The _affine projecting cone_ of \(X\) with multi-center \(\{[Z_{i},D_{i}]\}_{i\in I}\) is \[\mathrm{C}^{D}_{Z}X\;\stackrel{{\mathrm{\tiny def}}}{{=}}\; \mathrm{Spec}\big{(}\mathrm{C}^{\mathcal{J}}_{\mathcal{L}}\mathcal{O}_{X} \big{)}.\] Proposition 2.14 ([13, 14, 15, 16]).: The dilatation \(\mathrm{Bl}^{D}_{Z}X\) is the closed subscheme of the affine projecting cone \(\mathrm{C}^{D}_{Z}X\) defined by the equations \(\{\varrho_{i}-1\}_{i\in I}\), where for all \(i\in I\), \(\varrho_{i}\in\mathrm{C}^{\mathcal{J}}_{\mathcal{L}}\mathcal{O}_{X}\) is the image of \(1\in\mathcal{O}_{X}\) under the map \[\mathcal{O}_{X}\cong\mathcal{J}_{i}\otimes\mathcal{J}_{i}^{-1}\subset\mathcal{ L}_{i}\otimes\mathcal{J}_{i}^{-1}\subset\mathrm{C}^{\mathcal{J}}_{\mathcal{L}} \mathcal{O}_{X}.\] ### Description of the exceptional divisor in the mono-centered case We proceed with the notation from SS 2.1 and assume \(I=\{i\}\) is a singleton and we ommit the subscripts \(i\) in notation. We saw in Lemma 2.12 that the preimage of the center \(\mathrm{Bl}^{D}_{Z}X\times_{X}Z=\mathrm{Bl}^{D}_{Z}X\times_{X}D\) is an effective Cartier divisor in \(\mathrm{Bl}^{D}_{Z}X\). In order to describe it following [14], as before we denote by \(\mathcal{L}\) and \(\mathcal{J}\) the sheaves of ideals of \(Z\) and \(D\) in \(\mathcal{O}_{X}\). Also we let \(\mathcal{C}_{Z/D}=\mathcal{L}/(\mathcal{L}^{2}+\mathcal{J})\) and \(\mathcal{N}_{Z/D}=\mathcal{C}_{Z/D}^{\vee}\) be the conormal and normal sheaves of \(Z\) in \(D\). Proposition 2.15 (MRR20, Proposition 2.9).: Assume that \(X\) is a scheme. Assume that \(D\subset X\) is an effective Cartier divisor, and \(Z\subset D\) is a regular immersion. Write \(\mathcal{J}_{Z}:=\mathcal{J}|_{Z}\). 1. The exceptional divisor \(\operatorname{Bl}_{Z}^{D}X\times_{X}Z\to Z\) is an affine bundle (i.e. a torsor under a vector bundle), Zariski locally over \(Z\) isomorphic to \(\mathbb{V}(\mathcal{C}_{Z/D}\otimes\mathcal{J}_{Z}^{-1})\to Z\). 2. If \(H^{1}(Z,\mathcal{N}_{Z/D}\otimes\mathcal{J}_{Z})=0\) (for example if \(Z\) is affine), then \(\operatorname{Bl}_{Z}^{D}X\times_{X}Z\to Z\) is globally isomorphic to \(\mathbb{V}(\mathcal{C}_{Z/D}\otimes\mathcal{J}_{Z}^{-1})\to Z\). 3. 
If \(Z\) is a transversal intersection, in the sense that there is a cartesian square of closed subschemes \[\begin{CD}W@>>>X\\ @AAA@AAA\\ Z@>>>D\end{CD}\] whose vertical maps are regular immersions, then the conclusion of (2) holds without the cohomological vanishing assumption: \(\operatorname{Bl}_{Z}^{D}X\times_{X}Z\to Z\) is globally isomorphic to \(\mathbb{V}(\mathcal{C}_{Z/D}\otimes\mathcal{J}_{Z}^{-1})\to Z\).
### Combinatorial and arithmetic relations
We proceed with the notation from § 2.1. Let \(X^{\prime}\) and \(\{[Y^{\prime}_{i},D^{\prime}_{i}]\}_{i\in I}\) be another datum as in § 2.1. As usual, put \(Z^{\prime}_{i}=Y^{\prime}_{i}\cap D^{\prime}_{i}\). A morphism \(f:X^{\prime}\to X\) such that, for all \(i\in I\), its restriction to \(D^{\prime}_{i}\) (resp. \(Z^{\prime}_{i}\)) factors through \(D_{i}\) (resp. \(Z_{i}\)), and such that \(f^{-1}(D_{i})=D^{\prime}_{i}\), induces a unique morphism \(\operatorname{Bl}_{Y^{\prime}}^{D^{\prime}}X^{\prime}\to\operatorname{Bl}_{Y}^{D}X\) such that the following diagram of schemes commutes \[\begin{CD}\operatorname{Bl}_{Y^{\prime}}^{D^{\prime}}X^{\prime}@>>>\operatorname{Bl}_{Y}^{D}X\\ @VVV@VVV\\ X^{\prime}@>{f}>>X.\end{CD}\]
### Base change
We proceed with the notation from § 2.1. Let \(X^{\prime}\to X\) be a map of schemes, and denote by \(Y^{\prime}_{i},Z^{\prime}_{i},D^{\prime}_{i}\subset X^{\prime}\) the preimage of \(Y_{i},Z_{i},D_{i}\subset X\). Then \(D^{\prime}_{i}\subset X^{\prime}\) is locally principal for any \(i\) so that the dilatation \(\operatorname{Bl}_{Y^{\prime}}^{D^{\prime}}X^{\prime}\to X^{\prime}\) is well-defined. By § 2.8 there is a canonical morphism of \(X^{\prime}\)-schemes \[\operatorname{Bl}_{Y^{\prime}}^{D^{\prime}}X^{\prime}\;\longrightarrow\;\operatorname{Bl}_{Y}^{D}X\times_{X}X^{\prime}. \tag{2.3}\] Lemma 2.23 ([14, Ma23d]).: If \(\operatorname{Bl}_{Y}^{D}X\times_{X}X^{\prime}\to X^{\prime}\) is an object of \(\operatorname{Sch}_{X^{\prime}}^{D\text{-reg}}\), then (2.3) is an isomorphism. Corollary 2.24 ([14, Ma23d]).: If the morphism \(X^{\prime}\to X\) is flat and satisfies a property \(\mathcal{P}\) which is stable under base change, then \(\operatorname{Bl}_{Y^{\prime}}^{D^{\prime}}X^{\prime}\to\operatorname{Bl}_{Y}^{D}X\) is flat and satisfies \(\mathcal{P}\).
### Iterated multi-centered dilatations
We proceed with the notation from § 2.1. Let \(\nu,\theta\in\mathbb{N}^{I}\) be such that \(\theta\leqslant\nu\), i.e. \(\theta_{i}\leqslant\nu_{i}\) for all \(i\in I\).
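To fix ideas before stating the results, here is a minimal mono-centered illustration (a standard toy computation, not taken from the references above) of the effect of raising the power of the divisor. Let \(\mathcal{O}\) be a discrete valuation ring with uniformizer \(\pi\), let \(X=\mathbb{A}^{1}_{\mathcal{O}}=\operatorname{Spec}\mathcal{O}[x]\), \(Y=V(x)\) and \(D=V(\pi)\). For every \(\nu\geqslant 1\) one finds \[\operatorname{Bl}_{Y}^{\nu D}X=\operatorname{Spec}\Big(\mathcal{O}[x]\Big[\frac{x}{\pi^{\nu}}\Big]\Big)=\operatorname{Spec}\mathcal{O}\Big[\frac{x}{\pi^{\nu}}\Big]\cong\mathbb{A}^{1}_{\mathcal{O}},\] which on \(\mathcal{O}\)-points singles out \(\pi^{\nu}\mathcal{O}\subset\mathcal{O}=X(\mathcal{O})\). For \(\theta\leqslant\nu\), the \(X\)-morphism \(\operatorname{Bl}_{Y}^{\nu D}X\to\operatorname{Bl}_{Y}^{\theta D}X\) of Proposition 2.25 below corresponds to the inclusion of rings \(\mathcal{O}[x/\pi^{\theta}]\subset\mathcal{O}[x/\pi^{\nu}]\), i.e. to \(x/\pi^{\theta}\mapsto\pi^{\nu-\theta}\cdot(x/\pi^{\nu})\).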
Proposition 2.25 (Ma23d).: There is a unique \(X\)-morphism \[\varphi_{\nu,\theta}:\operatorname{Bl}_{Y}^{D^{\nu}}X\longrightarrow \operatorname{Bl}_{Y}^{D^{\theta}}X.\] Assume now moreover that \(\nu,\theta\in\mathbb{N}_{I}\subset\mathbb{N}^{I}\). We will prove that, under some assumptions, \(\varphi_{\nu,\theta}\) is a dilatation morphism with explicit descriptions. We need the following observation. Proposition 2.26 (MRR20, Ma23d).: Assume that we have a commutative diagram of schemes where the right-hand side morphism is the dilatation map. Assume that \(f\) is a closed immersion. Then \(f^{\prime}\) is a closed immersion. We now assume that \(Z_{i}\subset Y_{i}\) is a Cartier divisor inclusion for all \(i\in I\). Let \(\mathcal{D}_{i}\) be the canonical diagram of closed immersions obtained by Proposition 2.26. Let \(f_{i}\) be the canonical morphism (e.g. cf. 2.19 or 2.25) \[\operatorname{Bl}_{Y}^{D^{\nu}}X\rightarrow\operatorname{Bl}_{Y_{i}}^{\nu_{i} D_{i}}X.\] We denote by \(Y_{i}\times_{\operatorname{Bl}_{Y_{i}}^{\nu_{i}D_{i}}X}\operatorname{Bl}_{Y}^{D^{ \nu}}X\) the fiber product obtained via the arrows given by \(f_{i}\) and \(\mathcal{D}_{i}\). We use similarly the notation \(D_{i}\times_{\operatorname{Bl}_{Y_{i}}^{\nu_{i}D_{i}}X}\operatorname{Bl}_{Y}^ {D^{\nu}}X\). Proposition 2.27 (PY06, SS7.2).: Recall that \(\theta\leqslant\nu\). Put \(\gamma=\nu-\theta\). Put \(K=\{i\in I|\gamma_{i}>0\}\). We have an identification \[\operatorname{Bl}_{Y}^{D^{\nu}}X=\operatorname{Bl}_{\begin{subarray}{c}\{Y_ {i}D_{i}\times_{\operatorname{Bl}_{Y_{i}}^{\theta}D_{i}}X\operatorname{Bl}_{ Y}^{\theta}X\}_{i\in K}\\ \{Y_{i}\times_{\operatorname{Bl}_{Y_{i}}^{\theta}D_{i}}X\operatorname{Bl}_{Y}^ {\theta}X\}_{i\in K}\end{subarray}}\operatorname{Bl}_{Y}^{D^{\theta}}X.\] In particular the unique \(X\)-morphism \[\varphi_{\nu,\theta}:\operatorname{Bl}_{Y}^{D^{\nu}}X\longrightarrow \operatorname{Bl}_{Y}^{D^{\theta}}X\] of Proposition 2.25 is a dilatation map. It is now natural to introduce the following terminology. Definition 2.28.: For any \(\nu\in\mathbb{N}^{k}\), let us consider \[\operatorname{Bl}_{Y}^{D^{\nu}}X=\operatorname{Bl}_{\begin{subarray}{c}\{Y_ {i}D_{i}\}\\ Y_{i}\end{subarray}}^{\{\nu_{i}D_{i}\}}_{i\in I}X\] and call it the \(\nu\)-th iterated dilatation of \(X\) with multi-center \(\{Y_{i},D_{i}\}_{i\in I}\). ### Some flatness and smoothness results We proceed with the notation from SS 2.1 and assume \(I=\{i\}\) is a singleton and we ommit the subscripts \(i\) in notation. We assume further that there exists a scheme \(S\) under \(X\) together with a locally principal closed subscheme \(S_{0}\subset S\) fitting into a commutative diagram of schemes (2.4) where the square is cartesian, that is \(D\to X_{0}:=X\times_{S}S_{0}\) is an isomorphism. Proposition 2.29 (Mrr20).: Assume that \(S_{0}\) is an effective Cartier divisor on \(S\). 1. If \(Z\subset D\) is regular, then \(\operatorname{Bl}^{D}_{Z}X\to X\) is of finite presentation. 2. If \(Z\subset D\) is regular, the fibers of \(\operatorname{Bl}^{D}_{Z}X\times_{S}S_{0}\to S_{0}\) are connected (resp. irreducible, geometrically connected, geometrically irreducible) if and only if the fibers of \(Z\to S_{0}\) are. 3. If \(X\to S\) is flat and if moreover one of the following holds: 1. \(Z\subset D\) is regular, \(Z\to S_{0}\) is flat and \(S,X\) are locally noetherian, 2. \(Z\subset D\) is regular, \(Z\to S_{0}\) is flat and \(X\to S\) is locally of finite presentation, 3. 
the local rings of \(S\) are valuation rings, then \(\operatorname{Bl}^{D}_{Z}X\to S\) is flat. 4. If both \(X\to S\), \(Z\to S_{0}\) are smooth, then \(\operatorname{Bl}^{D}_{Z}X\to S\) is smooth. Remark 2.30.: Complementary smoothness and flatness results for multi-centered dilatations can be found in [10, SS6]. ### Remarks Dilatations commute with algebraic attractors [10, Proposition 13.1]. ## 3 Dilatations of group schemes or Neron blowups One of the key properties allowed by dilatations is that it preserves the structure of group schemes in many cases. Dilatations of group schemes are also called Neron blowups and we also often use this terminology. ### Definitions of multi-centered Neron blowups Let \(S\) be a scheme and \(G\to S\) a group scheme. Let \(C=\{C_{i}\}_{i\in I}\) be a set of locally principal closed subschemes of \(S\). Put \(G|_{C_{i}}=G\times_{S}C_{i}\) and \(G|_{C}=\{G|_{C_{i}}\}\). Let \(H_{i}\subset G|_{C_{i}}\) be a closed subgroup scheme over \(C_{i}\) for all \(i\in I\) and let \(H=\{H_{i}\}\). The multi-centered dilatation \[\mathcal{G}:=\operatorname{Bl}^{G|_{C}}_{H}G\longrightarrow G\] is called the _Neron blowup_ of \(G\) with multi-center \(H,G|_{C}\). We also use the notation \(\operatorname{Bl}^{C}_{H}G\) to denote \(\mathcal{G}\). In the case \(I\) has a single element, we shall refer to \(\operatorname{Bl}^{C}_{H}G\) as mono-centred Neron blowups By Proposition 2.12 the structural morphism \(\mathcal{G}\to S\) defines an object in \(\operatorname{Sch}^{C\text{-reg}}_{S}\). Proposition 3.1 (Mrr20, Ma23d).: Let \(\mathcal{G}\to S\) be the above multi-centered Neron blowup. 1. The \(S\)-scheme \(\mathcal{G}\) represents the contravariant functor \(\operatorname{Sch}_{S}^{C\text{-reg}}\to\operatorname{Sets}\) \[T\;\longmapsto\;\left\{T\to G\;:\;\begin{array}{c}T|_{C_{i}}\to G|_{C_{i}} \text{ factors through}\\ H_{i}\subset G|_{C_{i}}\text{ for all }i\end{array}\right\}.\] 2. Let \(T\to S\) be an object in \(\operatorname{Sch}_{S}^{C\text{-reg}}\), then as subsets of \(G(T)\) \[\mathcal{G}(T)=\bigcap_{i\in I}\big{(}\operatorname{Bl}_{H_{i}}^{C_{i}}G \big{)}(T).\] 3. The map \(\mathcal{G}\to G\) is affine. Its restriction over \(C_{i}\) factors as \(\mathcal{G}_{i}\to H_{i}\subset G|_{C_{i}}\) for all \(i\) 4. If the Neron blowup \(\mathcal{G}\to S\) is flat, then it is equipped with the structure of a group scheme such that \(\mathcal{G}\to G\) is a morphism of \(S\)-group schemes. Remark 3.2. We saw that in favorable cases, dilatations preserve group scheme structures. In fact dilatations preserve similarly monoid scheme structures and Lie algebra schemes structures, or more generally structures defined by products, cf. [10, SS7] for details. Remark 3.3. Dilatations commute with the formation of Lie algebra schemes in a natural sense \[\mathbb{L}ie(\operatorname{Bl}_{H}^{G|_{C}}G)\cong\operatorname{Bl}\big{\{} \hskip-1.0pt\begin{subarray}{c}Lie(G)\times_{S}C_{i}\\ \mathbb{L}ie(H_{i})\end{subarray}\hskip-1.0pt\big{\}}_{i\in I}\mathbb{L}ie(G)\] cf. [10, SS7] for precise flatness assumptions. ### Mono-centered Neron blowups We proceed with the notation from SS3.1 and now deal with the mono-centered case, so now \(k=1\). We put \(S_{0}=C_{1}\) and \(H=H_{1}\). We also put \(G_{0}=G\times_{S}S_{0},H_{0}=H\times_{S}S_{0}\) and \(K_{0}=K\times_{S}S_{0}\). Proposition 3.4 (10). Assume that \(S_{0}\) is a Cartier divisor in \(S\) and \(G\to S\) is flat. Let \(\eta:K\to G\) be a morphism of group schemes over \(S\) such that \(K\to S\) is flat. 
Assume that \(H\subset G\) is a closed subgroup scheme over \(S\) such that \(H\to S\) is flat and \(\operatorname{Bl}_{H}^{S_{0}}G\to S\) is flat (and in particular a group scheme). Assume that \(K_{0}\) commutes with \(H_{0}\) in the sense that the morphism \(K_{0}\times_{S_{0}}H_{0}\to G_{0}\), \((k,h)\mapsto\eta(k)h\eta(k)^{-1}\) equals the composition morphism \(K_{0}\times_{S_{0}}H_{0}\to H_{0}\subset G_{0}\), \((k,h)\mapsto h\). Then \(K\) normalizes \(\operatorname{Bl}_{H}^{S_{0}}G\), more precisely the solid composition map factors uniquely through \(\operatorname{Bl}_{H}^{S_{0}}G\). Theorem 3.5 (20, 21). Assume that \(G\to S\) is flat, locally finitely presented and \(H\to S_{0}\) is flat, regularly immersed in \(G_{0}\). Let \(\mathcal{G}\to G\) be the dilatation \(\operatorname{Bl}_{H}^{S_{0}}G\) with exceptional divisor \(\mathcal{G}_{0}:=\mathcal{G}\times_{S}S_{0}\). Let \(\mathcal{J}\) be the ideal sheaf of \(G_{0}\) in \(G\) and \(\mathcal{J}_{H}:=\mathcal{J}|_{H}\). Let \(V\) be the restriction of the normal bundle \(\mathbb{V}(\mathcal{C}_{H/G_{0}}\otimes\mathcal{J}_{H}^{-1})\to H\) along the unit section \(e_{0}\colon S_{0}\to H\). 1. Locally over \(S_{0}\), there is an exact sequence of \(S_{0}\)-group schemes \(1\to V\to\mathcal{G}_{0}\to H\to 1\). 2. Assume given a lifting of \(H\) to a flat \(S\)-subgroup scheme of \(G\). Then there is globally an exact, canonically split sequence \(1\to V\to\mathcal{G}_{0}\to H\to 1\). 3. If \(G\to S\) is smooth, separated and \(\mathcal{G}\to G\) is the dilatation of the unit section of \(G\), there is a canonical isomorphism of smooth \(S_{0}\)-group schemes \(\mathcal{G}_{0}\;\stackrel{{\sim}}{{\longrightarrow}}\; \operatorname{Lie}(G_{0}/S_{0})\otimes\operatorname{N}_{S_{0}/S}^{-1}\) where \(\operatorname{N}_{S_{0}/S}\) is the normal bundle of \(S_{0}\) in \(S\). Remark 3.6.: In the situation of Theorem 3.5 (2), the group \(H\) acts by conjugation on \(V=\mathbb{V}(e_{0}^{*}\mathcal{C}_{H/G_{0}}\otimes\mathcal{J}_{S_{0}}^{-1})\). It is expected that this additive action is linear, and is in fact none other than the "adjoint" representation of \(H\) on its normal bundle as in [12, Exp. I, Prop. 6.8.6]. When the base scheme is the spectrum of a discrete valuation ring this is proved in [1, Prop. 2.7]. Assume now that \(j\colon S_{0}\hookrightarrow S\) is an effective Cartier divisor, that \(G\to S\) is a flat, locally finitely presented group scheme and that \(H\subset G_{0}:=G\times_{S}S_{0}\) is a flat, locally finitely presented closed \(S_{0}\)-subgroup scheme. In this context, there is another viewpoint on the dilatation \(\mathcal{G}\) of \(G\) in \(H\), namely as the kernel of a certain map of syntomic sheaves. To explain this, let \(f\colon G_{0}\to G_{0}/H\) be the morphism to the fppf quotient sheaf, which by Artin's theorem ([13, Cor. 6.3] and [14, 15]) is representable by an algebraic space. By the structure theorem for algebraic group schemes (see [12, Exp. VII\({}_{\mathrm{B}}\), Cor. 5.5.1]) the morphisms \(G\to S\) and \(H\to S_{0}\) are syntomic. Since \(f\colon G_{0}\to G_{0}/H\) makes \(G_{0}\) an \(H\)-torsor, it follows that \(f\) is syntomic also. Proposition 3.7 ([16, Lemma 3.8]).: Let \(S_{\mathrm{syn}}\) be the small syntomic site of \(S\). 
Let \(\eta\colon G\to j_{*}j^{*}G\) be the adjunction map in the category of sheaves on \(S_{\mathrm{syn}}\) and consider the composition \(v=(j_{*}f)\circ\eta\): \[G\xrightarrow{\eta}j_{*}j^{*}G=j_{*}G_{0}\xrightarrow{j_{*}f}j_{*}(G_{0}/H).\] Then the dilatation \(\mathcal{G}\to G\) is the kernel of \(v\). More precisely, we have an exact sequence of sheaves of pointed sets in \(S_{\mathrm{syn}}\): \[1\longrightarrow\mathcal{G}\longrightarrow G\xrightarrow{\ v\ }j_{*}(G_{0}/H)\longrightarrow 1.\] If \(G\to S\) and \(H\to S_{0}\) are smooth, then the sequence is exact as a sequence of sheaves on the small etale site of \(S\). As a corollary, one has the following useful and typical result. Corollary 3.8 ([16]).: Let \(\mathcal{O}\) be a ring and \(\pi\subset\mathcal{O}\) an invertible ideal such that \((\mathcal{O},\pi)\) is a henselian pair. Let \(G\) be a smooth, separated \(\mathcal{O}\)-group scheme and \(\mathcal{G}\to G\) the dilatation of the trivial subgroup over \(\mathcal{O}/\pi\). If either \(\mathcal{O}\) is local or \(G\) is affine, then the exact sequence of Proposition 3.7 induces an exact sequence of groups: \[1\longrightarrow\mathcal{G}(\mathcal{O})\longrightarrow G(\mathcal{O})\longrightarrow G(\mathcal{O}/\pi)\longrightarrow 1.\] For instance, if \(G=\mathbb{G}_{m,\mathbb{Z}_{p}}\) and \(\mathcal{G}\to G\) is the dilatation of the trivial subgroup over \(\mathbb{Z}_{p}/p\), this sequence reads \(1\to 1+p\mathbb{Z}_{p}\to\mathbb{Z}_{p}^{\times}\to\mathbb{F}_{p}^{\times}\to 1\).
## Part II. Some applications
## 4 Models of group schemes, representation categories and Tannakian groups
In several mathematical theories, one finds the structure of a category with a _tensor product_, and one of the main goals of categorical Tannakian theory is to realize the latter categories as representations of group schemes. If we deal with categories over a _field_, and this is a somewhat well-known area with [13] being a fundamental reference, dilatations have not played a role. In the case where we deal with categories which are linear over a _discrete valuation_ ring, a _Dedekind domain_, or more complicated rings, the outputs are much scarcer and the main reference is the beautiful, yet arid, monograph [10]. But in this situation, dilatations have played a role. Following [11], N. D. Duong and P. H. Hai [14] went into technical aspects of [10] and produced a more contemporaneous text to study tensor categories over Dedekind domains. This prompted further study [15, 16]; in these papers, the authors begin to look at Neron blowups (in the sense of Section 3) and the resulting categories systematically. It is also useful to mention here the paper [11], where the idea of looking at representation categories of Neron blowups already appears. In this section we fix a discrete valuation ring \(R\) with uniformizer \(\pi\), residue field \(k\) and fraction field \(K\). We put \(S=\operatorname{Spec}(R)\) and \(S_{i}=\operatorname{Spec}(R/(\pi^{i+1}))\) for \(i\in\mathbb{N}\).
### Group schemes from categories
Let \(\mathcal{T}\) be a neutral Tannakian category over \(R\) in the sense of [14, Definition 1.2.5]. The reader having encountered only (neutral) Tannakian categories over fields [15, Section 2] should note that the distinctive property of \(\mathcal{T}\) is a weakening of the existence of "duals" [15, Definition 1.7]. This is to be replaced by the property that every object is a quotient of an "object having a dual." That this property holds for representation categories of group schemes is [13, Proposition 3]. (For higher-dimensional bases, see [12, Lemma 2.5].) But we face a non-trivial requirement: for example, \(\operatorname{Rep}_{W(\overline{\mathbb{F}}_{p})}(\overline{\mathbb{F}}_{p})\) fails to satisfy it [16, Example 4.7].
Once this definition of neutral Tannakian category is given, the main theorem of [15, Theorem 2.11] has his analogue in the present context: If \(\omega:\mathcal{T}\to R\text{-}\mathbf{mod}\) is a faithful, \(R\)-linear and exact tensor functor, then there exists an affine and _flat_ group scheme \(\Pi_{\mathcal{T}}\) over \(R\) and an equivalence \[\overline{\omega}:\mathcal{T}\longrightarrow\operatorname{Rep}_{R}(\Pi_{ \mathcal{T}})\] such that composing \(\overline{\omega}\) with the forgetful functor \(\operatorname{Rep}_{R}(\Pi_{\mathcal{T}})\to R\text{-}\mathbf{mod}\) renders us \(\omega\) back. See [10, II.4.1.1] and [14, Theorem 1.2.2]. Let us present some examples of categories to which the theory can be applied. **Example 4.1**.: Let \(\Gamma\) be an abstract group and suppose that \(R=k\llbracket\pi\rrbracket\). Then, the category of \(R[\Gamma]\)-modules which are of finite type over \(R\) together with the forgetful functor is a neutral Tannakian category [16, 4.1]. **Example 4.2**.: Let \(X\) be a smooth and connected scheme over \(R\), \(\mathcal{D}_{X/R}\) the ring of differential operators [EGA, IV.16.8], and \(\mathcal{T}^{+}\) the category of \(\mathcal{D}_{X/R}\)-modules which, as \(\mathcal{O}_{X}\)-modules, are coherent. Using the fibre-by-fibre flatness criterion and [1], one proves that an object \(E\in\mathcal{T}^{+}\) is locally free if and only if it is \(R\)-flat. Let now \(\mathcal{T}\) be the full subcategory of \(\mathcal{T}^{+}\) having \[\left\{M\in\mathcal{T}^{+}\,:\quad\begin{array}{l}\text{There exists $E\in \mathcal{T}^{+}$ which}\\ \text{is $R$-flat and a surjection $E\to M$}\end{array}\right\}\] as objects. Once we give ourselves an \(R\)-point \(x_{0}\in X(R)\), it follows that \[\mathcal{T}\longrightarrow R\text{-}\mathbf{mod},\qquad E\longmapsto( \text{global sections of})\ x_{0}^{*}(E)\] defines a neutral Tannakian category. For more details, see [1] and [15]. **Example 4.3**.: We assume that \(R\) is Henselian and Japanese, e.g. \(R\) is complete. Let \(X\) be an irreducible, proper and flat \(R\)-scheme with geometrically reduced fibres. Let \(x_{0}\in X(R)\). Given a coherent sheaf \(E\) on \(X\), we say that \(E\) is _trivialized by a proper morphism_ if there exists a surjective and proper morphism \(\psi:Y\to X\) such that \(\psi^{*}E\) "comes from \(S=\operatorname{Spec}R\)", by which we mean that \(\psi^{*}E\) is the pull-back of a module via the structural morphism \(Y\to S\). Let \(\mathcal{T}^{+}\) be the full subcategory of the category of coherent modules on \(X\) having as objects those sheaves which are trivialized by a proper morphism. Proceeding along the lines of Example 4.2, it is possible to construct a smaller full subcategory \(\mathcal{T}\) of \(\mathcal{T}^{+}\) such that, endowing \(\mathcal{T}\) with the tensor product of sheaves, the functor \[\mathcal{T}\longrightarrow R\text{-}\mathbf{mod},\qquad E\longmapsto\text{( global sections of)}\ x_{0}^{*}(E)\] defines a neutral Tannakian category. Details are in [10]. This is the analogue theory of Nori's theory for the fundamental group scheme [11] in the relative setting, and one objective is to show that the group scheme associated to \(\mathcal{T}\) is _pro-finite_. See [10, Theorem 8.8]. ### Galois-Tannaka group schemes One obvious strategy to study Tannakian categories is to filter them by categories "generated" by a single object, just as in studying Galois groups it is fundamental to study finite extensions. 
Let \(\omega:\mathcal{T}\to R\text{-}\mathbf{mod}\) be as in the previous section so that \(\mathcal{T}\) is equivalent to \(\operatorname{Rep}_{R}(\Pi)\) for some affine and flat group scheme \(\Pi\). We shall take this equivalence as an equality, but we warn the reader that the structure of \(\Pi\) should be considered as being very complicated (just as is that of an absolute Galois group). Definition 4.4.: Let \(M\in\mathcal{T}\) be an object possessing a dual \(M^{\vee}\) and for each couple of non-negative integers \(a,b\), define \(\mathbf{T}^{a,b}M\) as \(M^{\otimes a}\otimes M^{\vee\otimes b}\). Then, \(\langle M\rangle_{\otimes}\) is the full subcategory of \(\mathcal{T}\) having as objects those which are quotients of subobjects of elements of the form \[\mathbf{T}^{a_{1},b_{1}}M\oplus\cdots\oplus\mathbf{T}^{a_{r},b_{r}}M,\] for varying \(r\), \(a_{1},\dots,a_{r}\), \(b_{1},\dots,b_{r}\). The Tannakian group scheme associated to \(\langle M\rangle_{\otimes}\) via \(\omega\) will be called here the _(full) Galois-Tannaka group (scheme)_ of \(M\). As we concentrate on a neutral Tannakian category, it is instructive to note that the splicing of \(\mathcal{T}\) by various \(\langle M\rangle_{\otimes}\) amounts to looking at various "images" of \(\Pi\). Before entering this topic, recall that, given a base field \(F\) and a morphism \(\varphi:G^{\prime}\to G\) of affine group schemes over \(F\), the closed image \(\operatorname{Im}_{\varphi}\) [EGA, I.9.5] is a _closed subgroup scheme_ of \(G\) such that the natural morphism \(G^{\prime}\to\operatorname{Im}_{\varphi}\) is _faithfully flat_ [25, Theorem on 15.1]. In this case, \(\operatorname{Im}_{\varphi}\) enjoys both "desirable properties" of an image. Definition 4.5.: Let \(\rho:\Pi\to G\) be a morphism of flat and affine group schemes over \(R\). Define the _restricted_ image of \(\rho\), denoted \(\operatorname{Im}_{\rho}\), as the affine scheme associated to the algebra \[B_{\rho}=\text{Image of }\mathcal{O}(G)\to\mathcal{O}(\Pi).\] (In other words, \(\operatorname{Im}_{\rho}\) is the "closed" image of \(\rho\) [EGA, I.9.5].) Define its _full_ image \(\operatorname{Im}_{\rho}^{\prime}\) as being the affine scheme associated to \[B_{\rho}^{\prime}=\{f\in K\otimes\mathcal{O}(\Pi)\,:\,\pi^{m}f\in B_{\rho}\text{ for some }m\geqslant 0\}. \tag{\S}\] It is not difficult to see that \(\operatorname{Im}_{\rho}\) and \(\operatorname{Im}_{\rho}^{\prime}\) are affine group schemes. With these definitions, \(\rho\) factors as \[\Pi\stackrel{\psi}{\longrightarrow}\operatorname{Im}_{\rho}^{\prime}\stackrel{u}{\longrightarrow}\operatorname{Im}_{\rho}\stackrel{\iota}{\longrightarrow}G, \tag{$\dagger$}\] where \(\iota\) is a _closed immersion_ and \(u\) induces an isomorphism between generic fibres. A fundamental result [11, Theorem 4.1.1] now ensures that \(\psi\) is faithfully flat, so that the terms "images" are justified and the factorization in \((\dagger)\) is called the _diptych_ of \(\rho\). In addition, if \[\rho_{K}:\Pi\otimes K\longrightarrow G\otimes K\] stands for the morphism obtained from \(\rho\) by base-change to \(K\), we have \[\operatorname{Im}^{\prime}_{\rho}\otimes K=\operatorname{Im}_{\rho}\otimes K=\operatorname{Im}(\rho_{K}).\] Proposition 4.6 (DHS18, Proposition 4.10).: Let \(M\) be a finite and free \(R\)-module affording a representation of \(\Pi\) and let \(\rho:\Pi\to\operatorname{GL}(M)\) be the associated homomorphism.
Then the obvious functor \(\operatorname{Rep}_{R}(\operatorname{Im}^{\prime}_{\rho})\to\operatorname{Rep}_{R}(\Pi)\) defines an equivalence between \(\operatorname{Rep}_{R}(\operatorname{Im}^{\prime}_{\rho})\) and \(\langle M\rangle_{\otimes}\). Put differently, \(\operatorname{Im}^{\prime}_{\rho}\) is the Galois-Tannaka group of \(M\) (in \(\operatorname{Rep}_{R}(\Pi)\)). Remark 4.7.: Let \(\operatorname{Rep}^{\circ}_{R}(\operatorname{Im}_{\rho})\) be the full subcategory of \(\operatorname{Rep}_{R}(\operatorname{Im}_{\rho})\) consisting of objects having a dual; it is possible to show that \(\operatorname{Rep}^{\circ}_{R}(\operatorname{Im}_{\rho})\) is equivalent to a full subcategory of \(\langle M\rangle_{\otimes}\). On the other hand, the functor \(\operatorname{Rep}_{R}(\operatorname{Im}_{\rho})\to\operatorname{Rep}_{R}(\Pi)\) may easily fail to be full. _From now on, we give ourselves a representation \(\rho:\Pi\to\operatorname{GL}(M)\)_ as in Proposition 4.6. It is at this point that the theory over \(R\) departs from the theory over a field in a significant way. Indeed, in the case of a base field, Galois-Tannaka group schemes are known to be of finite type [11, Proposition 2.20]. _This is not unconditionally true over \(R\)_ since in order to construct \(\operatorname{Im}^{\prime}_{\rho}\), it was required to "saturate" the ring \(B_{\rho}\) in \((\S)\). On the other hand, the morphism \(\operatorname{Im}_{\rho}\to\operatorname{GL}(M)\) is a closed immersion and \(\operatorname{Im}_{\rho}\) is of finite type. Definition 4.8.: A _model_ of a group scheme of finite type \(G\) over \(K\) is a flat group scheme \(\mathbb{G}\) over \(R\) such that \(\mathbb{G}\otimes_{R}K\cong G\), as \(K\)-group schemes. We often identify \(G\) and the generic fibre \(\mathbb{G}\otimes_{R}K\). A morphism of models \(\mathbb{G}\to\mathbb{G}^{\prime}\) of \(G\) is a morphism \(\mathbb{G}\to\mathbb{G}^{\prime}\) of group schemes over \(R\) which induces the identity on \(G\) once the proper identifications are made. Remark 4.9.: The definition of model used here differs from the one used in [10, Tag 0C2R] and [12]; namely, we do not assume our models to be of finite type over \(R\). With this terminology, \(\operatorname{Im}_{\rho}\) and \(\operatorname{Im}^{\prime}_{\rho}\) are models of \(\operatorname{Im}(\rho_{K})\). A well-known result of Waterhouse-Weisfeiler about the relations between models is the following. Theorem 4.10 ([12, Theorem 1.4], [11, Theorem 2.11]).: Let \(v:G^{\prime}\to G\) be a morphism of flat \(S\)-group schemes such that \(v\) is an isomorphism on generic fibres. Then \(v\) is a composite of mono-centered Neron blowups (along the divisor defined by \(\pi\)). In other words, a morphism of models of finite type is a composite of mono-centered Neron blowups. If \(G\) and \(G^{\prime}\) are of finite type, then the number of Neron blowups is finite. More precisely: Define \(v_{0}=v\) and \(G_{0}=G\). Suppose that \(v_{n}:G^{\prime}\to G_{n}\) has been obtained and put \[G_{n+1}=\operatorname{Bl}^{G_{n}\otimes k}_{\operatorname{Im}_{v_{n}\otimes k}}(G_{n}).\] (Recall that \(k\) is the residue field.) Letting \(v_{n+1}:G^{\prime}\to G_{n+1}\) be the morphism deduced from the universal property of \(\operatorname{Bl}^{G_{n}\otimes k}_{\operatorname{Im}_{v_{n}\otimes k}}(G_{n})\) (cf. Proposition 3.1), then \[\varprojlim_{n}v_{n}:G^{\prime}\longrightarrow\varprojlim_{n}G_{n}\] is an isomorphism.
In particular, if for some \(n\in\mathbb{N}\) the homomorphism \(v_{n}\otimes k\) is faithfully flat, then \(G^{\prime}\simeq G_{n}\). As was mentioned before, it is possible that \(\operatorname{Im}_{\rho}^{\prime}\) fails to be of finite type and hence the number of Neron blowups proposed by Theorem 4.10 to describe \(u:\operatorname{Im}_{\rho}^{\prime}\to\operatorname{Im}_{\rho}\) may be infinite. But in some cases, it does happen that the number of Neron blowups is finite, and a condition for this situation is described in Theorem 4.10. At this point, we remind the reader that in the situations we have in mind, the group scheme \(\Pi\) is usually extremely complicated and the determination of the image of a morphism \(\Pi\otimes k\to G\otimes k\), so that it is possible to apply the last claim in Theorem 4.10, can only be achieved on the side of \(\operatorname{Rep}_{R}(\Pi)\). It then becomes relevant to determine faithful representations of Neron blowups. (Here, we say that a representation is _faithful_ if the morphism to the associated general linear group is a closed immersion. This is not universally adopted.) The next result explains how to proceed in certain cases. **Theorem 4.11** ([4, Corollary 3.6]): _Let \(G\) be an affine and flat group scheme of finite type over \(S\). Let \(M\) be a finite and free \(R\)-module affording a faithful representation of \(G\). Given \(m\in M\), let_ \[H_{0}=\text{stabilizer of }m\otimes 1\in M\otimes k\] _in \(G\otimes k\). Let \(G^{\prime}=\operatorname{Bl}_{H_{0}}^{G\otimes k}(G)\). Then, letting \(1=R\) stand for the trivial representation of \(G^{\prime}\), the obvious map \(1\to M\otimes k\) determined by \(1\mapsto m\otimes 1\) is \(G^{\prime}\)-equivariant and the fibered product_ \[M^{\prime}:=M\underset{M\otimes k}{\times}1 \tag{$\blacktriangledown$}\] _now affords a faithful representation of \(G^{\prime}\)._ Let us illustrate the above result with a simple example showing how to compute a Galois-Tannaka group. **Example 4.12**: _Let \(k\) be of characteristic zero and \(\mathcal{T}\) be the category of representations of the abstract group \(\mathbb{Z}\) on finite \(R\)-modules. It is not difficult to see that \(\mathcal{T}\) is neutral Tannakian [4, Corollary 4.5]. Let \(\mathbb{Z}\) act on \(M=R\) by \(\gamma\cdot r=(1+\pi)^{\gamma}r\) and write \(\rho:\Pi\to\operatorname{GL}(M)(\simeq\mathbb{G}_{m,R})\) for the associated morphism of group schemes. It is not difficult to see that \(\operatorname{Im}_{\rho}=\mathbb{G}_{m,R}\) and we wish to compute \(\operatorname{Im}_{\rho}^{\prime}\). As mentioned above, the construction \((\S)\) is of little use. On the other hand, we know that \(\Pi\) will act trivially on \(M\otimes k\) because \(\mathbb{Z}\) does. We then need to perform the "dilatation" \(M^{\prime}\) of \(M\) as in \((\blacktriangledown)\), which is a faithful representation of the Neron blowup \(\operatorname{Bl}_{\{e\}}^{\mathbb{G}_{m}\otimes k}(\mathbb{G}_{m})\).
The elements \(m_{1}:=(\pi,0)\) and \(m_{2}:=(1,1)\) obviously form a basis for \(M^{\prime}\) and hence the resulting representation of \(\mathbb{Z}\) is defined by_ \[\gamma\longmapsto\begin{pmatrix}1+\pi&1\\ 0&1\end{pmatrix}^{\gamma}.\] _If \(\rho^{\prime}:\Pi\to\operatorname{GL}(M^{\prime})(\simeq\operatorname{GL}_{2})\) stands for the associated representation of \(\Pi\), we can say that \(\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\in\operatorname{GL}_{2}(k)\) belongs to the image of \(\rho^{\prime}\otimes k\) and therefore \(\operatorname{Im}_{\rho}^{\prime}\simeq\operatorname{Bl}_{\{e\}}^{\mathbb{G}_ {m}\otimes k}(\mathbb{G}_{m})\) because \(\operatorname{Bl}_{\{e\}}^{\mathbb{G}_{m}\otimes k}(\mathbb{G}_{m})\otimes \simeq\mathbb{G}_{a,k}\), and any element of \(k\setminus\{0\}\) generates a dense subgroup._ On the other hand, when the number of Neron blowups envisaged by Theorem 4.10 is infinite, a general principle behind [4, 5] is that the Galois-Tannaka groups can be obtained from group schemes of finite type via certain special types of (what we now call) _multi-centered_ Neron blowups. This is treated in the next section. ### Neron blowups of formal subgroup schemes Multi-centered dilatations having divisors which are supported on the same space have been studied more closely. For an affine group scheme \(G\) over \(R\), we shall write \(\widehat{G}\) for the completion \(G_{/G_{0}}\) of \(G\) along its closed fiber [10]. **Definition 4.13** ([4, Definition 5.6]).: _Let \(G\to S\) be an affine flat group scheme of finite type. For each \(i\in\mathbb{N}\), let \(G_{i}\) be the \(S_{i}\)-group scheme \(G\times_{S}S_{i}\), and let \(H_{i}\to S_{i}\) be a closed, \(S_{i}\)-flat, subgroup-scheme of \(G_{i}\). Assume, in addition, that the natural base-change morphism_ \[H_{i+1}\times_{S_{i+1}}S_{i}\longrightarrow G_{i+1}\times_{S_{i+1}}S_{i}=G_{i}\] _defines an isomorphism \(H_{i+1}\times_{S_{i+1}}S_{i}\simeq H_{i}\) of group schemes. Said differently, the family \(\{H_{i}\}\) induces a formal closed subgroup scheme \(\mathfrak{H}\) of \(\widehat{G}\). We define the Neron blowup of \(G\) along \(\mathfrak{H}\), call it \(\operatorname{Bl}^{\widehat{G}}_{\mathfrak{H}}G\), as being \(\operatorname{Bl}^{\{G_{i}\}}_{\{H_{i}\}}G\to G\)._ **Remark 4.14**.: _If the formal scheme \(\mathfrak{H}\) is "algebraizable", meaning that it comes from a closed and flat subgroup scheme \(H\subset G\), this is mentioned in [11, SS 7.2]._ **Example 4.15**.: _Let \(p\) be a prime number, \(R=\mathbb{Z}_{p}\) and \(G=\mathbb{G}_{a,R}\). It then follows that the completion of \(G\) along its closed fibre is \(\operatorname{Spf}\mathbb{Z}_{p}\langle x\rangle\), where \(\mathbb{Z}_{p}\langle x\rangle\) is the subring of \(\mathbb{Z}_{p}\llbracket x\rrbracket\) consisting of power series \(\sum_{n}a_{n}x^{n}\) such that \(\lim a_{n}=0\). Let \(\mathfrak{H}\) be the closed formal subscheme of \(\widehat{G}\) determined by the ideal \((x)\subset\mathbb{Z}_{p}\langle x\rangle\). Then, it is not difficult to see that \(\operatorname{Bl}^{\widehat{G}}_{\mathfrak{H}}G\) is the group scheme determined by the Hopf subalgebra \(A=\{P\in\mathbb{Q}_{p}[x]\,:\,P(0)\in\mathbb{Z}_{p}\}\). Note that \(\operatorname{Bl}^{\widehat{G}}_{\mathfrak{H}}G\otimes\mathbb{F}_{p}\) is the trivial group scheme, while \(\operatorname{Bl}^{\widehat{G}}_{\mathfrak{H}}G\otimes\mathbb{Q}_{p}\) is \(\mathbb{G}_{a,\mathbb{Q}_{p}}\). 
In particular, the dimension of the generic and special fibres is distinct, even though \(\operatorname{Bl}^{\widehat{G}}_{\mathfrak{H}}G\) is itself flat over \(\mathbb{Z}_{p}\). Note, on the other hand, that the \(\mathbb{Z}_{p}\)-module \(A\) contains a copy of \(\mathbb{Q}_{p}\) and hence fails to be projective over \(\mathbb{Z}_{p}\). This seemingly harmless property is the cause of complications in the category of representations [12, Proposition 6.19] as the _inexistence_ of intersections of subrepresentations. **Example 4.16** ([12, 4.3]).: _Let \(R=k\llbracket\pi\rrbracket\), where \(k\) is a field of characteristic zero and let \(G=\mathbb{G}_{a,R}\times_{R}\mathbb{G}_{m,R}\). Letting \(x\) stand for "the" coordinate of \(\mathbb{G}_{a,R}\) and \(y\) for "the" coordinate of \(\mathbb{G}_{m,R}\), we define_ \[e^{\pi x}=\sum_{i=0}^{\infty}\frac{\pi^{i}}{i!}x^{i};\] _this is an element of \(\widehat{\mathcal{O}(G)}\). It is not difficult to see that \(y-e^{\pi x}\) cuts out a closed and formal subgroup scheme of \(\widehat{G}\), call it \(\mathfrak{H}\), and hence we obtain a model \(\operatorname{Bl}^{\widehat{G}}_{\mathfrak{H}}\to G\). Note that \(\mathfrak{H}\) is not algebraizable. Differently from the situation in Example 4.15, the \(R\)-module \(\mathcal{O}(\operatorname{Bl}^{\widehat{G}}_{\mathfrak{H}})\) is projective._ One important consequence of the procedure of taking formal blowups is the following. It says that, in some contexts, all the information concerning a model of a group scheme can be encoded in a formal Neron blowup (Theorem 4.17). **Theorem 4.17** ([12, Corollary 3.3]).: _Suppose that the \(R\) is complete and of residual characteristic zero. Let \(\mathcal{G}\to G\) be a morphism of affine and flat \(R\)-group schemes inducing an isomorphism on the generic fibres, and suppose in addition that \(G\) is of finite type. Then, there exists a group scheme \(G^{\prime}\) over \(R\), flat and of finite type, and a morphism of group schemes \(G^{\prime}\to G\) which is an isomorphism on generic fibres, a closed and formal subgroup scheme \(\mathfrak{H}^{\prime}\) of \(\widehat{G}^{\prime}\), and an isomorphism_ \[\mathcal{G}\overset{\sim}{\longrightarrow}\operatorname{Bl}^{\widehat{G}^{ \prime}}_{\mathfrak{H}^{\prime}}G^{\prime}.\] Remark 4.18.: Under the assumptions of Theorem 4.17, Theorems 4.10 and 4.17 together say that any morphism of models \(G^{\prime}\to G\) with \(G\) of finite type over \(R\) is obtained as a composite of multi-centered Neron blowups, and more precisely as a formal Neron blowup composed by several mono-centered Neron blowups. ## 5 Congruent isomorphisms and relations with Bruhat-Tits buildings, the **Moy-Prasad isomorphism and admissible representations of \(p\)-adic groups** In this section we report on congruent isomorphisms. Let \((\mathcal{O},\pi)\) be a henselian pair where \(\pi\subset\mathcal{O}\) is an invertible ideal. Let us start with the following result proved in [14]. Theorem 5.1 ([14]).: (Congruent isomorphism) Let \(r,s\) be integers such that \(0\leqslant r/2\leqslant s\leqslant r\). Let \(G\) be a smooth, separated \(\mathcal{O}\)-group scheme. Let \(G_{r}\) be the \(r\)-th iterated dilatation of the unit section (i.e. \(G_{r}=\operatorname{Bl}_{\mathcal{C}_{G}}^{\mathcal{O}/\pi^{r}}G\)) and \(\mathfrak{g}_{r}\) be its Lie algebra. 
If \(\mathcal{O}\) is local or \(G\) is affine, there is a canonical and functorial isomorphism of groups: \[G_{s}(\mathcal{O})/G_{r}(\mathcal{O})\;\stackrel{{\sim}}{{ \longrightarrow}}\;\mathfrak{g}_{s}(\mathcal{O})/\mathfrak{g}_{r}(\mathcal{ O}).\] ( \[\star\] ) Remark 5.2.: We comment on works prior to Theorem 5.1. 1. In the case of an affine, smooth group scheme over a discrete valuation ring, the isomorphism of Theorem 5.1 appears without proof in [11, proof of Lemma 2.8]. 2. The proof of Theorem 5.1 relies on Proposition 2.15 and Theorem 3.5 whose proofs (given in [14, Prop. 2.9] and [14, Th. 3.5]) basically consist in playing and computing with quasi-coherent ideals. These computations on quasi-coherent ideals in [14] were partly motivated by related computations on ideals done in the affine case in [15, Appendix A] to understand the congruent isomorphism. The statement of [14, Th. 3.5] is moreover partly inspired by [13, Th. 1.5, Th. 1.7]. 3. If \(G=\mathbb{G}_{m}/\mathbb{Z}_{p}\), isomorphism (\(\star\)) follows from the multiplicative structure of \(\mathbb{Z}_{p}\) cf. e.g. [10], [11] and [12, Chap. 15]. Similar isomorphisms for matrix groups over non-Archimedean local fields were used in [10, p. 442 line 1], [11, 2.13], [12, p. 22], [13, p. 337] and many other references to study admissible representations of \(p\)-adic classical groups. In the matrix case, the filtrations involved are defined using matrix theoretic descriptions and avoiding scheme theoretic tools. For general reductive groups over non-Archimedean local fields, such kind of isomorphisms were introduced and used in [13, SS 2], [15, SS 2], [16], [17, SS 1], [18, SS 1] to study admissible representations. In the reductive case, the filtrations involved are the Moy-Prasad filtrations [15], [15] and the isomorphism is called the Moy-Prasad isomorphism. These filtrations are defined for points in the Bruhat-Tits building using the associated valued root datum [10][10]. The Moy-Prasad isomorphim in these references was defined using somehow ad hoc formulas and the valued root datum, in particular avoiding the congruent isomorphism. However it is known that one has to modify the original Moy-Prasad filtrations to ensure the validity of the Moy-Prasad isomorphism in full generality, cf. [11, SS0.3] and [13, SS13]. If \(G=\operatorname{GL}_{2}/\mathbb{Z}_{p}\), \(G_{n}(\mathbb{Z}_{p})=\begin{pmatrix}1+\mathfrak{p}^{n}&\mathfrak{p}^{n}\\ \mathfrak{p}^{n}&1+\mathfrak{p}^{n}\end{pmatrix}\subset\operatorname{GL}_{2} (\mathbb{Z}_{p})\) and \(\mathfrak{g}_{n}(\mathbb{Z}_{p})=\begin{pmatrix}\mathfrak{p}^{n}&\mathfrak{p} ^{n}\\ \mathfrak{p}^{n}&\mathfrak{p}^{n}\end{pmatrix}\subset M_{2}(\mathbb{Z}_{p})\) for any \(n>0\). The isomorphism (\(\star\)) gives us, for pairs \((r,s)\) such that \(0<\frac{r}{2}\leqslant s\leqslant r\), isomor phisms \[\begin{pmatrix}1+\mathfrak{p}^{s}&\mathfrak{p}^{s}\\ \mathfrak{p}^{s}&1+\mathfrak{p}^{s}\end{pmatrix}/\begin{pmatrix}1+\mathfrak{p}^{ r}&\mathfrak{p}^{r}\\ \mathfrak{p}^{r}&1+\mathfrak{p}^{r}\end{pmatrix}\cong\begin{pmatrix} \mathfrak{p}^{s}&\mathfrak{p}^{s}\\ \mathfrak{p}^{s}&\mathfrak{p}^{s}\end{pmatrix}/\begin{pmatrix}\mathfrak{p}^{ r}&\mathfrak{p}^{r}\\ \mathfrak{p}^{r}&\mathfrak{p}^{r}\end{pmatrix}.\] These maps are given by \([1+M]\mapsto[M]\). 
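The numerical constraint \(0\leqslant r/2\leqslant s\leqslant r\) is exactly what makes this formula work; the underlying elementary computation is worth recording. For \(M,N\in\mathfrak{g}_{s}(\mathbb{Z}_{p})\) one has \[(1+M)(1+N)=1+M+N+MN\qquad\text{with}\qquad MN\in\mathfrak{g}_{2s}(\mathbb{Z}_{p})\subset\mathfrak{g}_{r}(\mathbb{Z}_{p})\ \text{ because }\ r\leqslant 2s,\] so that \([M+N+MN]=[M+N]\) in \(\mathfrak{g}_{s}(\mathbb{Z}_{p})/\mathfrak{g}_{r}(\mathbb{Z}_{p})\); hence \([1+M]\mapsto[M]\) is a group homomorphism, and well-definedness and bijectivity are checked in the same elementary way.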
Using the formula \([1+M]\mapsto[M]\), it is elementary to check that we have other isomorphisms of abstract groups \[\begin{pmatrix}1+\mathfrak{p}^{3}&\mathfrak{p}^{3}\\ \mathfrak{p}^{3}&1+\mathfrak{p}^{3}\end{pmatrix}/\begin{pmatrix}1+\mathfrak{p}^{5}&\mathfrak{p}^{6}\\ \mathfrak{p}^{6}&1+\mathfrak{p}^{5}\end{pmatrix}\cong\begin{pmatrix}\mathfrak{p}^{3}&\mathfrak{p}^{3}\\ \mathfrak{p}^{3}&\mathfrak{p}^{3}\end{pmatrix}/\begin{pmatrix}\mathfrak{p}^{5}&\mathfrak{p}^{6}\\ \mathfrak{p}^{6}&\mathfrak{p}^{5}\end{pmatrix} \tag{**}\] and \[\begin{pmatrix}1+\mathfrak{p}^{3}&\mathfrak{p}^{9}\\ \mathfrak{p}^{3}&1+\mathfrak{p}^{3}\end{pmatrix}/\begin{pmatrix}1+\mathfrak{p}^{6}&\mathfrak{p}^{9}\\ \mathfrak{p}^{6}&1+\mathfrak{p}^{6}\end{pmatrix}\cong\begin{pmatrix}\mathfrak{p}^{3}&\mathfrak{p}^{9}\\ \mathfrak{p}^{3}&\mathfrak{p}^{3}\end{pmatrix}/\begin{pmatrix}\mathfrak{p}^{6}&\mathfrak{p}^{9}\\ \mathfrak{p}^{6}&\mathfrak{p}^{6}\end{pmatrix}. \tag{***}\] These isomorphisms are obtained as follows from the point of view of dilatations. Theorem 5.3 (Ma23d).: (Multi-centered congruent isomorphism) Let \(G\) be a separated and smooth group scheme over \(S\). Let \(H_{0}\subset H_{1}\subset\ldots\subset H_{k}\) be closed subgroup schemes of \(G\) such that \(H_{i}\) is smooth over \(S\) for \(0\leqslant i\leqslant k\) and \(H_{0}=e_{G}\). Let \(s_{0},s_{1},\ldots,s_{k}\) and \(r_{0},r_{1},\ldots,r_{k}\) be in \(\mathbb{N}\) such that 1. \(s_{i}\geqslant s_{0}\) and \(r_{i}\geqslant r_{0}\) for all \(i\in\{0,\ldots,k\}\), 2. \(r_{i}\geqslant s_{i}\) and \(r_{i}-s_{i}\leqslant s_{0}\) for all \(i\in\{0,\ldots,k\}\). Assume that \(G\) is affine or \(\mathcal{O}\) is local. Then we have a canonical isomorphism of groups \[\operatorname{Bl}_{H_{0},H_{1},\ldots,H_{k}}^{s_{0},s_{1},\ldots,s_{k}}G/\operatorname{Bl}_{H_{0},H_{1},\ldots,H_{k}}^{r_{0},r_{1},\ldots,r_{k}}G\cong\operatorname{Lie}(\operatorname{Bl}_{H_{0},H_{1},\ldots,H_{k}}^{s_{0},s_{1},\ldots,s_{k}}G)/\operatorname{Lie}(\operatorname{Bl}_{H_{0},H_{1},\ldots,H_{k}}^{r_{0},r_{1},\ldots,r_{k}}G)\] where \(\operatorname{Bl}_{H_{0},\ldots,H_{k}}^{t_{0},\ldots,t_{k}}G\) denotes \(\operatorname{Bl}_{H_{0},\ldots,H_{k}}^{\mathcal{O}/\pi^{t_{0}},\ldots,\mathcal{O}/\pi^{t_{k}}}G\) for any \(t_{0},\ldots,t_{k}\in\mathbb{N}\). Now let \(G\) be \(\operatorname{GL}_{2}/\mathbb{Z}_{p}\). Let \(e_{G}\subset G\) be the trivial subgroup. Let \(T\) be the diagonal split torus in \(G\). Let \(B\) be the lower triangular Borel in \(G\) over \(\mathbb{Z}_{p}\). 1. The isomorphism \((**)\) above is given by Theorem 5.3 with \((\mathcal{O},\pi)=(\mathbb{Z}_{p},\mathfrak{p})\), \(H_{0}=e_{G}\), \(H_{1}=T\), \(s_{0}=3,s_{1}=3,r_{0}=5\) and \(r_{1}=6\). 2. The isomorphism \((***)\) above is given by Theorem 5.3 with \((\mathcal{O},\pi)=(\mathbb{Z}_{p},\mathfrak{p})\), \(H_{0}=e_{G}\), \(H_{1}=B\), \(s_{0}=3,s_{1}=9,r_{0}=6\) and \(r_{1}=9\). Remark 5.4.: We comment on Theorem 5.3. 1. Theorem 5.3 corresponds to [11, Corollary 8.3]; a slightly more general result is given by [11, Theorem 8.1]. 2. The proof of Theorem 5.3 given in [11] relies on Theorem 5.1 and the study of multi-centered dilatations. 3. Note that [12, Lemma 1.3] provides a comparable "multi-centered" isomorphism, in the framework of reductive groups over non-Archimedean local fields. Recall that dilatations of schemes over discrete valuation rings are used in Yu's approach [12] to Bruhat-Tits theory for reductive groups over a henselian discretely valued field with perfect residue field. We refer to the monograph by Kaletha and Prasad [13], which includes among other things a detailed exposition of [12]. The congruent isomorphism (Theorem 5.1) and its proof (relying on several results of [14]) are now used as a foundation to prove the Moy-Prasad isomorphism for reductive groups mentioned in Remark 5.2, cf. [13, Theorem 13.5.1 and its proof, Proposition A.5.19 (3) and its proof].
As a consequence, dilatations and congruent isomorphisms are now part of the foundation to study admissible representations of reductive \(p\)-adic groups. Furthermore, other connections between dilatations and groups used to study admissible representations can be found in [21, SS10] and [17, Example 1.4]. Reciprocally, the problem of constructing supercuspidal representations of \(p\)-adic groups (cf. e.g. [17, Remark/Conclusion], [16, 19]), or more generally types in the sense of [1], could continue to be a source of inspiration to expend the theory of dilatations. **Remark 5.5**.: As we explained before, the book [10] provides a carefully written new approach to Bruhat-Tits theory in the case of discrete valuations. This beautiful monograph uses the theory of dilatations to deal with integral models whereas the original Bruhat-Tits theory [1] did not. Let us quote [10, Introduction]: "_Next we turn to the construction of integral models [...]. Instead of using the approach of Bruhat-Tits via schematic root data, we employ a simpler and more direct method due to Jiu-Kang Yu [21], based on the systematic use of Neron dilatations._" The book [10] offers an appendix on dilatations. Though [10, Appendix A.5] takes into account the treatment of dilatations in [20], it restricts to the framework of discrete valuations. Originally, Bruhat-Tits theory [1] deals also with non discrete valuations, it is natural to ask whether the modern and general approach to dilatations of schemes initiated in [20] could help to provide a more conceptual treatment (in the spirit of [21] and [10]) of some parts of [11]. Bruhat-Tits theory and dilatations over non discrete valuations were used in [20] and [17] to study Berkovich's point of view [14, Chap. 5] on Bruhat-Tits buildings of reductive groups over discrete and non-discrete valuations (cf. e.g. [20, 1.3.4] for precise assumptions). ## 6 Torsors, level structures and shtukas In this subsection, we explain that many level structures on moduli stacks of \(G\)-bundles are encoded in torsors under Neron blowups of \(G\) following [20]. Assume that \(X\) is a smooth, projective, geometrically irreducible curve over a field \(k\) with a Cartier divisor \(N\subset X\), that \(G\to X\) is a smooth, affine group scheme and that \(H\to N\) is a smooth closed subgroup scheme of \(G|_{N}\). In this case, the Neron blowup \(\mathcal{G}\to X\) is a smooth, affine group scheme. Let \(\operatorname{Bun}_{G}\) (resp. \(\operatorname{Bun}_{\mathcal{G}}\)) denote the moduli stack of \(G\)-torsors (resp. \(\mathcal{G}\)-torsors) on \(X\). This is a quasi-separated, smooth algebraic stack locally of finite type over \(k\) (cf. e.g. [1, Prop. 1] or [1, Thm. 2.5]). Pushforward of torsors along \(\mathcal{G}\to G\) induces a morphism \(\operatorname{Bun}_{\mathcal{G}}\to\operatorname{Bun}_{G}\), \(\mathcal{E}\mapsto\mathcal{E}\times^{\mathcal{G}}G\). We also consider the stack \(\operatorname{Bun}_{(G,H,N)}\) of \(G\)-torsors on \(X\) with level-(\(H\),\(N\))-structures, cf. [20, Definition 4.5]. Its \(k\)-points parametrize pairs \((\mathcal{E},\beta)\) consisting of a \(G\)-torsor \(\mathcal{E}\to X\) and a section \(\beta\) of the fppf quotient \((\mathcal{E}|_{N}/H)\to N\), i.e., \(\beta\) is a reduction of \(\mathcal{E}|_{N}\) to an \(H\)-torsor. 
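For orientation, it may help to spell out the most familiar special case of this notion. If \(G=\operatorname{GL}_{n,X}\) and \(H=\{e\}\subset G|_{N}\) is the trivial subgroup scheme, then a \(G\)-torsor on \(X\) is the same thing as a rank-\(n\) vector bundle \(\mathcal{E}\), and a level-\((H,N)\)-structure is a trivialization \[\beta:\mathcal{E}|_{N}\;\xrightarrow{\ \sim\ }\;\mathcal{O}_{N}^{\oplus n},\] i.e. a full level-\(N\) structure in the classical sense; the corresponding Neron blowup \(\mathcal{G}=\operatorname{Bl}_{\{e\}}^{N}\operatorname{GL}_{n,X}\) is then the smooth affine group scheme of matrices congruent to the identity along \(N\).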
**Proposition 6.1** [20].: _There is an equivalence of \(k\)-stacks_ \[\operatorname{Bun}_{\mathcal{G}}\ \stackrel{{\cong}}{{\longrightarrow}}\ \operatorname{Bun}_{(G,H,N)},\ \ \mathcal{E}\longmapsto(\mathcal{E}\times^{\mathcal{G}}G,\,\beta_{\mathrm{can}}),\] _where \(\beta_{\mathrm{can}}\) denotes the canonical reduction induced from the factorization \(\mathcal{G}|_{N}\to H\subset G|_{N}\)._ Thus, many level structures are encoded in torsors under Neron blowups. This construction is also compatible with the adelic viewpoint as follows. Let \(|X|\subset X\) be the set of closed points, and let \(\eta\in X\) be the generic point. We denote by \(F=\kappa(\eta)\) the function field of \(X\). For each \(x\in|X|\), we let \(\mathcal{O}_{x}\) be the completed local ring at \(x\) with fraction field \(F_{x}\) and residue field \(\kappa(x)=\mathcal{O}_{x}/\mathfrak{m}_{x}\). Let \(\mathbb{A}:=\prod_{x\in|X|}^{\prime}F_{x}\) be the ring of adeles (the restricted product with respect to the \(\mathcal{O}_{x}\)) with subring of integral elements \(\mathbb{O}=\prod_{x\in|X|}\mathcal{O}_{x}\). Proposition 6.2.: Assume either that \(k\) is a finite field and \(G\to X\) has connected fibers, or that \(k\) is a separably closed field. The Neron blowup \(\mathcal{G}\to X\) is smooth, affine with connected fibers, and there is a commutative diagram of groupoids identifying the vertical maps as the level maps. Now assume that \(k\) is a finite field. As a consequence of Proposition 6.1 one naturally obtains integral models for moduli stacks of \(G\)-shtukas on \(X\) with level structures over \(N\) via an isomorphism \(\operatorname{Sht}_{\mathcal{G},I_{\bullet}}\xrightarrow{\cong}\operatorname{Sht}_{(G,H,N),I_{\bullet}}\) (cf. [12, § 4.2.2] for precise definitions and details).
## 7 The "topology" of dilatations of affine schemes
### Constructing smooth complex affine varieties with controlled topology
Dilatations have played an important role in complex affine algebraic geometry during the nineties in connection with the construction and study of exotic complex affine spaces [13, 14], that is, smooth algebraic \(\mathbb{C}\)-varieties \(X\) of dimension \(n\) whose analytifications \(X^{\operatorname{an}}\) are homeomorphic to the Euclidean space \(\mathbb{R}^{2n}\) endowed with its standard structure of topological manifold but which are not isomorphic to the affine space \(\mathbb{A}_{\mathbb{C}}^{n}\) as \(\mathbb{C}\)-varieties. In this context, dilatations appeared under the name _affine modifications_ and were used as a powerful tool to produce from a given smooth complex affine variety \(X\) a new smooth complex affine variety \(X^{\prime}=\operatorname{Bl}_{Z}^{D}X\) for which the homology or homotopy type of the underlying topological manifold of the analytification of \(X^{\prime}\) can be determined under suitable hypotheses in terms of those of \(X\) and of the center \(\{[Z,D]\}\) of the dilatation. The study of the strong topology of affine modifications was initiated in this context mainly by Kaliman through an analytic counterpart of the notion of dilatation: Definition 7.1 [15].
Given a triple \((M,H,C)\) consisting of a complex analytic manifold \(M\), a closed submanifold \(C\) of \(M\) of codimension at least \(2\) and a complex analytic hypersurface \(H\) of \(M\) containing \(C\) in its smooth locus, the _Kaliman modification_ of \(M\) along \(H\) with center at \(C\) is the complex analytic manifold defined as the complement \(M^{\prime}\) of the proper transform \(H^{\prime}\) of \(H\) in the blow-up \(\sigma_{C}:\hat{M}\to M\) of \(M\) with center at \(C\). In the case where \((M,H,C)\) is the analytification of a triple \((X,D,Z)\) consisting of a smooth algebraic \(\mathbb{C}\)-variety \(X\), a smooth algebraic sub-variety \(Z\) of \(X\) of codimension at least two and of a reduced effective Cartier divisor \(D\) on \(X\) containing \(Z\) in its regular locus, the Kaliman modification of \((M,H,C)\) coincides with the analytification of the dilatation \(X^{\prime}=\operatorname{Bl}_{Z}^{D}X\) of \(X\) with center \(\{[Z,D]\}\) of Section 2. Kaliman and Zaidenberg [13] developed a series of tools to describe the topology of the analytifications of affine modifications of smooth affine \(\mathbb{C}\)-varieties along principal divisors \(D\) with not necessarily smooth centers. One of these provides in particular a control on the preservation of the topology of the analytification under affine modifications: Theorem 7.2 [13, Proposition 3.1 and Theorem 3.1].: Let \(X\) be a smooth affine \(\mathbb{C}\)-variety and let \(\{[Z,D]\}\) be a center on \(X\) consisting of a closed subscheme \(Z\) of codimension at least \(2\) and of a principal effective divisor \(D\) containing \(Z\) as a closed subscheme. Let \(\sigma:\tilde{X}=\operatorname{Bl}^{D}_{Z}(X)\to X\) be the dilatation of \(X\) with center \(\{[Z,D]\}\) and let \(E\) be the exceptional divisor of \(\sigma\). Assume that the following conditions are satisfied: (i) The \(\mathbb{C}\)-variety \(\tilde{X}=\operatorname{Bl}^{D}_{Z}(X)\) is smooth; (ii) The divisors \(E\) and \(D\) are irreducible, \(E=\sigma^{*}D\), and the analytifications of \(E_{\operatorname{red}}\) and \(D_{\operatorname{red}}\) are topological manifolds. Then the following properties hold: (a) The homomorphism \(\sigma_{*}^{\operatorname{an}}:\pi_{1}(\tilde{X}^{\operatorname{an}})\to\pi_{1}(X^{\operatorname{an}})\) induced by \(\sigma^{\operatorname{an}}\) is an isomorphism; (b) The homomorphism \(\sigma_{*}^{\operatorname{an}}:H_{*}(\tilde{X}^{\operatorname{an}};\mathbb{Z})\to H_{*}(X^{\operatorname{an}};\mathbb{Z})\) induced by \(\sigma^{\operatorname{an}}\) is an isomorphism if and only if the homomorphism \(\sigma|_{E,*}^{\operatorname{an}}:H_{*}(E_{\operatorname{red}}^{\operatorname{an}};\mathbb{Z})\to H_{*}(D_{\operatorname{red}}^{\operatorname{an}};\mathbb{Z})\) is. Corollary 7.3.: In the setting of Theorem 7.2, assume that \(X^{\operatorname{an}}\) is a contractible smooth manifold, that \(Z_{\operatorname{red}}^{\operatorname{an}}\) is a topological manifold and that the homomorphism \[j_{*}^{\operatorname{an}}:H_{*}(Z_{\operatorname{red}}^{\operatorname{an}};\mathbb{Z})\to H_{*}(D_{\operatorname{red}}^{\operatorname{an}};\mathbb{Z})\] induced by the closed immersion \(j:Z\hookrightarrow D\) is an isomorphism. Then the analytification of \(\tilde{X}=\operatorname{Bl}^{D}_{Z}(X)\) is a contractible smooth manifold.
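As a toy illustration of Corollary 7.3, one can take \(X=\mathbb{A}^{2}_{\mathbb{C}}=\operatorname{Spec}(\mathbb{C}[x,y])\), \(D=V(x)\) and \(Z\) the origin. Both \(Z^{\operatorname{an}}\) and \(D^{\operatorname{an}}\cong\mathbb{C}\) are contractible, so \(j_{*}^{\operatorname{an}}\) is an isomorphism and the corollary predicts a contractible analytification; indeed \[\operatorname{Bl}_{Z}^{D}\mathbb{A}^{2}_{\mathbb{C}}=\operatorname{Spec}\Big(\mathbb{C}[x,y]\Big[\frac{y}{x}\Big]\Big)=\operatorname{Spec}\mathbb{C}\Big[x,\frac{y}{x}\Big]\cong\mathbb{A}^{2}_{\mathbb{C}}.\] The point of the corollary is of course that it also applies in situations such as Examples 7.4 and 7.5 below, where the output is contractible but not isomorphic to an affine space.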
Having the flexibility to use as centers or divisors of modifications schemes which are either non-reduced or whose analytifications are not necessarily smooth manifolds but only topological manifolds is particularly relevant for applications to the construction smooth \(\mathbb{C}\)-varieties with contractible analytifications, as illustrated by the following examples. Example 7.4 The tom Dieck - Petrie surfaces. Let \(p,q\geqslant 2\) be a pair of relatively prime integers and let \(C_{p,q}\subset\mathbb{A}^{2}_{\mathbb{C}}=\operatorname{Spec}(\mathbb{C}[x,y])\) be an irreducible rational cuspidal curve with equation \(x^{p}-y^{q}=0\). The underlying topological space of \(C_{p,q}^{\operatorname{an}}\) is a contractible real topological surface, and hence, Corollary 7.3 applies to conclude that the analytification of the dilatation \(S_{p,q}=\operatorname{Bl}^{C_{p,q}}_{(1,1)}\mathbb{A}^{2}_{\mathbb{C}}\) of \(\mathbb{A}^{2}_{\mathbb{C}}\) along the principal Cartier divisor \(D=C_{p,q}\) with center at the closed point \(Z=(1,1)\in C_{p,q}\) is a smooth contractible real \(4\)-manifold. The smooth affine surface \(S_{p,q}\), which can be described explicitly as the hypersurface in \(\mathbb{A}^{3}_{\mathbb{C}}=\operatorname{Spec}(\mathbb{C}[x,y,z])\) with equation \[\frac{(xz+1)^{p}-(yz+1)^{q}}{z}=1,\] is not isomorphic to \(\mathbb{A}^{2}_{\mathbb{C}}\) since, for instance, it has non negative logarithmic Kodaira dimension [14, Example 2.4]. Moreover, the underlying real \(4\)-manifold of \(S_{p,q}^{\operatorname{an}}\) is an example of a contractible \(4\)-manifold with non-trivial fundamental group at infinity, hence non-homeomorphic to the standard euclidean space \(\mathbb{R}^{4}\). Example 7.5 Some Koras-Russell threefolds. Let again \(p,q\geqslant 2\) be a pair of relative prime integers and consider for every \(n\geqslant 2\) the smooth hypersurface \(X_{p,q,n}\) in \(\mathbb{A}^{4}_{\mathbb{C}}=\operatorname{Spec}(\mathbb{C}[x,y,z,w])\) with equation \[x^{n}y+z^{p}+w^{q}+x=0.\] The restriction \(\sigma_{p,q,n}:X_{p,q,n}\to\mathbb{A}^{3}_{\mathbb{C}}\) of the projection to the coordinates \(x\), \(z\) and \(w\) expresses \(X_{p,q,n}\) as the dilatation of \(\mathbb{A}^{3}_{k}\) along the principal divisor \(D_{n}=\operatorname{div}x^{n}\) and with center at the non-reduced codimension closed sub-scheme \(Z=Z_{p,q,n}\) with defining ideal \[I_{p,q,n}=(z^{p}+w^{q}+x,x^{n})\subset\mathbb{C}[x,z,w].\] The analytification of \(Z_{\mathrm{red}}\) is a topological manifold homeomorphic to the underlying topological space of the curve \(C^{\mathrm{an}}_{p,q}\) of the previous example. Thus, Corollary 7.3 applies to conclude that \(X^{\mathrm{an}}_{p,q,n}\) is a contractible real \(6\)-manifold, hence, by a result of Dimca-Ramanujam, is diffeomorphic to the standard euclidean space \(\mathbb{R}^{6}\), see [20, Theorem 3.2]. The interest in these affine threefolds \(X_{p,q,n}\) was motivated in the nineties by their appearance in the course of the study of the linearization problem for actions of the multiplicative group \(\mathbb{G}_{m,\mathbb{C}}\) on \(\mathbb{A}^{3}_{\mathbb{C}}\) by Koras and Russell [14]. One crucial question at that time was to decide whether these threefolds were isomorphic to \(\mathbb{A}^{3}_{\mathbb{C}}\) or not. The fact that none of them is isomorphic to \(\mathbb{A}^{3}_{\mathbb{C}}\) was finally established by Makar-Limanov [13] by commutative algebra techniques. 
An interesting by-product of his proof is that the dilatation morphism \(\sigma_{p,q,n}:X_{p,q,n}\to\mathbb{A}^{3}_{\mathbb{C}}\) is equivariant with respect to the natural action of the group of \(\mathbb{C}\)-automorphism of \(X_{p,q,n}\), more precisely, \(\sigma_{p,q,n}\) induces an isomorphism \[\sigma^{*}_{p,q,n}:\mathrm{Aut}_{\mathbb{C}}(\mathbb{A}^{3}_{\mathbb{C}},\{[Z _{p,q,n},D_{n}]\})\to\mathrm{Aut}_{\mathbb{C}}(X_{p,q,n})\] between the subgroup \(\mathrm{Aut}_{\mathbb{C}}(\mathbb{A}^{3}_{\mathbb{C}},\{[Z_{p,q,n},D_{n}]\})\) of \(\mathrm{Aut}_{\mathbb{C}}(\mathbb{A}^{3}_{\mathbb{C}})\) consisting of \(\mathbb{C}\)-automorphisms preserving the divisor and the center of the dilatation \(\sigma_{p,q,n}\) and the group \(\mathrm{Aut}_{\mathbb{C}}(X_{p,q,n})\), see [11]. ### Deformation to the normal cone A very natural class of dilatations which plays a fundamental role in intersection theory is given by the affine version of the deformation space \(D(X,Y)\) of a closed immersion \(Y\hookrightarrow X\) of schemes of finite type over a fixed base scheme \(S\) to its normal cone, [13, 14]. Indeed, \(D(X,Y)\) is simply the dilatation of \(X\times_{S}\mathbb{A}^{1}_{S}\) with divisor \(D=X\times_{S}\{0\}_{S}\), where \(\{0\}_{S}\) denotes the zero section, and center \(Z=Y\times_{S}\{0\}_{S}\). In the affine setting, say \(X=\mathrm{Spec}(A)\) and \(Y=\mathrm{Spec}(A/I)\) for some ideal \(I\subset A\), \(D(X,Y)\) is the spectrum of the sub-algebra \(A[t]\)-algebra \[A[\frac{(I,t)}{t}]\cong\sum_{n}I^{n}t^{-n}\subset A[t,t^{-1}].\] The composition \(f:D(X,Y)\to\mathbb{A}^{1}_{S}\) of the dilatation morphism \(\sigma:D(X,Y)\to X\times_{S}\mathbb{A}^{1}_{S}\) with the projection \(p_{2}:X\times_{S}\mathbb{A}^{1}_{S}\to\mathbb{A}^{1}_{S}\) is a flat morphism restricting to the trivial bundle \(X\times_{S}(\mathbb{A}^{1}_{S}\setminus\{0\}_{S})\) over \(\mathbb{A}^{1}_{S}\setminus\{0\}_{S}\) and whose fiber over \(\{0\}_{S}\) equals the normal cone \(N_{Y/X}=\mathrm{Spec}(\bigoplus_{n\geq 0}I^{n}/I^{n+1})\) of the closed embedding \(Y\hookrightarrow X\) (see Proposition 2.15). For regular immersions \(Y\hookrightarrow X\) between smooth schemes of dimension \(n\) and \(m\) over a field \(k\), the deformation space \(D(X,Y)\) etale locally looks like the deformation space \[\mathrm{Spec}(k[x_{1},\ldots,x_{m}][t][u_{1},\ldots,u_{m-n}]/(tu_{i}-x_{i})_{i =1,\ldots,m-n})\cong\mathbb{A}^{m+1}_{k}\] of the immersion of \(\mathbb{A}^{n}_{k}\) as the linear subspace \(\{x_{1}=\ldots=x_{m-n}=0\}\) of \(\mathbb{A}^{m}_{k}=\mathrm{Spec}(k[x_{1},\ldots,x_{m}])\). Deformation spaces of closed immersions between smooth affine \(\mathbb{C}\)-varieties provide an endless source of smooth affine \(\mathbb{C}\)-varieties whose analytifications are contractible smooth manifolds: Example 7.6.: Given a smooth affine \(\mathbb{C}\)-variety \(X\) such that \(X^{\mathrm{an}}\) is contractible and a smooth subvariety \(Y\subset X\) such that the induced inclusion \(Y^{\mathrm{an}}\subset X^{\mathrm{an}}\) is a topological homotopy equivalence, Theorem 7.2 implies that the analytification of the deformation space \(D(X,Y)\) is a contractible smooth manifold. For instance, the deformation spaces \(D(\mathbb{A}^{3}_{\mathbb{C}},S_{p,q})\subset\mathbb{A}^{5}_{\mathbb{C}}\) of the tom Dieck - Petrie surfaces \(S_{p,q}\) of Example 7.4 are smooth affine \(\mathbb{C}\)-varieties of dimension \(4\) whose analytifications are all diffeomorphic to \(\mathbb{R}^{8}\). 
In the same way, for every Koras-Russell threefold \(X_{p,q,n}\subset\mathbb{A}^{4}_{\mathbb{C}}\) in Example 7.5, the deformation space \[D(\mathbb{A}^{4}_{\mathbb{C}},X_{p,q,n})\cong\{tu=x^{n}y+z^{p}+w^{q}+x\}\subset \mathbb{A}^{6}_{\mathbb{C}}\] is a smooth affine \(\mathbb{C}\)-variety whose analytification is diffeomorphic to \(\mathbb{R}^{10}\). It is not known whether these deformation spaces are algebraically isomorphic to affine spaces. More general versions of Kaliman and Zaidenberg techniques allow to fully describe the singular homology of the analytification of the deformation space \(D(\mathbb{A}^{n}_{\mathbb{C}},Z)\) of a smooth hypersurface \(Z=Z(p)=\operatorname{Spec}(\mathbb{C}[x_{1},\dots,x_{n}]/(p))\) of \(\mathbb{A}^{n}_{\mathbb{C}}\) in terms of that of the analytification of \(Z\), namely Proposition 7.7: [13, Proposition 4.1]. For a smooth hypersurface \(Z\subset\mathbb{A}^{n}_{\mathbb{C}}\), \(D(\mathbb{A}^{n}_{\mathbb{C}},Z)^{\operatorname{an}}\) is simply connected and the inclusion \(N_{Z/\mathbb{A}^{n}_{\mathbb{C}}}\hookrightarrow D(\mathbb{A}^{n}_{\mathbb{C}},Z)\) induces an isomorphism of reduced homology groups \(\tilde{H}_{*}(D(\mathbb{A}^{n}_{\mathbb{C}},Z)^{\operatorname{an}};\mathbb{ Z})\cong\tilde{H}_{*-2}(Z^{\operatorname{an}};\mathbb{Z})\). In particular, \(D(\mathbb{A}^{n}_{\mathbb{C}},Z)^{\operatorname{an}}\) has the reduced homology type of the \(S^{2}\)-subspension of \(Z^{\operatorname{an}}\). ### Contractible affine varieties in motivic \(\mathbb{A}^{1}\)-homotopy theory The possibility to import Kaliman and Zaidenberg techniques in the framework of Morel-Voevodsky \(\mathbb{A}^{1}\)-homotopy theory of schemes [16] has focused quite a lot of attention recently, especially in the direction of the construction of \(\mathbb{A}^{1}\)-contractible smooth affine varieties, motivated in part by possible applications to the Zariski Cancellation Problem, see [1] and the reference therein for a survey. Very informally, one views in this context smooth schemes over a fixed base field \(k\) as analogous to topological manifolds, with the affine line \(\mathbb{A}^{1}_{k}\) playing the role of the unit interval, and consider the corresponding homotopy category. More rigorously, the \(\mathbb{A}^{1}\)-homotopy category \(\operatorname{H}_{\mathbb{A}^{1}}(k)\) of \(k\)-schemes is defined as the left Bousfield localization of the injective Nisnevich-local model structure on the category of simplicial presheaves of sets on the category \(\operatorname{Sm}_{k}\) of smooth \(k\)-schemes, with respect to the class of maps generated by projections from the affine line \(\mathcal{X}\times_{k}\mathbb{A}^{1}_{k}\to\mathcal{X}\). Isomorphisms in the homotopy category \(\operatorname{H}_{\mathbb{A}^{1}}(k)\) are called \(\mathbb{A}^{1}\)-_weak equivalences_, and a smooth \(k\)-scheme \(X\) is called \(\mathbb{A}^{1}\)-_contractible_ if the structure morphism \(X\to\operatorname{Spec}(k)\) is an isomorphism in \(\operatorname{H}_{\mathbb{A}^{1}}(k)\). The affine space \(\mathbb{A}^{n}_{k}\) is by definition \(\mathbb{A}^{1}\)-contractible. Since the analytification of an \(\mathbb{A}^{1}\)-contractible smooth \(\mathbb{C}\)-variety is a contractible smooth manifold, smooth algebraic \(\mathbb{C}\)-varieties with contractible analytifications provided conversely a first natural framework to seek for interesting \(\mathbb{A}^{1}\)-contractible affine varieties non isomorphic to affine spaces. 
A first step in this direction was accomplished by Hoyois, Krishna and Ostvaer [14, Theorem 4.2] who used the underlying geometry associated to the dilatations morphisms \(\sigma_{p,q,n}:X_{p,q,n}\to\mathbb{A}^{3}_{\mathbb{C}}\) to verify that the Koras-Russell threefolds of Example 7.5 were \(\mathbb{A}^{1}\)-contractible possibly up to a finite number of \(\mathbb{P}^{1}\)-suspensions, in the sense that for some \(n\geqslant 0\), the suspension \((X_{p,q,n},o)\wedge(\mathbb{P}^{1})^{\wedge n}\) is an \(\mathbb{A}^{1}\)-contractible object in \(\operatorname{H}_{\mathbb{A}^{1}}(\mathbb{C})\), where here, \(X_{p,q,n}\subset\mathbb{A}^{4}_{\mathbb{C}}\) is considered as a pointed smooth \(\mathbb{C}\)-scheme with distinguished point \(o=(0,0,0,0)\). These first constructions motivated a more systematic study to obtain \(\mathbb{A}^{1}\)-homotopic analogues of Kaliman and Zaidenberg's topological comparison results for affine modifications. The best counterparts of Theorem 7.2 and Corollary 7.3 available so far are the following: Theorem 7.8: [17, Theorem 2.17]. Let \((X,D,Z)\) be a triple in \(\operatorname{Sm}_{k}\) where \(D\) is a Cartier divisor on \(X\) and \(Z\subset D\) is a closed subscheme and let \(\sigma:\tilde{X}=\operatorname{Bl}^{2}_{\mathbb{D}}\mathrm{X}\to\mathrm{X}\) be the dilatation of \(X\) along \(D\) with center at \(Z\). Assume that the following conditions are satisfied: (i) The supports of \(D\) and of the exceptional divisor \(E\) of \(\sigma\) are irreducible; (ii) The closed immersion \(Z\hookrightarrow D\) is an \(\mathbb{A}^{1}\)-weak equivalence. Then there is a naturally induced \(\mathbb{A}^{1}\)-weak equivalence \(\Sigma_{s}\sigma:\Sigma_{s}\tilde{X}\to\Sigma_{s}X\) between the simplicial \(1\)-suspensions of \(\tilde{X}\) and \(X\) respectively. In particular, if \(X\) is \(\mathbb{A}^{1}\)-contractible then \(\Sigma_{s}\tilde{X}\) is \(\mathbb{A}^{1}\)-contractible. Moreover, a stronger conclusion holds in the reverse direction: if \(\tilde{X}\) is \(\mathbb{A}^{1}\)-contractible then \(X\) is \(\mathbb{A}^{1}\)-contractible. Corollary 7.9.: Let \(i:Y\hookrightarrow X\) be a closed immersion between \(\mathbb{A}^{1}\)-contractible smooth \(k\)-schemes. Then the simplicial \(1\)-suspension \(\Sigma_{s}D(X,Y)\) of the deformation space \(D(Y,X)\) of \(Y\) in \(X\) is \(\mathbb{A}^{1}\)-contractible. In contrast with the results of subsection 7.1 which can be applied to possibly singular triples \((X,D,Z)\), Theorem 7.8 and its corollary fundamentally depend on smoothness hypotheses. In particular, Theorem 7.8 is not applicable to tom Dieck -Petrie surfaces and Koras-Russell threefolds over \(\mathbb{C}\) and their natural generalization over other fields. It was nevertheless verified in [10] by different geometric methods that over any base field \(k\) of characteristic zero, the Koras-Russell threefolds \(X_{p,q,n}=\{x^{n}y+z^{p}+w^{q}+x=0\}\) are indeed all \(\mathbb{A}^{1}\)-contractible. These provide in turn when combined with Theorem 7.8 and Corollary 7.9 the building blocks for the construction of many other new examples of smooth affine \(k\)-varieties whose simplicial \(1\)-suspensions are \(\mathbb{A}^{1}\)-contractible, among which some can be further verified by additional methods to be genuinely \(\mathbb{A}^{1}\)-contractible, see [10, Section 4]. 
A more detailed re-reading of the notion of deformation to the normal cone of a closed immersion \(Y\hookrightarrow X\) between smooth \(k\)-schemes gives rise to a notion of "parametrized" deformation space over a smooth base \(k\)-scheme \(W\), which is defined as a dilatation of the scheme \(Y\times_{k}W\) with appropriate center, see [1, Construction 2.1.2]. This leads to the following counterpart and extension of Proposition 7.7 in the \(\mathbb{A}^{1}\)-homotopic framework: Theorem 7.10 [1, Theorem 2].: Let \(X\) be a smooth \(k\)-scheme, let \(\pi:X\to\mathbb{A}^{n}_{k}\) be a smooth morphism with a section \(s\). Assume that \(\pi|_{\pi^{-1}(\mathbb{A}^{n}_{k}\setminus\{0\})}:\pi^{-1}(\mathbb{A}^{n}_{k} \setminus\{0\})\to\pi^{-1}(\mathbb{A}^{n}_{k}\setminus\{0\})\) is an \(\mathbb{A}^{1}\)-weak equivalence. Then there exists an induced pointed \(\mathbb{A}^{1}\)-weak equivalence \[(X,s(0))\sim(\mathbb{P}^{1})^{\wedge n}\wedge(\pi^{-1}(0),s(0)).\] In particular, the deformation space \(D(X,Y)\) of a closed immersion \((Y,\star)\hookrightarrow(X,\star)\) between pointed smooth \(k\)-schemes is \(\mathbb{A}^{1}\)-weakly equivalent to \(\mathbb{P}^{1}\wedge(Y,\star)\). Example 7.11.: Let \(Q_{2n}\subset\mathbb{A}^{2n+1}_{k}=\operatorname{Spec}(k[u_{1},\ldots,u_{n},v_{ 1},\ldots,v_{n},z])\) be the smooth \(2n\)-dimensional split quadric with equation \(\sum_{i=1}^{n}u_{i}v_{i}=z(z+1)\). The projection \(\pi=\operatorname{pr}_{u_{1},\ldots,u_{n}}:Q_{2n}\to\mathbb{A}^{n+1}_{k}\) is a smooth morphism restricting to a Zariski locally trivial \(\mathbb{A}^{n}\)-bundle over \(\mathbb{A}^{n}_{k}\setminus\{0\}\), hence to an \(\mathbb{A}^{1}\)-weak equivalence over \(\mathbb{A}^{n}_{k}\setminus\{0\}\), and having the morphism \[s:\mathbb{A}^{n}_{k}\to Q_{2n},\,(u_{1},\ldots,u_{n})\mapsto(u_{1},\ldots,u_{ n},0,\ldots,0,0)\] as a natural section. On the other hand, \(\pi^{-1}(0)\) is \(\mathbb{A}^{1}\)-weakly equivalent to the disjoint union of \(s(0)\) and of the point \(p=(0,\ldots,0\ldots,-1)\). Theorem 7.10 thus renders the conclusion that \((Q_{2n},s(0))\) is \(\mathbb{A}^{1}\)-weakly equivalent to \((\mathbb{P}^{1})^{\wedge n}\wedge(p\sqcup s(0))\sim(\mathbb{P}^{1})^{\wedge n}\). In particular \(Q_{2n}\) provides a smooth \(k\)-scheme model of the motivic sphere \((\mathbb{P}^{1})^{\wedge n}=S^{n}\wedge\mathbb{G}^{\wedge n}_{m,k}\).
2307.11983
Extremal problems for a matching and any other graph
For a family of graphs $\F$, a graph is called $\F$-free if it does not contain any member of $\F$ as a subgraph. The generalized Tur\'an number $\ex(n,K_r,\F)$ is the maximum number of $K_r$ in an $n$-vertex $\F$-free graph and $\ex(n,K_2,\F)=\ex(n,\F)$, i.e., the classical Tur\'an number. Let $M_{s+1}$ be a matching on $s+1$ edges and $F$ be any graph. In this paper, we determine $\ex(n,K_r, \{M_{s+1},F\})$ apart from a constant additive term and also give a condition when the error constant term can be determined. In particular, we give the exact value of $\ex(n,\{M_{s+1},F\})$ for $F$ being any non-bipartite graph or some bipartite graphs. Furthermore, we determine $\ex(n,K_r,\{M_{s+1},F\})$ when $F$ is color critical with $\chi(F)\ge \max\{r+1,4\}$. These extend the results in [2,11,18].
Xiutao Zhu, Yaojun Chen
2023-07-22T05:23:57Z
http://arxiv.org/abs/2307.11983v1
# Extremal problems for a matching and any other graph ###### Abstract For a family of graphs \(\mathcal{F}\), a graph is called \(\mathcal{F}\)-free if it does not contain any member of \(\mathcal{F}\) as a subgraph. The generalized Turan number \(\mathrm{ex}(n,K_{r},\mathcal{F})\) is the maximum number of \(K_{r}\) in an \(n\)-vertex \(\mathcal{F}\)-free graph and \(\mathrm{ex}(n,K_{2},\mathcal{F})=\mathrm{ex}(n,\mathcal{F})\), i.e., the classical Turan number. Let \(M_{s+1}\) be a matching on \(s+1\) edges and \(F\) be any graph. In this paper, we determine \(\mathrm{ex}(n,K_{r},\{M_{s+1},F\})\) apart from a constant additive term and also give a condition when the error constant term can be determined. In particular, we give the exact value of \(\mathrm{ex}(n,\{M_{s+1},F\})\) for \(F\) being any non-bipartite graph or some bipartite graphs. Furthermore, we determine \(\mathrm{ex}(n,K_{r},\{M_{s+1},F\})\) when \(F\) is color critical with \(\chi(F)\geq\max\{r+1,4\}\). These extend the results in [2, 11, 18]. ## 1 Introduction In this paper, let \(K_{r},K_{s,t}\) and \(S_{k}\) denote the complete graph on \(r\) vertices, complete bipartite graph with two parts of size \(s\) and \(t\), a star on \(k\) vertices, respectively. Let \(M_{s}\) denote a matching on \(s\) edges. We use \(|G|\) to denote the number of vertices of \(G\). For a family of graphs \(\mathcal{F}\), a graph is called \(\mathcal{F}\)-free if it does not contain any member of \(\mathcal{F}\) as a subgraph. Let \(G(n,s,H)\) denote the graph obtained from the complete bipartite graph \(K_{s,n-s}\) by embedding a maximum \(H\)-free graph into the part of size \(s\). For a subset \(U\in V(G)\), we use \(G[U]\) and \(G-U\) to denote the subgraph induced by \(U\) and \(V(G)-U\), respectively. For an integer \(r\) and a family \(\mathcal{F}\), the generalized Turan number \(\mathrm{ex}(n,K_{r},\mathcal{F})\) is the maximum number of copies of \(K_{r}\) in an \(n\)-vertex \(\mathcal{F}\)-free graph. Note that \(\mathrm{ex}(n,K_{2},\mathcal{F})=\mathrm{ex}(n,\mathcal{F})\), i.e., the classical Turan number. The generalized Turan number was firstly proposed by Alon and Shikhelman [3] in 2016. It has received a lot of attention in the past few years. Many classical results on Turan problem have been extended to generalized Turan number and some other interesting problem are studied too, see [7, 10, 12, 13, 14, 15, 16, 17, 19, 20, 21]. In this paper, we mainly focus on the Turan problem concerning matching. The first result in this issue dates back to Erdos and Gallai [9], they proved \[\mathrm{ex}(n,M_{s+1})=\max\{e(G(n,s,K_{s+1})),e(K_{2s+1})\},\] and determined the extremal graphs. This result was extended to generalized Turan number \(\mathrm{ex}(n,K_{r},M_{s+1})\) by Wang [19]. Beyond that, Chvatal and Hanson [8], and independently by Balachandran and Khare [4] using different method, determined the value of \(\mathrm{ex}(n,\{M_{s+1},S_{k+1}\})\) (The case for \(s=k\) was proved early by Abbott, Hanson and Sauer [1]). Recently, Alon and Frankl [2] suggested to study \(\mathrm{ex}(n,\{M_{s+1},F\})\) for any \(F\). If there is an edge \(e\) in \(F\) such that \(\chi(F-e)<\chi(F)\), then we call \(F\) a color critical graph. They obtained the following results. **Theorem 1**.: _(Alon and Frankl [2])_ 1. _For all_ \(n\geq 2s+1\)_,_ \(\mathrm{ex}(n,\{M_{s+1},K_{k+1}\})=\max\{e(K_{2s+1}),e(G(n,s,K_{k}))\}\)_._ _._ 2. _Let_ \(F\) _be a color critical graph with_ \(\chi(F)=k+1\geq 3\)_. 
When_ \(s\) _is large and_ \(n\gg s\)_,_ \[\operatorname{ex}(n,\{M_{s+1},F\})=e(G(n,s,K_{k})).\] Follow these results, Gerbner [11] constructed some possible lower bounds of \(\operatorname{ex}(n,\{M_{s+1},F\})\) and determined \(\operatorname{ex}(n,\{M_{s+1},F\})\) apart from a constant additive term for some special bipartite graph \(F\). **Theorem 2**.: _(Gerbner[11]) Let \(F\) be a bipartite graph and \(p\) be the smallest size of a color class in any proper 2-coloring of \(F\) with \(p\leq s\). Then_ \[\operatorname{ex}(n,\{M_{s+1},F\})=(p-1)n+O(1).\] It appears likely that for \(r\geq 3\), the function \(\operatorname{ex}(n,K_{r},\{M_{s+1},F\})\) behaves very differently from the classical Turan number \(\operatorname{ex}(n,\{M_{s+1},F\})\). The result of (1) in Theorem 1 is also extended to the generalized Turan number \(\operatorname{ex}(n,K_{r},\{M_{s+1},K_{k+1}\})\) by Ma and Hou [18]. Let \(\mathcal{N}_{r}(G)\) denote the number of copies of \(K_{r}\) in \(G\). **Theorem 3**.: _(Ma and Hou [18]) For \(n\geq 2s+1\) and \(k\geq r\geq 2\),_ \[\operatorname{ex}(n,K_{r},\{M_{s+1},K_{k+1}\})=\max\{\mathcal{N}_{r}(K_{2s+1}),\mathcal{N}_{r}(G(n,s,K_{k}))\}.\] Furthermore, they also provided some possible lower bounds for \(\operatorname{ex}(n,K_{r},\{M_{s+1},F\})\) and asked the exact value of \(\operatorname{ex}(n,K_{r},\{M_{s+1},F\})\) when \(\chi(F)\geq 3\). In this paper, we consider the generalized Turan number about the matching and another graph. Before showing our results, we need some definitions. A covering \(S\) of \(F\) is a subset of \(V(F)\) such that \(F-S\) is an empty graph, i.e., there is no edge in \(F-S\). Let \(F\) be a graph and \(p\) be an integer, we define a family of subgraphs \(\mathcal{F}[p]\) as follow, **Definition 1**.: _If \(F\) has no covering of size at most \(p\), then \(\mathcal{F}[p]=\{K_{p+1}\}\). Otherwise \(\mathcal{F}[p]=\{F[S]:S\) is a covering of F with \(|S|\leq p\}\)._ In addition to this, we call the covering \(S\) an independent covering if \(S\) is an independent set in \(F\). We also need the definition about the size of the minimum independent covering. **Definition 2**.: _If \(F\) is bipartite, then \(p(F)=\min\{|S|:S\) is an independent covering of F \(\}.\) If \(\chi(F)\geq 3\), then \(p(F)=\infty\)._ Note that, if \(F\) is a bipartite graph, then \(p(F)\) is exactly the smallest size of a color class in any proper 2-coloring, as we mentioned in Theorem 2. We determine \(\operatorname{ex}(n,K_{r},\{M_{s+1},F\})\) apart from a constant additive term and some exact values of \(\operatorname{ex}(n,K_{r},\{M_{s+1},F\})\) for special \(F\). **Theorem 4**.: _Let \(F\) be a graph and \(M_{s+1}\) be a matching. Let \(p<\min\{s+1,p(F)\}\) and \(\operatorname{ex}(p,K_{r-1},\mathcal{F}[p])\) attains the maximum at \(p=t\). Then,_ \[\operatorname{ex}(n,K_{r},\{M_{s+1},F\})=\operatorname{ex}(t,K_{r-1},\mathcal{ F}[t])n+O(1).\] _Moreover, if \(p(F)\geq s+1\) and \(t=s\), then_ \[\operatorname{ex}(n,K_{r},\{M_{s+1},F\})=\operatorname{ex}(s,K_{r-1}, \mathcal{F}[s])(n-s)+\operatorname{ex}(s,K_{r},\mathcal{F}[s]),\] _and \(G(n,s,\mathcal{F}[s])\) is the unique extremal graph._ **Remark 1**.: _The function \(\operatorname{ex}(p,K_{r-1},\mathcal{F}[p])\) is not necessarily increasing on \(p\). A simple example is \(F=C_{5}\) and \(r=3\). Since \(C_{5}\) has no covering of size \(2\) but has a covering of size \(3\), then \(\mathcal{F}[2]=\{K_{3}\}\) and \(K_{2}\cup K_{1}\in\mathcal{F}[p]\) for \(p\geq 3\). 
We get \(\operatorname{ex}(2,K_{2},\mathcal{F}[2])=1\) but \(\operatorname{ex}(p,K_{2},\mathcal{F}[p])=0\) for \(p\geq 3\). Moreover, one can check that for all other odd cycle \(C_{k}\), \(\operatorname{ex}(p,K_{r-1},\mathcal{F}[p])\) is not increasing either._ ## 2 Some applications of Theorem 4 **Corollary 1**.: _Suppose \(p(F)\geq s+1\) and \(n\) is large enough,_ \[\mathrm{ex}(n,\{M_{s+1},F\})=s(n-s)+\mathrm{ex}(s,\mathcal{F}[s]).\] _Moreover, \(G(n,s,\mathcal{F}[s])\) is the unique extremal graph._ Proof.: When we consider the classical Turan number, then \(r=2\) in Theorem 4. Note that \(\mathrm{ex}(p,K_{1},\mathcal{F}[p])=p\) as long as the independent set \(I_{p}\) is not in \(\mathcal{F}[p]\). However since \(p(F)\geq s+1\), that is to say \(F\) has no independent covering of size less than \(s+1\), \(I_{p}\notin\mathcal{F}[p]\) when \(p<\min\{s+1,p(F)\}\). Thus \(\mathrm{ex}(p,K_{1},\mathcal{F}[p])\) attains the maximum at \(p=s\). Therefore, by Theorem 4, \(\mathrm{ex}(n,\{M_{s+1},F\})=s(n-s)+\mathrm{ex}(s,\mathcal{F}[s])\) and \(G(n,s,\mathcal{F}[s])\) is the unique extremal graph. This corollary extends Theorem 1 and determined the exact value for all non-bipartite graphs and the bipartite graphs with \(p(F)\geq s+1\). **Corollary 2**.: _Suppose \(p(F)\leq s\), then_ \[\mathrm{ex}(n,\{M_{s+1},F\})=(p(F)-1)n+O(1).\] Proof.: Analogously, \(\mathrm{ex}(p,K_{1},\mathcal{F}[p])\) attains the maximum at \(p=p(F)\), then by Theorem 4, we are done. Using Theorem 4, we also determine the generalized Turan number \(\mathrm{ex}(n,K_{r},\{M_{s+1},F\})\) when \(F\) is color critical. **Theorem 5**.: _Let \(F\) be a color critical graph with \(\chi(F)=k+1\geq\max\{r+1,4\}\). When \(s\geq c(F,r)\) and \(n\gg s\),_ \[\mathrm{ex}(n,K_{r},\{M_{s+1},F\})=\mathrm{ex}(s,K_{r-1},K_{k})(n-s)+\mathrm{ ex}(s,K_{r},K_{k})\] _and \(G(n,s,K_{k})\) is the unique extremal graph._ Proof.: Since \(\chi(F)\geq 4\), we have \(p(F)=\infty\). To use Theorem 4, we need to study the property of \(\mathrm{ex}(p,K_{r-1},\mathcal{F}[p])\) for \(p<\min\{s+1,p(F)\}=s+1\). Since \(\chi(F)=k+1\), all graphs in \(\mathcal{F}[p]\) have chromatic number at least \(k\). If not, the chromatic number of \(F\) would not exceed \(k\) by the definition of \(\mathcal{F}[p]\). Hence we have \[\mathrm{ex}(p,K_{r-1},\mathcal{F}[p])\geq\mathcal{N}_{r-1}(T_{k-1}(p)),\] here \(T_{k-1}(p)\) denotes the balanced complete \((k-1)\)-partite graph on \(p\) vertices(called Turan graph). On the other hand, since \(F\) is color critical, we can find a \(k+1\)-coloring with the color class \(V_{1},V_{2},\ldots,V_{k+1}\) such that there is only one edge between \(V_{1},V_{2}\). Then if we delete the color class \(V_{k+1}\), the resulting graph \(F^{-}=F-V_{k+1}\) is still color critical and \(\chi(F^{-})=k\geq 3\). By the following theorem, **Theorem 6**.: _(Ma and Qiu [17]) Let \(H\) be a color critical graph with \(\chi(H)=k>m\geq 2\). Then when \(n\geq c_{0}(H,m)\), \(\mathrm{ex}(n,K_{m},H)=\mathcal{N}_{m}(T_{k-1}(n))\)._ If we take \(H=F^{-}\) and \(m=r-1\) in the above theorem, then we know when \(p\geq c_{0}(F,r)\), \(\mathrm{ex}(p,K_{r-1},F^{-})=\mathcal{N}_{r-1}(T_{k-1}(p))\). Note that \(F^{-}\in\mathcal{F}[p]\), then \[\mathcal{N}_{r-1}(T_{k-1}(p))\leq\mathrm{ex}(p,K_{r-1},\mathcal{F}[p])\leq \mathrm{ex}(p,K_{r-1},F^{-})=\mathcal{N}_{r-1}(T_{k-1}(p)).\] That is to say, \(\mathrm{ex}(p,K_{r-1},\mathcal{F}[p])=\mathcal{N}_{r-1}(T_{k-1}(p))\) is an increasing function when \(s\geq p\geq c_{0}(F,r)\). 
For \(p\leq c_{0}(F,r)\), \(\mathrm{ex}(p,K_{r-1},\mathcal{F}[p])\) does not exceed a large constant \(C\). Thus we can let \(s\) be a large constant depending on \((F,r)\) so that \(\mathrm{ex}(s,K_{r-1},\mathcal{F}[s])\) attains the maximum. Then by Theorem 4, we know \(G(n,s,\mathcal{F}[s])\) is the unique extremal graph. This extends the Theorems 1 and 3. Proof of Theorem 4 In this section, we prove Theorem 4. Let \(p<\min\{s+1,p(F)\}\) and \(\operatorname{ex}(p,K_{r-1},\mathcal{F}[p])\) attains the maximum at \(p=t\). Then \(G(n,t,\mathcal{F}[t])\) is \(\{M_{s+1},F\}\)-free by Definition 1 and \[\operatorname{ex}(n,K_{r},\{M_{s+1},F\}) \geq\operatorname{ex}(t,K_{r-1},\mathcal{F}[t])(n-t)+ \operatorname{ex}(t,K_{r},\mathcal{F}[t])\] \[=\operatorname{ex}(t,K_{r-1},\mathcal{F}[t])n+O(1).\] So the lower bound is done. Next we prove the upper bound. Let \(G\) be the extremal graph of \(\operatorname{ex}(n,K_{r},\{M_{s+1},F\})\). We need the following well-known theorem to discuss the structure of \(G\). **Theorem 7**.: _(Tutte-Berge [5]) The graph \(G\) is \(M_{s+1}\)-free if and only if there is a subset \(B\subseteq V(G)\) such that for all components \(G_{1},\dots,G_{m}\) of \(G-B\), they satisfy_ \[|B|+\sum_{i=1}^{m}\left\lfloor\frac{|G_{i}|}{2}\right\rfloor\leq s. \tag{1}\] Since \(G\) is \(M_{s+1}\)-free, there is a set \(B\) satisfying the inequality (1) in the above theorem. Let \(G_{1},\dots,G_{m}\) be all components of \(G-B\). Note that \(s\) is a fixed constant, then most of these components are isolated vertices. Without loss of generality, we may assume the components \(G_{1},\dots,G_{\ell}\) are not isolated vertices. Let \(N_{j}\) denote the number of copies of \(K_{r}\) which have \(j\) vertices in \(V(G)-B\) and \(r-j\) vertices in \(B\). Obviously, \(N_{0}\leq\binom{s}{r}\). For other \(2\leq j\leq r\), by inequality (1), we have \(|B|+\sum_{i=1}^{\ell}|G_{i}|\leq 3s\) and hence \[N_{j}\leq\sum_{i=1}^{\ell}\mathcal{N}_{j}(G_{i})\binom{s}{r-j}<\binom{3s}{r}= O(1),\] the second inequality holds since we can view \(B\cup G_{1}\cup\dots\cup G_{\ell}\) as a big clique. This implies \[\mathcal{N}_{r}(G)=\sum_{j=0}^{r}N_{j}=N_{1}+O(1). \tag{2}\] Now we mainly deal with the term \(N_{1}\). We divide \(V(G)\setminus B\) into many subsets by the following way: let \(U\) be a subset of \(B\), \[A_{U}=\{v\in V(G)\setminus B:N(v)\cap B=U\}.\] Let \(R=\{U:|A_{U}|\geq|F|\}\) and \(Q=\{U:|A_{U}|<|F|\}\). Note that \(|R|+|Q|=2^{|B|}\) and we have \[N_{1}=\sum_{U\in R}\mathcal{N}_{r-1}(G[U])|A_{U}|+\sum_{U\in Q}\mathcal{N}_{r- 1}(G[U])|A_{U}|. \tag{3}\] For the set \(U\) in \(Q\), we have \(\mathcal{N}_{r-1}(G[U])|A_{U}|<\binom{|B|}{r-1}|F|\) and hence \[\sum_{U\in Q}\mathcal{N}_{r-1}(G[U])|A_{U}|<2^{|B|}|F|\binom{|B|}{r-1}\leq 2^{ s}|F|\binom{s}{r-1}=O(1). \tag{4}\] For the set \(U\) in \(R\), since \(|A_{U}|\geq|F|\) and \(G[U,A_{U}]\) is a complete bipartite graph, we can deduce that \(G[U]\) is \(\mathcal{F}[|U|]\)-free by Definition 1 and \(|U|<\min\{s+1,p(F)\}\). 
Hence \[\mathcal{N}_{r-1}(G[U])|A_{U}|\leq\operatorname{ex}(|U|,K_{r-1},\mathcal{F}[| U|])|A_{U}|.\] On the other hand, as we assumed, \(t\) is the integer less than \(\min\{s+1,p(F)\}\) such that \(\operatorname{ex}(p,K_{r-1},\mathcal{F}[p])\) attains the maximum, then \[\sum_{U\in R}\mathcal{N}_{r-1}(G[U])|A_{U}|\leq \sum_{U\in R}\operatorname{ex}(|U|,K_{r-1},\mathcal{F}[|U|])|A_{U}|\] \[\leq \operatorname{ex}(t,K_{r-1},\mathcal{F}[t])\sum_{U\in R}|A_{U}| \tag{5}\] \[\leq \operatorname{ex}(t,K_{r-1},\mathcal{F}[t])(n-|B|).\] Now combine the inequality (2)-(5), we know \(\mathcal{N}_{r}(G)=\operatorname{ex}(t,K_{r-1},\mathcal{F}[t])n+O(1)\). We complete the proof of the first part in Theorem 4. Next we prove the second part, at this time \(p(F)\geq s+1\) and \(\operatorname{ex}(p,K_{r-1},\mathcal{F}[p])\) attains the maximum at \(p=s\). Furthermore, by the inequality (2)-(5) and the lower bound, we have \[\operatorname{ex}(s,K_{r-1},\mathcal{F}[s])(n-s)\leq\mathcal{N}_{r}(G)\leq \sum_{U\in R}\operatorname{ex}(|U|,K_{r-1},\mathcal{F}[|U|])|A_{U}|+O(1).\] Therefore, when \(n\) is large, \(B\in R\) with \(|B|=s\) and the corresponding \(A_{B}\) satisfying \(A_{B}=n-O(1)\). Otherwise the right hand of the above would not exceed \(\operatorname{ex}(s,K_{r-1},\mathcal{F}[s])(n-s)\). Hence we know \(G[B]\) is \(\mathcal{F}[s]\)-free since \(A_{B}\) is large. This also implies all components of \(G-B\) are isolated vertices by Theorem 7. For other vertices in \(V(G)-B\cup A_{B}\), we can add all missing edges between them and \(B\), this would not create any copy of \(F\) since \(G[B]\) is \(\mathcal{F}[s]\)-free, but the number of \(K_{r}\) increases. Thus, we know \(G=G(n,s,\mathcal{F}(s))\) and we are done. ## 4 Classical Turan number for balanced forests By the corollaries in Section 2, the only unsolved case for classical Turan number is that \(F\) is a bipartite graph with \(p(F)\leq s\). In this section we deal with this case. A tree \(T[A,B]\) is balanced if \(|A|=|B|\). A forest is balanced if each of its component is a balanced tree. The Turan number of balanced forest was studied firstly by Bushaw and Kettle [6] if this forest contains at least two components. Here we study the Turan problem combining the matching and a balanced forest and give a unified proof no matter the forest contains how many components. **Theorem 8**.: _Let \(F\) be a balanced forest with \(v(F)=2p\leq 2s\). If Erdos-Sos conjecture holds for each component of \(F\) and \(n\) is large, then_ \[\operatorname{ex}(n,\{F,M_{s+1}\})=(p-1)(n-p+1)+\operatorname{ex}(p-1, \mathcal{F}[p-1]).\] _If \(F\) has at least two components, then \(G(n,p-1,\mathcal{F}[p-1])\) is the unique extremal graph. If \(F\) is a tree, then \(G(n-t(2p-1),p-1,\mathcal{F}[p-1])\cup tK_{2p-1}\) is the extremal graph, where \(t\leq\frac{s-p+1}{(p-1)}\)._ **Remark 2**.: _By a result of Bushaw and Kettle(see Lemma 3.4 and Lemma 3.5 in [6]), it is easy to prove that \(\operatorname{ex}(p-1,\mathcal{F}[p-1])=\binom{p-1}{2}\) if \(F\) contains a perfect matching and \(\operatorname{ex}(p-1,\mathcal{F}[p-1])=0\), otherwise._ Proof.: Let \(F=T_{1}\cup\cdots\cup T_{k}\) be a balanced forest. The graph \(G(n,p-1,\mathcal{F}[p-1])\) has matching number \(p-1\) and it is also \(F\)-free by the definition of \(\mathcal{F}[p-1]\). Besides this, if \(F\) is a tree, then \(G(n-t(2p-1),p-1,\mathcal{F}[p-1])\cup tK_{2p-1}\) is \(F\)-free and has matching number at most \((p-1)+t(p-1)\leq s\). So the lower bound is done. Next we prove the upper bound. 
Let \(M=\{v_{1}u_{1},\ldots,v_{t}u_{t}\}\) be a maximum matching in \(G\), \(t\leq s\). This implies \(V(G-M)\) is an independent set. We divide \(V(G-M)\) into two subsets \(W\) and \(W^{\prime}\) such that \[W^{\prime}=\{v\in V(G-M):d(v)\geq p\}\text{ and }W=\{v\in V(G-M):d(v)\leq p-1\}.\] First we claim that \(|W^{\prime}|\leq p\binom{2t}{p}\). Indeed, since there are at most \(\binom{2t}{p}\)\(p\)-sets in \(V(M)\) and if there are \(p\binom{2t}{p}\) vertices in \(W^{\prime}\), then by the pigeonhole principle, there are \(p\) vertices in \(W^{\prime}\) such that they have \(p\) common neighbors in \(V(M)\). Then we find a large complete bipartite graph, and hence a copy of \(F\), a contradiction. Next we assert that there are at least \(2s\binom{2t}{p}\) vertices of degree \(p-1\) in \(W\). If not, then \[e(G)\leq \binom{2t}{2}+2t|W^{\prime}|+2s\binom{2t}{p}(p-1)+(p-2)\left(n-2t -|W^{\prime}|-2s\binom{2t}{p}\right)\] \[< (p-1)(n-p+1).\] The last inequality holds when \(n\) is large, a contradiction. This also implies we can find \(2s\) vertices of degree \(p-1\) in \(W\) such that they have \(p\) common neighbors in \(M\). Without loss of generality, let these \(2p\) vertices be \(\{x_{1},\ldots,x_{2s}\}\) and \(U=\{v_{1},\ldots,v_{p-1}\}\) be the set of the common neighbors of them. On the other hand, for all other vertices in \(W\) and the vertices in \(V(M)-U\) whose degree is at most \(p-1\), we can change their neighborhoods to \(U\). This operation does not decrease the number of edges and the resulting graph is still \(\{F,M_{s+1}\}\)-free. Since if there is a copy of \(F\) or \(M_{s+1}\) in the resulting graph, then we can use the vertices in \(\{x_{1},\ldots,x_{2s}\}\) to replace the vertices in the copy of \(F\) or \(M_{s+1}\) which are incident with some new edges. That is, we can find a copy of \(F\) or \(M_{s+1}\) in \(G\), a contradiction. After the operation, let us redefine the set \(W\) and \(W^{\prime}\), where \(W\) denotes the set of all vertices of degree \(p-1\) and have the neighborhood \(U\), \(W^{\prime}\) denote the other vertices of degree at least \(p\). By above, \(W^{\prime}\) consists of vertices in the original \(W^{\prime}\) and some vertices in \(V(M)\) whose degree is at least \(p\). Thus \(|B|\leq K\) for some constant \(K\). Furthermore, \(W\) is independent with \(|W|\geq n-K-2t=n-O(1)\). Since \(G[U,W]\) is a large complete bipartite graph, we have \(G[U]\) is \(\mathcal{F}[p-1]\)-free by the definition. We also claim that there is no edge between \(U\) and \(W^{\prime}\). If not, suppose \(v_{i}w\) is an edge between \(U\) and \(W^{\prime}\). Recall that \(F=F[A,B]\) is a bipartite graph and \(A,B\) are two color classes. There is a vertex, saying \(x\), whose all neighbors except one are leaves. Suppose this vertex \(x\) is in \(B\) and let \(y\) be the neighbor of \(x\) which is not a leaf. Now we can embed the vertex \(x\) into \(w\), embed the vertex \(y\) into \(v_{i}\), embed the other neighbors of \(x\) into the neighbor of \(w\), embed the other vertices of \(A\) into \(U\) and the other vertices of \(B\) into \(W\). This can be done since \(|U|=p-1\) and \(d(w)\geq p\). Finally, we find a copy of \(F\), a contradiction. Therefore, \(W^{\prime}\) induces some connected components of \(G\). Furthermore, \(G[W^{\prime}]\) is \(T_{1}\)-free. Otherwise, a copy of \(T_{1}\) in \(W^{\prime}\) together with a copy of \(T_{2}\cup\cdots\cup T_{k}\) in \(G[U,W]\) would construct a copy of \(F\). 
Thus, we have \[e(G)\leq \mathrm{ex}(p-1,\mathcal{F}[p-1])+(p-1)(n-p+1-|W^{\prime}|)+ \mathrm{ex}(|W^{\prime}|,T_{1})\] \[\leq \mathrm{ex}(p-1,\mathcal{F}[p-1])+(p-1)(n-p+1)-|W^{\prime}|(p-1)+ \frac{v(T_{1})-2}{2}|W^{\prime}|\] \[\leq \mathrm{ex}(p-1,\mathcal{F}[p-1])+(p-1)(n-p+1).\] The last inequality holds under the assumption of Erdos-Sos conjecture. From the above inequalities, if \(F\) is a real forest, then \((p-1)>(v(T_{1})-2)/2\). So if the equality holds, then \(W^{\prime}=\emptyset\) and \(G=G(n,p-1,\mathcal{F}[p-1])\). If \(F\) is a tree, then the equality holds if and only if \(W^{\prime}\) induces some disjoint cliques \(K_{2p-1}\). But since \(G\) is \(M_{s+1}\)-free, \(W^{\prime}\) induces at most \(\frac{s-p+1}{p-1}\) copies of \(K_{2p-1}\). That is \(G=G(n-(p-1)-t(2p-1),p-1,\mathcal{F}[p-1])\cup tK_{2p-1}\) with \(t\leq\frac{s-p+1}{p-1}\). The proof is completed. \(\blacksquare\) ## 5 Acknowledgements This Research was supported by NSFC under grant numbers 12161141003 and 11931006.
2304.08982
Microtearding mode study in NSTX using machine learning enhanced reduced model
This article presents a survey of NSTX cases to study the microtearing mode (MTM) stabilities using the newly developed global reduced model for Slab-Like Microtearing modes (SLiM). A trained neutral network version of SLiM enables rapid assessment (0.05s/mode) of MTM with $98\%$ accuracy providing an opportunity for systemic equilibrium reconstructions based on the matching of experimentally observed frequency bands and SLiM prediction across a wide range of parameters. Such a method finds some success in the NSTX discharges, the frequency observed in the experiment matches with what SLiM predicted. Based on the experience with SLiM analysis, a workflow to estimate the potential MTM frequency for a quick assessment based on experimental observation has been established.
Max T. Curie, Joel Larakers, Jason Parisi, Gary Staebler, Stefano Munaretto, Walter Guttenfelder, Emily Belli, David R. Hatch, Mate Lampert, Galina Avdeeva, Tom Neiser, Sterling Smith, Ahmed Diallo, Oak Nelson, Stanley Kaye, Eric Fredrickson, Joshua M Manela, Shelly Lei, Michael Halfmoon, Matthew M Tennery, Ehab Hassan
2023-04-18T13:24:57Z
http://arxiv.org/abs/2304.08982v1
# Microtearding mode study in NSTX using machine learning enhanced reduced model ###### Abstract This article presents a survey of NSTX cases to study the microtearing mode (MTM) stabilities using the newly developed global reduced model for **Slab-Like** Microtearing modes (SLiM). A trained neutral network version of SLiM enables rapid assessment (0.05s/mode) of MTM with 98% accuracy providing an opportunity for systemic equilibrium reconstructions based on the matching of experimentally observed frequency bands and SLiM prediction across a wide range of parameters. Such a method finds some success in the NSTX discharges, the frequency observed in the experiment matches with what SLiM predicted. Based on the experience with SLiM analysis, a workflow to estimate the potential MTM frequency for a quick assessment based on experimental observation has been established. + Footnote †: preprint: AIP/12012 ## I Introduction National Spherical Torus Experiment (NSTX) is a fusion device based on the spherical Tokamak concept. Studies [14; 15; 16; 17; 18; 19; 2; 3] show the micro-instabilities contributes the degradation of the pedestal through transport. Microtearing mode (MTM) is the electromagnetic micro-instability that is driven by the electron temperature gradient. It contributes a significant amount of electron heat transport in the NSTX pedestal along with electron temperature gradient mode (ETG) [21; 25]. MTM's stability depends on a host of factors [23]. Most notably, when the collision frequency is similar to the mode frequency (\(\nu_{ei}/\omega\sim 1\)), the slab MTM becomes unstable. MTM has mode frequency of the electron diamagnetic frequency the mode location [22; 23]. And discharges with lithium-coated plasma-facing components [21; 27; 28] provide the collision frequency similar to the diamagnetic frequency \(\nu_{ei}/\omega_{se}\sim 1\), therefore, the slab-like MTM is likely to be unstable. Gyrokinetic simulations found the unstable MTM in the NSTX pedestal and contributed a significant amount of electron heat transport across several discharges. [26; 29] The newly developed **Slab** Like **MTM** (SLiM) model has successfully demonstrated its application in conventional Tokamaks such as DIII-D and JET [20] on explaining mode skipping, chirping, and calculating the mode frequency and stability. Importantly, due to the high level of sensitivity of location of the rational surfaces safety factor. SLiM has demonstrated its ability to constrain the safety factor at the pedestal which provides a better profile for more computationally costly simulations. This article will explore SLiM's capability on NSTX with a more sophisticated profile variation scheme. Along the way, a workflow using SLiM to assist gyrokinetic analysis is provided. And SLiM's limitation in strongly shaped devices such as NSTX is discussed. The article has the following structure. Chapter II is a brief review of the background of the SLiM, including the past success on conventional Tokamaks. An example from the past article shows its ability to constrain the profile. It is hard to achieve such accuracy with pure experimental observation. Chapter III presents the methodology for varying the profile. Chapter IV shows the way SLiM varies the equilibrium and method to pick the equilibrium that has SLiM predicted MTM best matches experimental observed magnetic signals. Chapter V presents an NSTX case showcase of such a tool for picking the best equilibrium. 
Chapter VI draws the conclusions and presents the workflow inspired by SLiM that can be applied to any reduced models to help guide the high-fidelity simulations. As a supplement, Chapter VII.1 will discuss the trained neural network for SLiM in order to speed up the mode identification enough to sample large profile variations, Chapter VII.2 will shows the definitions of quantities in greater detail. ## II Background The SLiM model is the linear slab MTM model that uses kinetic theory [23]. Such a reduced model solves the dispersion relation defined by Eqs.1 and 2. Microtearding mode study in NSTX using machine learning enhanced on the \(d^{2}A_{||}\) \[\frac{d^{2}A_{||}}{dx^{2}}=-\frac{4\pi}{c}\sigma_{||}(\omega,x)E_{||} \tag{1}\] \[\left(\frac{c}{v_{A}}\right)^{2}(\omega-\omega_{n})\frac{d^{2}\phi}{dx^{2}}=-4 \pi k_{||}\sigma_{||}(\omega,x)E_{||} \tag{2}\] In the equation above, \(A_{||}\) is the magnetic vector potential that is parallel to the magnetic field \(B_{0}\), \(\phi\) is the electric potential, \(E_{||}\) is the electric field that parallels to \(B_{0}\), \(\sigma_{||}(\omega,x)\) is the conductivity [5] parallel to \(B_{0}\), \(c\) is the speed of light, \(x\) is the distance from the rational surface to the \(\omega_{e}\) peak normalized to gyro-radius, \(v_{A}\) is the Alfven velocity, \(k_{||}=\tilde{b}\cdot\mathbf{k}\), where \(\tilde{b}\) is the unit vector of the magnetic field \(\vec{B_{0}}\), and Eq.2 is based on quasi-neutrality using kinetic theory. Eq.1 is derived from Ampere's law and Ohm's law. The consideration of the distance of the rational surface to the \(\omega_{e}\) peak makes SLiM the biggest difference from other MTM reduced models. Such a model has shown success in conventional Tokamaks [20]. Fig. 1 presents a DIII-D discharge studied with SLiM [7]. Here is how SLiM works: it takes the equilibrium profiles and calculates electron diamagnetic frequency \(\omega_{e}\) (detailed definitions can be found in Sec. VII) to find the radial range of interest around \(\omega_{e*}\) peak. Takes the rational surfaces that are close to the peak, and calculates its corresponding sets of parameters \(\nu,Z_{eff},\eta,\tilde{s},\beta,k_{y},\mu\) (detailed definitions can be found in Sec. VII). And then SLiM will calculate the mode stabilities and frequency based on the parameters. 4 DIII-D discharges and 1 JET discharge have been studied by SLiM [4; 12; 20]. All discharges have magnetic frequency bands with low mode numbers, and the profiles have low magnetic shears relative to the electron pressure gradient \(q(\mu=0)-q(\mu=0.2*x_{*})\sim 1/n\). Where \(\mu\) is the distance of the rational surface to the \(\omega_{e*}\) peak, and \(x_{*}\) is the spread of \(\omega_{e*}\), q is the safety factor, \(n\) is the toroidal mode number where the frequency band is found. (detailed definitions can be found in Sec. VII). Such a low magnetic shear enables rational surfaces to be spatially sparse in the pedestal, which produces discrete frequency bands [20], other than the board band [13]. SLiM can analyze these cases with a relatively high procession in comparison with global gyrokinetic simulations. ## III Variation of the equilibrium With faster SLiM_NN (more detail in Sec.VII.1), and confidence in SLiM_NN's accuracy, let's now consider how to vary the profile. For electron density \(n_{e}\), we can carry out modifications with 1 free parameter: \(n_{e,scale}\). 
\[n_{e}=n_{e0}\left[1+\left(n_{e,scale}-1\right)weight_{n_{e}}(r)\right] \tag{3}\] where \(n_{e0}\) is the nominal electron density profile, \(weight_{n_{e}}(r)=1/2+1/2\cdot tanh[(r-r_{top})/width]\), \(r_{top}\) is the location of the top pedestal, \(width\) is the width of pedestal. Similarly, for electron temperature \(T_{e}\), we have the following expression. \[T_{e}=T_{e0}\left[1+\left(T_{e,scale}-1\right)weight_{T_{e}}(r)\right] \tag{4}\] Where \(T_{e0}\) is the nominal electron temperature profile, and the \(weight_{T_{e}}(r)=1/2+1/2\cdot tanh[(r-r_{top})/width]\) is the weight function. The weight function is the modified hyperbolic tangent, which goes close to zero at the top pedestal and scrape-off layer (SOL), and near \(n_{e,scale}/T_{e,scale}\) at mid-pedestal. Thus this modification provides a profile gradient change in the mid-pedestal while not influencing the profile in the core and scrape-off layer (SOL). Fig. 2 shows the modification of \(T_{e}\) by change \(T_{e,scale}\) from 0.8 to 1.2 with 0.05 increment (from bottom to top). Figure 1: This plot shows the alignment of rational surfaces (orange vertical lines) and \(\omega_{e*}\) (black/stable and red/unstable curves), the purple highlighted area is bounded by the frequency observed in experiment and top 4% of the \(\omega_{e*}\), the dots represent the intersections of the rational surfaces with the corresponding \(\omega_{e*}\) curves. Figure 2: The plot shows the modification of \(T_{e}\) by change \(T_{e,scale}\) from 0.8 to 1.2 with 0.05 increment. The blue curve is the nominal profile. The orange lines are the modified profiles, where the bottom curve has \(T_{e,scale}=0.8\). For the safety factor, we can employ similar modifications with 3 free parameters: \(\hat{s}_{scale}\), \(q_{scale}\). \[q=q_{0}\cdot q_{scale}\cdot[1+(\hat{s}_{scale}-1)\,weights_{\hat{s}}(r)] \tag{5}\] where \(weight_{\hat{s}}=-\frac{1}{2}\tanh\left[\frac{r-r_{mid}}{0.1\,width_{\hat{s}} }\right],0.1\) is an arbitrary factor, and \(r_{mid}\) is the location of mid-pedestal. To demonstrate the effect of the modification of safety factor with \(\hat{s}_{scale}\). Fig. 3 shows the modification of \(q\) by change \(\hat{s}_{scale}\) from 0.8 to 1.2 with 0.05 increment while \(q_{scale}=1\), and \(q_{shift}=0\). \(\hat{s}_{scale}\) is an important factor to change the spacing between the rational rational surfaces. It is not hard to imagine that the rational surfaces are more densely packed with high \(\hat{s}\) while rational surfaces are more sparse given lower \(\hat{s}\). Thus lower \(\hat{s}\) could help stabilize the unwanted modes by making the rational surfaces further away from the \(\omega_{se}\) peak. Unless specified, the modification parameter will be kept at nominal values: \(n_{e,scale}=1\), \(T_{e,scale}=1\), \(\hat{s}_{scale}=1\), \(q_{scale}=1\). ## IV Method to determine the best equilibrium Since a large number of possible equilibria are sampled, manual checking is unrealistic. A metric is constructed to find the optimum equilibrium. A set of magnetic frequency bands can be observed in experiment spectrogram \(f_{exp,i}\), where subscript 'exp' stands for the experiment, 'i' stands for the \(i_{th}\) frequency band with mode number \(n_{tor}=i\). SLiM then can calculate the unstable MTM frequency \(f_{SLM,i}\) based on the equilibrium. We can then construct a metric \(\delta_{f}\) to assess how good the reconstructed equilibrium is based on the frequency matching between the experiment and SLIM calculation. 
\[\delta_{f}=\frac{1}{N}\sum_{i=1}^{N}|f_{exp,i}-f_{SLM,i}|/f_{exp,i} \tag{6}\] Where \(N\) is the total number of frequency bands. We can then choose the reconstructed profile with the smallest \(\delta_{f}\) and use such a profile for further investigations such as large-scale simulations. ## V Application to discharges ### Nstx 132588 Let's use an example to illustrate the method mentioned in the previous section (Sec. IV) We can estimate the frequency and mode number from the magnetic spectrogram shown in Fig. 4. Such an estimate has been given in the Ch. I. It is worth noting that the only effect magnetic shear will play in SLiM is to change the distance of the rational surfaces. And we are looking for rational surfaces that resonate with \(n_{tor}=1\), so scanning \(\hat{s}_{scale}\) is not necessary. The nominal profile does not produce the frequency that agrees with experimental observations shown in Fig.5 (a). The frequency is mismatched for the other mode numbers while SLiM does not find unstable MTM since its rational surfaces are too far from the \(\omega_{se}\) peak. SLiM takes the following set of variations in Ch. II in order to find the unstable MTM that is the best match with experimentally observed magnetic frequency bands. With \(q_{scale}=1.04\), \(n_{e,scale}=1.12\), \(T_{e,scale}=1\), the rational \begin{table} \begin{tabular}{|l|l|} \hline mode number & frequency \\ \hline 1 & 14 \\ \hline 2 & 21 \\ \hline 3 & 35 \\ \hline 4 & 50 \\ \hline 5 & 59 \\ \hline \end{tabular} \end{table} Table 1: This chart shows the mode number and frequency of each frequency band in the NSTX 132588 Figure 3: The plot shows the modification of \(\hat{s}\) by change \(\delta_{scale}\) from 0.8 to 1.2 with 0.05 increment while \(q_{scale}=1\), and \(q_{shift}=0\). The blue curve is the nominal profile. The orange curves are the modified profile, where the flattest curve around \(\rho_{tor}=0.95\) has \(\delta_{scale}=0.8\). Figure 4: This magnetic spectrogram over from 0.5 seconds to 0.85 seconds with toroidal mode number ranging from 1 to 6 surfaces perfectly align with the \(\omega_{se}\) peak shown in Fig. 5 (b). SLiM find 4 unstable MTM (\(n_{tor}=1\sim 4\)) that matches with the experiment. Fig. 6 shows the matching of the SLiM calculation and experimental observation. The profile is then reproduced with the variation. Fig. 7 shows the frequency and growth rate of the SLiM calculation. Additionally, Guttenfelder (2022)[6] uses CGYRO nonlinear simulations to show that the MTM can explain the missing electron heat transport in this discharge. ### Nstx 129038 NSTX 129038 has a magnetic signal with the mode number \(n_{tor}=2,4\) with \(f_{2}=32kHz\) and \(f_{4}=64kHz\) at \(t=500ms\), which is shown in Fig. 8. This case could demonstrate the application of rational surface alignment to constrain \(q\) profile in the pedestal. rational surfaces \(n_{tor}=4\) exist on top of all \(n_{tor}=2\) since it is resonating. Thus, in order to have a profile with \(n_{tor}=2,4\) rational surfaces at the \(\omega_{se}\) peak. We need to have \(q_{peak}=m/2\), where \(m\) is an arbitrary integer. While we do not have rational surfaces with \(n_{tor}=1\), we want to avoid \(q_{peak}=m\). In order to make the frequency match, and make MTM more unstable, we take \(T_{e,scale}=1.2\). Since the \(q\sim 10\) around the \(\omega_{se}\) peak. We can take \(q=10.5\) using \(q_{scale}=1.07\). This profile modification makes \(\omega_{se}\) peak align with \(n_{tor}=2,4\) rational surfaces. 
Figure.10 shows the nominal profile (a) V.S. modified profile (b). The resulting frequency from SLiM matches the experiment as shown in Fig. 10. \begin{table} \begin{tabular}{|l|l|l|l|} \hline quantity & min & max & increment \\ \hline \(q_{scale}\) & 0.8 & 1.2 & 0.001 \\ \hline \(n_{e,scale}\) & 0.8 & 1.2 & 0.01 \\ \hline \(T_{e,scale}\) & 0.8 & 1.2 & 0.01 \\ \hline \end{tabular} \end{table} Table 2: This table describes the variation of profile that SLiM takes to find the best fit with the experiment. Figure 5: This figure shows the alignment of rational surfaces with \(\omega_{se}\) in plasma frame with nominal profile (figure a) and modified profile (figure b). Where the curves are \(\omega_{se}\) with \(n_{tor}=1\) at the bottom with 1 increment. The orange lines are the rational surfaces that intersected with their corresponding \(\omega_{se}\) curves at the blue dots. The red curve means that the mode number contains a potentially unstable MTM, while the black means stable. Figure 6: This figure shows the overlay of the frequency (yellow text for frequency and toroidal mode number) of SLiM calculations (blue lines) and experimentally observed magnetic spectrogram. Figure 7: This plot shows the frequency in the lab frame (top plot) and growth rate (bottom plot) of MTM by SLiM calculations. With \(k_{y}\rho_{k}\) from left to right maps to \(n=1\sim 4\). ## VI Discussion and Conclusion Notice the NSTX rational surfaces are closer than DIII-D, it is due to higher magnetic shear in NSTX and the less steep electron pressure gradient, thus larger pedestal width. Despite the NSTX being strongly shaped, SLiM demonstrated success on several discharges. But is important to point out that SLiM works the best at low magnetic shear, with modes number below 10. There are observations of MTM with high toroidal mode number [8; 9; 13; 4], such higher frequency magnetic frequency bands can be observed in narrow pedestal H-mode NSTX discharges with high collision frequency. SLiM neural net enables the large sample size of variation of the profile so that one can get desired profile for further investigations. Two discharges in NSTX are presented to showcase SLiM's capabilities for predicting MTM in spherical Tomkamaks by comparing the results with a magnetic spectrogram using the varied profile produced by SLiM. The shaping effect will be further discussed in a future publication. Regardless of the limitations of the SLiM, it can still be a powerful tool to find the right equilibrium for simulations. The workflow can be the following: * Find the mode numbers and mode frequency for the potential MTM in the discharge using a magnetic spectrogram. * Use the SLiM neural network to find the variation of equilibrium that matches the experimentally observed frequency and mode number. * Select the desired equilibrium as a reference and reconstruct the equilibrium using self-consistent equilibrium. * Test the newly reconstructed equilibrium on SLiM * Conduct further high-fidelity simulations. The method of profile modification discussed in II is not physically self-consistent. Sample a few profiles that have SLiM prediction matches with experiments to reconstruct new equilibriums will be desirable for the next step of the research. Such a method could potentially aid the equilibrium reconstructions by constraining the profile on the discharges that have the potential slab MTM. 
## VII Appendix ### Training Neutral Network To sample sufficient variations to the equilibrium to find a match between unstable MTMs calculated by SLiM and experimentally observed magnetic frequency bands, SLiM needed to be sped up. Algorithmic improvements provided a 10-fold increase in speed by using vectorization and simplifying the calculation. However, 15sec/mode is not fast enough for sampling a large set of possible equilibria. Let's do a quick calculation. Assuming 5 modes per equilibrium to be calculated, sampling 2000 equilibria will take 40 hours to finish. The good news is that the improved version of the dispersion calculation is economic enough to run on a high-performance computer over a large parameter range to train a neural network. There are over 3 million dispersion calculations within a normal operating range of DIII-D and NSTX for training. The varying parameters are \(\nu,Z_{eff},\eta,\hat{s},\beta,k_{s},\mu\) (detailed definitions can be found in Sec. VII), and keep \(x_{s}=10\). The range of the variable is listed in Table. 3. To further illustrate the range of variables for the calculation Fig. 11 Two neural networks were trained: one is a stability neural network classifier, which categorizes whether the mode has an unstable MTM or not at a given parameter set. The other neural network calculates the frequency of the given unstable MTM. Fig. 12 shows the accuracy of the stability prediction of MTM over training iterations (epochs). It shows the validation accuracy of 97.0%. And Fig. 13 shows the mean Figure 10: This figure shows the overlay of the frequency (orange text for frequency and toroidal mode number) of SLiM calculations (orange lines) and experimentally observed magnetic spectrogram. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Quantity & \(\nu\) & \(\hat{s}\) & \(\beta\) & \(Z_{eff}\) & \(\eta\) & \(k_{y}\) & \(\mu\) \\ \hline Minimum & 0.01 & 0.001 & 0.0005 & 1 & 0.5 & 0.01 & 0 \\ \hline Maximum & 10 & 1 & 0.1 & 5 & 5 & 0.3 & 10 \\ \hline Distribution & log & log & log & linear & linear & linear \\ \hline \end{tabular} \end{table} Table 3: The chart shows the range of the variables that SLiM uses for calculation. The first row shows the name of the quantities. The second row and third row show the minimum and maximum values respectively. The fourth row shows the distribution function, and the “log” and “linear” represents even distribution function in log space and linear space Figure 9: This figure shows the alignment of rational surfaces with \(\omega_{ee}\) in plasma frame with nominal profile (figure a) and modified profile (figure b). Where the curves are \(\omega_{ee}\) with \(n_{tot}=1\) at the bottom with 1 increment. The orange lines are the rational surfaces that intersected with their corresponding \(\omega_{ee}\) curves at the blue dots. The red curve means that the mode number contains a potentially unstable MTM, while the black means stable. average error of MTM frequency prediction over training iterations (epochs), which can be translated into an accuracy of 98.6%. The trained neural network is accurate, yet 300 times faster than the (already optimized) dispersion calculation. Table. 4 shows the updated MTM models including the trained neural network for the SLiM model (SLiM_NN). The trained neural network takes 0.05s/mode to analyze an MTM. Improved speed reduces the sampling of the 2,000 equilibria down to 8 min, which enables a realistic and economic assessment of a large set of variants of the equilibrium. 
The trained neural networks have been benchmarked against the SLiM dispersion calculation. The dispersion relation can be expressed as \(\omega(\nu,Z_{eff},\eta,\hat{s},\beta,k_{y},\mu/x_{*})\) (detailed definitions are given below). Benchmarks for an \(\eta=\omega_{T}/\omega_{n}\) scan, a \(\mu\) (rational surface alignment) scan, and a \(\nu\) (collision frequency) scan are shown in Fig. 14, Fig. 15, and Fig. 16 respectively. The baseline for all three scans is \(\nu=1.4\), \(Z_{eff}=2.8\), \(\eta=1.16\), \(\hat{s}=0.006\), \(\beta=0.0007\), \(k_{y}\rho_{s}=0.04\), \(\mu/x_{*}=0\). The plots show the high level of agreement between SLiM_NN and SLiM, which permits us to proceed to the next step. ### Definition of quantities Here are the definitions of the quantities: \[k_{y}=\sqrt{2}\frac{n_{tor}q\rho_{s}}{r} \tag{7}\] \[\omega_{*e}=\frac{k_{y}c_{s}}{\sqrt{2}}\left(\frac{1}{L_{T_{e}}}+\frac{1}{L_{n_{e}}}\right) \tag{8}\] \[\nu=\nu_{ei}/\omega_{*e,n} \tag{9}\] \[Z_{eff}=(n_{i}+n_{z}Z^{2})/n_{e} \tag{10}\] \[\eta=L_{n_{e}}/L_{T_{e}} \tag{11}\] \[\delta=L_{n_{e}}/L_{q} \tag{12}\] \[\beta=8\pi n_{e}k_{B}T_{e}/B_{0}^{2} \tag{13}\] where \(\omega_{*e,n}=\frac{n_{tor}q\rho_{s}c_{s}}{a\,L_{n_{e}}}\) (with \(a\) the minor radius), \(L_{n_{e}}=\frac{1}{n_{e}}\frac{dn_{e}}{dr}\) is the electron density gradient length scale, \(L_{T_{e}}=\frac{1}{T_{e}}\frac{dT_{e}}{dr}\) is the electron temperature gradient length scale, \(L_{q}=\frac{1}{q}\frac{dq}{dr}\) is the safety factor gradient length scale, \(c_{s}=\sqrt{T_{e}/m_{i}}\) is the sound speed, \(\rho_{s}=c_{s}/\omega_{g}\) is the gyroradius, \(\omega_{g}=eB_{0}/m_{i}c\) is the gyrofrequency, \(e\) is the electron charge, \(q\) is the safety factor, \(n_{tor}\) is the toroidal mode number, \(r\) is the minor radial location, \(n_{e}\) is the electron density, \(n_{i}\) is the ion density, \(n_{z}\) is the impurity density, \(Z\) is the charge of the impurity, \(k_{B}\) is the Boltzmann constant, \(B_{0}\) is the magnetic field strength, and \(m_{i}\) is the ion mass. \(\mu\) is the distance from the rational surface to the peak of \(\omega_{*e}\), normalized by \(\rho_{s}\), which has been shown in greater detail in [20]. Figure 14: The plot shows the growth rate (top) and frequency (bottom) for different \(\eta\). In the top plot, the blue line shows the growth rate calculated by SLiM, the red dots mark the unstable MTMs identified by the neural network version of SLiM (SLiM_NN), and the black dots mark the stable MTMs identified by SLiM_NN. In the bottom plot, the frequencies calculated by SLiM (blue line) and SLiM_NN (orange line) are shown. Figure 15: The plot shows the growth rate (top) and frequency (bottom) for different \(\mu\). In the top plot, the blue line shows the growth rate calculated by SLiM, the red dots mark the unstable MTMs identified by SLiM_NN, and the black dots mark the stable MTMs identified by SLiM_NN. In the bottom plot, the frequencies calculated by SLiM (blue line) and SLiM_NN (orange line) are shown. Figure 16: The plot shows the growth rate (top) and frequency (bottom) for different \(\nu\). In the top plot, the blue line shows the growth rate calculated by SLiM, the red dots mark the unstable MTMs identified by SLiM_NN, and the black dots mark the stable MTMs identified by SLiM_NN. In the bottom plot, the frequencies calculated by SLiM (blue line) and SLiM_NN (orange line) are shown.
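The definitions above translate directly into a small helper that assembles the dimensionless inputs of the dispersion relation from local profile quantities. The sketch below simply transcribes Eqs. (7)-(13); all inputs are assumed to be supplied in one consistent (Gaussian) unit system, and no unit conversions are attempted.

```python
import numpy as np

def slim_inputs(n_tor, q, rho_s, r, c_s, L_Te, L_ne, L_q,
                nu_ei, omega_star_en, n_i, n_z, Z, n_e, k_B, T_e, B0):
    """Combine local profile quantities into the dimensionless parameters
    of Eqs. (7)-(13).  Inputs must share one consistent unit system; the
    beta expression follows the Gaussian-units form quoted in the text."""
    k_y      = np.sqrt(2.0) * n_tor * q * rho_s / r                 # Eq. (7)
    omega_se = (k_y * c_s / np.sqrt(2.0)) * (1.0/L_Te + 1.0/L_ne)   # Eq. (8)
    nu       = nu_ei / omega_star_en                                # Eq. (9)
    Z_eff    = (n_i + n_z * Z**2) / n_e                             # Eq. (10)
    eta      = L_ne / L_Te                                          # Eq. (11)
    delta    = L_ne / L_q                                           # Eq. (12)
    beta     = 8.0 * np.pi * n_e * k_B * T_e / B0**2                # Eq. (13)
    return dict(k_y=k_y, omega_star_e=omega_se, nu=nu,
                Z_eff=Z_eff, eta=eta, delta=delta, beta=beta)
```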
## VIII Acknowledgements This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, using the DIII-D National Fusion Facility, a DOE Office of Science user facility, under Award(s): DE-FC02-04ER54698, DE-SC0022164. This work was supported by U.S. DOE Contract No. DE-FG02-04ER54742 at the Institute for Fusion Studies (IFS) at the University of Texas at Austin. This research was supported at Oak Ridge National Laboratory by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility. We acknowledge the CINECA award under the ISCRA initiative for the availability of high-performance computing resources and support. This work was supported by the U.S. Department of Energy under awards DE-SC0022051 and DE-FG02-95ER54309. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract DE-AC02-05CH11231. This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
2307.00656
Scale of Dirac leptogenesis and left-right symmetry in the light of recent PTA results
Motivated by the recent release of new results from five different pulsar timing array (PTA) experiments claiming to have found compelling evidence for primordial gravitational waves (GW) at nano-Hz frequencies, we study the consequences for two popular beyond the Standard Model (SM) frameworks, where such nano-Hz GW can arise due to annihilating domain walls (DW). Minimal framework of Dirac leptogenesis, as well as left-right symmetric model (LRSM) can lead to formation of DW due to spontaneous breaking of $Z_2$ symmetry. Considering the NANOGrav 15 yr data, we show that the scale of Dirac leptogenesis should be above $10^7$ GeV for conservative choices of Dirac Yukawa couplings with fine-tuning at the level of the SM. The scale of {\it minimal} LRSM is found to be more constrained $M_{\rm LR} \sim 10^6$ GeV in order to fit the NANOGrav 15 yr data. On the other hand, the {\it non-minimal} LRSM can be compatible with the NANOGrav data for $10^2 \, {\rm TeV} \lesssim M_{\rm LR} \lesssim 10^3$ TeV but with the corresponding $B-L$ breaking scale violating collider bounds.
Basabendu Barman, Debasish Borah, Suruj Jyoti Das, Indrajit Saha
2023-07-02T20:21:15Z
http://arxiv.org/abs/2307.00656v2
# Scale of Dirac leptogenesis and left-right symmetry in the light of recent PTA results ###### Abstract Motivated by the recent release of new results from five different pulsar timing array (PTA) experiments claiming to have found compelling evidence for primordial gravitational waves (GW) at nano-Hz frequencies, we study the consequences for two popular beyond the Standard Model (SM) frameworks, where such nano-Hz GW can arise due to annihilating domain walls (DW). Minimal framework of Dirac leptogenesis, as well as left-right symmetric model (LRSM) can lead to formation of DW due to spontaneous breaking of \(Z_{2}\) symmetry. Considering the NANOGrav 15 yr data, we show that the scale of Dirac leptogenesis should be above \(10^{7}\) GeV for conservative choices of Dirac Yukawa couplings with fine-tuning at the level of the SM. The scale of _minimal_ LRSM is found to be more constrained \(M_{\rm LR}\sim 10^{6}\) GeV in order to fit the NANOGrav 15 yr data. On the other hand, the _non-minimal_ LRSM can be compatible with the NANOGrav data for \(10^{2}\,{\rm TeV}\lesssim M_{\rm LR}\lesssim 10^{3}\) TeV but with the corresponding \(B-L\) breaking scale violating collider bounds. **Introduction:** Recently, four different pulsar timing array (PTA) experiments namely NANOGrav [1], European Pulsar Timing Array (EPTA) together with the first data release from Indian Pulsar Timing Array (InPTA) [2], PPTA [3], all part of the consortium called International Pulsar Timing Array (IPTA) have released their latest findings hinting at a significant evidence for stochastic gravitational waves (GW) background at nano-Hz frequencies. Similar evidence with larger statistical significance has also been reported by the Chinese Pulsar Timing Array (CPTA) collaboration [4]. While such a signal can be generated by supermassive black hole binary (SMBHB) mergers though with a mild tension, presence of exotic new physics alone or together with SMBHB can make the fit better [5]1. Several follow-up papers have also studied the possible origin or implications of this observation from the point of view of dark matter [7; 8], axions or axion-like particles [9; 10], SMBHB [11], first order phase transition [12; 13; 14; 15]2 and associated challenges [17], primordial black holes [18], primordial magnetic field [19], domain walls [20; 21], inflation [22; 23], cosmic strings [24; 25], scalar induced gravitational waves [5] including earlier works [26], astrophysical neutrino oscillation [27] and QCD crossover [28]. New physics possibilities leading to primordial GW in the nano-Hz regime can also be found in [29]. Footnote 1: Similar conclusions can also be found in [6]. Footnote 2: See [16] for earlier works. While GW from domain walls (DW) has already been studied as a possible new physics explanation for PTA results [5; 20; 21], we consider the consequence for two popular beyond standard model (BSM) scenarios namely, the minimal Dirac leptogenesis and the left-right symmetric model (LRSM). The first model is a type I seesaw realisation for light Dirac neutrino mass with the heavy vector-like neutral fermions being responsible for generating baryogenesis via leptogenesis [30] with light Dirac neutrinos, known as the Dirac leptogenesis scenario [31; 32]. GW probe of high scale leptogenesis models have received considerable attention in recent times. 
In most of these works [33; 34; 35; 36], cosmic string (CS) origin of GW has been studied by considering a \(U(1)_{B-L}\) framework with in-built heavy Majorana fermions responsible for generating Majorana mass of light neutrinos as well as leptogenesis. The scale of leptogenesis or \(U(1)_{B-L}\) breaking scale then decides the amplitude of the CS generated GW spectrum. However, in view of the latest PTA results preferring a positive slope of the GW spectrum, stable CS in such models no longer provide a good fit [5]. This raises the prospects for a Dirac leptogenesis model whose minimal version must have a softly broken \(Z_{2}\) symmetry leading to formation of DW followed by generation of GW due to annihilation or collapse. While a general study related to GW probe of minimal Dirac leptogenesis was carried out in [37] (and subsequently in [38] for Dirac seesaw), here we consider the implications of recent PTA findings on the scale of Dirac leptogenesis. On the other hand, GW probe of LRSM considering DW as the source have been studied in earlier works [39; 40]. DW arise due to spontaneous breaking of parity in such models. While earlier works considered the detection aspects of this model, we now constrain the scale of left-right symmetry considering the latest PTA data. While both the models can explain the latest PTA data, the allowed parameter space remains squeezed to a tiny window, which should face more scrutiny with future data. **Domain walls as source of GW:** Domain wall is a two-dimensional topological defect arising from spontaneous breaking of discrete symmetries [41; 42; 43]. With the expansion of the universe, the energy density of DW falls slower compared to that of radiation or ordinary matter, having the potential to start dominating the energy density of the universe and ruin the successful predictions of standard cosmology. Such a disastrous situation can be prevented if DW are made unstable or diluted or if they have asymmetric initial field fluctuations [44, 45]. In minimal model of Dirac leptogenesis [37] as well as left-right symmetric models [40], such DW arises due to the spontaneous breaking of a \(Z_{2}\) symmetry. If we consider a \(Z_{2}\)-symmetric potential of a scalar field \(\varphi\), it is straightforward to show the existence of two different vacua \(\left\langle\varphi\right\rangle=\pm u\). It is also possible to find a static solution of the equation of motion given the two vacua to be realized at \(x\to\pm\infty\), \[\varphi(\mathbf{x})=u\,\tanh\left(\sqrt{\frac{\lambda_{\varphi}}{2}}\,u\,x \right)\,, \tag{1}\] representing a domain wall extended along the \(x=0\) plane. Here \(\lambda_{\varphi}\) is the quartic self-coupling of the scalar field. The DW width is \(\delta\sim m_{\varphi}^{-1}=(\sqrt{2\lambda_{\varphi}}\,u)^{-1}\). Another key parameter, known as the DW tension is given by \[\sigma_{w}=\int_{-\infty}^{\infty}dx\,\rho_{\varphi}=\frac{2\sqrt{2}}{3}\, \sqrt{\lambda_{\varphi}}\,u^{3}=\frac{2}{3}\,m_{\varphi}\,u^{2}\sim u^{3}\,, \tag{2}\] where \(\rho_{\varphi}\) denotes (static) energy density of \(\varphi\) and in the last step, \(m_{\varphi}\sim u\) is used. Assuming the walls to be formed after inflation, the simplest way to make them disappear is to introduce a small pressure difference [41, 43, 46, 47, 48], a manifestation of a soft \(Z_{2}\)-breaking term. 
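As a quick consistency check of Eq. (2), the static energy density of the kink can be integrated numerically. The sketch below assumes the standard \(Z_{2}\)-symmetric double-well potential \(V(\varphi)=\frac{\lambda_{\varphi}}{4}(\varphi^{2}-u^{2})^{2}\), for which Eq. (1) is the static solution; the numerical values of \(\lambda_{\varphi}\) and \(u\) are arbitrary test inputs.

```python
import numpy as np
from scipy.integrate import quad

lam, u = 0.5, 1.0   # quartic self-coupling and VEV, arbitrary test values

def phi(x):          # kink profile, Eq. (1)
    return u * np.tanh(np.sqrt(lam / 2.0) * u * x)

def dphi(x):         # d(phi)/dx
    return np.sqrt(lam / 2.0) * u**2 / np.cosh(np.sqrt(lam / 2.0) * u * x)**2

def rho(x):          # static energy density: gradient term + double-well potential
    return 0.5 * dphi(x)**2 + 0.25 * lam * (phi(x)**2 - u**2)**2

sigma_numeric, _ = quad(rho, -30.0, 30.0)
sigma_analytic = (2.0 * np.sqrt(2.0) / 3.0) * np.sqrt(lam) * u**3   # Eq. (2)
print(sigma_numeric, sigma_analytic)   # the two agree to quadrature accuracy
```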
Such a pressure difference, or equivalently a bias term in the potential \(\Delta V\), needs to be sufficiently large to ensure DW disappearance prior to the epoch of big bang nucleosynthesis (BBN), that is, \(t_{\rm BBN}>t_{\rm dec}\approx\sigma_{w}/\Delta V\). It is also important to take care of the fact that the DW disappear before dominating the universe, requiring \(t_{\rm dec}<t_{\rm dom}\), where \(t_{\rm dom}\sim M_{P}^{2}/\sigma_{w}\) and \(M_{P}\) is the _reduced_ Planck mass. Both of these criteria lead to a lower bound on the bias term \(\Delta V\). However, \(\Delta V\) can not be arbitrarily large as it would otherwise prevent the percolation of both the vacua separated by DW. Such decaying DW therefore can emit GW [49, 50, 51, 52, 53, 54, 55, 56, 57, 58]. At the peak frequency \(f_{\rm peak}\), the spectral energy density can be estimated as [49, 50] \[\Omega_{\rm GW}h^{2}\left(t_{0}\right)\Bigr{|}_{\rm peak} \simeq 5.2\times 10^{-20}\,\tilde{\epsilon}_{\rm gw}\,A_{w}^{4} \left(\frac{10.75}{g_{*}}\right)^{1/3} \tag{3}\] \[\times\left(\frac{\sigma_{w}}{1\,{\rm TeV}^{3}}\right)^{4}\, \left(\frac{1\,{\rm MeV}^{4}}{\Delta V}\right)^{2}\,,\] with \(t_{0}\) being the present time. Away from the peak, the amplitude varies as3 Footnote 3: The low-frequency spectrum of GW from melting DWs [54, 59], characterized by a _time-dependent_ tension, contrary to the constant-tension DWs discussed here, behaves as \(f^{2}\), without violating causality. \[\Omega_{\rm GW}\simeq\Omega_{\rm GW}\Bigr{|}_{\rm peak}\times \begin{cases}\left(\frac{f_{\rm peak}}{f}\right)&\text{for}\ \ f>f_{\rm peak}\\ \left(\frac{f}{f_{\rm peak}}\right)^{3}&\text{for}\ \ f<f_{\rm peak}\end{cases}\,, \tag{4}\] where the peak frequency is given by \[f_{\rm peak}(t_{0}) \simeq 4\times 10^{-9}\,{\rm Hz}\,A_{w}^{-1/2} \tag{5}\] \[\times\left(\frac{1\,{\rm TeV}^{3}}{\sigma_{w}}\right)^{1/2}\, \left(\frac{\Delta V}{1\,{\rm MeV}^{4}}\right)^{1/2}\,.\] In the above expressions, \(A_{w}\) is the area parameter [60, 61], \(\simeq 0.8\) for DW arising from \(Z_{2}\) breaking, and \(\tilde{\epsilon}_{\rm gw}\) is the efficiency parameter, \(\simeq 0.7\)[50]. Note that the above spectrum can be obtained from a general parametrisation \(S(f/f_{\rm peak})\), \[S(x)=\frac{(a+b)^{c}}{(bx^{-a/c}+ax^{b/c})^{c}} \tag{6}\] for \(a=3\) (required by causality [62, 63]) and \(b\approx c\approx 1\) (suggested by simulation [50]). Using these values of \(a,b\) and \(c\), together with \(x\gg 1\) (or \(x\ll 1\)), we can reproduce Eq. (4). However, as noted in [5, 55], the values of \(b,c\) may depend upon the specific DW annihilation mechanism or regime, not all of which have been explored in numerical simulations yet. This allows one to vary \(b,c\) to get a better fit to the PTA data [5]. Once GW production ceases after the annihilation of the domain walls, the energy density of GW redshifts like that of SM radiation. As a result, the GW background itself acts as an additional source of radiation with the potential to alter the predictions of BBN. Thus, an excess of the GW energy density around \(T\lesssim\mathcal{O}({\rm MeV})\) can be restricted by considering the limits on the number of relativistic degrees of freedom from CMB and BBN, encoded in \(\Delta N_{\rm eff}\). This, in turn, puts a bound on the amplitude of the GW spectrum, demanding [64, 65] \[\Omega_{\rm GW}\,h^{2}\lesssim 5.6\times 10^{-6}\,\Delta N_{\rm eff}\,. \tag{7}\]
Here we consider several projected limits on \(\Delta N_{\rm eff}\) on top of the existing limit from Planck: \(\Delta N_{\rm eff}\lesssim 0.28\) at 95% CL [66]. This bound is shown by the solid gray horizontal line in Fig. 2. Once the baryon acoustic oscillation (BAO) data are included, the measurement becomes more stringent: \(N_{\rm eff}=2.99\pm 0.17\). A combined BBN+CMB analysis gives \(N_{\rm eff}=2.880\pm 0.144\), as computed in Ref. [67]. This constraint is denoted by the dashed horizontal line. On the other hand, upcoming CMB experiments like CMB-S4 [68] and CMB-HD [69] will be able to probe \(\Delta N_{\rm eff}\) as small as \(\sim 0.06\) and \(\sim 0.027\), respectively. These are indicated by the dot-dashed and dotted lines respectively. The next generation of satellite missions, such as COrE [70] and Euclid [71], leads to \(\Delta N_{\rm eff}\lesssim 0.013\), as shown by the large dashed line. It should be noted that we are ignoring friction effects between the domain walls and the thermal plasma [72; 73; 52]. Such friction effects can be important when the field responsible for symmetry breaking or constituting the wall has large couplings with the SM bath particles, leading to a smaller GW amplitude than the frictionless case discussed here. Since the scalar fields responsible for symmetry breaking have tiny couplings with the SM plasma in the models discussed here, such effects can be ignored [54]. In Fig. 1 we summarize bounds on the VEV \(u\) and the bias term \(\Delta V\), where all the shaded regions are disallowed from (i) decay of the DWs after BBN (darker gray), where \(t_{\rm dec}>1\) sec, (ii) DW domination (lighter gray), i.e. \(t_{\rm dom}<t_{\rm dec}\), and (iii) the \(\Delta N_{\rm eff}\) bound from Planck on excessive GW energy density (light gray). This leaves us with the white region in between that is allowed, from where we choose our benchmark points (BP), as indicated in Tab. 1. It is important to note here that we also require \(\Delta V\ll u^{4}\) so as not to spoil the percolation of both the vacua separated by DW. However, such a condition is trivially satisfied in the regime we are interested in. The GW spectrum corresponding to the BPs in Tab. 1 is illustrated in Fig. 2. As explained before, we distinctly see a blue-tilted pattern for \(f<f_{\rm peak}\), while the spectrum is red-tilted in the opposite limit. Here we project limits from BBO [74], LISA [75], DECIGO [76], ET [77], CE [78], THEIA [79], HL (aLIGO) [80], \(\mu\)ARES [81] and SKA [82]. In this plot, the range of the GW spectrum from the NANOGrav results [1] is shown by the red points. The gray-shaded region is completely disallowed by the \(\Delta N_{\rm eff}\) bound on the overproduction of \(\Omega_{\rm GW}\) as discussed before, depending on the sensitivity of a particular experiment. As one can already notice, BP3 is already ruled out by the Planck bound, while BP1 is beyond the reach of any future experiments proposed so far. Corresponding to the two epochs \(t_{\rm dom}\) and \(t_{\rm ann}\) (the wall annihilation time, denoted \(t_{\rm dec}\) above), we define two temperatures \(T_{\rm dom}\) and \(T_{\rm ann}\). We are typically interested in the regime \(T_{\rm ann}>T_{\rm dom}\), i.e., the DWs disappear before they dominate the Universe.
Following [21], \(T_{\rm ann}\) reads \[T_{\rm ann}\simeq 120\,{\rm MeV}\,\sqrt{\frac{\Delta V/{\rm MeV}^ {4}}{10^{8}}}\,\left(\frac{A_{w}}{0.8}\right)^{-1/2}\] \[\left(\frac{\sigma_{w}/{\rm GeV}^{3}}{10^{16}}\right)^{-1/2}\, \left(\frac{g_{*}(T_{\rm ann})}{10}\right)^{-1/4}\,, \tag{8}\] implying, for larger surface tension, it takes longer for \begin{table} \begin{tabular}{|c|c|c|} \hline & \(u\) (GeV) & \(\Delta V\) (MeV)\({}^{4}\) \\ \hline BP1 & \(2\times 10^{5}\) & \(10^{8}\) \\ BP2 & \(3\times 10^{5}\) & \(10^{8}\) \\ BP3 & \(4\times 10^{5}\) & \(10^{8}\) \\ \hline \end{tabular} \end{table} Table 1: Details of the benchmark points (BPs) used in Fig. 2 and Fig. 3. Figure 1: Bound on the size of VEV and the bias term. All shaded regions are disallowed (see text for details). Figure 2: Spectrum of gravitational wave from DW decay, where we show sensitivities of several GW experiments. The black curves correspond to the chosen benchmark points for \(a=3,b=1\,,c=0.3\). The gray region marked as “\(\Delta N_{\rm eff}\) constraint” is disallowed from overproduction of GW energy density (see text for details). The red points correspond to the NanoGrav 15 yr observation [1]. the walls to collapse, while for larger bias the opposite happens. We also define another quantity \[r_{w}=\frac{\rho_{r}}{1+\rho_{r}}\,, \tag{9}\] where \[\rho_{r}=\frac{\rho_{w}(T_{\rm ann})}{\rho_{R}(T_{\rm ann})}\simeq 0.14\,\left(\frac{A_{w}}{0.8}\right)^{2}\,\left(\frac{\sigma_{w}/\mbox{GeV}^{3}}{ 10^{16}}\right)^{2}\] \[\left(\frac{10^{8}\,\mbox{MeV}^{4}}{\Delta V}\right)\,, \tag{10}\] which quantifies the energy density contained within the DW compared to that of radiation. We show the compatibility of our relevant model parameters, namely, the bias term \(\Delta V\) and the VEV \(u\) (equivalently, the strain \(\sigma\)) in Fig. 3 with the NANOGrav data, utilizing Eq. (8) and Eq. (9). We superimpose the 1 and 2\(\sigma\) contours (shown by red and blue solid curves) provided by the NANOGrav result [1]. As one can see, BP1 lies well within the 1\(\sigma\) contour, while the other two BPs are well off. For BP1, \(a=3,b\in[0.5,1],c\in[0.3,3]\) is needed to be in compliance within 1\(\sigma\) of NANOGrav data for frequency \(f\in[2\times 10^{-9}\,,f_{\rm yr}]\) Hz, where \(f_{\rm yr}=1\,\mbox{yr}^{-1}\approx 3\times 10^{-8}\) Hz. It is possible to derive a lower and upper bound on the VEV for a fixed \(\Delta V\), as denoted by the gray dashed horizontal lines for \(\Delta V=10^{8}\,\mbox{MeV}^{4}\). Thus, for \(\Delta V=10^{8}\,\mbox{MeV}^{4}\), we find \(189\lesssim u/\mbox{TeV}\lesssim 225\), in compliance with NANOGrav \(2\sigma\) contour. On the other hand, the viable range of bias term, that lies within \(2\sigma\) CL of NANOGrav result, turns out to be \(4\times 10^{6}\lesssim\Delta V/\mbox{MeV}^{4}\lesssim 5\times 10^{11}\), as shown by the green dotted and orange dashed curves. Depending on the choice of \(\Delta V\), the upper limit on \(u\) can be pushed to larger values. One can also project the \(\Delta N_{\rm eff}\) bound on the same plane (by trading \(\{\sigma\,,\Delta V\}\) with \(\{T_{\rm ann}\,,r_{w}\}\)) ruling out the region of the parameter space that results in overproduction of GW [cf. Eq. (7)]. This is shown by the gray shaded region, where we have used the 2-\(\sigma\) bound from Planck. This rules out BP3, as already seen in Fig. 1. 
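The estimates of Eqs. (3)-(6) and (8)-(10) can be packaged into a short script for the benchmark points of Tab. 1. The sketch below implements those approximate formulas as printed, with \(A_{w}=0.8\), \(\tilde{\epsilon}_{\rm gw}=0.7\), \(g_{*}=10.75\) and \(\sigma_{w}\simeq u^{3}\); it is meant only to reproduce the order of magnitude of the curves in Fig. 2, not to replace the full analysis.

```python
import numpy as np

A_w, eps_gw, g_star = 0.8, 0.7, 10.75

def omega_gw_peak(sigma_w_TeV3, dV_MeV4):
    """Peak GW amplitude today, Omega h^2, Eq. (3)."""
    return (5.2e-20 * eps_gw * A_w**4 * (10.75 / g_star)**(1/3)
            * sigma_w_TeV3**4 / dV_MeV4**2)

def f_peak_Hz(sigma_w_TeV3, dV_MeV4):
    """Peak frequency today, Eq. (5)."""
    return 4e-9 * A_w**-0.5 * np.sqrt(dV_MeV4 / sigma_w_TeV3)

def spectrum(f, sigma_w_TeV3, dV_MeV4, a=3.0, b=1.0, c=0.3):
    """Broken power law of Eq. (6), normalised to the peak amplitude."""
    x = f / f_peak_Hz(sigma_w_TeV3, dV_MeV4)
    shape = (a + b)**c / (b * x**(-a/c) + a * x**(b/c))**c
    return omega_gw_peak(sigma_w_TeV3, dV_MeV4) * shape

def T_ann_MeV(sigma_w_GeV3, dV_MeV4):
    """Wall annihilation temperature, Eq. (8)."""
    return (120.0 * np.sqrt(dV_MeV4 / 1e8) * (A_w / 0.8)**-0.5
            * (sigma_w_GeV3 / 1e16)**-0.5 * (g_star / 10.0)**-0.25)

# Benchmark point BP1 of Tab. 1: u = 2e5 GeV, Delta V = 1e8 MeV^4, sigma_w ~ u^3.
u_GeV = 2e5
sigma_GeV3 = u_GeV**3
sigma_TeV3 = sigma_GeV3 / 1e9
dV = 1e8
print("f_peak   [Hz]:", f_peak_Hz(sigma_TeV3, dV))
print("Omega h^2    :", omega_gw_peak(sigma_TeV3, dV))
print("T_ann   [MeV]:", T_ann_MeV(sigma_GeV3, dV))
```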
Note that the limits obtained on the VEV \(u\), together with that on the bias \(\Delta V\), satisfy \(t_{\rm dec}<t_{\rm dom}\) and \(t_{\rm dec}<t_{\rm BBN}\), while obeying \(\Delta V\ll u^{4}\). **Consequence for Dirac leptogenesis:** In the minimal model of Dirac leptogenesis or Dirac neutrino seesaw [37], the standard model (SM) is extended by three copies of vector-like neutral singlet fermions \(N_{L,R}\) and three copies of the right chiral part (\(\nu_{R}\)) of the light Dirac neutrinos. A real singlet scalar field \(\varphi\) is introduced to couple \(\nu_{R}\) with \(N\). A \(Z_{2}\) symmetry under which \(\varphi,\nu_{R}\) are odd prevents a direct coupling of the SM lepton doublet \(L\) with \(\nu_{R}\) via the SM Higgs \(H\). The relevant part of the Yukawa Lagrangian can be written as \[-{\cal L}_{Y}\supset Y_{L}\,\overline{L}\,\widetilde{H}\,N_{R}+M_{N}\, \overline{N}\,N+Y_{R}\,\overline{N_{L}}\,\varphi\,\nu_{R}+{\rm h.c.} \tag{11}\] After the neutral component of \(H\) and the scalar \(\varphi\) acquire VEVs \(v,\,u\) respectively, the light Dirac neutrino mass arises from the Type-I seesaw equivalent for Dirac neutrinos as \[m_{\nu}=\frac{1}{\sqrt{2}}Y_{L}\,M_{N}^{-1}\,Y_{R}\,v\,u \tag{12}\] with \(M_{N}\) being the scale of the Dirac seesaw. The same heavy fermions \(N_{L,R}\) can undergo out-of-equilibrium decays leading to successful Dirac leptogenesis. Although no net lepton asymmetry is produced, due to total lepton number conservation, it is possible to create equal and opposite lepton asymmetries in the left and right sectors through the CP violating out-of-equilibrium decays \(N\to L\,H\) and \(N\to\nu_{R}\,\varphi\) respectively. The \(CP\) asymmetry parameter is given as [83] \[\epsilon\simeq-\frac{1}{8\pi}\frac{M_{1}}{uv}\frac{\mbox{Im}[(Y_{R}m_{\nu}^{ \dagger}Y_{L})_{11}]}{(Y_{R}Y_{R}^{\dagger})_{11}+(Y_{L}Y_{L}^{\dagger})_{11}}, \tag{13}\] where \(v=246\) GeV and \(M_{1}\) is the lightest heavy fermion mass. If a net lepton asymmetry is generated before the sphaleron decoupling epoch, it is possible to create a net baryon asymmetry. However, in order to prevent the left sector lepton asymmetry from being washed out, it is important to prevent the equilibration of the left and right sectors, leading to the condition \[\Gamma_{L-R}\sim\frac{|Y_{L}|^{2}\,|Y_{R}|^{2}}{M_{1}^{2}}\,T^{3}<{\cal H}(T)\,, \tag{14}\] where \({\cal H}(T)=\frac{\pi}{3}\,\sqrt{\frac{g_{*}}{10}}\,\frac{T^{2}}{M_{P}}\) is the Hubble parameter. We do not compute the lepton asymmetry here and refer the reader to earlier works [37; 83; 84], where explicit Boltzmann equations were solved and the corresponding parameter space was obtained. Depending upon the scale of leptogenesis \(M_{1}\), one can consider either quasi-degenerate or hierarchical heavy fermions to achieve the desired CP asymmetry while being consistent with a sub-eV Dirac neutrino mass. As the \(Z_{2}\)-odd scalar \(\varphi\) acquires a non-zero VEV \(u\), it leads to the formation of domain walls4. One can introduce a bias term \(\Delta V\) (which breaks the \(Z_{2}\) symmetry softly) in the scalar potential, which eventually leads to the disappearance of the domain walls. Figure 3: Viable parameter space in the bi-dimensional plane of \(T_{\rm ann}\)-\(r_{w}\), based on the NANOGrav 15 yr dataset [1]. The black solid contour corresponds to the \(\Delta V\) value relevant for the BPs in Tab. 1. The red and blue contours correspond to 1- and 2\(\sigma\) CL respectively. The gray shaded region is disallowed by the \(\Delta N_{\rm eff}\) bound from Planck.
Now, for \(u\gtrsim 190\) TeV, preferred from NANOGrav 2023 data as discussed before, and considering light neutrino mass \(m_{\nu}\leq 0.1\) eV, we get \(M_{N}>y^{2}\times 10^{17}\) GeV. This implies that, for order one Yukawa couplings, the scale of Dirac leptogenesis is even above the upper limit on reheating temperature, disfavouring the possibility of thermal Dirac leptogenesis. If the Yukawa couplings are made as low as electron Yukawa coupling, we have \(M_{N}>10^{7}\) GeV, keeping it at intermediate scale. To summarize, the possibility of low scale \(Z_{2}\)-symmetric Dirac leptogenesis is disfavoured unless we tune Yukawa couplings involved in Dirac seesaw more than what we have in the SM. Footnote 4: Because of the non-zero VEV, \(\varphi\) can mix with the SM Higgs doublet (\(H\)) via a portal interaction \(\lambda_{p}\,|\varphi|^{2}\,|H|^{2}\), that leads to its decay into the SM. One can always tune the mixing parameter such that \(\varphi\) decays efficiently before the onset of BBN. It is worth stressing that Dirac leptogenesis can be realised without \(Z_{2}\) symmetry too and in those setups there will not be any DW formation. One simple alternative is to consider a \(U(1)\) global symmetry under which \(\phi\), now a complex scalar, and \(\nu_{R}\) transform non-trivially. Soft breaking of such global \(U(1)\) symmetry, required to generate light Dirac neutrino mass, leads to a light pseudo-Goldstone boson with interesting phenomenological consequences. On the other hand, for gauged \(U(1)\) symmetry, an additional massive gauge boson will arise in the spectrum. If such additional neutral bosons couple to the heavy fermions (\(N\)) responsible for generating lepton asymmetry, they can lead to dilution of asymmetry while keeping \(N\) in equilibrium for longer epochs [85]. As far as topological defects are concerned, such models with \(U(1)\) symmetries can lead to cosmic strings which can have their own GW signatures. Since the number of new degrees of freedom in \(Z_{2}\)-symmetric Dirac leptogenesis is less than scenarios with \(U(1)\) or higher symmetries, we have referred to it as the minimal Dirac leptogenesis. 
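The quoted bound follows from a one-line rearrangement of Eq. (12) with \(Y_{L}=Y_{R}=y\); the snippet below just repeats that arithmetic for \(u\simeq 190\) TeV and \(m_{\nu}\leq 0.1\) eV, reproducing the \(M_{N}>y^{2}\times 10^{17}\) GeV estimate. Plugging in an electron-sized Yukawa coupling then brings this down to the intermediate scale quoted in the text.

```python
import numpy as np

v    = 246.0     # GeV, electroweak VEV
u    = 1.9e5     # GeV, Z2-breaking VEV (~190 TeV, lower end allowed by NANOGrav)
m_nu = 0.1e-9    # GeV, upper bound on the light Dirac neutrino mass (0.1 eV)

# Eq. (12) with Y_L = Y_R = y gives  m_nu = y^2 v u / (sqrt(2) M_N), hence
M_N_over_y2 = v * u / (np.sqrt(2.0) * m_nu)
print(f"M_N > y^2 * {M_N_over_y2:.1e} GeV")   # ~ y^2 x 10^17 GeV, as in the text
```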
**Consequence for left-right symmetry:** Left-right symmetric models [86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99] extend the SM gauge symmetry to \(SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\), supplemented by a discrete parity symmetry \(\mathbb{P}\) relating the left and right sectors. Unlike in the \(Z_{2}\)-odd scalar singlet model discussed above, the LRSM gauge symmetry does not allow arbitrary bias terms. It is possible to generate such a bias term via higher dimensional operators which are invariant under the gauge symmetry but explicitly break the parity symmetry. While Planck scale effects are expected to break any global symmetries like parity [100, 101, 102], the corresponding bias term can lead to DW disappearance [103, 104].
The minimal LRSM has three different types of scalars, namely \(\Phi\equiv(1,2,2,0),\Delta_{L}\equiv(1,3,1,2),\Delta_{R}\equiv(1,1,3,2)\), where the numbers in the brackets are the quantum numbers corresponding to the LRSM gauge group \(SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\). Left and right handed fermions transform as doublets under \(SU(2)_{L},SU(2)_{R}\) respectively. Quark and lepton fields are represented as \(Q_{L}\equiv(3,2,1,1/3),Q_{R}\equiv(3,1,2,1/3),\ell_{L}\equiv(1,2,1,-1),\ell_{R}\equiv(1,1,2,-1)\). Under parity \(\mathbb{P}\), left and right sector fields get interchanged as \[Q_{L}\leftrightarrow Q_{R},\ell_{L}\leftrightarrow\ell_{R},\Delta_{L}\leftrightarrow\Delta_{R},\Phi\leftrightarrow\Phi^{\dagger}.\] This not only ensures the equality of the left and right sector gauge couplings, \(g_{L}=g_{R}\), but also relates the Yukawa and scalar potential couplings of these two sectors. The neutral component of the scalar triplet \(\Delta_{R}\) acquires a non-zero VEV breaking \(SU(2)_{R}\times U(1)_{B-L}\times\mathbb{P}\) down to the \(U(1)_{Y}\) of the SM. At a later stage, the electroweak gauge symmetry gets spontaneously broken to \(U(1)_{\rm em}\) by the neutral components of the scalar bidoublet \(\Phi\). Thus, the symmetry breaking pattern is \[SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\times\mathbb{P}\;\xrightarrow{\langle\Delta_{R}\rangle}\;SU(2)_{L}\times U(1)_{Y}\;\xrightarrow{\langle\Phi\rangle}\;U(1)_{\rm em}. \tag{15}\] While this is the desired symmetry breaking pattern, it is equally probable for the left sector scalar field \(\Delta_{L}\) to acquire a non-zero VEV. This leads to left and right sector vacua separated by domain walls. It is also possible to replace the pair of triplets \(\Delta_{L,R}\) by a pair of doublets \(H_{L,R}\) while achieving the same symmetry breaking pattern. In either of these minimal models, the bias term or soft \(\mathbb{P}\) breaking term can arise from dimension six operators given by \[V_{\rm NR}\supset f_{L}\frac{(\Sigma_{L}^{\dagger}\Sigma_{L})^{3}}{M_{\rm P}^{2}}+f_{R}\frac{(\Sigma_{R}^{\dagger}\Sigma_{R})^{3}}{M_{\rm P}^{2}} \tag{16}\] where \(\Sigma_{L,R}\equiv\Delta_{L,R},H_{L,R}\) depending upon the type of LRSM. This leads to a bias term in the minimal model given by \(\Delta V\sim u^{6}/M_{\rm P}^{2}\), where \(u\) is the \(SU(2)_{R}\times U(1)_{B-L}\) as well as parity breaking scale. Due to the dependence of the bias term on the scale of left-right breaking, the constraint on the left-right symmetry breaking scale is stronger than the one obtained on the scale of Dirac leptogenesis. As shown in Fig. 4, the scale of left-right symmetry should be approximately \(\sim 10^{6}\) GeV (shown by the black solid horizontal line) in order to be in agreement with the NANOGrav 15 yr data at \(2\sigma\). Similarly, one can also check the status of non-minimal LRSM frameworks in the light of the recent PTA results. Unlike the minimal LRSM, in the non-minimal scenario the symmetry of the LRSM is broken down to that of the SM in more than one step. For illustrative purposes, we consider a two-step symmetry breaking chain leading to \[SU(2)_{L}\otimes SU(2)_{R}\otimes U(1)_{B-L}\otimes\mathbb{P} \xrightarrow{u}\] \[SU(2)_{L}\otimes U(1)_{R}\otimes U(1)_{B-L}\xrightarrow{v_{\rm BL}}\] \[SU(2)_{L}\otimes U(1)_{Y}\xrightarrow{v_{\rm EW}}U(1)_{\rm em}\,, \tag{17}\] where \(v_{\rm BL}\) is the intermediate symmetry breaking scale and \(v_{\rm EW}\) is the electroweak symmetry breaking scale.
For example, the first stage of the above symmetry breaking chain can be achieved by an \(SU(2)_{R}\) triplet of vanishing \(B-L\) charge, while the second stage can be taken care of by a triplet of non-zero \(B-L\) charge. In such a scenario, the bias term can be written as a dimension five operator involving both types of triplet scalars. Following [40], one can show that the bias term is related to the two scales of symmetry breaking via \(\Delta V\simeq u^{3}\,v_{\rm BL}^{2}/M_{P}\). In that case, one finds \(T_{\rm ann}\propto v_{\rm BL}\) [cf. Eq. (8)]. We show in Fig. 5 the constraint on the LR symmetry breaking scale \(u\) that remains within \(2\sigma\) of the NANOGrav 15 yr data. Corresponding to each \(T_{\rm ann}\), the scale \(v_{\rm BL}\) is fixed, as indicated by the upper axis label. We find that, in order to be compatible with the 1-\(\sigma\) contour of the NANOGrav 15 yr data, \(0.04\,{\rm GeV}\lesssim v_{\rm BL}\lesssim 1\) GeV, while \(10^{2}\,{\rm TeV}\lesssim u\lesssim 10^{3}\) TeV. However, such a low scale \(v_{\rm BL}\) will lead to light \(Z^{\prime}\) gauge bosons, already ruled out by the Large Hadron Collider (LHC) data [105, 106]. **Conclusion:** We have investigated the consequence of the recent PTA results on the scale of Dirac leptogenesis and the left-right symmetric model. In the minimal version of both these scenarios, domain walls arise due to the spontaneous breaking of a discrete \(Z_{2}\) symmetry. While the bound on the scale of Dirac leptogenesis from the PTA data depends upon the size of the Dirac Yukawa couplings, for a conservative choice of such couplings with fine-tuning at the level of the SM, we find a lower bound \(M_{N}>10^{7}\) GeV, keeping leptogenesis at intermediate scales. However, for order one Yukawa couplings, this bound is much stronger, \(M_{N}>10^{17}\) GeV, keeping only the non-thermal Dirac leptogenesis option viable [cf. Fig. 3]. Due to the constrained structure of the minimal LRSM, we get a much tighter constraint on the scale of left-right breaking, namely \(M_{\rm LR}\sim 10^{6}\) GeV [cf. Fig. 4], in order to satisfy the NANOGrav 15 yr data, keeping the model out of reach of direct search experiments like the LHC. For the non-minimal LRSM, we can fit the NANOGrav 15 yr data for \(M_{\rm LR}\simeq\{120-1050\}\) TeV [cf. Fig. 5], but with a very low \(B-L\) breaking scale ruled out by the LHC data. Future data from PTA or other GW experiments are expected to shed more light on the parameter space of this model, by constraining the spectrum at higher frequencies. ###### Acknowledgements. The work of D.B. is supported by the Science and Engineering Research Board (SERB), Government of India grant MTR/2022/000575.
2302.13792
The Asymptotic Structure of the Centred Hyperbolic 2-Monopole Moduli Space
We construct an asymptotic metric on the moduli space of two centred hyperbolic monopoles by working in the point particle approximation, that is treating well-separated monopoles as point particles with an electric, magnetic and scalar charge and re-interpreting the dynamics of the 2-particle system as geodesic motion with respect to some metric. The corresponding analysis in the Euclidean case famously yields the negative mass Taub-NUT metric, which asymptotically approximates the $L^2$ metric on the moduli space of two Euclidean monopoles, the Atiyah-Hitchin metric. An important difference with the Euclidean case is that, due to the absence of Galilean symmetry, in the hyperbolic case it is not possible to factor out the centre of mass motion. Nevertheless we show that we can consistently restrict to a 3-dimensional configuration space by considering antipodal configurations. In complete parallel with the Euclidean case, the metric that we obtain is then the hyperbolic analogue of negative mass Taub-NUT. We also show how the metric obtained is related to the asymptotic form of a hyperbolic analogue of the Atiyah-Hitchin metric constructed by Hitchin.
Guido Franchetti, Calum Ross
2023-02-27T14:12:42Z
http://arxiv.org/abs/2302.13792v2
# The asymptotic structure of the centred hyperbolic 2-monopole moduli space ###### Abstract. We construct the asymptotic metric on the moduli space of two centred hyperbolic monopoles by working in the point particle approximation, that is treating well-separated monopoles as point particles with an electric, magnetic and scalar charge and re-interpreting the dynamics of the 2-particle system as geodesic motion with respect to some metric. The corresponding analysis in the Euclidean case famously yields the negative mass Taub-NUT metric, which asymptotically approximates the \(L^{2}\) metric on the moduli space of two Euclidean monopoles, the Atiyah-Hitchin metric. An important difference with the Euclidean case is that, due to the absence of Galilean symmetry, in the hyperbolic case it is not possible to factor out the centre of mass motion. Nevertheless we show that we can consistently restrict to a 3-dimensional configuration space by considering antipodal configurations. In complete parallel with the Euclidean case, the metric that we obtain is then the hyperbolic analogue of negative mass Taub-NUT. We also show how the metric obtained is related to the asymptotic form of a hyperbolic analogue of the Atiyah-Hitchin metric constructed by Hitchin. ###### Contents * 1 Introduction * 2 Point particle dynamics in \(H^{3}\) * 2.1 Some facts about \(H^{3}\) * 2.2 The point particle approximation * 2.3 The asymptotic moduli space metric * 3 Further remarks and conclusions ## 1. Introduction Magnetic monopoles [27] are an interesting class of topological solitons defined on a Riemannian 3-manifold \(M\). The monopole data consists of a pair \((A,\Phi)\), where \(A\) is a connection on a principal \(SU(2)\)-bundle over \(M\) and \(\Phi\) is a section of the associated adjoint bundle. The pair \((A,\Phi)\) satisfies a system of first order PDEs known as the Bogomolny equations supplemented by suitable boundary conditions. In order for the Bogomolny equations to admit non-singular solutions \(M\) must be non-compact; the cases of Euclidean 3-space \(E^{3}\) and hyperbolic 3-space \(H^{3}\) have received the most attention. Hyperbolic and Euclidean monopoles share many similarities. For example, in both cases the space of solutions of the Bogomolny equations is a smooth manifold of dimension \(4|k|\), where \(k\) is a topological integer (the negative of the first Chern number of the bundle) which counts the total magnetic charge of the monopole solution. At least for well-separated configurations, \(|k|\) can be interpreted as the number of monopoles described by the solution. There are however a number of important differences between the two cases as we now discuss. First, for the class of boundary conditions usually considered, the Higgs field norm \(\|\Phi\|\) of both Euclidean and hyperbolic monopoles has a finite non-zero limit, known as the monopole mass \(p\), as we move to infinity, which is independent of the direction. More precisely, both \(E^{3}\) and \(H^{3}\) admit a cohomogeneity one action of \(SO(3)\) with \(S^{2}\) as the typical orbit. Let \(r\) be a coordinate transverse to the \(SO(3)\) orbits such that the sphere volume increases with \(r\). Then \(\|\Phi\|\to p\) as \(r\to\infty\). The Taub-NUT metric depends on one parameter \(M\) called mass.
It is complete for non-negative values of \(M\) but becomes singular in the interior if \(M<0\). The metric found in [25, 26] is the negative mass version of Taub-NUT, which is indeed the asymptotic form of the \(L^{2}\) metric on \(\tilde{M}_{2}/\mathbb{Z}_{2}\), which is the complete Atiyah-Hitchin metric. Here we carry out the analysis for two monopoles in \(H^{3}\). The hyperbolic case is complicated by the fact that \(H^{3}\times\mathbb{R}\) has no analogue of the Galilean transformations. Therefore, it is in general not possible to factor out the centre of mass motion. In fact, in general it is not even clear what should be the centre of mass: even for two particles there are competing definitions which are inequivalent if the particles have different masses [14, 15, 16], and no point satisfies the property of being either fixed or moving along a geodesic for general configurations with pairwise attractive interactions [9, 16]. A general analysis would thus have to consider the full 6-dimensional configuration space of two particles on \(H^{3}\). However, it is possible to simplify the problem if we restrict our attention to specific configurations. The isometry group of \(H^{3}\) is the (orthochronous subgroup of the) Lorentz group, and acts by symmetries on the particle Lagrangian. The conserved quantities associated to boosts and rotations can be naturally identified with the total linear and angular momenta of the particle system. By the conservation of linear momentum, if the initial conditions are taken so that the two particles are at antipodal positions and have opposite velocities, then the particles will remain antipodal throughout their motion. For such configurations we thus reduce to a 3-dimensional configuration space. Applying the analysis of [25, 26] to antipodal configurations we obtain a Riemannian metric, which we like to call hyperbolic Taub-NUT [11] due to its manifest similarities with the Taub-NUT metric, first constructed in [23]. The hyperbolic Taub-NUT metric, just like the Taub-NUT one, depends on one effective parameter \(M\) called mass, is complete for \(M\geq 0\) and becomes singular in the interior if \(M<0\). In complete analogy with the results of [26], the metric that we obtain is hyperbolic Taub-NUT with negative mass. It is interesting to note that the metric we obtain corresponds, for \(k=2\), to the one found in [19] by considering the motion in \(H^{3}\) of a monopole in the background of \(k-1\) fixed ones. As already noted in [19], while fixing the positions of all but one monopole bypasses the need to deal with a higher dimensional configuration space, it is unphysical from the perspective of \(SU(2)\) monopoles dynamics since for well separated configurations the mass of each monopole is determined by the other charges and not a free parameter. Therefore, the analysis in [19] does not allow one to interpret hyperbolic Taub-NUT as a geodesic submanifold of the full moduli-space. Our results instead show that negative mass hyperbolic Taub-NUT does indeed capture the asymptotics of some metric on the moduli space of two centred hyperbolic monopoles. It is then natural to ask what is the metric which negative mass hyperbolic Taub-NUT is approximating, i.e. what is the hyperbolic analogue of the Atiyah-Hitchin manifold. As we discuss in Section 3, a metric in the conformal class of the Einstein metric constructed in [21] asymptotically reduces to hyperbolic Taub-NUT with negative mass, again in complete parallelism with the Euclidean case. 
The plan of the paper is as follows: In Section 2 we discuss our conventions and some useful properties of \(H^{3}\), summarise the basics of the point particle approximation, and finally derive the metric on the asymptotic moduli space of two centred monopoles in \(H^{3}\). In Section 3 we relate this metric to the hyperbolic analogue of the Atiyah-Hitchin metric constructed by Hitchin in [21] and discuss some open questions left for further study. ## 2. Point particle dynamics in \(H^{3}\) ### Some facts about \(H^{3}\) Perhaps the most straightforward model of hyperbolic space \(H^{3}\) is the "pseudosphere" \(L\) in Minkowski space \(E^{1,3}\), that is the (upper) hyperboloid \[L=\{(X,Y,Z,W)\in E^{1,3}:X^{2}+Y^{2}+Z^{2}-W^{2}=-R^{2},\,W>0\} \tag{2.1}\] with the Riemannian metric induced as a submanifold of Minkowski space \(E^{1,3}\). The parameter \(R\) is related to the curvature \(\kappa\) of \(H^{3}\) via \(\kappa=-R^{-2}\). Since the constraints defining \(L\) are invariant under the subgroup \(O^{+}(1,3)\) of the Lorentz group consisting of orthochronous Lorentz transformations, it is clear that \(L\) has isometry group \(O^{+}(1,3)\). In these coordinates the Killing vector fields \((X_{i},Y_{i})\) generating rotations and boosts have very simple expressions, \[X_{1}=Y\partial_{Z}-Z\partial_{Y},\ \ \ \ X_{2}=Z\partial_{X}-X \partial_{Z},\ \ \ X_{3}=X\partial_{Y}-Y\partial_{X}, \tag{2.2}\] \[Y_{1}=X\partial_{W}+W\partial_{X},\ \ \ Y_{2}=Y\partial_{W}+W \partial_{Y},\ \ \ Y_{3}=Z\partial_{W}+W\partial_{Z}. \tag{2.3}\] The vector fields (2.2), (2.3) satisfy the \(\mathfrak{so}(1,3)\) Lie algebra relations, \[[X_{i},X_{j}]=-\epsilon_{ijk}X_{k},\ \ \ [X_{i},Y_{j}]=-\epsilon_{ijk}Y_{k},\ \ \ [Y_{i},Y_{j}]=+ \epsilon_{ijk}X_{k}. \tag{2.4}\] Geodesics in this model are given by the intersection of \(H^{3}\) with 2-planes through the origin. The hyperbolic distance between two points \(\mathbf{X}_{1},\mathbf{X}_{2}\in L\), having coordinates \((W_{i},X_{i},Y_{i},Z_{i})\), is given by \[D_{L}(\mathbf{X}_{1},\mathbf{X}_{2})=R\ \text{arcCosh}\left(-\frac{g_{E^{1,3}} \left(\mathbf{X}_{1},\mathbf{X}_{2}\right)}{R^{2}}\right), \tag{2.5}\] where \(g_{E^{1,3}}\) is the inner product on Minkowski space \(E^{1,3}\). The Klein-Beltrami model \(K\) is obtained by gnomonic projection of \(L\): a point \(p\) on the hyperboloid is mapped to the intersection point between the straight line (in the Euclidean sense) from \(p\) to \((0,0,0,0)\in E^{1,3}\) and the hyperplane \(W=R\) tangent to the hyperboloid at \((R,0,0,0)\). Denoting by \((x,y,z)\) coordinates on \(K\) we thus have the relation \[(x,y,z)=\frac{R}{W}(X,Y,Z), \tag{2.6}\] and we see that \[K=\{(x,y,z)\in E^{3}:x^{2}+y^{2}+z^{2}<R^{2}\}, \tag{2.7}\] the open ball of radius \(R\). For reference, (2.6) has inverse \[(X,Y,Z,W)=\frac{R}{\sqrt{R^{2}-|\mathbf{x}|^{2}}}\left(x,y,z,R\right), \tag{2.8}\] where \(|\mathbf{x}|^{2}=x^{2}+y^{2}+z^{2}\). We will often denote by \(\mathbf{x}\) a point in \(H^{3}\) having coordinates \((x,y,z)\) in the Klein model \(K\). In \(K\) the hyperbolic distance between two points \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\) is \[D_{K}(\mathbf{x}_{1},\mathbf{x}_{2})=R\ \text{arcCosh}\left(\frac{R^{2}-g_{E_{3}}( \mathbf{x}_{1},\mathbf{x}_{2})}{\sqrt{R^{2}-|\mathbf{x}_{1}|^{2}}\sqrt{R^{2}-| \mathbf{x}_{2}|^{2}}}\right), \tag{2.9}\] where \(g_{E^{3}}\) is the Euclidean metric on \(E^{3}\).
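The coordinate maps (2.6), (2.8) and the distance formulas (2.5), (2.9) are straightforward to check numerically. The sketch below verifies that the gnomonic projection preserves hyperbolic distance; the curvature radius and the two test points are arbitrary.

```python
import numpy as np

R = 2.0                                    # curvature radius, arbitrary test value

def klein_to_hyperboloid(x):
    """Inverse gnomonic projection, Eq. (2.8): Klein ball -> upper hyperboloid."""
    x = np.asarray(x, dtype=float)
    s = R / np.sqrt(R**2 - x @ x)
    return np.append(s * x, s * R)         # returns (X, Y, Z, W)

def dist_L(X1, X2):
    """Hyperbolic distance on the hyperboloid, Eq. (2.5)."""
    minkowski = X1[:3] @ X2[:3] - X1[3] * X2[3]
    return R * np.arccosh(-minkowski / R**2)

def dist_K(x1, x2):
    """Hyperbolic distance in the Klein-Beltrami model, Eq. (2.9)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    num = R**2 - x1 @ x2
    den = np.sqrt(R**2 - x1 @ x1) * np.sqrt(R**2 - x2 @ x2)
    return R * np.arccosh(num / den)

x1 = np.array([0.3, -0.5, 0.2])            # two points in the open ball |x| < R
x2 = np.array([-0.9, 0.1, 0.4])
print(dist_K(x1, x2), dist_L(klein_to_hyperboloid(x1), klein_to_hyperboloid(x2)))
# the two numbers agree, confirming that the gnomonic projection is an isometry
```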
The metric on \(K\) is obtained by pulling back that on \(L\) via (2.8), getting \[g_{K}=R^{2}\left(\frac{(R^{2}-|\mathbf{x}|^{2})\mathrm{d}\mathbf{x}\cdot \mathrm{d}\mathbf{x}+(\mathbf{x}\cdot\mathrm{d}\mathbf{x})^{2}}{(R^{2}-| \mathbf{x}|^{2})^{2}}\right). \tag{2.10}\] Due to the off-diagonal terms in \(g_{K}\), the Klein-Beltrami model may seem unappealing when compared to other models such as the half-space model or the Poincare one. However it shines in at least two respects. First, all the geodesics in \(K\) are straight line segments. Second, the Killing vector fields take a convenient form, \[X_{1}=y\partial_{z}-z\partial_{y},\ \ \ \ X_{2}=z\partial_{x}-x \partial_{z},\ \ \ X_{3}=x\partial_{y}-y\partial_{x}, \tag{2.11}\] \[Y_{1}=R^{2}\partial_{x}-xV,\ \ \ Y_{2}=R^{2}\partial_{y}-yV,\ \ \ \ Y_{3}=R^{2} \partial_{z}-zV, \tag{2.12}\] where \[V=x\partial_{x}+y\partial_{y}+z\partial_{z}, \tag{2.13}\] making the interpretation of conserved quantities transparent, cfr. equations (2.33), (2.34) below. A nice review of the properties of the most common models of hyperbolic space is contained in [8]. It can be useful to introduce other coordinate systems on \(K\). Defining polar coordinates \((\rho,\theta,\phi)\) as in \(E^{3}\), with \(0\leq\rho<R\), \(\theta\in[0,\pi]\), \(\phi\in[0,2\pi)\), \[x=\rho\sin\theta\cos\phi,\ \ \ y=\rho\sin\theta\sin\phi,\ \ \ z=\rho\cos\theta, \tag{2.14}\] (2.10) becomes \[g_{K}=R^{2}\left(\frac{R^{2}\mathrm{d}\rho^{2}+(R^{2}-\rho^{2})\rho^{2}\mathrm{d} \Omega^{2}}{(R^{2}-\rho^{2})^{2}}\right) \tag{2.15}\] for \(\mathrm{d}\Omega^{2}=\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}\) the round metric on \(S^{2}\). If we now redefine the radial variable by \[\sinh\left(\frac{r}{R}\right)=\frac{\rho}{\sqrt{R^{2}-\rho^{2}}}\quad \Leftrightarrow\quad\rho=R\tanh\left(\frac{r}{R}\right) \tag{2.16}\] we get \[g_{K}=\mathrm{d}r^{2}+R^{2}\sinh^{2}\left(\frac{r}{R}\right)\,\mathrm{d} \Omega^{2}, \tag{2.17}\] showing that \(r\in[0,\infty)\) is a geodesic coordinate. Spatial inversion \(A\) belongs to the isometry group of \(H^{3}\), so given any point \(p\in H^{3}\) we define its _antipodal_ point to be \(A(p)\). In the Klein model (2.7) we simply have \(A(x,y,z)=-(x,y,z)\). We will need to make use of parallel transport with respect to the Levi-Civita connection \(\nabla\) of \(H^{3}\) in order to identify tangent spaces at different points \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\). Parallel transport along a curve \(\gamma:[0,1]\to H^{3}\) from \(\mathbf{x}_{1}\) to \(\mathbf{x}_{2}\) is an isometry \(P_{21}^{\gamma}:T_{\mathbf{x}_{1}}H^{3}\to T_{\mathbf{x}_{2}}H^{3}\) obtained as follows. Let \(v_{1}\in T_{\mathbf{x}_{1}}H^{3}\), then the parallel transport along \(\gamma\) of \(v_{1}\) is the vector \(P_{21}^{\gamma}v_{1}\in T_{\mathbf{x}_{2}}H^{3}\) obtained evaluating at \(t=1\) the vector field along \(\gamma\) which solves the parallel transport ODE with initial condition \(V(0)=v_{1}\). With respect to a coordinate frame \(\{\partial_{i}\}\) the ODE reads \[\frac{\mathrm{d}}{\mathrm{d}t}V^{i}+\Gamma_{jk}^{i}U^{j}V^{k}=0, \tag{2.18}\] where \(\Gamma_{jk}^{i}\) are the Christoffel symbols associated to \(\nabla\) and \(U^{j}\) the components of the vector field tangent to \(\gamma\). The inverse of \(P_{21}^{\gamma}\) is \(P_{12}^{-\gamma}\) where \((-\gamma)(t)=\gamma(1-t)\) is the same curve with the opposite orientation. As it is well known, parallel transport in a curved space depends on the choice of \(\gamma\). 
An important property of hyperbolic space is that given any two points \(\mathbf{x}_{1},\mathbf{x}_{2}\in H^{3}\) there is a unique length-minimising geodesic connecting them. From now on whenever we need to compare vectors at different points we will parallel transport one of them along this geodesic and suppress \(\gamma\) from the notation. With respect to the coordinates \((x,y,z)\) on \(K\), the non-zero Christoffel symbols read, having set \(x^{1}=x\), \(x^{2}=y\), \(x^{3}=z\), \[\Gamma^{i}_{\,\,ij}=\Gamma^{i}_{\,\,ji}=\begin{cases}\frac{x^{j}}{R^{2}-| \mathbf{x}|^{2}}&\text{if }j\neq i,\\ \frac{2x^{i}}{R^{2}-|\mathbf{x}|^{2}}&\text{if }j=i.\end{cases} \tag{2.19}\] By solving (2.18) one finds that the vector at \(T_{\mathbf{x}_{2}}K\) obtained by parallel transport of \(v\) along \(\gamma\) has components \((\tilde{v}_{x},\tilde{v}_{y},\tilde{v}_{z})\) with respect to \((\partial_{x},\partial_{y},\partial_{z})|_{\mathbf{x}_{2}}\) given by \[\tilde{v}_{x}=\sqrt{\frac{R^{2}-|\mathbf{x}_{2}|^{2}}{R^{2}-|\mathbf{x}_{1}|^{ 2}}}\left[v_{x}+\frac{(x_{2}-x_{1})(\mathbf{x}_{1}\cdot\mathbf{v})}{\mathbf{x }_{1}\cdot(\mathbf{x}_{2}-\mathbf{x}_{1})}\left(\sqrt{\frac{R^{2}-|\mathbf{x} _{2}|^{2}}{R^{2}-|\mathbf{x}_{1}|^{2}}}-1\right)+\frac{(\mathbf{x}_{2}\times \mathbf{x}_{1})\cdot\left((\mathbf{x}_{2}-\mathbf{x}_{1})\times\mathbf{v} \right)}{\mathbf{x}_{1}\cdot(\mathbf{x}_{2}-\mathbf{x}_{1})}\right], \tag{2.20}\] \[\tilde{v}_{y}=\sqrt{\frac{R^{2}-|\mathbf{x}_{2}|^{2}}{R^{2}-|\mathbf{x}_{1}|^{ 2}}}\left[v_{y}+\frac{(y_{2}-y_{1})(\mathbf{x}_{1}\cdot\mathbf{v})}{\mathbf{x }_{1}\cdot(\mathbf{x}_{2}-\mathbf{x}_{1})}\left(\sqrt{\frac{R^{2}-|\mathbf{x} _{2}|^{2}}{R^{2}-|\mathbf{x}_{1}|^{2}}}-1\right)\right], \tag{2.21}\] \[\tilde{v}_{z}=\sqrt{\frac{R^{2}-|\mathbf{x}_{2}|^{2}}{R^{2}-|\mathbf{x}_{1}|^{ 2}}}\left[v_{z}+\frac{(z_{2}-z_{1})(\mathbf{x}_{1}\cdot\mathbf{v})}{\mathbf{x }_{1}\cdot(\mathbf{x}_{2}-\mathbf{x}_{1})}\left(\sqrt{\frac{R^{2}-|\mathbf{x} _{2}|^{2}}{R^{2}-|\mathbf{x}_{1}|^{2}}}-1\right)\right], \tag{2.22}\] where \(\cdot\), \(\times\) are the dot and cross product of Euclidean 3-space. Note that if \(\mathbf{x}_{2}=-\mathbf{x}_{1}\) then parallel transport reduces to the identity so that \(\tilde{v}_{i}=v_{i}\). Thus, we can compare vectors tangent to antipodal points of \(K\) by simply comparing their coordinates just as if we were in flat space. Moreover it can be checked that, denoting by \((P_{21})^{a}_{\,\,\,b}\) the components of the parallel transport operator with respect to the coordinate frame, so that \(\tilde{v}^{a}=(P_{21})^{a}_{\,\,\,b}v^{b}\), \[\frac{\partial(P_{21})^{a}_{\,\,\,b}}{\partial x_{1}^{i}}\bigg{|}_{\mathbf{x}_{ 2}=-\mathbf{x}_{1}}=\left.\frac{\partial(P_{21})^{a}_{\,\,\,b}}{\partial x_{2}^ {i}}\right|_{\mathbf{x}_{2}=-\mathbf{x}_{1}}\quad i=1,2,3. \tag{2.23}\] ### The point particle approximation A hyperbolic monopole \((\Phi,A)\) on \(H^{3}\) is a solution of the Bogomolny equations \[\mathrm{d}_{A}\Phi=-\star F, \tag{2.24}\] where \(\star\) is the Hodge operator with respect to the \(H^{3}\) metric. The Bogomolny equations are supplemented by the Prasad-Sommerfeld boundary conditions: \[p =\lim_{r\to\infty}\|\Phi\|, \tag{2.25}\] \[k =\lim_{r\to\infty}\frac{1}{4\pi}\int_{S^{2}_{r}}\mathrm{Tr}(\Phi F )\in\mathbb{Z}. \tag{2.26}\] Here \(S^{2}_{r}\) is a 2-sphere of geodesic radius \(r\) centred at some fixed point of \(H^{3}\), which may conveniently be taken as the origin \(r=0\) of the coordinates used in (2.17). 
The value of \(p\) is known as the monopole mass, and the integer \(k\) is the monopole (magnetic) charge. The framed moduli space \(\mathcal{M}_{k}\) of magnetic monopoles of charge \(k\) is the space of solutions of (2.24) satisfying (2.25), (2.26) quotiented by the group of framed bundle automorphisms. At least for \(2p\in\mathbb{Z}\), the moduli space \(\mathcal{M}_{k}\) is known to be a smooth manifold of dimension \(4k\)[1]. As discussed in Section 1, the \(L^{2}\) metric on \(\mathcal{M}_{k}\) is not well-defined. We now proceed to investigate the dynamics of two well-separated monopoles with the aim of understanding whether this dynamics can be interpreted as geodesic motion with respect to some metric on \(\mathcal{M}_{2}\). Two well-separated monopoles can be approximated by two point dyons having electric, magnetic and scalar charges. This is a familiar approximation in the case of Euclidean monopoles [18, 26] and has been applied to the study of hyperbolic monopoles in the case where one monopole is moving in the background of several fixed ones [19]. Here we consider two well-separated monopoles that are both free to move and view them as point particles of equal mass \(m\), with electric and magnetic charges \(q_{i},g_{i}\), \(i=1,2\), located at the points \(\mathbf{x}_{1},\mathbf{x}_{2}\in H^{3}\). As in the Euclidean case, the scalar charge of the \(i\)-th monopole is \(\sqrt{q_{i}^{2}+g_{i}^{2}}\). We will assume that the dyons have the same magnetic charge \(g_{1}=g_{2}=g\) and denote by \(q\) the difference between the electric charges, \(q=q_{2}-q_{1}\). The 2-particle dynamics can be described in terms of the Lagrangian formalism. The Euclidean case is discussed in [26], which we refer to for the details. The scalar charges modify the rest masses of the particles and the electric charge (respectively magnetic charge) of each particle couples to the Lienard-Wiechert 4-potential \(A^{\mu}\) (respectively dual 4-potential \(\tilde{A}^{\mu}\)) produced by the other one. The dual potential \(\tilde{A}^{\mu}\) is obtained from \(A^{\mu}\) via the electromagnetic duality transformation \(q_{i}\to g_{i}\), \(g_{i}\to-q_{i}\). Keeping terms up to quadratic order in the particle velocities \(v_{i}\) and the charge difference \(q\), in the Euclidean case the resulting Lagrangian is \[L_{\mathrm{E}}=-2m+\frac{m}{2}\left(|v_{1}|^{2}+|v_{2}|^{2}\right)+\frac{\mathcal{V}_{\mathrm{E}}}{8\pi}\left(q^{2}-g^{2}|v_{2}-v_{1}|^{2}\right)+\frac{gq}{4\pi}\omega_{\mathrm{E}}(v_{2}-v_{1}), \tag{2.27}\] where \(\mathcal{V}_{\mathrm{E}}=|\mathbf{x}_{2}-\mathbf{x}_{1}|^{-1}\) and \[\omega_{\mathrm{E}}=\left(\frac{z_{2}-z_{1}}{|\mathbf{x}_{2}-\mathbf{x}_{1}|^{2}}\right)\left(\frac{\left(y_{2}-y_{1}\right)\mathrm{d}x-\left(x_{2}-x_{1}\right)\mathrm{d}y}{\left(x_{2}-x_{1}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}}\right). \tag{2.28}\] If we regard \(\mathcal{V}_{\mathrm{E}}\) as a function of \(\mathbf{x}_{2}=\mathbf{x}\) only, then \(\mathrm{d}\omega_{\mathrm{E}}=-\star_{\mathrm{E}}\mathrm{d}\mathcal{V}_{\mathrm{E}}\), where \(\star_{\mathrm{E}}\) is the Hodge star with respect to the \(E^{3}\) metric. 
Proceeding in a similar way, we find that the Lagrangian \(L_{\mathrm{2P}}\) for a 2-particle system in \(H^{3}\) is given in (2.29). Some of the differences between (2.29) and (2.27) simply amount to the replacement of the Euclidean metric with the hyperbolic one: the Euclidean norm \(|\cdot|\) is replaced by the hyperbolic one \(\|\cdot\|\) and the Green's function \(\mathcal{V}_{\mathrm{E}}\) of the Euclidean Laplacian is replaced by the hyperbolic one \(\mathcal{V}\). With respect to the coordinates (2.6) of \(K\), \(\mathcal{V}\) is given by \[\mathcal{V}=\coth\left(\frac{D_{K}(\mathbf{x}_{1},\mathbf{x}_{2})}{R}\right)-1, \tag{2.30}\] where \(D_{K}\) is hyperbolic distance in the Klein model, see (2.9), and the one-form \(\omega\) by \[\omega=R\left(\frac{z_{2}-z_{1}}{|\mathbf{x}_{2}-\mathbf{x}_{1}|^{2}}\right)\left(\frac{\left(y_{2}-y_{1}\right)\mathrm{d}x-\left(x_{2}-x_{1}\right)\mathrm{d}y}{\left(x_{2}-x_{1}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}}\right). \tag{2.31}\] If we consider \(\mathcal{V}\) as a function of \(\mathbf{x}_{2}=\mathbf{x}\) only, we again have \(\mathrm{d}\omega=-\star\mathrm{d}\mathcal{V}\), where \(\star\) is now calculated with respect to the hyperbolic metric. The appearance of the parallel propagator \(P_{21}\) is due to the non-zero curvature of \(H^{3}\). As previously discussed, it denotes parallel transport along the unique length-minimising geodesic from particle \(1\) to particle \(2\) and its expression with respect to the coordinates (2.6) is given by (2.20). Since parallel transport is an isometry, \(\|v_{2}-P_{21}v_{1}\|^{2}\) is already invariant under the interchange of particles \(1\) and \(2\). However \(\left(gq/4\pi\right)\omega(v_{2}-P_{21}v_{1})\) is not invariant and needs to be symmetrised under \(1\leftrightarrow 2\) as we have done in (2.29) -- recall that \(q=q_{2}-q_{1}\) so \(q\to-q\) under \(1\leftrightarrow 2\). In the Euclidean case symmetrisation is not needed since parallel transport is trivial. We now turn to the special case of antipodal configurations, \(\mathbf{x}_{2}=-\mathbf{x}_{1}\). Antipodal configurations of two point dyons correspond to centred \(SU(2)\) monopoles. In fact, following [28] we take a hyperbolic monopole to be centred if it lies in the zero set of the moment map of the \(SO(3)\subset SO_{0}(1,3)\) action. More intuitively, if we embed the ball model of \(H^{3}\) in \(\mathbb{R}^{4}\) then a configuration is centred in the hyperbolic sense if it is centred in the "Euclidean" sense. For two monopoles the latter condition is equivalent to the two monopoles having antipodal centres. Restricting to antipodal configurations is justified since, as we will now show, dyons starting off at antipodal positions with opposite velocities remain antipodal. In other words antipodal configurations are preserved by time evolution. Let \(U\) be a vector field generating a symmetry of the Lagrangian \(L\), and \(\delta_{U}x_{a}^{i}\) be the infinitesimal change in the Klein-Beltrami coordinates (2.6) \(x_{a}^{i}\) of particle \(a\) along \(U\). By Noether's theorem the conserved quantity associated to \(U\) is \[C_{U}=\sum_{a=1}^{2}\sum_{i=1}^{3}\frac{\partial L}{\partial v_{a}^{i}}\delta_{U}x_{a}^{i}. 
\tag{2.32}\] For an interaction potential independent of the particle velocities, Noether's theorem applied to the symmetries (2.11) gives the conserved quantities \[C_{X_{i}} =\sum_{a=1}^{2}\left(\frac{x_{a}^{j}v_{a}^{k}-x_{a}^{k}v_{a}^{j}}{R^{2}-|\mathbf{x}_{a}|^{2}}\right), \tag{2.33}\] \[C_{Y_{i}} =\sum_{a=1}^{2}\left(\frac{v_{a}^{i}}{R^{2}-|\mathbf{x}_{a}|^{2}}\right), \tag{2.34}\] where \(v_{a}^{i}\) is the \(i\)-th component of the velocity of particle \(a\) with respect to the coordinate frame \(\partial_{i}\) and \((ijk)\) is a cyclic permutation of (123). We can clearly recognise (2.33), (2.34) as the angular and linear momentum, respectively, along the direction \(\partial_{i}\). Because of the velocity-dependent interactions, the right-hand sides of (2.33), (2.34) acquire additional terms. However if we differentiate (2.34) with the additional terms included, evaluate at antipodal positions \(\mathbf{x}_{2}=-\mathbf{x}_{1}\) and make use of (2.23), we still find that the two particles experience opposite accelerations. Thus two particles starting at antipodal positions with opposite velocities will maintain antipodal positions throughout. This choice of initial conditions corresponds to taking the constants \(C_{Y_{i}}\) to be zero, i.e. to zero total linear momentum. ### The asymptotic moduli space metric On the basis of the results of Section 2.2, we would like to restrict \(L_{\mathrm{2P}}\) to antipodal configurations. While spatial inversion \(A\) is an isometry of \(H^{3}\) and a symmetry of \(L_{\mathrm{2P}}\), the two particles have different electric charges so \(A\) is not a symmetry of an antipodal configuration and we cannot invoke the principle of symmetric criticality. However, if \(L\) is a \(2\)-particle Lagrangian, and \(L_{\mathrm{a}}\) is the Lagrangian obtained by setting \(x_{2}=F(x_{1})\) in \(L\), it is easy to show that the Euler-Lagrange equations associated to \(L_{\mathrm{a}}\) are equivalent to those associated to \(L\) and restricted to configurations satisfying \(x_{2}=F(x_{1})\) if and only if \(F\) is an affine transformation, i.e. \(\partial^{2}F/(\partial x_{1}^{i})^{2}=0\) for all values of \(i\). In the present case \(F=A=-\operatorname{Id}_{3}\). For ease of notation we give the argument for \(i=1\); the general case is similar. Setting \(L_{\mathrm{a}}=L(x_{1},x_{2}=F(x_{1}),\dot{x}_{1},F^{\prime}\dot{x}_{1})\), the Euler-Lagrange equations associated to \(L_{\mathrm{a}}\) are \[\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L_{\mathrm{a}}}{\partial\dot{x}_{1}}-\frac{\partial L_{\mathrm{a}}}{\partial x_{1}}=\left[\left(\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial\dot{x}_{1}}-\frac{\partial L}{\partial x_{1}}\right)+\left(\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial\dot{x}_{2}}-\frac{\partial L}{\partial x_{2}}\right)F^{\prime}+\frac{\partial L}{\partial\dot{x}_{2}}F^{\prime\prime}\dot{x}_{1}\right]_{x_{2}=F(x_{1})}=0. \tag{2.35}\] If \(F^{\prime\prime}=0\) then \(x_{2}=F(x_{1})=C_{1}x_{1}+C_{2}\), \(\partial/\partial x_{2}=\frac{1}{C_{1}}\partial/\partial x_{1}\) and (2.35) becomes \[\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L_{\mathrm{a}}}{\partial\dot{x}_{1}}-\frac{\partial L_{\mathrm{a}}}{\partial x_{1}}=2\left[\left(\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial\dot{x}_{1}}-\frac{\partial L}{\partial x_{1}}\right)\right]_{x_{2}=F(x_{1})}=0, \tag{2.36}\] showing that the equations associated to \(L\) and restricted to \(x_{2}=F(x_{1})\) are equivalent to those associated to \(L_{\mathrm{a}}\). 
Let us thus consider the Lagrangian \(L_{\mathrm{2P}}\) restricted in such a way. Setting \(x_{2}^{i}=-x_{1}^{i}=x^{i}\), \(P_{12}=P_{21}=1\), and \(v_{2}^{i}=-v_{1}^{i}=v^{i}\) in (2.29) we obtain \[L_{\mathrm{a}}=\left(m-\frac{g^{2}\mathcal{V}_{\mathrm{a}}}{2\pi}\right)\|v\| ^{2}+\frac{\mathcal{V}_{\mathrm{a}}}{8\pi}q^{2}+\frac{gq}{2\pi}\omega_{ \mathrm{a}}\left(v\right), \tag{2.37}\] where \(\mathcal{V}_{\mathrm{a}}\) and \(\omega_{\mathrm{a}}\) are the scalar potential and 1-form (2.30), (2.31) with \(\mathbf{x}=\mathbf{x}_{2}=-\mathbf{x}_{1}\). It is now convenient to switch to the geodesic polar coordinates of (2.17) with \(r\) the geodesic distance between \(\mathbf{x}\) and \(-\mathbf{x}\). Then \[\mathcal{V}_{\mathrm{a}}=\coth\left(\tfrac{r}{R}\right)-1,\quad\omega_{\mathrm{ a}}=R\cos\theta\,\mathrm{d}\phi, \tag{2.38}\] satisfying \[\mathrm{d}\omega_{\mathrm{a}}=-\star\,\mathrm{d}\mathcal{V}_{\mathrm{a}}. \tag{2.39}\] The Lagrangian (2.37) is essentially the \(k=2\) case of the Lagrangian obtained in [19] by considering the motion of one monopole in the background of \(k-1\) other fixed ones. However whilst the derivation in [19] was based on an assumption which is incompatible with \(SU(2)\) monopole dynamics, our analysis shows that \(L_{\mathrm{a}}\) is also the Lagrangian associated to the dynamics of two centred well-separated \(SU(2)\) monopoles. The analysis to show that the dynamics associated to (2.37) can be reinterpreted as geodesic motion now parallels that of [19] and results in the hyperbolic Taub-NUT (hTN) metric, but for completeness we give the details here. First we add in the constant term \(-\frac{m}{4g^{2}}q^{2}\) so that (2.37) becomes \[L_{\mathrm{a}}=m\left(1-\frac{g^{2}\mathcal{V}_{\mathrm{a}}}{2\pi m}\right)\|v \|^{2}-\frac{mq^{2}}{4g^{2}}\left(1-\frac{g^{2}\mathcal{V}_{\mathrm{a}}}{2\pi m }\right)+\frac{gq}{2\pi}\omega_{\mathrm{a}}\left(v\right). \tag{2.40}\] Next we interpret the electric charge as the rate of change of a phase, \(q=\dot{\chi}\), and rewrite (2.40) in the form \[L_{\mathrm{red}}=m\left[U(r)\|v\|^{2}+W(r)R^{2}\left(\dot{\chi}+\omega_{ \mathrm{a}}\right)^{2}\right]. \tag{2.41}\] The dynamics associated to (2.41) is geodesic motion with respect to the metric \[\mathrm{d}s^{2}=Ug_{K}+WR^{2}\left(\mathrm{d}\chi+\omega_{\mathrm{a}}\right)^ {2}. \tag{2.42}\] The phase \(\chi\) is a cyclic variable in (2.41) with conserved momentum \[p_{\chi}=2mR^{2}W\left(\dot{\chi}+\omega_{\mathrm{a}}\left(v\right)\right) \coloneqq kq \tag{2.43}\] where \(k\) is a constant to be determined. Eliminating \(\dot{\chi}=\frac{kq}{2mR^{2}W}-\omega_{\mathrm{a}}\left(v\right)\) from \(L_{\mathrm{red}}\) using the Routhian procedure \[\begin{split} L_{\mathrm{red}}^{\prime}&=L_{\mathrm{ red}}-p_{\dot{\chi}}\dot{\chi}=m\left[U\|v\|^{2}+WR^{2}\left(\frac{kq}{2mR^{2}W} \right)^{2}\right]-kq\left(\frac{kq}{2mR^{2}W}-\omega_{\mathrm{a}}\left(v \right)\right)\\ &=mU\|v\|^{2}-\frac{k^{2}q^{2}}{4mR^{2}W}+kq\,\omega_{\mathrm{a}} \left(v\right).\end{split} \tag{2.44}\] The expression (2.44) matches the reduced 2-particle Lagrangian (2.40) if \[U=\left(1-\frac{g^{2}\mathcal{V}_{\mathrm{a}}}{2\pi m}\right),\qquad k=\frac{g}{2 \pi},\qquad\frac{1}{W}=\left(\frac{2\pi mR}{g^{2}}\right)^{2}U, \tag{2.45}\] so we obtain the metric \[\mathrm{d}s^{2}=\left(1-\frac{g^{2}\mathcal{V}_{\mathrm{a}}}{2\pi m}\right)g_{ K}+\left(\frac{g^{2}}{2\pi mR}\right)^{2}\left(1-\frac{g^{2}\mathcal{V}_{ \mathrm{a}}}{2\pi m}\right)^{-1}R^{2}\left(\mathrm{d}\chi+\omega_{\mathrm{a}} \right)^{2}. 
\tag{2.46}\] Condition (2.39) implies \(\mathrm{d}\omega_{\mathrm{a}}=\left(2\pi m/g^{2}\right)\star\mathrm{d}U\). Working in units where \(g^{2}=4\pi mN/R\), setting \[M=-N<0, \tag{2.47}\] and introducing the left-invariant 1-form on \(SU(2)\) \[\eta_{3}=\frac{1}{R}(\mathrm{d}\chi+\omega_{\mathrm{a}})=\mathrm{d}\psi+\cos \theta\,\mathrm{d}\phi, \tag{2.48}\] where \(\psi=\chi/R\) has range \(\psi\in[0,4\pi)\) to avoid conical singularities, we can rewrite (2.46) as \[g_{\mathrm{hTN}}=Ug_{K}+4M^{2}U^{-1}\eta_{3}^{2},\qquad U=1+\frac{2M}{R}\left( \coth\left(\tfrac{r}{R}\right)-1\right)=1+\frac{4M}{R}\left(\mathrm{e}^{\tfrac {2r}{R}}-1\right)^{-1}, \tag{2.49}\] which is the hyperbolic Taub-NUT (hTN) metric with negative mass \(M\). The metric (2.49) with positive \(M\) was first introduced in [23], see also [11, 19, 3] for a discussion of its properties. It is worth pausing to recall some facts about the Taub-NUT metric and its hyperbolic cousin (2.49). Both metrics can be expressed in terms of the so-called Gibbons-Hawking ansatz \[\mathrm{d}s^{2}=Ug_{3}+4M^{2}U^{-1}(\mathrm{d}\psi+\alpha)^{2}, \tag{2.50}\] where \(M\) is a constant, \(g_{3}\) is either the Euclidean metric on \(E^{3}\), for Taub-NUT, or the metric of hyperbolic 3-space \(H^{3}\), and the 1-form \(\alpha\) satisfies the equation \(\mathrm{d}\alpha=-\star\mathrm{d}U\), with \(\star\) the Hodge star with respect to \(g_{3}\). As a consequence, \(U\) is a Green function of the \(g_{3}\) Laplacian. In the case of \(E^{3}\) by taking \[U=1+\frac{2M}{r}, \tag{2.51}\] with \(r\) the usual radial coordinate, one obtains the Taub-NUT (TN) metric. If the mass parameter \(M\) is non-negative TN is a smooth complete metric defined on a manifold diffeomorphic to \(\mathbb{R}^{4}\). In the case of \(H^{3}\) by taking \(U\) to be the hyperbolic Green's function \[U=1+\frac{4M}{R}\left(\mathrm{e}^{\tfrac{2r}{R}}-1\right)^{-1}, \tag{2.52}\] with \(r\) the geodesic coordinate of (2.17), one obtains the hyperbolic Taub-NUT (hTN) metric (2.49). As its Euclidean relative, hTN is a smooth complete metric defined on a space diffeomorphic to \(\mathbb{R}^{4}\) if \(M\geq 0\) and singular otherwise. The geometry of hTN near the NUT is equal to that of TN and as \(R\to\infty\) the hTN metric with mass \(M\) converges to the TN one with the same mass. While in (2.49) we have kept the dependence on both the mass parameter \(M\) and the radius of curvature \(R\) of \(H^{3}\), up to homothety the hTN metric only depends on the ratio \(M/R\) as can be checked by substituting \(r\to Mr\). Clearly there are many similarities between TN and hTN. Besides the fact that they both arise from the Gibbons-Hawking ansatz and are defined on diffeomorphic spaces, they both have bi-axial Bianchi IX form, thus admitting a cohomogeneity one action of \(SU(2)\times U(1)\); they both are circle fibrations over a 3-manifold of constant curvature, \(E^{3}\) for TN and \(H^{3}\) for hTN, except at the NUT \(r=0\), a fixed point of the isometric \(U(1)\) action where the circle fibre collapses to zero size; they both have an asymptotic circle fibration with fibres of finite length, an asymptotic behaviour called ALF in the Euclidean case. Finally, both TN and hTN admit a multi-(h)TN generalisation with \(k\) NUTs obtained by taking \(U\) to be the superposition with equal weights of \(k\) poles. There is also a very important difference: while multi-TN is hyperkahler, hyperbolic multi-TN is half-conformally flat but is not even Einstein. ## 3. 
Further remarks and conclusions It seems clear that the hTN metric bears relevance to the dynamics of hyperbolic monopoles. Besides the results that we have presented here, [30] shows how hyperbolic multi-TN with \(k\) NUTs emerges as the moduli space of one \(SU(2)\) monopole with \(k\) singularities, a result which also follows from the analysis in [19] once we reinterpret the fixed monopoles as Abelian singularities. The double role of hyperbolic (multi-)TN with the appropriate value of the mass parameter as both the asymptotic moduli space metric of two centred \(SU(2)\) monopoles and the moduli space of one \(SU(2)\) monopole with singularities completely parallels what happens in the Euclidean case. Multi-TN is shown to be the moduli space of singular Euclidean monopoles in [22], where the singular monopoles are-interpreted as smooth circle-invariant instantons on multi TN. The construction in the hyperbolic case is completely similar [30] and has been used in [12] to construct singular as well as smooth hyperbolic monopoles. It is only natural to ask if the parallelism between the hyperbolic and Euclidean case extends to the full centred 2-monopole space \(\tilde{\mathcal{M}}_{2}/\mathbb{Z}_{2}\): Is there a complete metric on \(\tilde{\mathcal{M}}_{2}/\mathbb{Z}_{2}\) which asymptotically reduces to negative mass hTN? We are now going to show that, at least for a specific value of the monopole mass, the answer is yes and such a metric is in the conformal class of the family constructed in [21]. In [21] Hitchin constructed a family \(g_{k}\), for \(k\geq 3\) an integer, of \(SO(3)\)-invariant metrics defined on the non-compact space \(S^{4}\setminus\mathbb{R}P^{2}\). The metric \(g_{k}\) is half-conformally-flat and conformally equivalent to an Einstein metric \(h_{k}\) on \(S^{4}\) having positive scalar curvature \[s_{k}=2\tan^{2}\left(\tfrac{\pi}{k}\right) \tag{3.1}\] with a conical singularity of deficit angle \(\,\tfrac{2\pi}{k-2}\) along an embedded \(\mathbb{R}P^{2}\). For \(k=3\) there is no conical singularity and \(h_{3}\) is the round metric on \(S^{4}\). For \(k=4\) the metric \(h_{4}\) admits a smooth global branched cover isometric to \(\mathbb{C}P^{2}\) with the Fubini-Study metric. For \(k\geq 5\) the metrics are new. For our purposes, what matters the most is that for \(k\geq 5\) these metrics are naturally defined on the moduli space of centred \(SU(2)\) monopoles of charge \(2\) on \(H^{3}\). Taking \(H^{3}\) to have curvature \(-1\) the monopoles have mass \[p=\frac{k-4}{4}. \tag{3.2}\] Equivalently one could take the monopoles to have unit mass and the curvature of \(H^{3}\) to be \(-1/p^{2}\). Importantly, as \(k\to\infty\) the scalar curvature \(s_{k}\to 0\) and \(h_{k}\) converges to the Ricci-flat Atiyah-Hitchin metric. By [35], \(g_{k}\) is determined by a solution of Painleve's 6th equation and the conformal factor making \(g_{k}\) Einstein can be expressed as an algebraic function of the data determining \(g_{k}\). The main problem is thus solving Painleve's equation, which is done in [21] via twistorial methods. Referring to the original paper for all the details we just note that \(g_{k}\) is given by1 Footnote 1: Our conventions differ from those used in [21] by a different normalisation of the left-invariant forms on \(SU(2)\), \(\mathrm{d}\eta_{i}=-\tfrac{1}{2}\epsilon_{ijk}\eta_{j}\wedge\eta_{k}\) and relabelling. 
More precisely \(\tfrac{1}{2}\eta_{1}=\sigma_{3}\), \(\tfrac{1}{2}\eta_{2}=\sigma_{2}\), \(\tfrac{1}{2}\eta_{3}=\sigma_{1}\)\(\Omega_{1}=\Omega_{3}\), \(\Omega_{3}=\Omega_{1}\), where the quantities on the lhs (respectively rhs) are those used here (respectively in [21]). \[g_{k}=\frac{\mathrm{d}x^{2}}{x(1-x)}+\frac{\eta_{3}^{2}}{4\Omega_{3}^{2}}+ \frac{(1-x)\eta_{2}^{2}}{4\Omega_{2}^{2}}+\frac{x\,\eta_{1}^{2}}{4\Omega_{1}^ {2}}. \tag{3.3}\] The metric \(g_{k}\) is negative definite for \(x\in(1,\infty)\) and can be extended to \(x=1\), which is a bolt with the topology of \(\mathbb{R}P^{2}\). It is shown in [21] that for large values of \(x\), \[\Omega_{1}^{2}\simeq-\frac{(k-2)^{2}}{4k^{2}},\quad\Omega_{2}^{2}\simeq\frac{ 4^{1-4/k}x^{1-2/k}}{k^{2}},\quad\Omega_{3}^{2}\simeq-\frac{4^{1-4/k}x^{1-2/k }}{k^{2}}. \tag{3.4}\] Making the coordinate change \[x=2^{2-\tfrac{k}{2}}\,u^{-\tfrac{k}{2}} \tag{3.5}\] in (3.3), and \[\frac{2r}{R}=-\log\left(2^{3-\frac{12}{k}}\,u\right) \tag{3.6}\] in \(g_{\rm hTN}\), see (2.49), one finds that near \(u=0\), to leading order, \[-\frac{g_{k}}{k^{2}}\simeq\frac{g_{\rm hTN}}{R^{2}}\simeq\frac{{\rm d}u^{2}}{4 u^{2}}+\frac{2^{\frac{12}{k}-5}}{u}{\rm d}\Omega^{2}+\frac{\eta_{3}^{2}}{(k-2)^{2}}, \tag{3.7}\] where \({\rm d}\Omega^{2}=\eta_{1}^{2}+\eta_{2}^{2}\) is the round metric on \(S^{2}\), provided that the mass parameter takes value \[M^{2}=\left(\frac{R}{2(k-2)}\right)^{2}. \tag{3.8}\] While \(g_{k}\) and \(g_{\rm hTN}\) agree to leading order, the approximation (3.4) is not precise enough to determine the sign of \(M\). However the metric \(g_{6}\) is determined exactly in [21] and given by, for \(x=\frac{s^{3}(s+2)}{2s+1}\), \[\begin{split} g_{6}=&-\frac{36s(1+s){\rm d}s^{2}}{ (1+2s)^{2}(s^{2}+s-2)}-\frac{9s^{2}\left(s-1\right)\left(1+s\right)^{3}}{(1+2 s)(1+s+s^{2})^{2}}\eta_{2}^{2}-\frac{9s^{3}(1+s)}{(s-1)(2+s)(1+2s)}\eta_{1}^{2}\\ &-\frac{9s^{2}(1+s)}{(s-1)(1+2s)^{2}}\eta_{3}^{2}.\end{split} \tag{3.9}\] Making the coordinate change \(s=u^{-1}\) in (3.9) and using (3.6) with \(k=6\) in (2.49) we now find that for small \(u\) \[-\frac{1}{36}g_{6}\simeq\frac{g_{\rm hTN}}{R^{2}}\simeq\frac{{\rm d}u^{2}}{4 u^{2}}-\frac{{\rm d}u^{2}}{4u}+\frac{{\rm d}\Omega^{2}}{8u}+\frac{\eta_{3}^{2}}{16} \tag{3.10}\] provided that the hTN mass \(M\) takes value \[M=-\frac{R}{8}. \tag{3.11}\] Thus \(g_{6}\) is asymptotic to hTN with negative mass. The main point of this work was to show that negative mass hTN emerges as the asymptotic moduli space metric of two centred \(SU(2)\) monopoles, which was done in Section 2. This result and the relation between negative mass hTN and \(g_{6}\) which we just discussed invite many further questions which we leave for future work. First, the asymptotics of \(g_{k}\) for general \(k\) and its behaviour as \(k\to\infty\) need further study. In particular note how negative mass hTN converges in the zero curvature limit to negative mass TN, which is the correct asymptotic form of the Atiyah-Hitchin metric, while it is the Einstein metric \(h_{k}\) rather than \(g_{k}\) which converges to Atiyah-Hitchin as \(k\to\infty\). It is thus natural to ask what is the \(k\to\infty\) limit of \(g_{k}\) and how is it related to the Atiyah-Hitchin metric. Second, the metric \(h_{k}\) constructed in [21] is special by virtue of being Einstein, but what makes \(g_{k}\) special within its conformal class? 
At least at the asymptotic level the answer may lie with the Abelian monopole equations satisfied by \((U,\omega_{\rm a})\), \({\rm d}\omega_{\rm a}=-\star{\rm d}U\), which are not preserved by a conformal rescaling of the metric. In the Euclidean case the Abelian monopole equations imply that the three self-dual two forms \(\omega^{i}=\omega_{E}\wedge{\rm d}x^{i}+\frac{1}{2}\epsilon^{i}_{\ jk}{\cal V}_{\rm E}{\rm d}x^{j} \wedge{\rm d}x^{k}\) are closed and provide three hyperkahler forms; it is possible that in the hyperbolic case they also determine some special structure, although this remains to be explored. Many other questions along the lines of "what is the hyperbolic analogue of" some property of the Euclidean moduli space metric could be asked. We only mention the following one. Two families of (hyperkahler) gravitational instantons with ALF asymptotics are known: \(A_{k}\), which is the same as multi-TN with \(k+1\) NUTs, and \(D_{k}\), which includes the Atiyah-Hitchin manifold as \(D_{0}\). As shown in [31], ALF \(D_{k}\) manifolds with \(k\geq 1\) can be constructed by gluing NUTs to \(D_{0}\). Is it possible to obtain a hyperbolic analogue of ALF \(D_{k}\) by similar means? While the hyperbolic analogues of TN and \(D_{0}\) are at our disposal, the construction in [31] strongly relies on the hyperkahler structures on \(D_{0}\) and TN, which is not shared by their hyperbolic relatives. Finally the case of non-centred configurations of two monopoles is worth investigating. The problem is non-trivial already at the point particle level: While in Euclidean space the possibility of factoring out the centre of mass motion makes the reduced dynamics independent of the total momentum, in the hyperbolic case the dynamics does depend on the total momentum of the system, see [32] and references therein. ## Acknowledgements GF thanks the Simons Foundation for its support under the Simons Collaboration on Special Holonomy in Geometry, Analysis and Physics [grant number 488631]. CR thanks Michael Singer for useful discussions about the notion of centring for hyperbolic monopoles. The work of CR was supported by the Engineering and Physical Sciences Research Council [grant number EP/V047698/1].
2304.00306
CapsFlow: Optical Flow Estimation with Capsule Networks
We present a framework that uses the recently introduced Capsule Networks to solve the problem of Optical Flow, one of the fundamental computer vision tasks. Most of the existing state-of-the-art deep architectures either use a correlation operation to match features or learn spatio-temporal features from them. While the correlation layer is sensitive to the choice of hyperparameters and does not put a prior on the underlying structure of the object, spatio-temporal features are limited by the network's receptive field. Also, we as humans look at moving objects as a whole, something which cannot be encoded by correlation or spatio-temporal features. Capsules, on the other hand, are specialized to model separate entities and their pose as a continuous matrix. Thus, we show that a simpler linear operation over the poses of the objects detected by the capsules is enough to model flow. We show results on a small toy dataset where we outperform the FlowNetC and PWC-Net models.
Rahul Chand, Rajat Arora, K Ram Prabhakar, R Venkatesh Babu
2023-04-01T12:35:41Z
http://arxiv.org/abs/2304.00306v2
# CapsFlow: Optical Flow Estimation with Capsule Networks ###### Abstract We present a framework that uses the recently introduced Capsule Networks to solve the problem of Optical Flow, one of the fundamental computer vision tasks. Most of the existing state-of-the-art deep architectures either use a correlation operation to match features or learn spatio-temporal features from them. While the correlation layer is sensitive to the choice of hyperparameters and does not put a prior on the underlying structure of the object, spatio-temporal features are limited by the network's receptive field. Also, we as humans look at moving objects as a whole, something which cannot be encoded by correlation or spatio-temporal features. Capsules, on the other hand, are specialized to model separate entities and their pose as a continuous matrix. Thus, we show that a simpler linear operation over the poses of the objects detected by the capsules is enough to model flow. We show results on a small toy dataset where we outperform the FlowNetC and PWC-Net models. + Footnote †: Draft was completed as part of an undergraduate thesis & submitted to WACV 2019 ## 1 Introduction Optical flow estimation is a fundamental computer vision problem. Given a pair of images, optical flow attempts to find a dense motion field, assigning to each pixel of the first image a displacement vector indicating where it moved to in the second image. Estimating the displacement field between the two images requires learning both the finer local details as well as the global structural information to match them at different locations. In recent years, great progress has been achieved in estimating the optical flow using deep learning methods (Fischer et al., 2015; Ilg et al., 2016; Meister et al., 2017; Sun et al., 2017). Although existing approaches have achieved good performance, most methods rely on a multi-stage pipeline and the calculation of a correlation map or the learning of spatio-temporal features between image features to aid the matching process. Correlation typically works by calculating correspondences between a set of pixels within a given neighborhood, for a particular search space. Formally, given two feature maps \(\mathbf{f_{1}},\mathbf{f_{2}}\) with \(w,h,c\) being their width, height and number of channels, the correlation layer compares each patch from \(\mathbf{f_{1}}\) with each patch from \(\mathbf{f_{2}}\) in a fixed neighbourhood. The size of the patch, \(K\), is equal to \(2k+1\), where \(k\) is the kernel size. Also, to reduce complexity, not all patches are matched. A patch centered at \(x_{1}\) in \(f_{1}\) is only correlated with patches in \(f_{2}\) whose center \(x_{2}\) lies in the neighbourhood of size \(2d+1\) centered at \(x_{1}\), where \(d\) is the displacement parameter. The complexity of the operation is \(D^{2}cwh\), where \(D=2d+1\), and the output is of size \((w\times h\times D^{2})\). Thus, depending upon these hyper-parameters, correlation may or may not be able to find an exact match. Similarly, learning spatio-temporal features would be limited by the receptive field of the filters used. Capsules, on the other hand, are naturally suited to this task as the position and orientation of any entity captured by a particular capsule are represented by a continuous motion vector which can model various types of complex rigid or non-rigid motion. To this end, we show the effectiveness of capsules on a toy dataset and also present the first work, to the best of our knowledge, to extend capsules to dense prediction tasks. 
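To make the comparison concrete, here is a small illustrative numpy sketch of the correlation volume just described. This is not the authors' implementation: it uses 1x1 patches (\(k=0\)) for brevity, and the names `correlation_volume`, `f1`, `f2` and the default displacement `d=4` are placeholders.

```python
import numpy as np
from itertools import product

def correlation_volume(f1, f2, d=4):
    """f1, f2: (h, w, c) feature maps; returns a (h, w, D^2) cost volume with D = 2d+1."""
    h, w, c = f1.shape
    D = 2*d + 1
    padded = np.pad(f2, ((d, d), (d, d), (0, 0)))          # zero-pad so every offset is defined
    cost = np.zeros((h, w, D*D), dtype=f1.dtype)
    for idx, (dy, dx) in enumerate(product(range(-d, d+1), repeat=2)):
        shifted = padded[d+dy:d+dy+h, d+dx:d+dx+w, :]      # f2 displaced by (dy, dx)
        cost[:, :, idx] = (f1 * shifted).sum(axis=-1) / c  # normalised dot product per pixel
    return cost

f1, f2 = np.random.rand(32, 32, 16), np.random.rand(32, 32, 16)
print(correlation_volume(f1, f2).shape)                     # (32, 32, 81)
```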
## 2 Related Work **Capsule Networks:** Capsule Networks, introduced by (Sabour et al., 2017), are an alternative neural network architecture to Convolutional Neural Networks (CNNs) that aims for viewpoint equivariance rather than invariance. Capsule Networks do so by using a group of neurons, called capsules, to encode both the presence as well as the pose of an entity with respect to the viewer. Furthermore, they use a dynamic routing mechanism to transform poses from one capsule layer to the next. Though initially capsule networks used a single 8-dimensional vector to encode the pose of an entity, with its norm representing the probability of its presence, it was later modified by (Hinton et al., 2018) to use a 4x4 matrix to represent an entity's pose and a separate scalar to denote presence. (Hinton et al., 2018) also replaced the routing-by-agreement of (Sabour et al., 2017) with EM-based routing. Since their introduction, capsules have been used for binary segmentation (LaLonde and Bagci, 2018), Action Recognition (Duarte et al., 2018), 3D point reconstruction (Zhao et al., 2018) and many other applications. **Optical Flow Methods:** Classically, the optical flow was estimated via energy minimization methods, particularly after the work of (Horn and Schunck, 1981). However, energy minimization methods fail to work for large displacements (Brox et al., 2004), and a series of works (Barnes et al., 2010; Liu et al., 2011; Xu et al., 2012) use descriptor or feature matching in conjunction with energy minimization, carried out in a coarse-to-fine manner, to alleviate the problem. (Weinzaepfel et al., 2013) blended the aforementioned work with deep learning, using manually selected convolutional filters to extract features at multiple scales before energy minimization. CNNs have become ubiquitous for high-level vision problems, and optical flow computation is no exception. (Fischer et al., 2015) introduced FlowNetC, which uses a correlation over a set of learned features and refines it using CNN layers in a multi-scale fashion. SpyNet (Ranjan and Black, 2016) stacks both the input images together to learn spatio-temporal filters at multiple scales and learns residual flow to hierarchically refine the output. PWC-Net (Sun et al., 2017) combines the two by using the correlation between learned features to get an estimate of the disparity at multiple scales, followed by hierarchical refinement to produce the final flow. Also, as getting real-world data for optical flow is a difficult task, (Meister et al., 2017; Wang et al., 2017; Zou et al., 2018) have explored unsupervised methods for the task; they build on FlowNetC's architecture and use a bidirectional warping-based loss along with additional smoothness constraints to achieve satisfactory performance. Our work is a departure from this basic template as we try to leverage the better representational capability of capsule networks, owing to the presence of a motion vector, to implicitly model different kinds of transformations an object may undergo. 
This reduces the computationally expensive correlation layer into a single operation. The intuition for the reformulation is that the pose matrix of a capsule is capable of learning positional parameters of the entity it describes; therefore, using a suitable transformation (like subtracting the pose matrices for two images) one could capture how an entity moved between the two images. Capsules are ideally more suited to classification tasks since each class capsule tries to model a separate entity. But real-world scenes, and similarly traditional optical flow datasets, do not have a set list of classes. We thus show the effectiveness of capsule networks in the case of class supervision by using a toy dataset consisting of five different shapes (or classes) of varying sizes and orientation, as shown in Figure 3. ### Capsule network Before we explain our CapsFlow architecture, we briefly explain the main components of a standard Capsule Network: **Convolutional encoder:** Capsule networks use a few convolutional layers along with ReLU activation to extract basic features from the inputs. These features are then used as input to the primary capsule layers. **Capsule Layers:** The output from the convolutional layers is individually passed to a set of capsules, which perform a convolution to output a 4x4 transformation matrix, M, representing the pose of the detected entity, and an activation probability, A. The output from the primary capsules is then routed into secondary capsule layers using a trainable transformation matrix W between each pair of capsules of consecutive layers. The pose matrix of capsule \(i\) in layer \(L\) is multiplied by \(W_{ij}\) to yield \(V_{ij}=M_{i}W_{ij}\), which represents its vote for the \(j^{th}\) capsule in layer \(L+1\). The capsules in layer \(L+1\) then use a non-linear routing mechanism (explained next) to determine their poses and activation vectors. The final capsule layer consists of a single capsule representing the final classification label. **Routing between Capsules:** Each element of the 4x4 pose matrix of the parent capsule belongs to a Gaussian distribution whose parameters are estimated using the Expectation-Maximization (EM) algorithm. Broadly, the Expectation step calculates the probability that each of the capsules in layer \(L\) is explained by the capsules in layer \(L+1\). The Maximization step then tries to maximize the activation of a given capsule in layer \(L+1\) depending on the inputs from layer \(L\). ### CapsFlow network architecture An overview of the Siamese CapsFlow architecture is shown in Figure 1. The input to the network is a pair of 128x128x3 RGB images, \(I_{1}\) and \(I_{2}\). The network begins with four convolutional layers with a kernel size of 3x3 and strides of 2 and 1 alternately, each with a ReLU non-linearity. The resulting feature map of dimension **32x32x256** is transformed into a capsule layer of dimension **16x16x16x17** (**KxHxWx(4x4+1)**, where \(K\) is the number of capsule types) by applying a 3x3 convolutional operation with stride 2. This is followed by a second capsule layer with 16 capsule types and a 7x7 receptive field with stride 2. The second capsule layer is then connected to a final convolutional capsule layer (class capsule) with C capsule types, where C is the number of shapes in the dataset. The resulting output (**Cx6x6x(16+1)**) consists of two components, a pose matrix (**Cx6x6x16**) and an activation vector (**Cx6x6x1**). 
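Before describing the rest of the pipeline, the following schematic numpy sketch illustrates how the class-capsule outputs above (a pose tensor and an activation tensor per class) can be turned into a motion embedding by selecting the most active class and subtracting pose embeddings, as detailed in the next paragraphs. The fully connected layer is stubbed out with a random weight matrix `W` and the shared convolutional layer is omitted, so this is illustrative rather than the authors' code.

```python
import numpy as np

C = 5                                                   # number of shape classes
rng = np.random.default_rng(0)
W = rng.standard_normal((6*6*16, 8*8*16)) * 0.01        # stand-in for the FC layer weights

def structure_embedding(poses, activations):
    """poses: (C,6,6,16), activations: (C,6,6,1) -> (8,8,16) embedding of the most active class."""
    cls = activations.mean(axis=(1, 2, 3)).argmax()     # class with the highest mean activation
    masked = poses[cls]                                 # (6,6,16) pose of the selected class
    return np.maximum(0.0, masked.reshape(-1) @ W).reshape(8, 8, 16), cls   # ReLU FC layer

poses1, act1 = rng.random((C, 6, 6, 16)), rng.random((C, 6, 6, 1))
poses2, act2 = rng.random((C, 6, 6, 16)), rng.random((C, 6, 6, 1))
emb1, cls1 = structure_embedding(poses1, act1)
emb2, _ = structure_embedding(poses2, act2)
motion_embedding = emb2 - emb1                          # encodes the motion from image 1 to image 2
# emb1 is also kept as a skip connection for the flow decoder and to reconstruct image 1
print(cls1, motion_embedding.shape)                     # e.g. 2 (8, 8, 16)
```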
Figure 1: The first part of the architecture consists of a Siamese network where a pair of input images, Image 1 (\(I_{1}\)) and Image 2 (\(I_{2}\)), are passed through a convolutional encoder, followed by 3 layers of capsules (including class capsules) to output a 4x4 pose matrix for each of the \(C\) (=5) possible classes. We select the matrices belonging to the classes with the highest average activation for both images. Then, the two encodings are passed through FC layers to form richer embeddings. We then subtract the two embeddings to get a motion embedding, which is then passed to a convolutional decoder to get the output flow map. Pose matrices from the first image are also passed as a skip connection to the flow decoder to refine the flow. The pose matrices are also used to reconstruct back the input image \(I_{1}\), which acts as a regularizer. Figure 2: Examples of input images and the ground truth flow used for training and validation. Note that we do not perform coordinate addition to calculate the poses and activations of the final capsule layer since the penultimate and final capsule layers are not fully connected, therefore there is no loss of spatial information (Hinton et al., 2018; Duarte et al., 2018). Coordinate addition is required when routing from a 2D capsule layer (**CxHxWx17**) to a 1D capsule (**Cx17**) to preserve spatial information. The activation probability for each shape is calculated by averaging over the spatial dimensions (**6x6**) of the shape's activation vector. The capsule with the highest activation probability corresponds to the shape predicted by the network. The output pose of the class capsule layer (**Cx6x6x16**) is then masked to obtain the pose matrix corresponding to the ground truth shape. During training, we use the knowledge of the ground truth to choose the correct class, while during inference the class is chosen based on the highest mean activation vector. The masked output (**6x6x16**) is passed to a fully connected layer with ReLU non-linearity to obtain a structural embedding of shape (**8x8x16**). The structure embeddings for both \(I_{1}\) and \(I_{2}\) are passed to a shared convolutional layer and then subtracted to calculate the motion embedding, which encodes the motion from \(I_{1}\) to \(I_{2}\). To help the network learn the correct pose parameters, we pass the structure embedding to a four-layer transposed convolutional decoder to reconstruct the input image, \(I_{1}\). This is necessary since training without a reconstruction decoder results in the network learning a sub-par structure embedding, which in turn affects the motion embedding. To construct the final flow map, both the motion and structure embeddings are passed to a decoder consisting of four transposed convolutional layers, each with stride 2 and a kernel size of **4x4**. The output of the decoder (**64x64x2**) is extrapolated to obtain the predicted optical flow map of shape **128x128x2**. ### Objective Function The objective function is made up of three losses: a) a mean squared reconstruction loss to learn the correct pose embedding, b) an activation loss to learn the correct target class, and c) an End-Point-Error (EPE) loss between the ground truth flow map and the predicted flow map. To calculate the reconstruction loss, we compute the mean square error between \(I_{1}\) and the reconstructed image \(\hat{I}_{1}\). \[L_{mse}=||I_{1}-\hat{I}_{1}||^{2}\] Empirically, we found that the reconstruction loss acts as a regularizer and helps ensure that the pose matrix learns all the relevant features. In the absence of the reconstruction loss, the network does not converge, probably because there is not enough supervision to learn correct pose matrices (the flow depends only on their difference and not their actual values). We use a margin loss to train the activation vector, calculated as \[L_{m}=\sum_{i\neq t}\max[0,m-(mean(a_{t})-mean(a_{i}))]^{2}\] where \(a_{t}\) is the activation of the target shape class and \(a_{i}\) is the activation corresponding to capsule \(i\). We take the mean of the activation before computing the loss. The capsule with the highest activation is used at the test stage to determine the poses to be subtracted to estimate flow. The margin \(m\) is set to 0.95 during training. The EPE loss is calculated as \[L_{epe}=||flow_{gt}-flow_{pred}||^{2}\] Thus, the CapsFlow network is trained using the following objective function \[L=L_{epe}+\lambda_{1}L_{margin}+\lambda_{2}L_{mse}\] 
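A hedged numpy sketch of this objective might look as follows; the weights \(\lambda_{1}=0.05\), \(\lambda_{2}=2.5\) and the margin \(m=0.95\) are the values reported in Section 4.1, while the reductions over pixels (means and the sum over non-target classes) are assumptions, since they are not fully specified in the text.

```python
import numpy as np

def capsflow_loss(flow_pred, flow_gt, img, img_recon, activations, target_cls,
                  margin=0.95, lam_margin=0.05, lam_mse=2.5):
    """flow_*: (H,W,2), img*: (H,W,3), activations: (C,6,6,1), target_cls: int."""
    epe = ((flow_pred - flow_gt)**2).sum(axis=-1).mean()         # L_epe as written above (mean over pixels assumed)
    mse = ((img - img_recon)**2).mean()                          # L_mse reconstruction loss
    mean_act = activations.mean(axis=(1, 2, 3))                  # mean activation per class
    others = np.delete(mean_act, target_cls)
    margin_loss = (np.maximum(0.0, margin - (mean_act[target_cls] - others))**2).sum()  # L_m
    return epe + lam_margin*margin_loss + lam_mse*mse            # L = L_epe + l1*L_margin + l2*L_mse
```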
Figure 3: A few examples indicating FlowNetC underestimating flow, while the output of our proposed model is much closer to the ground truth. (Best seen in color and zoomed in.) ## 4 Experiments To check the effectiveness of our approach, we prepare a toy dataset consisting of simple shapes, as shown in Figure 3, and compare our method to FlowNetC (Fischer et al., 2015) as well as PWC-Net (Sun et al., 2017). We also explore ways to extend our method to real-world images by experimenting on the Flying Chairs dataset. ### Training The data is generated by sampling a random shape type (square, hexagon, triangle, circle, and hexagram) and a random \((x,y)\) location between 0 and 128. The shape is then randomly rotated and translated to obtain the corresponding second image. The data is generated on the fly to prevent overfitting. We have a fixed set of 2500 images used for testing. The architecture details of the encoder and decoders used are provided in Table 1. The reconstruction and flow decoders can either be trained simultaneously or sequentially, starting first with reconstruction and then flow. The values of \(\lambda_{1}\) and \(\lambda_{2}\) in the objective function are chosen to be 0.05 and 2.5 respectively. All models were trained on an NVIDIA GTX 1080Ti GPU. CapsFlow was trained for a total of 30K iterations with a mini-batch size of 64. For the sequential case, the reconstruction decoder was first pretrained for 10k iterations. The number of EM iterations during routing was set to 3, as in (Hinton et al., 2018). We used the Adam optimizer (Kingma and Ba, 2014) with a fixed learning rate of 0.001. ### Results We compare our results with FlowNetC and PWC-Net. Both of these models were trained using End Point Error for 90k iterations with a batch size of 16, using the Adam optimizer with learning rate 0.001, similar to the proposed CapsFlow. The end-point error comparison for both CapsFlow models is shown in Table 2. Both variants of CapsFlow perform better than both FlowNetC and PWC-Net while having 8x fewer parameters. However, despite the lower number of parameters, capsules take significantly more time, primarily due to the iterative EM routing procedure. As capsule networks develop and see wider adoption, there is a high likelihood that hardware acceleration and better routing procedures will be able to reduce the time overhead. On visualizing the flow outputs for CapsFlow and FlowNetC, we notice that the latter underestimates the flow magnitude for a large number of samples, as shown in Figure 3. To confirm this, we test both models on another validation set with an average flow magnitude 1.5 times that of the training set. 
\begin{table} \begin{tabular}{c|c c c c c} \hline \multicolumn{6}{c}{Encoder} \\ \hline Convolution & 3x3 & 3x3 & 3x3 & 3x3 \\ Stride & 2 & 1 & 2 & 1 \\ BatchNorm & Yes & Yes & Yes & Yes \\ \hline \multicolumn{6}{c}{Decoder} \\ \hline Transpose Conv & 4x4 & 4x4 & 4x4 & - & - \\ Convolution & - & - & - & 1x1 & 1x1 \\ Stride & 2 & 2 & 2 & 1 & 1 \\ No.of filters & 64 & 64 & 32 & 16 & 2 \\ \hline \multicolumn{6}{c}{Reconstruction Decoder} \\ \hline Transpose Conv & 4x4 & 4x4 & 4x4 & 4x4 & - \\ Convolution & - & - & - & - & 1x1 \\ Stride & 2 & 2 & 2 & 1 & 1 \\ No.of filters & 32 & 64 & 16 & 8 & 1 \\ \hline \end{tabular} \end{table} Table 1: CapsFlow network architecture \begin{table} \begin{tabular}{c c c c c} \hline Model & EPE & Params (in M) & Time (ms) \\ \hline CapsFlow (T) & 0.48 & 1.62 & 13.39 \\ CapsFlow (S) & 0.39 & 1.62 & 13.39 \\ FlowNetC & 0.50 & 39.17 & 3.1 \\ PWC-Net & 0.47 & 8.75 & 12.1 \\ \hline \end{tabular} \end{table} Table 2: Quantitative comparison on the single shape dataset: T & S refer to simultaneous and sequential training respectively; for more information refer to Section 4.1. The average time is calculated for an input pair of resolution 128x128. The best result is highlighted in red color, while the second-best in blue color. FlowNetC saw a 3x jump in the EPE from 0.50 to 1.51 on this new set, while CapsFlow's EPE only increased from 0.39 to 0.81. This highlights the higher generalization capacity of CapsFlow. We hypothesize that this is because, while capsules model flow as a continuous motion matrix, correlation is sensitive to the choice of displacement and neighborhood size and thus may not be able to model flows of large magnitude as well, particularly if they were not seen during training. We also try to see what the pose matrices and the difference matrix model by scaling their magnitude uniformly in the range [-1,1] in increments of 0.5. On changing the magnitude of the difference, we see the flow changing from one extreme of the flow spectrum to the complete opposite, but no change in the overall structure (Figure 4). On the other hand, changing the magnitude of the skip connection in the range [0.25, 2.5] in increments of 0.25 changes the scale of the shape, as shown in Figure 5. This shows that flow and pose are essentially getting disentangled, with the flow coming from the difference of poses while the structure comes from Image 1's pose matrix. **Extending to multiple shapes:** We generate instances of our MultiShape dataset by sampling two different shapes on the fly for each image. The flow for both shapes is combined to produce the ground truth flow map. On average, the bounding box for each shape is of size 50x50, and the centers of both shapes are bounded inside a region of size 90x90. Therefore, the bounding boxes of both shapes on average have a 45% overlap. While training CapsFlow on multiple shapes, we mask one class capsule at a time and use its pose matrix to construct the corresponding flow map. The final output of the network is obtained by superimposing both flow maps (in regions where both maps intersect, we consider the flow of only one shape based on the same priority that was used for creating the training set). The training parameters and losses for this experiment are the same as those for a single shape. We achieve a marginally better EPE of 1.78 as compared to FlowNetC's 1.9. 
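For reference, a minimal numpy sketch of the superimposition step described above is given below; the priority rule is only described qualitatively in the text, so the choice of which shape wins in the overlapping region is an assumption here.

```python
import numpy as np

def compose_flows(flow_a, flow_b, mask_a, mask_b):
    """flow_*: (H, W, 2) per-shape flow maps; mask_*: (H, W) boolean support of each shape.
    Shape A is assumed to take priority wherever the two shapes overlap."""
    out = np.zeros_like(flow_a)
    out[mask_b] = flow_b[mask_b]
    out[mask_a] = flow_a[mask_a]      # written last, so shape A wins in the overlap
    return out
```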
For test images, we pick the two most active class capsules and use their pose matrices to construct the individual flow maps (which are later used to find the final flow map). One advantage of CapsFlow is that, unlike other methods, we can obtain the individual flow maps for each shape in the image. Since capsules enforce a strong constraint on the shape of each object, even in cases where one of the shapes is highly occluded, CapsFlow can extrapolate the whole object using its prior about the shape of the object, as shown in Figure 6. ### Unsupervised Training We also attempt to test the effectiveness of the proposed model in an unsupervised setting. To train CapsFlow in this setting, we average the outputs of all the capsules in the final layer, rather than masking them. We train CapsFlow using the same set of losses used in UnFlow (Meister et al., 2017); the losses are a) an occlusion-aware photometric loss between the input images and their flow-warped counterparts, b) a second-order smoothness constraint to encourage co-linearity of neighboring flows, and c) a forward-backward consistency penalty. Our model performs comparably to the UnFlow method on the shapes dataset, with an EPE of 2.05 as compared to UnFlow's EPE of 1.8. However, despite the good EPE score, we notice that in the absence of masking, only 1 out of the 5 classes ever remains active, and comparable results can be achieved if the other capsules are not considered. The reason for this is elaborated in the next section. Figure 4: _Left_: Results from interpolating flow. We scale the magnitude of the difference matrix between the poses of the given images while keeping the skip connection from \(I_{1}\) the same, and notice that the difference matrix only changes the flow without any change in structure. To be precise, scaling the magnitude seems to be equivalent to moving from one end of the flow spectrum to the diametrically opposite one. _Right_: Optical flow color coding. Figure 5: We change the magnitude of the pose matrix from \(I_{1}\) without changing the difference and notice that only the output shape of the flow changes, without any change to the flow values. Figure 6: CapsFlow can successfully model individual flows even in the case of high overlap. Also, at motion boundaries and overlapping regions, FlowNetC, expectedly in the absence of any prior on shape boundaries, assigns flow belonging to one shape to another (highlighted in the red box). Figure 7: Qualitative results on the FlyingChairs dataset. Our CapsFlowBin (CapsFlow with binning) method obtains an EPE of 3.6, while FlowNetC achieves an EPE of 2.0. ## 5 Drawbacks and Future Work Though our proposed model gave promising results on a toy dataset, capsule networks still cannot be naively used on real-world samples. Capsules tend to perform poorly in tasks where latent capsules do not represent distinct classes [13]. The sub-par performance is due to capsules learning parameters using different paradigms. Rather than learning all parameters using EM routing (generatively), capsules learn transformation matrices in a discriminative fashion through back-propagation, in the absence of which we may get degenerate solutions, such as all transformation matrices collapsing to zero. Without concrete class capsules, transformation matrices have no incentive or gradient to differ from each other. In this case, the probability that a particular subset of lower-level capsules is transformed into similar clusters in the pose space of two higher-level capsules \(j\) and \(k\) increases. 
This results in a breakdown of the part-to-whole relationship as both \(j\) and \(k\) now model very similar entities, which in turn causes the lower-level capsules that route to \(j\) and \(k\) to model similar entities too (Rawlinson et al., 2018). Therefore, unmasked latent capsule networks require exponentially more capsules than their masked counterparts. This is a major shortcoming, as such a requirement is very restrictive: the majority of dense prediction tasks, e.g. depth estimation, super-resolution, and 3D segmentation, have no (definite or explicit) classes/entities. Another drawback of capsule networks is that, unless explicitly trained, each class capsule models only a single instance of that entity, and thus will fail in cases where there are multiple instances of the same entity in a scene. Furthermore, capsules cannot model global motion, though that is not a major issue as there are other robust techniques available to estimate camera motion. One work-around for dense optical flow that we tried was to spatially bin the ground truth and treat those bins as class capsules. The input to these class capsules was obtained by subtracting poses from the secondary capsule layer itself. This technique gave good results during training but fails during testing due to a high misclassification rate (see Figure 7). Despite these drawbacks, capsules seem to be a promising direction for future research and can address the drawbacks faced by current CNN-based deep methods in tracking and optical flow computation. ## 6 Conclusion From the experiments discussed above, we conclude that Capsule Networks, despite various weaknesses and still being in their infancy, offer a great way of modeling motion and calculating optical flow. Though they outperform the current state of the art on a toy dataset for both single- and multiple-shape scenarios and also demonstrate higher generalizability, they require more research to adapt them to cases where a comprehensive list of entities may not be available or where multiple instances of the same entity are present in the scene. Despite that, we believe Capsule Networks, due to their greater representational capacity and implicit encoding of spatial relationships between objects, are an exciting direction for further research on motion estimation and optical flow. We think that with further improvements in Capsule Networks, our framework can be adapted to obtain better results on real-world images.
2304.07469
Generating an interactive online map of future sea level rise along the North Shore of Vancouver: methods and insights on enabling geovisualisation for coastal communities
Contemporary sea level rise (SLR) research seldom considers enabling effective geovisualisation for the communities. This lack of knowledge transfer impedes raising awareness on climate change and its impacts. The goal of this study is to produce an online SLR map accessible to the public that allows them to interact with evolving high-resolution geospatial data and techniques. The study area was the North Shore of Vancouver, British Columbia, Canada. While typically coarser resolution (10m+/pixel) Digital Elevation Models have been used by previous studies, we explored an open access airborne 1 metre LiDAR which has a higher resolution and vertical accuracy and can penetrate tree cover at a higher degree than most satellite imagery. A bathtub method model with hydrologic connectivity was used to delineate the inundation zones for various SLR scenarios which allows for a not overly complex model and process using standard tools such as ArcGIS and QGIS with similar levels of accuracy as more complex models, especially with the high-resolution data. Deep Learning and 3D visualizations were used to create past, present, and modelled future Land Use/Land Cover and 3D flyovers. Analysis of the possible impacts of 1m, 2m, 3m, and 4m SLR over the unique coastline, terrain and land use was detailed. The generated interactive online map helps local communities visualise and understand the future of their coastlines. We have provided a detailed methodology and the methods and results are easily reproducible for other regions. Such initiatives can help popularise community-focused geovisualisation to raise awareness about SLR.
Forrest DiPaola, Anshuman Bhardwaj, Lydia Sam
2023-04-15T04:12:55Z
http://arxiv.org/abs/2304.07469v1
Generating an interactive online map of future sea level rise along the North Shore of Vancouver: methods and insights on enabling geovisualisation for coastal communities ## Abstract Contemporary sea level rise (SLR) research seldom considers enabling effective geovisualisation for the communities. This lack of knowledge transfer impedes raising awareness on climate change and its impacts. The goal of this study is to produce an online SLR map accessible to the public that allows them to interact with evolving high-resolution geospatial data and techniques. The study area was the North Shore of Vancouver, British Columbia, Canada. While typically coarser resolution (10m+/pixel) Digital Elevation Models have been used by previous studies, we explored an open access airborne 1 metre LiDAR which has a higher resolution and vertical accuracy and can penetrate tree cover at a higher degree than most satellite imagery. A bathtub method model with hydrologic connectivity was used to delineate the inundation zones for various SLR scenarios which allows for a not overly complex model and process using standard tools such as ArcGIS and QGIS with similar levels of accuracy as more complex models, especially with the high-resolution data. Deep Learning and 3D visualizations were used to create past, present, and modelled future Land Use/Land Cover and 3D flyovers. Analysis of the possible impacts of 1m, 2m, 3m, and 4m SLR over the unique coastline, terrain and land use was detailed. The generated interactive online map helps local communities visualise and understand the future of their coastlines. We have provided a detailed methodology and the methods and results are easily reproducible for other regions. Such initiatives can help popularise community-focused geovisualisation to raise awareness about SLR. sea level rise, land use land cover, geovisualisation, interactive maps, LiDAR DEM ## 1 Introduction During the last seventy years the population of coastal cities has expanded 4.5 times (Xu et al., 2021). 230 million people live within less than 1 metre (1m) above an open body of water whereas 1 billion live less than 10m above (Kulp and Strauss, 2019). It is likely that hundreds of millions of people will be displaced during the next centuries due to Sea Level Rise (SLR) (Murali and Kumar, 2015; Mahapatra et al., 2015). From 1901 to 2018 there was between 15-25cm increase in average sea level and 7.5cm of that increase occurred from 1993 to 2017 (Frederikse et al., 2020). Due to the number of people that will be affected, there have been numerous studies throughout the world on how SLR will impact various communities. Most of this research has been completed at a local level (Rwanga and Ndambuki, 2017). However, when it comes to developing effective geovisualisation of the SLR research results for coastal communities and policy makers to realise the seriousness of the situation, many efforts are still needed. Numerous studies have predicted that over the coming years an increase of SLR will occur because of the global temperature increase due to Climate Change, the extent of that increase varies greatly (Sahin et al., 2019). The Intergovernmental Panel on Climate Change (IPCC) predicts that at a Representative Concentration Pathway (RCP) of 2.6 (one of the best-case scenarios) there will likely be a global mean SLR of 0.30-0.65m by 2100 and 0.54-2.15m by 2300. 
At RCP 8.5 (one of the worst-case scenarios) there is likely to be an increase of 0.63-1.32m by 2100 and 1.67-5.61m by 2300 (Horton et al., 2020). Several researchers have discussed issues of properly accounting for the overall uncertainty inherent in SLR assessments (Gesch, 2018; Sirianni et al., 2012). These authors state that best practices should be used to increase the accuracy of Digital Elevation Models (DEMs) and Land Use/Land Cover (LULC) maps (the identification of major surface cover classes) in the analysis of SLR. There are many free, publicly accessible satellite global DEMs that are widely used by SLR studies, including SRTM, ASTER GDEM, ALOS World 3D, TanDEM-X, NASADEM, and MERIT. Each of these has a resolution coarser than 10m horizontally, and their vertical accuracies can vary from several metres to tens of metres, whereas ideal SLR models require fine increments (<=1m) of data (Gesch, 2018). In the study carried out by Gesch (2018), he concluded, "higher resolution data with better vertical accuracy significantly improve assessment results." However, this is not the case for the publicly accessible satellite global DEMs mentioned above, which significantly limits the ability to accurately identify which areas are at risk of inundation (Xu et al., 2021). The aforementioned global DEMs also usually have coarser spatial resolutions, varying between 10m and 90m per pixel, thus limiting their capabilities for SLR research. Due to these spatial resolution and accuracy limitations of free-to-access global DEMs, there is a growing need to employ higher resolution and better accuracy regional-scale DEMs to enable reliable SLR estimates. In this study, we have used a high resolution (1 m/pixel) and high accuracy publicly available airborne Light Detection and Ranging (LiDAR) DEM. Such LiDAR DEMs are usually better than 3m/pixel in spatial resolution (Rwanga and Ndambuki, 2017; Breili et al., 2020). Besides resolution, LiDAR DEMs have high vertical accuracy, which is very important in estimating land area and property vulnerability to SLR (Sirianni et al., 2012). Lastly, LiDAR is better at penetrating the vegetation canopy, which creates more reliable DEMs (Vernimmen et al., 2020), a valuable improvement for deriving the terrain of vegetated coasts. There are many types of SLR prediction models, including the Coastal Impact Visualization Environment (CIVE), the Coastal Risk Assessment Framework (CRAF), the Dynamic Interactive Vulnerability Assessment (DIVA) and the Bathtub Model. CIVE and CRAF are both models that are used for specific regions in the USA and the UK. DIVA, which is very expensive to use, requires a significant amount of data to create simulations (such as tidal patterns) that are beyond the scope of this paper. By contrast, the bathtub model is one of the most popular simulations for predicting SLR due to its quick development and its ease of use for the public to understand. This model uses simple mass balance equations based on the premise of water entering a tub. From this concept a relatively reliable model of SLR can be created in a short period of time. Moreover, when hydrologic connectivity (inundation occurs only in areas that are directly connected to the ocean (Fu and Song, 2017)) is added to the bathtub method, it can perform at levels close to the accuracy of more complex models. As GIS data collection advances, an expanding volume of publicly accessible LiDAR mapping data will become available.
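To make the bathtub-with-connectivity idea concrete, a minimal sketch is given below using open-source Python tools (NumPy and SciPy) rather than the ArcGIS/QGIS workflow applied later in this study; the function name, the toy DEM and the choice of 8-neighbour connectivity are illustrative assumptions only.

```python
import numpy as np
from scipy import ndimage

def bathtub_inundation(dem, slr_m, ocean_seed_rc):
    """Bathtub inundation with hydrologic connectivity.

    dem           : 2D array of ground elevations in metres (e.g. a 1 m LiDAR DTM)
    slr_m         : sea level rise scenario in metres (1, 2, 3 or 4 in this study)
    ocean_seed_rc : (row, col) of a cell known to be open ocean
                    (assumed to lie at or below the new sea level)
    Returns a boolean mask that is True where the surface lies at or below the
    new sea level AND is hydrologically connected to the ocean seed cell.
    """
    # Step 1: plain bathtub threshold (the '"DEM" <= h' Raster Calculator step).
    below = dem <= slr_m

    # Step 2: hydrologic connectivity - label connected groups of low-lying
    # cells (8-neighbourhood) and keep only the group containing the ocean.
    labels, _ = ndimage.label(below, structure=np.ones((3, 3), dtype=int))
    return labels == labels[ocean_seed_rc]

# Tiny synthetic example: a 3 m berm shields a low-lying inland strip,
# so that strip is NOT flagged even though it lies below the 2 m threshold.
dem = np.array([[0.0, 0.0, 0.0],
                [3.0, 3.0, 3.0],
                [1.0, 1.0, 1.0]])
print(bathtub_inundation(dem, 2, (0, 0)))
```

These two steps mirror the Raster Calculator thresholding and the connectivity filtering described in the Methodology below, where they are carried out with polygon-based GIS tools instead.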
According to the City of Vancouver ([https://vancouver.ca/green-vancouver/climate-change-and-sea-level-rise.aspx](https://vancouver.ca/green-vancouver/climate-change-and-sea-level-rise.aspx); Vancouver, 2023), our study area, the coastline of Vancouver, is predicted to see 1.0-1.4m of SLR by 2100 and 2m of SLR by 2200. A study (Malik and Abdalla, 2016) predicted the worst case of SLR would be 4m for the region by 2300. However, it is worth noting that such estimates are still based on lesser-to-moderate impact scenarios, and any unforeseen or extreme climate scenario can lead to far-reaching impacts on this coastline. It thus becomes relevant to utilise freely available LiDAR data and computationally efficient algorithms for SLR predictions to generate more reliable models for this coastline, which can then be visualised and interacted with by the local communities. Geovisualisation enables visual analysis of geospatial data and is achieved through convergence of cartography, geographic information system (GIS), and geomodelling methods (Kraak and Ormeling, 2010). In contemporary geovisualisation, digital maps are the base datasets, and improving internet connectivity and online map hosting platforms are encouraging an increasing number of researchers to think in this direction. Coastal communities are among the most vulnerable to natural disasters, and SLR and its associated impacts have the potential to drastically alter communities socioeconomically. Thus, a focus of SLR research should also be on designing and advancing efforts to enable digital geovisualisation of findings for the community and local policymakers. With free-to-access geovisualisation platforms such as Environmental Systems Research Institute (ESRI) Story Map becoming popular, now is the time to ensure that complex SLR results can be made accessible online in an easy-to-understand format for the public. Policymakers are often not from geophysical backgrounds either, and easily understandable spatial geovisualisation can support quick and informed decision making. However, coastal and SLR geovisualisation is still in its initial years, with only a handful of significant publications, mostly post-2015 (e.g., Minano et al., 2018; Newell and Canessa, 2017; Vulfa et al., 2018). Thus, there is a need to form a framework that implements geovisualisation of various SLR scenarios on a platform that can be made easily accessible to the public. Keeping in view the aforementioned research gaps and needs, the aim of our paper is to create a more precise, publicly accessible interactive online map using modern GIS methods that allows the public to explore the impacts of SLR due to Climate Change, with its direct impact on key well-known places of the North Shore of Vancouver, Canada, using new high-resolution data and Deep Learning. To achieve this aim, there are three main objectives:

1. Demonstrate which areas of the coast will be inundated at 1m, 2m, 3m, and 4m SLR by using a 1m resolution LiDAR Digital Elevation Model (DEM) and implementing the bathtub method model with hydrologic connectivity.
2. Create a land use map of these areas and predict future changes in land use with modern methods including Deep Learning.
3. Create an interactive online map detailing findings for the public to explore negative SLR impacts on local major infrastructure, commercial areas, and residential areas.
## 2 Study Area
The study area is the North Shore of Vancouver (Figure 1), British Columbia, Canada. This region is surrounded by three bodies of water: Howe Sound to the west, Burrard Inlet to the south and Indian Arm to the east. The northern third of the District of North Vancouver and West Vancouver was left out of this study since most of the inhabited areas of this mountainous area are ski resorts, which will not be affected by SLR. The Greater Vancouver area is considered to be amongst the most vulnerable cities to SLR (Lyle and Mills, 2016). Compared to the rest of the Metro Vancouver area, the North Shore relies heavily on its coastal economy and includes part of the largest commercial port in Canada. Unlike the City of Vancouver (Lyle and Mills, 2016) and some of its suburbs such as Richmond (Malik and Abdalla, 2016), there have been no major focused studies completed on the effects of SLR on the North Shore. The temperate climate of the Greater Vancouver area is oceanic, humid, and cool with a significant rainy season normally lasting from October to March. The average temperature is 11.0\({}^{\circ}\)C. This region is susceptible to flooding from the Fraser River as well as to earthquakes and windstorms.

Figure 1: Maps of the overall location of the study area (top) and the local area with boundaries of the Municipalities (bottom)

## 3 Methodology
### Creating SLR images
Unlike most DEMs used in the past for SLR, which were typically created from 10m+ satellite images, this project used airborne LiDAR that is 1m in resolution. LiDAR data has a higher resolution, higher vertical accuracy, and can penetrate tree cover to a higher degree than most satellite imagery (Sirianni et al., 2012; Vernimmen et al, 2020). The bathtub method model with hydrologic connectivity was used to delineate inundation zones. This allowed for a not overly complex model and process, making it possible to obtain results by using standard tools like ArcGIS at levels close to the accuracy of more complex models, especially with the high-resolution data. The LiDAR DTM dataset and the created Coastline shapefile were the two layers used to create the four SLR levels. The high resolution LiDAR DTM also allowed the creation of a more accurate coastline with better vertical accuracy. A polygon of the high tide coastline of the study area was created by hand and used as a mask throughout this Methodology. The Raster Calculator was an important tool to create a new raster image showing 0 to 1m of SLR using the equation "DEM" <= 1. This demonstrated all the areas that would be inundated if the sea level rose to 1m. 1m was used since the City of Vancouver predicts that there will be 1.0-1.4m SLR in the region at the current trajectory for global temperature rise by 2100. Moreover, 1m is the smallest increment in the LiDAR DEM. The above was repeated for 2m, 3m and 4m. Since the four equations that were used produced all areas between 0m and 4m above sea level, several of the areas within this range were located inland without any connection to the coastline of Burrard Inlet. To rectify this, all areas that did not connect to the inlet were deleted by applying hydrologic connectivity, accomplished by converting the raster image to a vector format. Even though some data degradation can occur when converting between the two formats, vector data was needed for the land use map. After the four new polygon layers (1m, 2m, 3m, and 4m) were created, removing inland inundation zones was possible.
This was accomplished by using the four polygon layers with the Coastline shapefile and applying Select by Location with "Are Crossed by the Outline of the Source Layer Feature" as the selection method. This retained only those areas between 0 and 4m that touch the coastline. This approach of hydrologic connectivity worked for this study area since three of the four borders are water bodies and the fourth is a mountainous area with very high elevation. These new polygons were the end product for the four SLR layers used throughout the next sections of the Methodology.

### Creating past, present, and future Land Use
A 2021 LULC, part of the Sentinel-2 10m Land Use/Land Cover Time Series, was taken from Esri Living Atlas. The LULC was clipped with the Coastline shapefile that was already created, ensuring the same study area extent. However, since this raster had a resolution of 10m and the SLR data had one of 1m, the clipping and results were less accurate. The LULC was then clipped by the four SLR layers to produce four LULCs, one for each height of SLR, in order to examine what land cover would be inundated at each SLR height. To test the accuracy of ESRI's 2021 LULC, an accuracy assessment was completed using ArcGIS Pro and Google Earth Images. In ArcGIS Pro, 500 stratified random points were used to observe if the classification was the same as the ground truthing. For this LULC, ground truthing was completed by carrying out a small number of field observations, coupled with Google Earth Images for the year of 2021. From this a Confusion Matrix was created, which demonstrates a high overall accuracy of 97% and a high Kappa Coefficient of 94% for the 2021 LULC. Since the current LULC of the study area will have changed by the time 1m to 4m SLR might occur, this study used past land cover changes to predict future ones in TerrSet (formerly IDRISI). This was particularly important because of the long time range over which 1m to 4m rises in sea level would occur, if at all. In TerrSet, LULC maps were created from Landsat 5 images from the years 1991, 2006 and 2011. By using the program's Land Change Modeler with the first two land cover maps, as well as other maps such as distance from roads, future LULC was loosely predicted. The 2011 image was used for validating the results from the MLP/MC Deep Learning prediction created from the first two images of 1991 and 2006. All the Landsat 5 satellite images that were downloaded were from Tier 1, the highest quality of data. Moreover, Level 2 data was used since the Bottom of Atmosphere correction removes atmospheric effects, resulting in surface reflectance images. These satellite images were taken over the summer period to avoid the presence of snow in the data. Bands 2 (green), 3 (red), and 4 (near infrared) were used for the three time periods and combined into composite images. A false colour image was created with Band 2 used as the blue channel, Band 3 as the green, and Band 4 as the red. A false colour composite was used since it makes it easier during training to distinguish between Grassland (light red), Woodland (dark red), Bare Earth (white to very light blue) and Urban Areas (light blue). For the supervised classification (Maximum Likelihood), five different classes were sampled, trained and saved for each of the composite images of the three time periods. Training samples covering roughly 10% of each image were used to classify the remaining 90%.
The five classes were named: 1-Waterbodies, 2-Trees, 3-Grassland, 4-Buildings or Roads, and 5-Bare Earth. The bands used to create the composites for each year were then used to allow the system to analyse the sampled data. These five classes were used since each could be clearly defined for the program to train on, and each is affected differently by SLR. Using the Maximum Likelihood Classification tool (MAXLIKE), a land cover map was created for each of the three time periods by classifying the images with the samples. MAXLIKE was used for this study since it is one of the most popular classification methods in remote sensing, predicting the class to which each pixel most likely belongs. To assess the accuracy of these three LULCs, three further accuracy assessments were completed in ArcGIS Pro for the respective years. However, unlike the assessment of the 2021 LULC, only Google Earth Images for each of the past years were used as the ground truth equivalent. Since there were no Google Earth Images for 1991, visual analysis of the satellite image used to create the LULC was the basis of the ground truth validation for that year. Each of the years had a high overall accuracy (96% for 1991, 99% for 2006, and 97% for 2011) and a high Kappa Coefficient (95% for 1991, 98% for 2006, and 94% for 2011). The lowest user accuracy was Grassland for 1991 and Bare Earth for 2006 and 2011. The reason Grassland might have been misclassified the most for the 1991 LULC is that only a 30-metre resolution satellite image was available as a reference to conduct ground truthing. From these three LULCs, the Land Change Modeler of TerrSet was used to predict future change of land cover for the study area. The 1991 and 2006 LULCs were the two inputs for the Land Change Modeler, from which TerrSet ran a change analysis of the gains and losses of the classes by pixel for the two time periods as well as of spatial trends of change. These will be further discussed in the Results section. Since twenty transitions can occur between the five classes, only transitions of 5000 pixels or greater were used for this study. This high threshold for transition area was used since it allowed only one transition to occur, which was Grassland to Buildings or Roads. If a lower threshold were used, then the accuracy for this Land Change Model would be less than 50%. This is discussed more in the Results section. Six variables in the form of rasters were created to be used for the MLP Sub-Model for predicting transitions of the LULC classes. These six rasters were Elevation, Distance from Roads, Slope, Distance from Disturbance, Distance from Urban, and Distance from Rivers. These six variables were used with the 1991 and 2006 LULCs to predict how transitions would occur in this study area by creating Transition Potentials with MLP. Once this was created, it allowed for the prediction of land change for future dates with the Markov Chain (MC) tool. For this study, four dates were used: 2011, 2100, 2200, and 2300. 2011 was used to validate the results against the previously created 2011 LULC, whereas the other three were used to predict what the LULC will look like in the future.

### Public Engagement Demonstrating Effects of Sea Level Rise
An interactive online web map was created that allows the public to view, interact, and engage with the 1m, 2m, 3m, and 4m SLR visualised output data to understand how flooding will affect their communities, including important local landmarks, sites, and infrastructure.
For the online interactive map, the same four SLR vectors were used, together with the additional 2021 study area LULC layer, in ArcGIS Online. These five layers were important for public viewing since they demonstrate which areas and land cover will be affected by SLR. A semi-transparent version of all the layers was used for this online map to allow the public to observe which areas of the satellite image will be affected, such as their homes and other personal locations. The layer Municipal Boundaries was also added to the online map to show the extent of the study area.

## 4 Results
### Results of the four study levels of SLR
Figure 2 demonstrates the output of the different elevations of SLR from 1m to 4m, with each level of inundation being darker than the prior level. As can be seen in this map, all the layers combined inundate only a small swath of the study area, with 1m-2m SLR flooding mainly beaches. Along the eastern and western sides of this map there is a lack of inundated area due to the rapid rise in elevation from sea level, caused by cliffs or hills adjacent to the beaches.

\begin{tabular}{c|c c}
_SLR height_ & _Inundation area in km2_ & _Percent of total study area (150.06 km2)_ \\
\hline
_1m_ & 0.78 & 0.5\% \\
_2m_ & 1.09 & 0.7\% \\
_3m_ & 2.93 & 2.0\% \\
_4m_ & 6.39 & 4.3\% \\
\end{tabular}
Table 1: Inundation area and total percentage of each SLR height

Unlike the first two SLR levels, 3m SLR inundates large swaths of land that are not beaches, mostly flooding Ambleside Park in West Vancouver as well as many commercial and residential sites located along Burrard Inlet. There are also small pockets of residential areas along the coast of all three districts that become inundated. The increase from 3m to 4m of sea level produces the greatest jump in inundated area. Whereas 3m SLR starts to inundate areas other than beaches, at 4m SLR most of these same areas would become completely inundated. This is the case for the North Shore Auto Mall as well as the surrounding light industrial area. Interestingly, a large portion of the area inundated at this level is closer to the water than areas inundated at the previous level, specifically regions near the two Narrows. This is most likely due to the raised platforms used for shipping being higher than the regions directly inland. There are still numerous areas that would become islands if inundation occurred at this level, many of which are either railroad lines or large piles of resources such as sulphur.

### Comparison of the Study Data to Readily Available Online Map
Figure 3 (left) is sourced from CoastalDEM, a free worldwide Interactive Sea Level Rise map created by Climate Central. Based on search results, this is one of the most widely used public maps for observing SLR globally. The horizontal accuracy of this online map for locations outside the United States (not including Alaska) is above 30 metres. Specifically for the North Shore of Vancouver, the online map has a horizontal resolution close to 30 metres throughout. CoastalDEM uses the same bathtub approach with hydrologic connectivity as this study. Compared to the 4m SLR layer created in this study, the CoastalDEM 4m SLR layer seems to overpredict inundated areas. This seems to especially be the case for areas considered to be islands in this study's data. In contrast to the results of this study, CoastalDEM predicts these islands as being inundated, just like the surrounding regions.
An explanation of this would be the difference in horizontal resolution between the two models. Since CoastalDEM has a pixel size close to thirty times larger than the model created for this study, areas that are distinctly either inundated or not inundated in this study's model would be mixed pixels in CoastalDEM and therefore counted as inundated. This demonstrates the importance of having the highest horizontal accuracy for SLR, since the lower the accuracy, the greater the tendency to overpredict flooding (Gesch, 2018). The left image of Figure 3 is an example of the perceived overprediction and underprediction of the CoastalDEM model, which predicts the whole south side to be underwater at 4m SLR and almost none of the north side inundated; this further stresses the need to make SLR geovisualisation more realistic and informative for communities. By contrast, the right image of Figure 3, which shows the model created in this study, predicts that the raised rail lines will not be flooded, whereas the surrounding regions will be. The results for the regions inundated at 1m, 2m, 3m, and 4m SLR do not show a uniform increase in flooded area but demonstrate that with each additional metre of SLR the percentage of the total area inundated increases ever more steeply.

Figure 3: Left is a picture from the CoastalDEM online map; right is an example of the same location from this study

### Results of 2021 LULC
Once the 10N area was downloaded, masks comprising each SLR layer from 1m to 4m as well as the created Coastline for the whole study area were applied to this Esri LULC. There were five different LULC classes: Water, Trees, Grassland (which Esri names "Rangeland," but for this study was termed "Grassland" since almost all these areas are either large grass areas of parks or golf courses), Bare Earth, and Built Areas. Each of these classifications is self-explanatory besides Built Areas, which includes single houses, apartments, and industrial and commercial areas. Table 2 demonstrates the LULC (Figure 4) classes for each metre of SLR. For 1m SLR the two most numerous classes were Built Areas and Water. Even though a mask of the high tide coastline was used, there were still some LULC pixels classified as Water from the surrounding saltwater bodies of Burrard Inlet, Howe Sound, and Indian Arm. This is likely due to mixed pixels that were Water and another class. Moreover, the LULC used had a resolution of 10m rather than the 1m of the DEM. These Water class pixels also appeared to be the mouths of rivers and streams. For 1m SLR there were barely any pixels belonging to the classes Grassland, Bare Earth, or Trees. With each subsequent level of SLR the pixel percentage of each class increased, with the exception of Water. This is due to the possibility that most of the pixels classified as Water were the coastline or river mouth pixels already counted at 1m SLR. There was an increase of less than 300 Water pixels from 1m to 2m SLR, which is the third lowest increase. Even though the smallest increase, less than 100 pixels, was for the class Bare Earth, the increase for this class was over 0.2%. This was even more the case for the class Grassland, which increased by roughly 1.5 percentage points. The opposite was true for Water, where the class decreased by over 10 percentage points at this level of SLR. The main increase at 2m SLR was in Built Areas, with over 50% of the LULC belonging to this class. 3m SLR shows a similar pattern of pixel change to 2m SLR.
Built Areas increased at this level of SLR by almost 16,000 pixels, an increase of over 20 percentage points, with 71.19% of all pixels belonging to this class. Besides the percentage of Water decreasing, the percentage of Grassland also decreased at this level of SLR compared to 2m SLR. Most of the new area that would be inundated from 3-4m of SLR would be Built Areas. Pixels classified as Built Areas made up 82.81% of the total area that would be inundated if 4m SLR occurred in this study area. Other than Built Areas, each of the other classes decreased in overall percentage, with both Grassland and Water falling by more than half compared to 3m SLR. Moreover, for the first time there were more pixels classified as Trees than ones classified as Water. Compared to the whole study area, the class Trees is significantly more common than in the areas that would be inundated. This is because this class is mainly found at very high elevation, as these regions are usually part of one of the three major mountains in the study area.

\begin{table}
\begin{tabular}{c|c c c c c c}
_SLR_ & _Water_ & _Trees_ & _Built Areas_ & _Bare Earth_ & _Grassland_ & _No Data_ \\
\hline
_1m_ & 3728 at 47.94\% & 203 at 2.61\% & 3553 at 45.71\% & 235 at 3.02\% & 57 at 0.73\% & \\
_2m_ & 4039 at 37.13\% & 735 at 6.76\% & 5544 at 50.96\% & 326 at 3.00\% & 235 at 2.16\% & \\
_3m_ & 4390 at 14.97\% & 2827 at 9.64\% & 1079 at 71.19\% & 436 at 1.49\% & 596 at 2.03\% & \\
_4m_ & 4573 at 7.16\% & 5224 at 8.18\% & 52868 at 82.81\% & 574 at 0.90\% & 607 at 0.95\% & \\
_Study Area_ & 17818 at 1.19\% & 71966 at 47.96\% & 74101 at 49.38\% & 6211 at 0.41\% & 15866 at 1.06\% & 9 at \textless{}0.01\% \\
\end{tabular}
\end{table}
Table 2: Number and percentage of 10m resolution pixels per LULC class for each SLR height

Figure 4: 2021 LULC with inundated areas highlighted

### Results from Future LULC in Inundated Areas
LULC is an important part of SLR based studies because it demonstrates what types of areas will be affected by inundation (Rwanga and Ndambuki, 2017; Lentz et al., 2019). Artificial Intelligence (AI) techniques like Deep Learning have been used in analysing remote sensing data at a rapidly increasing rate. The techniques that have been created using Deep Learning algorithms have improved the ability to categorize remote sensing images (Campos-Taberner et al., 2020). These general systems use a Deep Learning Convolutional Neural Network (CNN) approach, which is the main method used in AI-based visual systems for many tasks including segmentation and identification (Solorzano et al., 2021). Land cover change is the loss of natural areas, usually forests, grasslands and other natural environments, to urban development (Mihailescu and Cimpeanu, 2020). The AI technique called Multi-Layer Perceptron (MLP) is used by the Land Change Modeler to predict these future changes. MLP itself is a feedforward artificial neural network (ANN), a class of algorithms that try to identify relationships in data sets by a process loosely modelled on the workings of the human brain (Camacho Olmedo et al., 2018). Markov Chain (MC) techniques, used to find the probability of change, form a simple model that, in LULC analysis, predicts change over time by using past trends to project future ones.
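As a minimal illustration of the Markov-chain step only (the transition probabilities and class proportions below are hypothetical and are not the values fitted by TerrSet), a class-transition matrix estimated from two past LULC maps can be applied repeatedly to project future class proportions:

```python
import numpy as np

# Classes follow the study's scheme. The transition matrix P is hypothetical:
# P[i, j] = probability that a cell in class i at one date is in class j one
# calibration interval (here 1991-2006) later.
classes = ["Waterbodies", "Trees", "Grassland", "Buildings/Roads", "Bare Earth"]
P = np.array([
    [0.99, 0.00, 0.00, 0.01, 0.00],  # Waterbodies mostly persist
    [0.00, 0.97, 0.01, 0.02, 0.00],  # Trees
    [0.00, 0.01, 0.80, 0.18, 0.01],  # Grassland -> Buildings dominates change
    [0.00, 0.00, 0.00, 1.00, 0.00],  # Built areas persist
    [0.00, 0.00, 0.05, 0.20, 0.75],  # Bare Earth
])

# Current class proportions (hypothetical fractions of the study area).
state = np.array([0.01, 0.48, 0.07, 0.43, 0.01])

# Each application of P projects the proportions one interval further ahead.
for step in range(1, 4):
    state = state @ P
    print(f"after {step} interval(s):", dict(zip(classes, np.round(state, 3))))
```

In TerrSet the MLP supplies spatially explicit transition potentials while the Markov Chain supplies the expected quantities of change; the sketch above captures only the latter, aspatial part of the procedure.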
TerrSet's Land Change Modeler uses a combination of MLP and MC to quantify spatiotemporal change; MLP models the transitions whereas MC models the future predictions (Shen et al., 2020). Since the LULC of the study area is likely to change by the time SLR occurs, a Land Change Model was created to predict what the LULC would be in 2100 (Figure 7), 2200 (Figure 8), and 2300 (Figure 9). These three dates were used because the City of Vancouver predicts that around 2100 there will be a 1.0-1.4m rise, and in approximately 2200 there will be a 2m rise. The date 2300 was used since the IPCC predicts that SLR under RCP 8.5 would be between 1.67-5.61m around 2300 (Horton et al., 2020). To predict future changes, three past LULCs were created from their satellite images (1991, 2006, and 2011), since two maps were needed to compare differences and the last map was used to validate the TerrSet AI Change Model. This AI Change Model allows users to train machine learning models by using time-series satellite imagery. The 1991 LULC, created with MAXLIKE from Landsat 5 bands 2, 3 and 4, was masked with the Coastline shapefile to cover the same area as the study area (150km\({}^{2}\)). Trees had the largest area, 72km\({}^{2}\), which is almost the same as in the 2021 LULC created by Esri. On the other hand, the Waterbodies class had a much smaller area in the 1991 map compared to the 2021 LULC. This is probably due to the following two reasons. The first is that the 2021 LULC used Sentinel-2 images with a resolution of 10m whereas the 1991 LULC had a resolution of 30m; rivers and other waterbodies which are quite narrow would be more likely to be identified in the 2021 LULC. Secondly, some of the surface within the study area was classified as sea water in the 2021 LULC whereas none was in the 1991 LULC, probably reflecting different tidal levels between the two LULCs. Moreover, Grassland and Buildings are quite different compared to the 2021 LULC, with Grassland at 20km\({}^{2}\) (compared to 1.5km\({}^{2}\) in 2021) and Buildings at 55km\({}^{2}\) (compared to 74km\({}^{2}\) in 2021). Again, there appear to be two major reasons for these differences. Since there was a great deal of development in the study area during the last thirty years, it makes sense that the Buildings class would be much lower in 1991 compared to 2021. Moreover, it seems that most of the area that is part of the class Grassland in 2021 is either golf courses or parks, whereas the 1991 map also includes residential lawns. This is probably due to the difference in classification approach between the two LULCs. Bare Earth had hardly any pixels in the area, less than 0.01km\({}^{2}\), and this is similar for the 2021 LULC. The LULCs for the other two years of 2006 and 2011 are quite similar in class distribution to the 1991 LULC. The only major differences are the continued increase of the class Buildings and the decrease of the class Grassland. The area of the Buildings class increased by 3km\({}^{2}\) to 58km\({}^{2}\) in 2006 and by another 6km\({}^{2}\) to 64km\({}^{2}\) in 2011. On the other hand, the Grassland class decreased by 4km\({}^{2}\) to 16km\({}^{2}\) in 2006 and to 11km\({}^{2}\) in 2011. The overall trends of the increase of Building areas and the decrease of Grassland are predictable since the most likely change would be for vegetation to become buildings due to urbanization.
However, the rapid urbanization in the five years between 2006 and 2011 is surprising, considering that the rate of change doubled compared to the fifteen years between 1991 and 2006. There are a few reasons that this rapid urbanization could have occurred between these dates. Firstly, there was rapid development in this location during this period for the 2010 Winter Olympic Games, including the expansion of the North Shore Highway connecting Vancouver to the Whistler ski resort. Moreover, this region has undergone densification, with single house properties being replaced by condominiums or apartments, leaving less area for lawns and gardens. Lastly, many recent cut-blocks in the forested higher elevations of the study area were classified as Grassland in 1991 but were classified as Trees in the following years. From the 1991 and 2006 LULCs and the six structure variables, a Transition Sub-Model was created using MLP. Before creating the Sub-Model, a change map was created which ignored any transitions of fewer than 5000 cells. The only transition that fulfilled this requirement was Grassland to Buildings or Roads. The reason such a high cell threshold was used is that the MLP transition Sub-Models would have an accuracy of less than 50% if transitions of fewer than 5000 cells were included. The MLP ran through 6014 samples per class, and the overall accuracy of this model was 67.60%. The overall skill measure was 0.3520, whereas the Transition: Grassland to Buildings or Roads had a skill measure of 0.4952, and Persistence: Grassland had a measure of 0.2087. The tabulated results are shown in Tables 4 and 5. Out of all the variables, the most influential one in this model is Distance from Roads. The overall accuracy of the model decreased by over 6 percentage points when it was held constant. By itself it had an accuracy of 63.52%, which was almost 7 percentage points higher than the next highest. On the other hand, Distance from Rivers was the least influential since, when it was changed to a constant, the overall accuracy dropped by less than one percentage point. When it was used as the sole variable, the model had an accuracy of 48.67%. Distance from Disturbance had the lowest accuracy when it was used as the only variable, but when it was turned into a constant, it was the third most influential variable. The difference between these two accuracy percentages could be due to Distance from Disturbance relying on other variables to function well. The other variables had similar mid-level rankings both when changed to a constant and when used as the sole variable.
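The constant-variable sensitivity test reported above can be sketched as follows; this is a simplified stand-in (synthetic data, scikit-learn's MLPClassifier with the same seven hidden neurons as Table 4) and not TerrSet's actual implementation, which may handle the constant substitution differently:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the six driver rasters sampled at candidate cells
# (Elevation, Distance from Roads, Slope, Distance from Disturbance,
#  Distance from Urban, Distance from Rivers) and a binary outcome
# (1 = Grassland converted to Buildings/Roads, 0 = Grassland persisted).
X = rng.normal(size=(6000, 6))
y = (X[:, 1] + 0.5 * X[:, 4] + rng.normal(size=6000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(7,), max_iter=1000, random_state=0)
mlp.fit(X_tr, y_tr)
baseline = mlp.score(X_te, y_te)
print(f"all variables: {baseline:.3f}")

# "Force a single independent variable to be a constant": replace one column
# with its training mean and measure the drop in test accuracy.
for i in range(X.shape[1]):
    X_const = X_te.copy()
    X_const[:, i] = X_tr[:, i].mean()
    acc = mlp.score(X_const, y_te)
    print(f"variable {i + 1} constant: {acc:.3f} (drop {baseline - acc:.3f})")
```

The larger the accuracy drop when a variable is held constant, the more influential that variable is, which is the ranking reported in Table 6.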
\begin{table}
\begin{tabular}{c|c}
_Input layer neurons_ & 6 \\
\hline
_Hidden layer neurons_ & **7** \\
_Output layer neurons_ & **2** \\
_Requested samples per class_ & **6014** \\
_Final learning rate_ & 0.0005 \\
_Momentum factor_ & **0.5** \\
_Sigmoid constant_ & **1** \\
_Acceptable RMS_ & 0.01 \\
_Iterations_ & 10000 \\
_Training RMS_ & 0.4536 \\
_Testing RMS_ & 0.4565 \\
_Accuracy rate_ & 67.60\% \\
_Skill measure_ & 0.3520 \\
\end{tabular}
\end{table}
Table 4: General model information - parameters and performance

\begin{table}
\begin{tabular}{c|c}
_Class_ & _Skill measure_ \\
\hline
_Transition : Grassland to Buildings or Roads_ & **0.4952** \\
_Persistence : Grassland_ & **0.2087** \\
\end{tabular}
\end{table}
Table 5: Model breakdown for transition and persistence

\begin{table}
\begin{tabular}{c|c c c}
_Model_ & _Accuracy (\%)_ & _Skill measure_ & _Influence order_ \\
\hline
_With all variables_ & **67.60** & **0.3520** & **N/A** \\
_Var. 1 constant_ & **64.87** & **0.2974** & **2** \\
_Var. 2 constant_ & **66.73** & **0.3347** & **6** (least influential) \\
_Var. 3 constant_ & **65.35** & **0.3071** & **3** \\
_Var. 4 constant_ & **61.14** & **0.2229** & **1** (most influential) \\
_Var. 5 constant_ & **66.68** & **0.3337** & **5** \\
_Var. 6 constant_ & **66.05** & **0.3210** & **4** \\
\end{tabular}
\end{table}
Table 6: Forcing a single independent variable to be a constant

Figure 6: Graph demonstrating accuracy when only one variable is used

From this MLP a prediction LULC can be made. The first prediction created was of 2011, because this allows the past mapped LULC of the same year to be used to validate the MLP model result. The validation showed three different outcomes: hits, misses and false alarms. Hits indicate that the MC correctly predicts that a cell will change; misses show that the MC did not predict the LULC would change although a transition did occur; and false alarms indicate that the MC predicted a transition that did not occur. In general, there were not many hits across the predicted map versus the actual LULC map. This is probably because only one transition was tested, whereas many different transitions actually occurred during this time period. However, when a lower accuracy model was created with six transitions, there were still only a few more hits. The largest number of false alarms occurred where a transition was predicted from Grassland to Buildings but the cell stayed Grassland. These occurred around parks and golf courses where it would not make sense for an expansion to occur. There were two transitions that accounted for most of the misses in this model. One was Trees that were predicted to persist as that class but instead turned to Buildings; the other was Grassland that was also predicted to stay as Grassland but instead turned to Buildings. There were more misses of the latter. In general, the model is not complex enough to account for building permits, densification, or expansion of highways. Therefore, even if this model had a higher accuracy rate with more transitions, there would still be many pixels that are either false alarms or misses due to the complexity of the study area. The 2100 LULC (Figure 7), 2200 LULC (Figure 8) and 2300 LULC (Figure 9) rasters were calculated in TerrSet's Land Change Modeler. These LULCs for the North Shore are used to show the predicted land change by the time the sea level rises to a certain height, based on the results of the MLP-MC.
When these three LULCs are compared with the current LULC that Esri created for 2021, a one-to-one comparison cannot be used since Esri used Sentinel-2 bands (10m resolution) whereas the data created in TerrSet used Landsat-5 bands (30m resolution). When a mask of the areas that will be inundated is applied, the pixel counts differ between the 2021 LULC and the future predicted LULCs, with the areas at 1m and 2m SLR showing a larger area in the 2021 LULC. By contrast, 4m SLR has a larger area with the predicted 2300 LULC. Moreover, these two LULC types use different classifications for each type of land cover, where beaches considered to be Water in the Esri LULC are Buildings or Roads in the TerrSet LULCs. This heavily impacts the 1m SLR since the 2100 predicted LULC has 0% of pixels in the Waterbodies classification and 97% in the Buildings or Roads classification; the 2021 LULC has 47.94% of pixels as the Water class and 45.71% as Built Areas. However, even with beaches making up a large part of the 1m SLR zone, over the more than 275 years predicted for 4m of SLR to occur there will not be a large increase in the class Buildings or Roads within the inundation zone, since most built areas in the study area will not be inundated due to the rapid increase in elevation away from the coastline. For example, when the 4m SLR area is used to compare the number of Buildings or Roads pixels between the 2100 and 2300 LULCs, there were only 12 more pixels classified as Buildings or Roads in the 2300 LULC than in the 2100 LULC. This demonstrates there is not a large increase of predicted built area in this inundated zone.

\begin{table}
\begin{tabular}{c|c c c c c}
_SLR (year)_ & _Water_ & _Trees_ & _Buildings or Roads_ & _Bare Earth_ & _Grassland_ \\
\hline
_1m (2100)_ & 0 at 0\% & 2 at 0.41\% & 473 at 97.12\% & 3 at 0.62\% & 9 at 1.85\% \\
_2m (2200)_ & 0 at 0\% & 19 at 2.50\% & 669 at 88.14\% & 5 at 0.66\% & 66 at 8.70\% \\
_4m (2300)_ & 19 at 0.30\% & 423 at 6.58\% & 5543 at 86.23\% & 135 at 2.10\% & 308 at 4.79\% \\
\end{tabular}
\end{table}
Table 8: Number and percentage of 30m resolution pixels per predicted future LULC class for each SLR height

Figure 7: Predicted LULC for when 1m SLR is likely to occur in the study area
Figure 8: Predicted LULC for when 2m SLR is likely to occur in the study area
Figure 9: Predicted LULC for when 4m SLR is likely to occur in the study area

Even though this LULC land change process had a high accuracy rate, the validation between the LULC prediction and the actual LULC displayed numerous misses. This might be due to the unusual land use situation of the North Shore of Vancouver, where both its topography and its population growth produce quite an unpredictable land use trajectory. This region, surrounded by coastal waters and steep mountains with minimal suburban sprawl, has several golf courses and numerous parks that probably will not transition into buildings. While it has some residential growth up the sides of the three mountains, this is quite limited. Instead, the region mainly develops in its densely populated urban areas by replacing single-family houses with condominiums and apartments. All these factors make it difficult to predict future land use.
### Results from Online Map
A layer for buildings of the study area was created so that a 3D visualization would include not only the DTM and SLR layers, but also individual buildings affected by inundation. Ships were sometimes mistaken for buildings, but this error did not affect the final product since the current study area is land-based, only analysing buildings that would be directly affected by flooding. This included areas that barely touched any part of the 4m inundation zones. Although around 1000 polygons were created overall, some of these polygons were parts of a single building. In general, the heights of most of the buildings were between 5m and 20m, with a few apartment buildings being over 40m. Only a few buildings would be completely inundated if 4m of SLR does occur. Showing the public the individual buildings that will be affected by SLR demonstrates the scale of businesses and homes that will be destroyed in the future due to Climate Change. The 3D visualization of inundation impacts due to Climate Change was created using a 3D Local Scene containing the SLR layers, the 1m resolution DTM, the buildings that will be partially inundated by SLR, and a satellite image of the study area. Each of the SLR layers and buildings was extruded to show the height of inundation that could occur and how it will affect the buildings situated in this zone. Besides the online map, this 3D visualization allows the public to view major hotspots for flooding by using the interactive map to focus on specific areas. Popups of important locations were created on the online map (Figure 10) for this study to demonstrate to the public the number of important features or sites that will be affected by SLR. Nine popups were created, which include a nature reserve, a ferry port, international shipping ports, as well as one of the largest shopping malls in Canada. Other than Horseshoe Bay, which houses the ferry port and town in the far western section of this region, all of the other popups are in the two southern areas of the Narrows that will be severely affected by SLR. Seven of the major features will be partially or fully inundated only at 4m SLR, whereas Ambleside Park will be fully flooded at 3m SLR. Even though most of the flooding in Horseshoe Bay will occur at 2m SLR, only a small portion, mostly on the west coast, will be inundated, not significantly affecting the ferry terminals. The interactive map popups, which include a non-copyrighted image, a text description and a link to the respective website, show the public well-known or important local environmental or economic locations that will be inundated due to SLR. Raising awareness by showing the public future locations at risk of inundation can create a personal reaction to the impacts of Climate Change (Retchless, 2018). This interactive map was then turned into an Esri Story Map. The link to the Story Map is: [https://storymaps.arcgis.com/stories/1d1ee6d911b84e66bb76714964cef6af](https://storymaps.arcgis.com/stories/1d1ee6d911b84e66bb76714964cef6af). Besides this map, a 3D flyover video, a short blurb about the location, and a description of the data in the interactive map were included.

## 5 Discussion
Using the results of this study, a more precise, publicly accessible interactive online map was successfully created that allows the public to explore the impact of SLR due to climate change on key well-known places of the North Shore of Vancouver, Canada.
Retchless (2018) discussed how these devastating SLR predictions are difficult for the public to fully understand and noted the importance of visualizing which local areas will be inundated, since a rise of several metres is difficult for the public to conceptualize. Through the interactive online map that was created, the local population of the North Shore of Vancouver can observe that most of the inundation in this study area occurs between 3m and 4m SLR. Moreover, the present and predicted future LULCs show that most of the areas that will be inundated are built areas such as infrastructure, roads, buildings and other non-natural surfaces. By using Deep Learning to extract the building footprints of the region, this study found that hundreds of buildings will be affected by the predicted inundation, although this mainly occurs above 2m SLR. Even though the results indicate that most of the study area is not inundated until 3m SLR, the municipalities of the study area will be in a dire situation in the future without global action to mitigate climate change. This is due to the number of important industries that would be partially or fully inundated if 4m SLR occurs in this region.

Figure 10: Section of the interactive map showing part of the First Narrows with a popup of Pembina Canada Terminals being open

Besides the two shipping terminals in this study area that were used as popups in the Story Map, there are several other terminals that comprise the Port of Vancouver which will be fully flooded. Moreover, these North Shore terminals, which mostly handle raw resources such as petroleum coke and sulphur, would pollute Burrard Inlet, as would the waste of the North Shore Waste and Recycling Centre. Even if all these resources and waste were moved, there would still be trace amounts of substances left to pollute the inlet and surrounding waterways. Other than its rail line, the G3 Terminal will be fully flooded at 4m. This terminal exports most of the Canadian grain that is not transported to the United States to Asia and the rest of the world. If this area were flooded, it would cause severe economic repercussions not only for the North Shore but also for the rest of Canada. In addition, most of the rail lines close to the coast are used by freight trains to transport grain and resources to the terminals. These lines could be moved to higher elevation as a precaution, but they would still have to be connected somehow to the Port of Vancouver. Even though most of the industrial and economic transportation will be affected by 3m and 4m SLR in this region, this is not the case for road transportation, since the two bridges as well as the major highway of this region would not be significantly inundated. It is also important to discuss our results in light of the key coastal geovisualisation studies. As the first such perspective article on SLR-related geovisualisation, Richards (2015) offered some useful insights on effective research communication through risk-based interactive geovisualisation technologies and on the productive use of online, participatory technologies that promote citizen engagement in science.
The Risk Finder tool ([https://sealevel.climatecentral.org/about/](https://sealevel.climatecentral.org/about/)), launched in 2013 by the non-profit climate communication and research group Climate Central, is the most notable initiative enabling SLR geovisualisation for USA coastal communities using high-resolution LiDAR datasets. Similar LiDAR-based geovisualisation was lacking for our study area, a gap we have addressed through this work. Newell and Canessa (2017) proposed a place-based concept for developing geovisualisations for coastal planning, rightly acknowledging how different user groups relate to coastal environments as "places" of values and meanings, rather than simply the 'spaces' that a traditional spatial analysis in GIS focuses on. This concept of "places" gains relevance when a study is performed at a local or regional scale, an approach that we have adopted in our work. Wherever possible, we have discussed our results in terms of local areas and buildings of relevance, offering a context to the local user groups of our interactive map. In the Canadian context, Minano et al. (2018) developed a Geoweb tool called AdaptNS for supporting local climate change adaptation efforts in coastal communities of Nova Scotia. AdaptNS is a web-based geovisualization tool that displays interactive inundation maps generated using LiDAR data, local climate change projections of SLR, and storm surge impacts between the years 2000 and 2100 (Minano et al., 2018). In our paper, we have provided similar geovisualisation for another long Canadian coastline and demonstrated the usability of LiDAR data for making the geovisualisation reliable and effective.

## 6 Conclusion
The aim of this study is to produce an interactive online map that is not only accessible to the public, but also allows them to interact with the newest high-resolution data and techniques possible within the scope of this study. Besides the interactive online map, 3D visualizations of the 1m, 2m, 3m, and 4m SLR layers and a newly created 3D extruded building layer were used to create a 3D flyover animation to further engage the public. This study effectively meets the three primary objectives described in the Aims and Objectives section of the Introduction by utilizing an interactive map (3) and land use data (2) to showcase the impact of sea level rise (1) on the North Shore of Vancouver. This project demonstrates how 4.3% of the study area on the North Shore of Vancouver could be inundated due to SLR, with major industry, protected areas and commercial sites being destroyed. Through the data that was created, the local community can observe the extent and severity of flooding that might occur if mitigation is not implemented to curb Climate Change. The interactive map setup encourages user interaction, exploration, and reflection on SLR impacts on their communities. While this 4.3% SLR inundation will flood a great deal of the coastline of the North Shore of Vancouver, it should be noted that most of the major flooding that might occur would disrupt commercial, environmental, and industrial areas only at 3m and 4m SLR. This level of SLR represents some of the worst-case scenarios and will likely not occur until the end of the 2200s or early 2300s. However, to avoid such scenarios, mitigation efforts to curb the effects of Climate Change need to be undertaken without delay so that such a future does not become a reality.
Throughout this paper, we have taken steps to provide a detailed and easily understandable description of methods, so that it becomes easier to reproduce these steps for other regions, aiding SLR geovisualisation for a wider coastal community. The next step for us is to work towards effectively combining geo and demographic datasets to develop risk maps for a wider study area, resulting in a publicly-accessible interactive GIS capable of sourcing local datasets on SLR and social variables (e.g., demography and property values).
2308.07964
Quantum computing for chemistry and physics applications from a Monte Carlo perspective
This Perspective focuses on the several overlaps between quantum algorithms and Monte Carlo methods in the domains of physics and chemistry. We will analyze the challenges and possibilities of integrating established quantum Monte Carlo solutions in quantum algorithms. These include refined energy estimators, parameter optimization, real and imaginary-time dynamics, and variational circuits. Conversely, we will review new ideas in utilizing quantum hardware to accelerate the sampling in statistical classical models, with applications in physics, chemistry, optimization, and machine learning. This review aims to be accessible to both communities and intends to foster further algorithmic developments at the intersection of quantum computing and Monte Carlo methods. Most of the works discussed in this Perspective have emerged within the last two years, indicating a rapidly growing interest in this promising area of research.
Guglielmo Mazzola
2023-08-15T18:01:28Z
http://arxiv.org/abs/2308.07964v3
# Quantum computing for chemistry and physics applications from a Monte Carlo perspective

###### Abstract
This Perspective focuses on the several overlaps between quantum algorithms and Monte Carlo methods in the domains of physics and chemistry. We will analyze the challenges and possibilities of integrating established quantum Monte Carlo solutions in quantum algorithms. These include refined energy estimators, parameter optimization, real and imaginary-time dynamics, and variational circuits. Conversely, we will review new ideas in utilizing quantum hardware to accelerate the sampling in statistical classical models, with applications in physics, chemistry, optimization, and machine learning. This review aims to be accessible to both communities and intends to foster further algorithmic developments at the intersection of quantum computing and Monte Carlo methods. Most of the works discussed in this Perspective have emerged within the last two years, indicating a rapidly growing interest in this promising area of research.

###### Contents
* I Introduction
* II Quantum hardware
* III Algorithms and quantum advantage
* IV Fermionic Hamiltonians
* V Variational quantum algorithms
  * V.1 The variance problem
  * V.2 The noisy optimization problem
* VI Fault-tolerant quantum chemistry
* VII The local energy in variational Monte Carlo
  * VII.1 The local energy in practice
  * VII.2 Pauli measurements versus local energy
  * VII.3 Beyond variational: projective methods
* VIII Quantum computing meets quantum Monte Carlo
  * VIII.1 The local energy in quantum computing
  * VIII.2 Classical-inspired circuits for VQE, quantum-inspired ansatze for VMC
  * VIII.3 Variational real-time dynamics and updates in parameters space
* IX Classical Monte Carlo meets quantum computing
  * IX.1 Autocorrelation of a Markov chain
  * IX.2 Sampling from a classical Boltzmann distribution using wavefunction collapses
  * IX.3 Quantum walks and quantum Metropolis algorithms
  * IX.4 Sampling with quantum annealers or simulators
* X Conclusions

## I Introduction
The solution of quantum many-body problems in chemistry and physics is one of the most anticipated applications of a quantum computer, as first proposed by Feynman.[1] Over time, it has been proposed that many other classes of problems can benefit from quantum speedup, including cryptography, data science, machine learning, finance, linear algebra, and optimization.[2] However, physics and chemistry remain among the main candidates for demonstrating practical quantum advantage over conventional methods because they contain classes of problems with the following characteristics: (i) they are very challenging for classical computation, and exponential quantum speed-ups are possible, and (ii) they are defined by a small number of variables, thus featuring a limited cost of data loading and reading.[3] Among all possible problems in physics, here we will focus on electronic structure and spin models (including classical spin models), as their implementation requires a relatively lower cost compared to models from, e.g., high-energy physics. Excellent review articles on quantum algorithms for quantum chemistry[4, 5] and materials science[6] were published a few years ago, the latest in 2020. The purpose of this manuscript is not to duplicate such presentations but rather to concentrate on a frontier topic that is becoming relevant due to several works that have appeared in recent months. However, notice that about 40% of the references cited in this Perspective are quite recent, i.e. from 2021 onwards.
This indicates how fast the whole field of quantum computing is growing. We will analyze points of contact between quantum computing and Monte Carlo (MC) and quantum Monte Carlo (QMC) methods.[7] There are many common themes between the two worlds. Shot noise arising from the measurements of the quantum register finds a parallel in the statistical roots of Monte Carlo. Both methods require extracting and utilizing expectation values computed in the presence of statistical noise. The existence of shot noise is one of the major issues for near-term simulations: in a variational setting this implies a problematically large number of circuit repetitions.[8] On the other hand, such uncorrelated wave function collapses can have computational value if used as importance sampling in Monte Carlo. We will also report attempts of cross-fertilization between the two fields in designing variational ansatze and optimization methods for ground state and dynamical problems. Additionally, we discuss the requirements that a classical-quantum hybrid QMC algorithm, relying on a quantum computing subroutine, must meet. Regarding classical applications, we review several proposals for accelerating the Metropolis algorithm using quantum hardware and examine their practicality under realistic hardware constraints. Therefore, the purpose of this manuscript is to review various Monte Carlo techniques that can be useful for creating new quantum algorithms or designing new applications of already-known quantum primitives. Conversely, this Perspective also aims to be an accessible presentation of the potential and limitations of quantum computing, for Monte Carlo experts and, more broadly, computational physicists. This Perspective is timely as (1) on the experimental side, the first steps toward fault-tolerant hardware have been made.[9; 10; 11; 12] Moreover, experiments at the threshold of quantum advantage for quantum dynamics are now possible given the existence of \(\sim 100\)-qubit devices, albeit noisy ones.[13] (2) On the quantum variational algorithm side, we have increasing evidence that the quantum measurement noise -the focus of this Perspective- is the major, unavoidable bottleneck of near-term quantum algorithms.[8; 14] (3) In the last year, thorough resource assessment papers for quantum chemistry have appeared,[15; 16; 17] which clearly reaffirm that the threshold for quantum advantage for ground-state problems, at least with today's algorithms, lies deep in the future fault-tolerant quantum computing regime, and question the previously claimed exponential advantage for ground-state electronic structure applications.[18] (4) Finally, we observe the emergence of a new class of hybrid quantum algorithms revisiting classical and quantum Monte Carlo, opening completely new possibilities for quantum advantage in these areas. In this Perspective we report on about twenty recent works (i.e., works that appeared in the last two years) that aim to combine quantum computing and Monte Carlo in several sub-fields: hybrid quantum computing-QMC, variational circuit development, parameter optimization, time-dependent simulations, and classical sampling. The manuscript is organized as follows. In Sec. II we briefly mention the types of quantum hardware and their fundamental limitations, namely the hardware errors in the NISQ regime, and the fairly long (compared to conventional CPUs) gate time of future, error-corrected machines. In Sec.
III we introduce general quantum algorithms for physics and the requirements for quantum advantage. In Sec. IV we review the basics of the encoding of a fermionic hamiltonian into a quantum computer. After these introductions, in Sec. V, we discuss the variational method, the simplest kind of algorithm for physics and chemistry, and its limitation due to the shot noise. In Sec. VI we instead describe how the same calculation could be done in the fault-tolerant era. The more technical Sec. VII introduces the local energy estimator that is central in QMC and explains why these algorithms, while still being stochastic, do not suffer from the severe variance problem of variational quantum algorithms. In Sec. VIII we review attempts to create hybrid quantum-classical QMC algorithms, as well as other points of contact between QMC and quantum algorithms. Finally, in Sec. IX, we reverse our perspective and discuss quantum computing methods to speed-up classical sampling, using digital machines and quantum simulators, annealers. The concept map of the Perspective is shown in Fig. 1. ## II Quantum Hardwares Unlike conventional reviews on algorithms for quantum chemistry, it is necessary to briefly introduce the hardware on which these must be executed. Understanding the possibilities and limitations of the hardware is crucial to get an idea of the feasibility of current and future algorithms. There are many types of quantum computers and quantum simulators. The difference between the two classes is that a quantum computer is built with the idea of being universal, therefore able to support any type of program. A quantum simulator is designed to perform a narrower range of tasks, such as optimizing classical cost-functions,[19; 20] or simulating particular Hamiltonians.[21] To tend towards universality, a quantum computer must support the execution of basic local operations, called _quantum gates_, just like a regular computer. For example, an architecture capable of synthesizing the following set of gates \(\{\mathsf{CNOT},\mathsf{H},\mathsf{S},\mathsf{T}\}\) or \(\{\mathsf{R}_{\mathrm{x}}(\theta),\mathsf{R}_{\mathrm{y}}(\theta),\mathsf{R}_ {z}(\theta),\mathsf{S},\mathsf{CNOT}\}\) that act on a maximum of two qubits (see textbooks on quantum computing for the definition of these gates[22]), is capable of approximating any possible unitary operation on \(n\) qubits.[22] On the other hand, a special-purpose quantum simulator can implement the global operation of interest directly, such as \(e^{iHt}\), where \(H\) is the quantum many-body Hamiltonian operator, without the problem of having to compile it using a gate set.[23] Currently, the greatest engineering effort is focusing on gate-based "digital quantum computers", although it is not excluded that algorithms of interest for chemistry and materials science can be executed on quantum simulators. There is then a second distinction that is important to keep in mind. At present, the size of digital quantum computers is on the order of \(n\approx 100\) qubits.[13; 24] In principle, these computers have access to a \(2^{\mathcal{O}(100)}\) dimensional Hilbert space. However, practical quantum advantage has not been achieved yet. The reason is that these machines are not properly digital ones but are subject to hardware noise. For example, each gate has a finite insuccess probability. 
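To get a feeling for how quickly a finite per-gate failure probability degrades a computation, the following minimal sketch (a crude back-of-the-envelope model assuming independent, uncorrelated gate errors, not a faithful noise simulation of any specific device) estimates the probability that a circuit runs without a single fault.

```python
# Crude estimate of circuit-level fidelity under independent gate failures.
# Assumes every gate fails independently with probability p_err; real devices
# also suffer correlated errors, crosstalk, and decoherence during idling.

def success_probability(n_gates: int, p_err: float) -> float:
    """Probability that none of the n_gates gates fails."""
    return (1.0 - p_err) ** n_gates

p_err = 1e-3                      # representative NISQ two-qubit gate error rate
for n_gates in (10**2, 10**3, 10**4, 10**5):
    print(f"{n_gates:>7d} gates -> success probability "
          f"{success_probability(n_gates, p_err):.2e}")
```

With about \(10^{4}\) gates, the size of a rather modest chemistry circuit, the probability of an error-free run is already below \(10^{-4}\) in this simple model, which is precisely the accumulation effect discussed next.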
As we will see, circuits necessary to write a quantum algorithm for chemistry require a considerable number of gates, and therefore even a small infidelity propagates devastatingly and the total error accumulates until it completely compromises the success of the algorithm. Current hardware is built with the idea of executing a universal gate set, but is still affected by hardware noise. These devices generally go by the name of NISQ (Noisy Intermediate-Scale Quantum) machines.[25] The final step to achieving a true digital quantum computer is to realize hardware capable of executing gates without errors, just like their classical counterparts. Many detractors of quantum computers base their skepticism on the impossibility of maintaining a macroscopic coherent wavefunction for an arbitrary number of operations.[26] Fortunately, there is a theorem that does not exclude the possibility of a digital universal computer: below a certain noise threshold, it is possible to correct this hardware noise faster than it can accumulate during runtime (the same would not hold for analog computers).[22] Practical proposals to realize this idea include using multiple physical qubits to realize a logical qubit and more operations to realize a single "error-corrected" logical gate.[27; 28] It is important to understand which types of algorithms have the hope of being executed on a NISQ machine and which will require a fault-tolerant machine, to properly contextualize the ever-growing literature on quantum computing for quantum chemistry. At present, sub-communities have formed: some are dedicated to developing NISQ algorithms, while others, increasingly growing in number, are developing algorithms for the fault-tolerant era. This situation is unprecedented. In classical computing, to attempt a comparison, it would be as if in the 1960s there were a community developing algorithms for chemistry on punched cards, and another preparing for exascale computing without a clear idea of whether and how an HPC facility would be built. However, the technological progress toward a fault-tolerant machine is steady. Several experiments published in 2022-2023 already demonstrated some building blocks necessary for quantum error correction.[9; 11; 12; 29; 30; 31] Clearly, we are still in the infancy of fault-tolerant hardware, and it is not yet clear when a large-scale error-corrected machine, able to accommodate electronic structure calculations, will appear. This also depends on progress on the algorithmic side in compressing memory and runtime resources.

Figure 1: Map of the main links, between quantum algorithms and Monte Carlo methods, contained in this Perspective. Connected green links indicate that fruitful information flow between the two areas has already been established. Disconnected red links indicate topics that still require more investigation or where proposed solutions are not completely satisfactory.

The last concept that needs to be presented in this brief account of hardware is related to the clock frequency of a quantum computer, which will necessarily always be lower than that of classical gates. This is because the execution of a quantum gate requires the manipulation of a wave function by an external control: the quantum gate can never be faster than the classical electronic apparatus that controls it.
At present, the execution time of a noisy \(\mathsf{CNOT}\) or noisy single-qubit rotation is of the order of 100 ns in the NISQ era, for superconducting qubits, corresponding to a clock rate of roughly 10 MHz.[24] The expected logical clock rate in the fault-tolerant regime is much slower, of the order of 10-100 kHz, because every error-corrected gate requires a large number of elementary operations that involve a large number of physical qubits.[32] Often, in the literature, one hears about \(\mathsf{T}\)-depth as a proxy for the complexity of an algorithm.[33; 17; 34] The reason is that truly digital hardware can only operate using a set of discrete gates that can be error-corrected. A rotation of an arbitrary angle does not belong to this category, and therefore every continuous rotation must be compiled into a series of discrete operations, such as \(\mathsf{S}\), \(\mathsf{H}\), and \(\mathsf{T}\) gates. The \(\mathsf{T}\) gates are the most expensive to synthesize (i.e., 100-10000 times more costly than a \(\mathsf{CNOT}\)),[32] and therefore their number, and how many can be executed in parallel, determines the runtime. Since arbitrary-angle rotations are ubiquitous in chemistry (they are needed, for instance, for orbital basis rotations or to realize infinitesimal Trotter steps), fault-tolerant quantum algorithms generally include a large number of \(\mathsf{T}\) gates. It is interesting to note that in the NISQ era, the opposite is true: rotations are comparatively simple gates to perform, while efforts are made to reduce the number of \(\mathsf{CNOT}\) gates, which are currently the noisiest. Recent proposals include the possibility of retaining analog rotation gates alongside error-corrected Clifford gates, which are easier to synthesize.[35; 36] This hybrid approach is interesting but yet to be demonstrated in practical algorithms. Moreover, the gate times depend on the specific hardware architecture. Other platforms such as spin qubits, trapped ions, or photonic hardware will imply different hardware constraints.

## III Algorithms and Quantum Advantage

After this essential overview of hardware, we are now in a position to introduce more concretely the most popular algorithms for the quantum many-body problem. The application where quantum advantage appears most clear and easy to justify is quantum dynamics. In this case, an exponential advantage can be obtained by virtue of the exponential compression of the Hilbert space into a memory that is linear in the number of particles.[37] If we consider, for simplicity, spin-1/2 lattice models composed of \(n\) spins, it is easy to see that an _exact_ simulation becomes unfeasible as soon as \(n\sim 50\). Storing a quantum state of 50 qubits with double-precision coefficients for each of the \(2^{50}\) possible components requires 16 PB of memory. To perform arbitrary discrete-time evolution, we would need to manipulate such an array thousands of times. For instance, direct matrix exponentiation \(e^{iHt}\) for a typical many-body quantum Hamiltonian, \(H\), and 50 spin-1/2 particles would require an array made of \(10^{15}\) entries undergoing a matrix-vector multiplication of size \(2^{50}\times 2^{50}=10^{30}\). The memory requirement for evolving a system of \(N\) electrons in second quantization using \(n\) orbitals is the same. Second quantization offers an optimal memory usage if a Fock encoding is used, where each binary string represents a possible configuration of orbital occupation (see Sec.
IV).[38] As we will see, the choice of second quantization introduces non-locality in the Hamiltonian, which is translated as a sum of tensor products of Pauli matrices (qubits are not fermionic particles) and therefore requires very long circuits. From an asymptotic point of view, first quantization would be a better choice, as it preserves the locality of interactions at the cost of introducing expensive arithmetic operations to calculate the Coulomb term. In addition, an antisymmetric function in real space must be provided.[39] Simulating lattice spin models, therefore, appears to be the most obvious choice in the search for a quantum advantage in Hamiltonian simulations. Beverland et. al[17] shows that a fault-tolerant simulation of a \(10\times 10\) plaquette of the two-dimensional quantum Ising model requires on the order of \(10^{5}-10^{6}\) physical superconducting qubits, using the digital Trotter algorithm. This example is instructive, as it clearly shows that, although the system requires a priori a memory of \(n=100\) logical qubits (1 logical qubit = 1 spin), error correction and extra resources for distilling \(\mathsf{T}\) gates (often called \(\mathsf{T}\)-factories) push the total computation of qubits up to a million. This system sets a lower bound on the resources needed to simulate fermionic systems that are more complex than spin-1/2 models, for the same number of particles/spin-orbitals. Simulating lattice models is also possible with quantum simulators, although with calibration errors, and it is therefore likely that there will be competition between the two quantum computing paradigms towards the first simulation, beyond what is possible classically.[23; 40] Indeed, it is also possible that the first demonstration of practical quantum advantage in real-time simulations will be achieved before the fault-tolerant regime, for example in simulations of dynamical phase transition.[41] At the time of writing this review, researches at IBM showcased a record-sized real-time dynamics of a 2D heavy-hexagon lattice Ising model using Trotterization, 127 qubits, and error mitigation techniques.[13] Just ten days later, approximate tensor network simulations achieved the same result,[42; 43] raising the bar once again to declare a quantum advantage, similar to what happened with the first claim of quantum supremacy, on random circuit sampling.[44; 45; 24] Demonstrating a definitive quantum advantage in quantum dynamics tasks is a less well-defined goal, as classical limits are still not thoroughly explored. One can expect that increasingly sophisticated classical methods will be adopted to counter the new claims of quantum advantage. The case of chemistry is different since classical methods are fairly established, and it is much clearer which electronic structure Hamiltonians are beyond the reach of classical computing. Coming back to our chemistry problems, if the required answer requires a precision that can only be achieved with a long circuit, then we must prepare ourselves for the fact that our algorithm will only be available in the digital era, which could take a decade or more, when a technology capable of controlling millions of qubits will be available. As mentioned, fault-tolerant hardware has a lower clock frequency than a conventional computer, a non-exponential asymptotic speed-up may not be sufficient to guarantee actually shorter runtimes, for reasonable system sizes. Babbush et. al. 
[46] recently discuss how a quadratic speed-up is insufficient for practical advantage in many applications. To conclude, quantum advantage in chemistry problems can be obtained either in several years using a fault-tolerant algorithm with a superquadratic speed-up, or in a heuristic way using NISQ hardwares. In this case, only short-depth algorithms can be used, which often require a classical feedback loop, such as optimization of gate parameters that define the circuit, and repeated executions of the circuit. Variational methods fall exactly into this class of solutions. [47; 48] We will see in the next chapter how these can also be used and their limitations, especially their relationship with a type of noise that cannot be eliminated even in the fault-tolerant regime. ## IV Fermionic Hamiltonians Most applications in chemistry are related to solving the many-electron Schrodinger equation. To clearly understand the problem it is necessary to formalize it to some extent. The field of quantum simulations of electronic structure problems is quite well established, and we refer to reviews [4; 5; 6] for a general introduction and more details. The most general fermionic hamiltonian reads \[H=\sum_{p,q}t_{pq}a_{p}^{\dagger}a_{q}+\sum_{p,q,r,s}u_{pqrs}a_{p}^{\dagger}a _{q}^{\dagger}a_{r}a_{s}, \tag{1}\] where \(a_{p}^{\dagger}\) and \(a_{p}\) are the fermionic creation and destruction operators, which create (annihilate) a particle in the spin-orbital \(p\). To proceed, the next step is to define an encoding from fermionic Hilbert space to qubit space, so that a base vector of the latter, for example 1100, has a unique correspondence in the fermionic space. In second quantization, this is trivial using Fock states. A bit-string denotes the occupation numbers of spin-orbitals, in a chosen ordering. For instance, the string 1100 can represent the Hartree-Fock state of an \(H_{2}\) molecule described with two spatial molecular orbitals. Here, there are two electrons, of opposite spin, occupying the first two spin-orbitals, \(\{\ket{\psi_{g,\uparrow}},\ket{\psi_{g,\downarrow}}\}\), while leaving empty the two higher energy ones \(\{\ket{\psi_{u,\uparrow}},\ket{\psi_{u,\downarrow}}\}\). For ease of notation, we can label this string with its binary index (from left to right, here), \(\ket{1100}=\ket{3}\). The (doubly) excited state, compatible with the symmetries of the molecule, is \(\ket{0011}=\ket{12}\). In the \(H_{2}\) case, the exact ground state (in this small atomic basis) can be expressed as a linear combination of these two strings alone, but in the general case, the ground state of a molecule with \(N\) electrons and \(n\) spin-orbitals is written as a linear combination, \[\ket{\psi}=\sum_{i=0}^{2^{n}-1}c_{i}\ket{i}\, \tag{2}\] where \(c_{i}\) are complex coefficients, and \(\ket{i}\) is a basis vector, which binary format denotes the occupation number of the spin-orbitals in a chosen ordering. Although many \(c_{i}\)'s are zero due to particle and spin conservation, as is known, an exponentially large number of them remains finite when \(N\) and \(n\) increase. The main advantage of a quantum computer is therefore clear. With only \(n\) qubits, it is possible to store in memory a wavefunction with \(2^{n}\) complex coefficients. 
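As a concrete illustration of the occupation-number encoding and of Eq. 2, the following minimal sketch (hypothetical, plain Python only) enumerates the Fock basis strings for two electrons in four spin-orbitals, the setting of the \(H_{2}\) example above, and lists the integer label of each configuration using the same convention as in the text (leftmost bit is the least significant, so \(|1100\rangle=|3\rangle\)).

```python
from itertools import combinations

n_orbitals = 4    # spin-orbitals of H2 in a minimal basis
n_electrons = 2

# Enumerate all occupation-number strings with exactly n_electrons set bits.
# The full Hilbert space has 2**n_orbitals basis vectors; particle-number
# conservation restricts the ground state to this smaller subset.
states = []
for occupied in combinations(range(n_orbitals), n_electrons):
    bits = ["1" if k in occupied else "0" for k in range(n_orbitals)]
    bitstring = "".join(bits)
    # Leftmost bit read as the least significant one, so that |1100> -> |3>.
    index = sum(int(b) << k for k, b in enumerate(bits))
    states.append((bitstring, index))

for bitstring, index in states:
    print(f"|{bitstring}>  ->  |{index}>")
print(f"{len(states)} particle-conserving states out of {2**n_orbitals} total")
```

Only 6 of the 16 basis vectors survive particle-number conservation in this tiny example, but, as stated above, the number of contributing configurations still grows exponentially when \(N\) and \(n\) increase.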
The fundamental question now is whether it is possible to devise a quantum algorithm with polynomial complexity, capable of manipulating these \(2^{n}\) coefficients to find the ground state of a fermionic problem, or at least with a better approximation than the best classical method. Before going further, it is necessary to show what kind of Hamiltonian is produced after the fermion-to-qubit mapping is applied. Since a qubit is not a fermion, the hamiltonian in Eq. 1 needs to be translated into a qubit operator. This is usually done with the Jordan-Wigner transformations, \[\begin{split}& a_{p}=\frac{1}{2}(X+iY)_{p}\otimes Z_{p-1}\otimes \cdots\otimes Z_{0},\\ & a_{p}^{\dagger}=\frac{1}{2}(X-iY)_{p}\otimes Z_{p-1}\otimes \cdots\otimes Z_{0},\end{split} \tag{3}\] where the \(\frac{1}{2}(X+iY)_{p}\) and \(\frac{1}{2}(X-iY)_{p}\) spin _minus_ and _plus_ operators change the occupation number of the target mode \(p\), while the string of \(Z\) operators are needed to enforce antisymmetrization. Therefore, each fermionic operator in Eq. 1 translates into a combination of tensor product of Pauli operators. The full Hamiltonian of Eq. 1 then takes the general form of linear combination of products of single-qubit Pauli operators \[H=\sum_{j=1}^{N_{P}}h_{j}P_{j} \tag{4}\] where \(h_{j}\) is a real scalar coefficient, and \(N_{P}\) is of order \(\mathcal{O}(n^{4})\) since there are \(n^{4}\) terms to transform in Eq. 1. Each term \(P_{j}\) in the Hamiltonian is typically referred to as a _Pauli string_, and is a tensor product \(P_{j}=\bigotimes_{i=1}^{n}\sigma_{i}^{\alpha}\) of \(n\) Pauli matrices \(\sigma^{\alpha}\in\{I,Z,X,Y\}\). What is important for our discussion is that (1) the coefficient \(h_{p}\) can take very large values (in modulus). Just to give an idea, \(\sum_{j}|h_{j}|\) is of the order of 40 Ha, already for a moderate example of a \(H_{2}O\) molecule described in STO-6 basis [8]. Then (2) the operators \(P_{j}\) can have a number of non-identity gates, which is \(\mathcal{O}(n)\) due to the non-locality introduced by the Jordan-Wigner trasformation. This implies that the circuit for real-time dynamics are longer compared to local quantum spin Hamiltonians, [38] and when measured the expectation value \(\langle P_{j}\rangle\), is exponentially susceptible to bit-flip measurement errors [49]. The form of Eq. 4 is however more general than the Jordan-Wigner workflow, so we assume it as a starting point of our discussion. ## V Variational quantum algorithms In short, variational methods aims to use shallow parametrized circuits that can be optimized to minimize the calculated energy. [50; 51; 48] The \(N_{par}\) variational parameters can be the angles \(\theta\)'s of rotation gates defined above. This strategy takes the name of Variational Quantum Eigensolver (VQE) but it is nothing more than a common variational calculation on a quantum computing platform. First, of all, it is important to notice that even shallow circuits, i.e. featuring constant depth vs \(n\), can display quantum advantage, although on quite artificial tasks, [52] or cannot be efficiently simulated classically. [53] Therefore, variational algorithms are reasonable candidates for quantum advantage in the near term. As of today, the literature features many small-scale hardware demonstrations, still away from the quantum advantage threshold. The most notable either use heuristic circuits, [54], or more structured physically inspired circuit ansatze. 
[55; 56] The current largest variational simulation of a chemical system reaches a system size of about 20 qubits. [57] Performing variational calculations of many-body quantum systems has advantages in principle, but also many limitations in practice. Current technology allows the execution of circuits with more than 100 qubits and a depth of about 60 two-qubit gates. [13] While error correction is not yet available, there are error mitigation methods that enable unbiased estimation of the expectation value of operators. [58; 59] It seems, therefore, that all the ingredients for enabling NISQ variational methods are present, as such a circuit can define a variational ansatz that could outperform the best classical ansatz for a given problem. A central point of this Perspective is thus what is missing to translate this potential into practical variational computation and, hopefully, achieve quantum advantage. The first point to establish is what kind of circuit can be used to create a ground state of our target many-body quantum Hamiltonian. As we have seen, the advantage of a quantum computer is the mere possibility of storing an exponentially large wavefunction with a linear number of qubits in memory (cfn. Eq. 2). But this gives us no guarantee that (1) a quantum circuit with a finite, and possibly small, depth can give us a better approximation than the best classical method, and (2) even more importantly from a conceptual point of view, that it is possible to optimize the parameters even assuming that the ground state is contained in the variational ansatz. An obvious drawback of variational calculations in the NISQ regime is the presence of noise. A gate error of 0.1% propagating through a circuit with a depth of 100, composed of 100 qubits, produces a state with fidelity on the order of \(10^{-3}-10^{-4}\) compared to the noiseless case. [13] However, there exist error mitigation methods that can be applied to obtain unbiased estimates of expectation values. Assuming, as a working hypothesis, that the noiseless version of a quantum circuit can generate a variational ansatz that is better than the best classically available, or even the exact one, everything becomes a game of exponents, as error mitigation incurs an exponential cost in circuit repetitions. [57; 58; 59] It remains to be determined whether the exponent of this post-processing step is mild enough to guarantee reasonable runtimes for classically intractable molecules. In this Perspective, we will not focus on hardware error mitigation, which is introduced in the recent Ref. [57] but on another complementary issue. As we will see, a major problem with quantum variational methods is even more surprising: even assuming that we have prepared exactly the target state, computing its energy is -so far- an inefficient procedure. Although this inefficiency is not the same as in complexity theory definition, where an algorithm is said to be inefficient if it is exponentially scaling, it is from a practical point of view. This is a completely new condition that does not happen, for example, in variational calculation using QMC. ### The variance problem The fundamental (and obvious) concept at the root of this is that a quantum computer obeys the postulates of quantum mechanics: we cannot access the state that is created by the circuit exactly, but only through measurements. 
We can measure the expectation value of the Hamiltonian by post-processing the read-out of each measurement, then prepare exactly the same state and repeat the measurement, and so on. Let us suppose we have prepared a quantum state \(\ket{\psi}\): following Eq. 4, we see that the mean value of the Hamiltonian is the linear combination of the expectation values of the \(N_{P}\) Pauli strings,

\[\bra{\psi}H\ket{\psi}:=\langle H\rangle=\sum_{j=1}^{N_{P}}h_{j}\langle P_{j}\rangle \tag{5}\]

For simplicity we assume that the expectation values of the \(P_{j}\)'s are obtained from \(N_{P}\) independent sets of measurements: the error on the estimate is then given by

\[\epsilon=\sqrt{\sum_{j}|h_{j}|^{2}\mathrm{Var}[P_{j}]/M_{j}}, \tag{6}\]

where \(\mathrm{Var}[P_{j}]=\langle P_{j}^{2}\rangle-\langle P_{j}\rangle^{2}\leq 1\) is evaluated using \(M_{j}\) repeated measurements, or _shots_. Since we need to evaluate each \(\langle P_{j}\rangle\) independently, their statistical fluctuations are not correlated, so to reach chemical accuracy one needs to resolve each expectation value with very high accuracy, as it gets multiplied by a possibly very large prefactor \(h_{j}\). In Wecker et al.[8], it is estimated that \(M=\sum_{j}M_{j}=10^{9}\) measurements would be needed to reach \(\epsilon\sim 10^{-3}\) Ha for \(H_{2}O\), and up to \(10^{13}\) for \(Fe_{2}S_{2}\) described with 112 spin-orbitals and an STO-3G basis. This happens even in the case where we prepare the exact ground state, thus violating the _zero-variance_ property of the ground state if the energy is calculated this way. This issue is often called _the variance problem_, and is one of the most overlooked issues in the VQE community, which seems to be more active in new circuit ansatze development. However, several works aiming to mitigate this problem have been put forward. The simplest one consists in grouping all \(P_{j}\)'s that commute qubit-wise and that can be measured simultaneously.[54] Other methods aim to find better grouping schemes, introducing general commutativity rules at the expense of longer measurement circuits[49; 60; 61; 62; 63]. For the evaluation of two-body reduced density matrices for fermionic systems, it is possible to devise an asymptotically optimal measurement scheme[64; 63], with a number of unique measurement circuits scaling as \(N_{\text{groups}}\sim\mathcal{O}(n^{2})\). However, there is still the problem that all these correlators, estimated stochastically, need to be summed up to \(\langle H\rangle\). Therefore, \(N_{\text{groups}}\) is not a faithful representation of the total number of measurements to be performed to achieve chemical accuracy, since each partition needs to be measured \(M_{\text{per groups to }\epsilon}\) times for the sum to achieve a total error of \(\epsilon\). There are also methods based on a different philosophy, namely to systematically _approximate_ the electronic Hamiltonian (Eq. 1) to reduce the number of terms in the sum, using a low-rank representation of the Hamiltonian.[65; 66] This technique finds application also in real-time dynamics simulations, and may considerably reduce the runtime of error-corrected algorithms[16]. While it can certainly mitigate the variance problem in the NISQ era, it does not qualitatively solve the problem, for the very same reason outlined above. We observe that several works aim to reduce the number of basis states required to represent the Hamiltonian.
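To make the error budget of Eq. 6 concrete, the following minimal sketch (with hypothetical coefficients, not a real molecular Hamiltonian) simulates shot-by-shot Pauli measurements on a fixed two-qubit state and compares the empirical error on \(\langle H\rangle\) with the prediction of Eq. 6.

```python
import numpy as np

rng = np.random.default_rng(0)
paulis = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.diag([1.0, -1.0])}

def pauli_string(label):
    op = np.array([[1.0]])
    for s in label:
        op = np.kron(op, paulis[s])
    return op

# Hypothetical two-qubit Hamiltonian H = sum_j h_j P_j (made-up coefficients).
terms = [(1.7, "ZI"), (1.7, "IZ"), (0.5, "ZZ"), (-0.8, "XX"), (-0.8, "YY")]
H = sum(h * pauli_string(p) for h, p in terms)

# Prepare the *exact* ground state, to emphasise the zero-variance violation.
eigval, eigvec = np.linalg.eigh(H)
psi = eigvec[:, 0]
exact = {p: float(np.real(psi.conj() @ pauli_string(p) @ psi)) for _, p in terms}

M = 1000            # shots per Pauli string
def one_estimate():
    est = 0.0
    for h, p in terms:
        p_plus = 0.5 * (1.0 + exact[p])          # Prob(outcome = +1)
        outcomes = rng.binomial(1, p_plus, size=M) * 2 - 1
        est += h * outcomes.mean()
    return est

samples = np.array([one_estimate() for _ in range(2000)])
predicted = np.sqrt(sum(h**2 * (1.0 - exact[p] ** 2) / M for h, p in terms))
print(f"exact energy        : {eigval[0]: .4f}")
print(f"empirical std error : {samples.std(): .4f}")
print(f"Eq. (6) prediction  : {predicted: .4f}")
```

Even though the prepared state here is the exact ground state, the statistical error stays finite, and it is controlled by the coefficients \(h_{j}\) and the single-term variances rather than by the quality of the state; the same accounting applies when the Hamiltonian is compressed into fewer terms.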
However, this may not necessarily improve the number of measurements, as these fewer terms may have a larger variance. An interesting example of this is seen in variational quantum algorithms applied to classical cost functions. This scenario is important for optimization, and in this case, the most popular variational method is called the Quantum Approximate Optimization Algorithm (QAOA). Despite the fact that the cost function, by definition, only needs to be measured in one basis--the computational basis--the impact of shot noise is still quite detrimental to the overall performance even in this case.[67] Finally, we also report the celebrated _shadow tomography[68]_ method, which may be useful for estimating local qubit operators but has an exponential scaling for non-local ones, such as our \(P_{j}\)'s. In general, the total number of circuit repetitions during an optimization run based on energy optimization, featuring \(N_{\text{iter}}\) optimization steps is \[M_{\text{VQE}}=N_{\text{groups}}\times M_{\text{per groups to $\epsilon$}}\times N _{\text{iter}}. \tag{7}\] Let us consider a concrete example of a \(H_{2}O\) molecule, in a very minimal basis consisting of 12 spin-orbitals. Following a state-of-art variance reduction method[49], we quote a number of \(10^{8}\) circuit repetitions to compute a single point energy within \(10^{-3}\) Ha accuracy (notice that this is the error from the exact value obtained with this minimal basis set, not the exact value using a converged basis set). Assuming a circuit execution time, with measurements, of \(1\mu s\), we are limited to about hundreds of optimization steps per day (totally neglecting classical communications and reset times). Notice that NISQ hardware means that hardware noise will always be present, and this noise usually varies with time, that's why an optimization run that lasts for more than one day will likely never converge: the optimal parameters \(\mathbf{\theta}\) which yields the lowest energy with the hardware configuration of today, may not be optimal the day after. The fact that hardware errors are device and time-dependent realistically excludes the possibility to parallelize the shots using several machines, as is customary in conventional QMC. In this case, one would claim the possibility to prepare always the same trial state on different noisy machines. ### The noisy optimization problem The second step of any numerical variational calculation is optimization. In this perspective, we aim to maintain a high-level tone and will not be concerned with details such as which optimization method is better or worse depending on the type of circuit. In general, optimizing the parameters is a complicated task that is delegated to the classical part of the algorithm. Heuristic circuits are very short but have a much larger number of variational parameters than those inspired by chemistry, such as the unitary coupled cluster, or physics, such as the Hamiltonian variational ansatz.[4] The latter have longer circuits but allow for a more stable optimization. A wealth of literature focuses on theoretical roadblocks, such as the existence of barren plateaus[69] and the fact that optimization itself is an NP-hard problem.[70] However, we observe that even in the conventional case, the optimization of parameters occurs in a corrugated landscape. 
Nevertheless, it is almost routine to optimize thousands of variational parameters in variational Monte Carlo (VMC).[71] Moreover, barren plateaus are a concept borrowed from the quantum machine learning community and is likely not relevant, or at least not the real bottleneck, in the case of using structured ansatzes[72; 14; 73] (i.e. non-random or heuristic), which should be the norm for studying physical systems. The reason why this concept has never arisen in conventional VMC is that no one has ever tried to optimize molecular systems from quasi-random trial states. In this Perspective, we remain faithful to our practical approach and briefly analyze the problems arising from the simple existence of statistical noise. First, we observe that since the zero variance property does not hold, we can only use the energy and not its variance as a cost function. Optimizing using finite differences is inefficient since each step is affected by statistical noise. Typically, in VMC, this problem can be solved through correlated sampling [7], which is not possible in this case. The other possibility is to calculate the expectation value of the generalized forces \(f_{i}\), defined as the derivative of the energy with respect to the variational parameters. For simple circuits, calculating the force is possible thanks to a technique called the parameter shift rule [5], and extensions are possible for more structured circuits. [14] Due to the no-free-lunch theorem, the statistical error that we had using energy translates into statistical error on the forces. The optimization effectively becomes a stochastic gradient descent, which resembles a discrete Langevin equation at finite effective temperature, \[\theta_{i}^{{}^{\prime}}=\theta_{i}+\delta f_{i}+\eta_{i}^{\rm shot}, \tag{8}\] where \(f_{i}=-\partial_{i}\langle H\rangle\), \(\eta_{i}^{\rm shot}\) is a Gaussian distributed random variable, and \(\delta\) is a finite integration step. Astrakhantsev et. al. [14] have shown that the statistical error defines an effective temperature, \(T^{\rm shot}\), proportional to the variance of the random variable \(\eta_{i}^{\rm shot}\). Below a certain number of samples \(M^{*}\), therefore above a certain effective noise temperature the search is unsuccesful. Above, the optimization becomes possible. Moreover in the \(M\gg M^{*}\) regime, the infidelity of the state preparation seems to scale as \(1/\Delta^{2}\), where \(\Delta\) is the energy gap between the ground and the first excited state. These numerical results have been obtained on a challenging \(j_{1}-j_{2}\) Heisenberg models (cfn. Ref. [74]), and it would be interesting to check how they generalize to chemistry problems. On the optimistic side, the critical number of samples \(M^{*}\) seems to not scale exponentially with the system's size, though allowing in principle efficient VQE optimization in the presence of quantum measurement noise. Moreover, in this state-of-the-art VQE study, barren plateaus are not observed. Another point of contact between variational quantum computing and conventional Monte Carlo is that techniques well-known in the latter for decades are slowly being adopted in this new field. 
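As a minimal illustration of the update rule in Eq. 8, the toy sketch below (a single-qubit example chosen for transparency, not an example taken from the text) estimates the energy gradient with the parameter-shift rule from a finite number of shots and feeds the resulting noisy force into a plain gradient descent; the Gaussian noise \(\eta^{\rm shot}\) of Eq. 8 arises here implicitly from the finite-shot estimate of the force.

```python
import numpy as np

rng = np.random.default_rng(1)

def exact_energy(theta):
    # <psi(theta)|Z|psi(theta)> with |psi(theta)> = Ry(theta)|0>
    return np.cos(theta)

def measured_energy(theta, shots):
    # Sample +/-1 outcomes of a Z measurement with a finite number of shots.
    p_plus = 0.5 * (1.0 + exact_energy(theta))
    outcomes = rng.binomial(1, p_plus, size=shots) * 2 - 1
    return outcomes.mean()

def noisy_force(theta, shots):
    # Parameter-shift rule: dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
    grad = 0.5 * (measured_energy(theta + np.pi / 2, shots)
                  - measured_energy(theta - np.pi / 2, shots))
    return -grad                     # force f = -dE/dtheta

theta, delta, shots = 0.3, 0.1, 200   # hypothetical starting point, step, shot budget
for step in range(200):
    theta = theta + delta * noisy_force(theta, shots)   # Eq. (8), noise from the shots
print(f"final theta  = {theta: .3f}  (exact minimum at pi)")
print(f"final energy = {exact_energy(theta): .3f}  (exact minimum -1.000)")
```

The smaller the shot budget, the higher the effective temperature of this random walk, in line with the discussion above. Techniques from the VMC toolbox can help tame exactly this kind of noise.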
Perhaps one of the most important is the use of the so-called quantum information matrix to precondition the gradient at each step with the following matrix, \[S_{ij}=\left\langle\partial_{i}\psi|\partial_{j}\psi\right\rangle-\left\langle \partial_{i}\psi|\psi\right\rangle\left\langle\psi|\partial_{j}\psi\right\rangle, \tag{9}\] where \(|\partial_{j}\psi\rangle\) is the derivative of \(|\psi\rangle\) as a function of the \(j\)-th variational parameter. Although most of the community believes that Eq. 9 comes from machine learning, where it is used in the _natural gradient_, [75] it has actually been used for more than twenty years in VMC to optimize trial wave functions for chemistry and condensed matter. [7] It was introduced by Sorella in the stochastic reconfiguration method [76; 77] and later given the geometric meaning of metric of the space of variational parameters. [78] A weak regularization of the diagonal is sufficient to obtain stable and effective optimizations, as has also been shown in the quantum case. [79] However, the measurement problem also heavily affects the calculation of algorithm efficiency in this case. In VMC, the matrix \(S\) can be evaluated with negligible overhead using the same samples \(x\)'s, distributed as \(|\psi(x)|^{2}\) (which can be evaluated classically there), that were already generated for the energy calculation. In the quantum case, each matrix element must be statistically evaluated using (uncorrelated) repetitions of a specific circuit for the pair \(i,j\). Moreover, it is not trivial to obtain a circuit for each element \(S_{ij}\), and only a block-diagonal approximation of \(S\), where \(i\) and \(j\) belong to the same block, is the most feasible solution. [79] At the moment, an interesting development to overcome this problem is a heuristic combination of stochastic reconfiguration with the SPSA optimizer. In this case, the matrix S, which would require \(N_{par}^{2}\) circuits, is approximated by a Hessian calculated using only two random directions in the parameter space. [80] However, the numerical benchmarks proposed to validate the method are too small to fully understand the real possibilities of this simplified optimizer. Finally, another connection between VQE and QMC arises in the context of using noisy ionic forces in molecular dynamics (MD) simulations. In such cases, it becomes impractical to follow an energy-conserving trajectory. Sokolov et al. [81] utilize a similar technique proposed in QMC-powered MD simulations from many years ago. [82] They use shot noise to define an effective Langevin MD, enabling unbiased simulations at constant temperature. ## VI Fault-tolerant quantum chemistry At this point, we would face the conundrum that a quantum computer can in principle store an exact wavefunction, but we cannot practically evaluate its energy or other expectation values using the variational methods introduced so far. However, quantum computing admits an efficient method for calculating energy, even more efficient than Monte Carlo methods. These methods are based on the quantum phase estimation (QPE) algorithm or its successive variants. The standard version of this algorithm, [22] which finds applications far beyond chemistry, works as follows. Suppose we have a unitary operator \(U\) and one of its eigenstates \(|\psi_{n}\rangle\). 
We can evaluate its eigenvalue \(\lambda_{n}=e^{i2\pi\phi_{n}}\) (expressible as a function of its phase since \(U\) is unitary) using an extra register of \(r\) qubits and \(r\) controlled operations \(\mathfrak{c}U\), where the first operation controls the application of \(U\), the second of \(U^{2}\), the third \(U^{4}\), and so on, until the final \(U^{2^{r-1}}\). Finally, it is sufficient to apply a quantum Fourier transform to read the phase value in binary form, truncated to \(r\) bits. To arrive at an implementation that interests us, we simply identify this generic unitary operator \(U\) with \(e^{iHt}\), i.e., a Hamiltonian evolution operator, and the starting state as an eigenstate of \(H\), for example, the ground state. In this case, the phase we read is \(E_{n}t\). The operation \(U^{2^{r-1}}\) translates into an evolution of time \(t\,2^{r-1}\). It is possible to show that the error (due to truncation to \(r\) bits) we make on the phase \(E_{n}t\) scales inversely with the total number of applications of \(U\), which is approximately \(2^{r}\). This is because the discretization error scales as \(2^{-r}\), while each additional readout bit doubles the length of the circuit. In the literature, this is often quoted as a quadratic speed-up compared to a Monte Carlo evaluation, whose error scales as \(1/\sqrt{M}\), where \(M\) is the number of samples, and thus the number of calls to the function being evaluated on the generated distribution. If we interpret the total number of \(\mathfrak{c}U\) operations applied to the state \(|\psi_{n}\rangle\) as the number of "function calls", the asymptotic comparison with Monte Carlo can be made, keeping these specifications in mind. There are now two complications to consider. The controlled operation \(e^{iHt}\) cannot be implemented exactly but requires approximations. The most established is the Trotter step decomposition, which has the advantage of not requiring additional qubits.[37; 83] Recently, other methods that have better asymptotic scaling but require additional qubits, such as the linear combination of unitaries[84] and qubitization[85], have surpassed Trotter's method in popularity. For example, the first runtime and resource estimation works for chemistry, by Troyer and coworkers[33], assumed that QPE with Trotter had to be used, while more recent works use qubitization (plus many other tricks to shorten the circuit).[16] Notice however that the empirical performance of Trotter methods can be better than the predicted upper bounds, and this is still an active area of research.[86; 87; 88] The first crucial observation is that now even a ground-state calculation requires a black box that implements real-time dynamics, or a closely matching primitive. This brings us back to the initial discussions about the complexity of implementing Hamiltonian simulations of fermionic systems, much more complex than their spin-lattice counterparts. Given that we require quantitative precision on the time evolution, the Hamiltonian evolution algorithm requires a full fault-tolerant implementation. The second observation is that these algorithms require the challenging assumption of having the ground state as their input. What happens if the state on which we apply QPE is not the ground state? Here there is good news and bad news.
Let's start with the good news: unlike the classical case if we input a generic state \(\Phi\) (classically, the equivalent would be preparing a generic ansatz and sampling with Metropolis from it), when we read the phase register, the state must collapse onto an eigenstate of \(|\psi_{n}\rangle\) and the read-out phase is \(\phi_{n}\). Therefore, the energy readout in the auxiliary register determines the collapse onto an eigenstate of \(H\) of the previously initialized state \(\Phi\). The good news is therefore that we will not read a random number, but one of the possible eigenvalues. The bad news is that we do not know which one. In general, we will read the eigenvalue \(n\) with a probability given by the overlap \(|\bra{\Phi}|\psi_{n}\rangle|^{2}\). It then becomes crucial that, if we are interested in the ground state, the initial state is not completely random, but has a sizable overlap with the ground state. Generally, in chemistry and materials science, we are interested in the runtime scaling with size, i.e., number of electrons, basis set, etc. If the overlap vanishes exponentially with size, the entire procedure becomes exponentially long, nullifying the exponential advantage that could be initially envisioned, given the compression in memory and the possibility of efficiently reading the energy. A recent study focused on this aspect, showing that this issue, though known in principle but often forgotten in practice, could seriously undermine the claim of exponential advantage for electronic structure.[89] It should be noted, however, that even a polynomial advantage could be sufficient to solve problems that are still intractable, as can be seen from how Density Functional Theory has revolutionized chemistry and materials science, thanks to its improved \(N^{3}\) scaling compared to the \(N^{6},N^{7}\) scalings of coupled-cluster. To conclude, the absence of an exponential speed-up does not rule out the existence of a practical quantum advantage, which is more difficult to identify a priori, but on a case-by-case basis. In this context, resource estimation studies focusing on particular molecular systems are of great importance. Goings et. al.[16] perform resource estimates to simulate a challenging system for classical methods, the cytochrome P450 enzyme. The estimates depend on the hardware noise that needs to be corrected. To simulate the ground state using an accurate active space, \(\sim\) 5\(\times\)10\({}^{6}\) (5\(\times\)10\({}^{5}\)) physical qubits with error rates of 0.1 % (0.001 %) would be needed. Concerning materials science systems, state-of-the-art studies are represented by Rubin. et. al.[90], and Ivanov et. al.[91], which move away from the plane-wave basis set and combine Bloch, or Wannier orbitals, respectively, with most recent techniques such as sparse subitization or tensor hyper-contraction. Resource estimates applied to Lithium Nickel Oxide battery cathode[90], and transition metal oxides[91] indicate longer fault-tolerant runtimes compared to molecular systems such as P450. Clearly, such estimates are based on state-of-art algorithms, including the most efficient way to encode fermionic Hamiltonians for phase estimation, and the current state of error correction algorithm. Further algorithmic developments will improve the cost of the simulations[15]. 
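Coming back to the role of the initial state, the following minimal statevector sketch (a textbook phase-estimation simulation on a random two-qubit Hamiltonian with made-up numbers, not one of the resource estimates quoted above) illustrates the collapse mechanism described earlier: the probability of reading a given eigenphase is set by the squared overlap of the input state with the corresponding eigenvector.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random two-qubit Hermitian "Hamiltonian" and its exact spectrum.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
energies, eigvecs = np.linalg.eigh(H)

# Rescale and shift so that each eigenphase phi_n = (E_n - E_min) t / (2 pi) lies in [0, 1).
t = 2 * np.pi / (energies.max() - energies.min() + 1.0)
phases = (energies - energies.min()) * t / (2 * np.pi)
U = eigvecs @ np.diag(np.exp(2j * np.pi * phases)) @ eigvecs.conj().T

r = 7                                   # readout (ancilla) qubits
D = 2 ** r

# Imperfect input state: mostly the ground state plus an excited component.
phi_in = 0.9 * eigvecs[:, 0] + 0.45 * eigvecs[:, 2]
phi_in /= np.linalg.norm(phi_in)

# After the Hadamards and the controlled-U ladder the joint state is
#   (1/sqrt(D)) * sum_j |j>_ancilla (x) U^j |phi_in>.
state = np.zeros((D, 4), dtype=complex)
Uj = np.eye(4, dtype=complex)
for j in range(D):
    state[j] = Uj @ phi_in / np.sqrt(D)
    Uj = U @ Uj

# Inverse quantum Fourier transform on the ancilla register, then read it out.
k = np.arange(D)
iqft = np.exp(-2j * np.pi * np.outer(k, k) / D) / np.sqrt(D)
probs = np.sum(np.abs(iqft @ state) ** 2, axis=1)   # system register traced out

for n in range(4):
    peak = int(round(phases[n] * D))
    window = [(peak + d) % D for d in range(-2, 3)]
    print(f"eigenstate {n}: overlap^2 = {abs(eigvecs[:, n].conj() @ phi_in)**2:.3f}, "
          f"readout prob near its phase = {probs[window].sum():.3f}")
```

If the overlap with the ground state decays exponentially with system size, so does the probability of reading the lowest eigenphase, which makes cheap and accurate initial-state preparation just as important as cheaper phase-estimation circuits.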
Orders of magnitude in efficiency have been gained compared to just ten years ago,[38] and therefore the threshold for quantum advantage could shift in one direction or another, approaching when new quantum strategies are invented or perhaps moving away thanks to the constant progress of "conventional" methods such as DMRG or QMC.

## VII The local energy in variational Monte Carlo

After extensively introducing quantum computing algorithms for chemistry and specifically discussing the practical limitations of variational approaches, let's move on to the classical case. It's very instructive to understand why classical variational methods do not suffer from the same variance problem, to guide us in inventing equally efficient energy estimators. Let's start again from the formal definition of the energy's expectation value over a general (unnormalized) state \(\ket{\psi}\)

\[E=\langle H\rangle=\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}=\frac{\sum_{x}\langle\psi|x\rangle\langle x|H|\psi\rangle}{\sum_{x}|\langle\psi|x\rangle|^{2}}, \tag{10}\]

where we insert \(\sum_{x}\ket{x}\bra{x}\) in the denominator and numerator. Notice that here we use the notation for a discrete Hilbert space, but the formula can be generalized to continuous models by replacing the sum with an integral (\(\sum_{x}\rightarrow\int dx\)). Some steps are necessary to transform Eq. 10 into the typical Monte Carlo format, where we integrate the product of a probability distribution from which we can sample, \(p(x)\), and an objective function. This is achieved formally by dividing and multiplying by \(\psi(x)=\langle x|\psi\rangle\). Eq. 10 then becomes

\[E=\frac{\sum_{x}|\langle\psi|x\rangle|^{2}E_{L}(x)}{\sum_{x}|\langle\psi|x\rangle|^{2}}=\sum_{x}p(x)E_{L}(x), \tag{11}\]

where the _local energy_ is defined as

\[E_{L}(x)=\frac{\langle x|H|\psi\rangle}{\langle x|\psi\rangle}, \tag{12}\]

and is a quantity that can be evaluated locally for the configuration \(x\) (see below). The probability distribution is defined as \(p(x)=|\langle\psi|x\rangle|^{2}/\sum_{x}|\langle\psi|x\rangle|^{2}\). Now, if we assume that we can sample configurations \(x\sim p(x)\), i.e. using a Markov-chain algorithm, the energy can be evaluated stochastically as

\[E\approx\frac{1}{M_{\mathrm{VMC}}}\sum_{i=1}^{M_{\mathrm{VMC}}}E_{L}(x_{i}), \tag{13}\]

using \(M_{\mathrm{VMC}}\) decorrelated samples taken from a Markov-chain algorithm, such as Metropolis[92]. This technique is called Variational Monte Carlo (VMC)[7] and is efficient as long as _(i)_ computing \(E_{L}(x)\) is efficient, and _(ii)_ it is also possible to run the Metropolis algorithm efficiently. Since classical trial functions \(\psi(x)\) can be evaluated with numerical precision for each \(x\), so can the ratio \(|\psi(x^{\prime})|^{2}/|\psi(x)|^{2}\) for each pair \(x,x^{\prime}\), which is all that is needed to perform a Metropolis update.[93] One of the most important features of the local energy is that its variance is zero when \(\ket{\psi}\) is an eigenstate of \(H\). Indeed, if \(H\ket{\psi_{0}}=E_{0}\ket{\psi_{0}}\), then \(E_{L}(x)=\langle x|E_{0}|\psi_{0}\rangle/\langle x|\psi_{0}\rangle=E_{0}\). In practice, this means that the local energy function will be closer to a constant value as the trial state \(\ket{\psi}\) approaches the ground state. This results in reduced statistical fluctuations in Eq. 13. At the same time, it is possible to use the variance of the local energy as a cost function for the optimization. In principle, this allows one to certify the success of the minimization, as the ground state is signaled by zero statistical error.
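The zero-variance property is easy to verify numerically. The sketch below is a standard textbook example, not one taken from the text: the hydrogen atom with trial state \(\psi_{\theta}(\mathbf{r})=e^{-\theta r}\), for which \(E_{L}(\mathbf{r})=-\theta^{2}/2+(\theta-1)/r\) in atomic units. It samples \(|\psi_{\theta}|^{2}\) with Metropolis and averages the local energy; at \(\theta=1\) the trial state is exact and the statistical error vanishes.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_energy(r, theta):
    # E_L for psi = exp(-theta*r) and H = -0.5*Laplacian - 1/r (atomic units).
    return -0.5 * theta**2 + (theta - 1.0) / r

def vmc_energy(theta, n_samples=20000, step=1.0):
    x = np.array([0.5, 0.5, 0.5])            # starting electron position
    energies = []
    for i in range(n_samples):
        x_new = x + step * rng.uniform(-1, 1, size=3)
        # Metropolis acceptance with p(x) ~ |psi(x)|^2 = exp(-2*theta*r)
        ratio = np.exp(-2.0 * theta * (np.linalg.norm(x_new) - np.linalg.norm(x)))
        if rng.random() < ratio:
            x = x_new
        if i > 1000:                          # discard equilibration steps
            energies.append(local_energy(np.linalg.norm(x), theta))
    energies = np.array(energies)
    # Naive error bar: neglects autocorrelation along the Markov chain.
    return energies.mean(), energies.std() / np.sqrt(len(energies))

for theta in (0.8, 0.9, 1.0):
    mean, err = vmc_energy(theta)
    print(f"theta = {theta:.1f}: E = {mean: .5f} +/- {err:.5f}  (exact ground state: -0.5)")
```

As \(\theta\to 1\) the fluctuations of the estimator collapse to zero, which is exactly the behaviour that the term-by-term Pauli estimator of Eq. 5 cannot reproduce.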
### The local energy in practice

The calculation of the local energy depends on the model and the wave function. In continuous space, the wave function can be given by a Slater determinant ansatz (for fermionic systems), usually complemented with an explicit correlation operator like the Jastrow factor.[7; 94] In this case, evaluating the local energy reduces to applying the Laplacian operator to the function in real space and dividing by the function itself. For the sake of clarity, let's consider a toy example. The local energy for an (unnormalized) Gaussian trial ansatz \(\psi(x)=e^{-\theta x^{2}}\), in continuous space, and a typical one-dimensional Hamiltonian \(H=-1/2(\partial^{2}/\partial x^{2})+V(x)\), reads

\[E_{L}(x)=\theta-2\theta^{2}x^{2}+V(x). \tag{14}\]

The local energy depends on \(x\) and the variational parameter \(\theta\), which can be optimized in an outer loop. In this case, it can be observed that if the external potential is a harmonic oscillator, \(V(x)=\omega^{2}x^{2}/2\), the local energy becomes

\[E_{L}^{\mathrm{h.o.}}(x)=\theta+x^{2}\left(\frac{\omega^{2}}{2}-2\theta^{2}\right). \tag{15}\]

The local energy no longer depends on \(x\) when the variational parameter takes the value \(\theta=\omega/2\), for which the variational ansatz becomes exact. Moreover, it then takes the value \(E_{L}^{\mathrm{h.o.opt.}}=\omega/2\) with zero statistical fluctuations in Eq. 13, as \(E_{L}\) does not depend on the sampled point \(x_{i}\) anymore. Modern codes for solving the many-electron Schrodinger equation in chemistry or materials science feature sophisticated trial ansatze, which are in turn functions of atomic orbitals.[71] While in the past introducing a new ansatz required coding new functions for evaluating the derivatives, now the evaluation of the local energy can be delegated to algorithmic differentiation routines. This allows for the adoption of fairly sophisticated ansatze in VMC.[71; 95] The local energy shows up in every VMC calculation, including lattice models such as spin and Hubbard models. In this case, the spatial derivatives are replaced by non-diagonal quantum operators such as spin-flip or hopping operators, \(H_{x,x^{\prime}}=\langle x^{\prime}|H|x\rangle\), where \(x,x^{\prime}\) can represent a specific spin configuration or an occupation state of fermions or bosons on a lattice. In this discrete basis, the local energy is written as follows,

\[E_{L}(x)=\frac{\langle x|H|\psi\rangle}{\langle x|\psi\rangle}=\frac{\sum_{x^{\prime}}H_{x^{\prime},x}\langle x^{\prime}|\psi\rangle}{\langle x|\psi\rangle}, \tag{16}\]

and can be computed efficiently as long as the number of states \(x^{\prime}\) such that the Hamiltonian matrix elements \(|H_{x,x^{\prime}}|\neq 0\), at fixed \(x\), grows only polynomially.

### Pauli measurements versus local energy

To better illustrate these concepts, it can be instructive to perform a numerical experiment on a toy model, the one-dimensional transverse field Ising model,

\[H=H_{1}+H_{2}=-J\sum_{k=1}^{L}\sigma_{k}^{z}\sigma_{k+1}^{z}-\Gamma\sum_{k=1}^{L}\sigma_{k}^{x}, \tag{17}\]

where \(\sigma^{\alpha}\) are Pauli matrices, and consider the critical transition point at \(J=\Gamma=1\). We also consider a short chain of \(L=10\). We denote a generic computational basis configuration as \(|x\rangle=(s_{1},\cdots,s_{L})\), where \(s_{k}\) are the eigenvalues \(\{1,-1\}\) of the \(\sigma_{k}^{z}\) operator.
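As a concrete instance of Eq. 16 for this model, the sketch below uses a hypothetical long-range Jastrow-like ansatz with made-up couplings (not the precise ansatz of the numerical experiment): the diagonal \(zz\) part of the local energy is read off directly from the configuration, while each \(\sigma^{x}_{k}\) term contributes an amplitude ratio between the spin-flipped and the original configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
L, J, Gamma = 10, 1.0, 1.0                 # critical point of Eq. (17)

# Hypothetical translationally invariant Jastrow couplings J_kl = f(|k - l|).
f = 0.1 / (1.0 + np.arange(L))             # made-up decay, for illustration only
Jmat = np.array([[f[min(abs(k - l), L - abs(k - l))] for l in range(L)]
                 for k in range(L)])
np.fill_diagonal(Jmat, 0.0)

def log_psi(s):
    # log of an (unnormalized) Jastrow amplitude psi(x) = exp(sum_{k<l} J_kl s_k s_l).
    return 0.5 * s @ Jmat @ s

def local_energy(s):
    # Diagonal part: -J sum_k s_k s_{k+1} (periodic boundary conditions assumed).
    diag = -J * np.sum(s * np.roll(s, -1))
    # Off-diagonal part: -Gamma sum_k psi(x with spin k flipped) / psi(x),
    # because sigma^x_k connects x only to the single spin-flipped configuration.
    offdiag = 0.0
    for k in range(L):
        s_flip = s.copy()
        s_flip[k] *= -1
        offdiag += -Gamma * np.exp(log_psi(s_flip) - log_psi(s))
    return diag + offdiag

s = rng.choice([-1, 1], size=L)            # a random spin configuration x
print("E_L(x) =", local_energy(s))
```

Within a Metropolis loop this single function is all that Eq. 13 needs; the contrast with measuring the same Hamiltonian term by term on a quantum register is what the experiment below quantifies.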
In this case, the spin Hamiltonian is already expressed in Pauli terms (one just needs to re-define the eigenvalue of the spin-\(z\) operator from \(\{1,-1\}\) to \(\{0,1\}\)). Regarding the VQE approach, the energy can be measured in only two bases: the computational basis and the "\(XX\cdots X\)" basis, obtained by applying a Hadamard gate, H, on each qubit at the end of the circuit that prepares the variational state. In this numerical experiment, we use the variational Hamiltonian form, with a sufficiently deep circuit of up to 24 layers, resulting in up to 48 variational parameters (see Appendix A). By optimizing ansatze characterized by different circuit depths (without shot noise for simplicity), it is possible to obtain trial states systematically closer to the exact ground state of the model.[74] In Fig. 2, we use depths ranging from 12 to 24, and we can reach a relative error on the energy of \(10^{-5}\) compared to the exact ground state energy, \(E_{0}\). However, the statistical error on the energy, which is evaluated with Eq. 5, does not improve. In fact, if we had tried to optimize the circuit using the noisy energy estimator, we would not have been able to obtain such accurate optimized trial states. This clearly demonstrates that the estimator does not possess the zero variance property, as opposed to the VMC calculation. To obtain the standard deviation in Fig. 2 we repeat the estimation of the variational energy, Eq. 5 (Eq. 13 for the VMC case described below), 100 times to obtain a population of variational energies that could be obtained with the given variational setting, \(M_{j}\) (\(M_{\rm VMC}\) for the VMC case) setups. We use a number of shots \(M_{j},M_{\rm VMC}\), which is smaller (\(10^{2}\)), equal (\(10^{3}\)), and larger (\(10^{5}\)) than the Hilbert space of the model, i.e. \(2^{10}=1024\). For the VMC comparison, we deliberately use a fairly simple classical ansatz, a long-range Jastrow state, which features only 5 variational parameters for \(L=10\) (see Appendix A). Although this classical ansatz only reaches a moderate relative accuracy of \(10^{-3}\), at best, the statistical error on the energy consistently improves, outperforming the statistical error obtained with the quantum circuit. Notice that this is an easy model for VQE: the number of measurement basis is the minimum possible for a genuine quantum many-body system. Electronic structure Hamiltonians unfold into thousands of Pauli operators, which in turn require similar numbers of basis. This numerical example demonstrates the power of the local energy-based estimator compared to the Pauli measurement one. From this example we can understand also the following lesson: even finding the smallest set possible of basis to measure \(H\) will not solve all our problems, as this estimator still lacks the zero-variance property. A hybrid solution has been proposed by Torlai et. al. [96]. They use quantum state tomography, using neural-networks [97], and a tomographically incomplete basis set, to obtain a classical reconstruction of the quantum state. Classical VMC can be then applied to this classical approximation to calculate, precisely, the energy. This method solves the variance problem but it introduces a bias stemming from a possibly, and likely, imperfect reconstruction of the quantum state. Moreover, it raises the question of finding the range of applicability of the method. 
If the quantum states can indeed be represented by a classical ansatz, then one could directly reach the ground state by optimizing that, without the need of a quantum computer. ### Beyond variational: projective methods The local energy is a central concept in quantum Monte Carlo beyond the simplest VMC method because, in practice, every projective QMC method requires a trial wave function \(\psi\) to alleviate the sign problem in fermionic simulations or, more generally, to reduce statistical fluctuations. These projective methods, such as Diffusion Monte Carlo [7] or Auxiliary Field Monte Carlo (AFQMC) [98], improve upon the VMC energy but still rely on a variational state for importance sampling. Thus, the local energy resurfaces in these contexts as well. An accurate QMC simulation is rarely seen without a good variational starting point.[99] One can therefore see a similarity between the importance of VMC, which is foundational for a more accurate calculation with projective QMC, and the significance of the initial state preparation for a successful execution of QPE in quantum computing. It is highly likely that this duality between variational and projective methods (in imaginary time in the classical case and in real-time in the quantum case) will extend to quantum computing. In that case, algorithms like VQE or its variants, despite being considered already old-fashioned by some, will remain central even in the fault-tolerant regime as state preparation subroutines. ## VIII Quantum computing meets quantum Monte Carlo ### The local energy in quantum computing The existence of a local energy estimator in quantum computing would eliminate any variance problem in VQE. However, it is not as straightforward to apply this trick in quantum computing simply because evaluating the ratio becomes extremely demanding in general.[100] Here \(\psi(x)=\left\langle x|\psi\right\rangle\) needs to be evaluated from quantum measurements, hence is affected by statistical noise. While evaluating \(\psi(x)\) to additive precision is possible, the local energy involves ratios of amplitudes. Maintaining a fixed precision on the ratio (18) is costly, because quantum states have generally an exponentially large support. This translates into exponentially vanishing amplitudes at the denominator of Eq. 16. These statistical fluctuations are different compared to those found in a standard Monte Carlo calculation. In VMC, the local energy can always be computed with numerical precision and the fluctuations arise from a finite number of samples, \(M_{\rm VMC}\) in Eq. 13 (in the presence of an approximate trial state). Here uncontrolled statistical fluctuations arise solely from the estimation of the local energy at a fixed \(x_{i}\). We are witnessing an increase in works that aim to combine quantum computing and quantum Monte Carlo. Huggins and coworkers proposed an interesting combination of quantum computing and AFQMC.[101] In this work, they use a circuit to generate the trial wave function, from which samples are drawn (in this representation, the configuration \(x\) is a Slater determinant). The AFQMC algorithm then proceeds unchanged, and the supposed advantage of the method lies in using a circuit to generate an ansatz that could be inaccessible classically. Mazzola and Carleo[102] showed that the procedure, when adapted to many-body lattice models at criticality, thus using Green's function Monte Carlo instead of AFQMC, exhibits an exponentially scaling behavior with a hard exponent. This is due to Eq. 
18 and the fact that strongly-correlated states have vanishing overlaps on the configuration basis, necessitating an exponentially increasing number of samples to compute the local energy. It is estimated that a reasonably accurate ground state calculation of a 40-site transverse-field Ising model (Eq. 17) requires on the order of \(10^{13}\) measurements. Assuming a gate frequency of 10 kHz (i.e. assuming a fault-tolerant implementation, see Sec. II) and a circuit depth of \(\mathcal{O}(10)\) layers to generate an accurate trial state, this implies a runtime of a few thousand years, for a system within reach of exact classical diagonalization. Other works appeared almost simultaneously last year on this topic. Zhang et al. [103] introduced a quantum computing adaptation of FCIQMC[104]. In this work a quantum circuit \(U\) is used to create a 'quantum' walker \(\left|\tilde{x}\right\rangle=U\left|x\right\rangle\), i.e. a linear combination of Slater determinants \(\left|x\right\rangle\), undergoing the FCIQMC subroutine. The idea is interesting as it could counter the exponential explosion of the determinants/walkers during the imaginary time projection, by logarithmically compressing the memory needed to store them. A possible major drawback of this method is that the Hamiltonian \(H_{\tilde{x},\tilde{x}^{\prime}}\) in this new basis is not sparse anymore. Xu and Li[105] proposed to use Bayesian inference to reduce the number of shots required to compute the local energy. Kanno et al. [106] further combine the ideas of Ref.[101] with tensor networks. Yang et al. [107] propose a way to speed up _real_-time path-integral MC already on NISQ hardware. Tan et al. [108] devise instead the integration with the Stochastic Series Expansion, another flavour of QMC used for spin models.

Figure 2: Standard deviation of the energy estimator using a quantum circuit and Pauli measurements, Eq. 5 (circles), and with a simple classical ansatz but using the local energy, Eq. 16 and Eq. 13 (diamonds). Different colors indicate different sampling sizes. In the quantum case, the dataset is made of \(M_{j}\) wavefunction collapses per basis (which are two for the model considered), while for the classical case, it is made of \(M_{\rm VMC}\) spin configurations, \(x_{i}\), sampled from the trial state \(\left|\psi(x_{i})\right|^{2}\). In both cases, we prepare different ansatze, within the same ansatz class, but having different accuracies. For each trial state, we plot the standard deviation of the energy vs. the relative error of its variational energy \(E_{\rm var}\) (computed exactly). The zero variance property only holds in the VMC case, since the statistical error of the quantum energy estimator remains finite even when the trial state approaches the exact limit.

Finally, two recent works propose to use quantum data in a conventional VMC framework. In this case the local energy is calculated on conventional hardware. Montanaro and Stanisic [109] propose using a VQE circuit as an importance sampler to speed up the first iteration of a VMC simulation. Moss et al. [110] use quantum data from Rydberg atom simulators to train a classical neural-network ansatz (as in Ref. [96]) and further optimize it in a VMC fashion. Overall, it is likely that an efficient way to estimate the local energy is possible only for sparse states, i.e. for which the number of non-zero overlaps \(\langle x|\psi\rangle\neq 0\) grows only polynomially with the system's size.
However, it remains to be understood whether a quantum computer is really needed to tackle such systems at this point. [111] Furthermore, if a suitable basis transformation \(U\) can be found to reduce the support of such states, then (1) the transformation should not spoil the sparsity of the Hamiltonian \(H_{x,x^{\prime}}\), so as to keep the evaluation of Eq. 16 efficient, and (2) if such a transformation exists, it could be used to efficiently diagonalize the system in a reduced sub-space without the need for QMC. On a more positive note, it is not excluded that, despite exhibiting exponential scaling, the aforementioned approaches could yield a better exponent than the best classical method for some specific fermionic systems. To achieve this, it will be crucial to start with a classically intractable trial state to justify the subsequent imaginary-time projection. Further research and methodological advancements are required to assess the true potential of the method, in the presence of shot noise. Overall, the pursuit of an efficient method for calculating the energy, inspired by the local energy in VMC, is a field of research that we hope will yield numerous fruitful results. It is necessary for the quantum computing and QMC communities to clearly understand the limitations and potentialities of their respective techniques in order to invent new hybrid algorithms at the interface of these two worlds. ### Classically-inspired circuits for VQE, quantum-inspired ansatze for VMC The techniques and methods that have been used for decades in QMC are so numerous that many have been (and many are waiting to be) exported to quantum computing. Trial functions play a central role in VMC. The use of explicitly correlated non-separable ansatze has brought great success to VMC and is basically a clever solution to compress the electronic wavefunction, which, when described in the space of determinants, would otherwise require an exponential number of coefficients. The latest iteration of this concept is the introduction of neural network quantum states by Carleo and Troyer in 2017, [112] which can be seen as more general forms of Jastrow, [94; 97] back-flow, [113; 114] and tensor network states. [115] As mentioned earlier, compressing the Hilbert space within a polynomially scaling quantum memory enables the manipulation of linear combinations of arbitrarily large Slater determinants. However, when considering the variational approach, we are constantly seeking shorter quantum circuits that can capture as much entanglement as possible, within the coherence time limitation of NISQ systems. Several works have already proposed ways to implement a Gutzwiller operator, which is essentially the simplest form of a Jastrow operator, as a quantum circuit. Murta and Fernandes-Rossier [116] propose a method based on post-selection. Typically, the way to create non-unitary operators in quantum computing is through embedding them in a larger system that undergoes unitary evolution, a method also known as "block encoding". This involves introducing ancillary qubits, and it can be certified that the non-unitary operator has been successfully applied to the quantum state if and only if the ancillary register is measured and read in a given state. However, the problem with this approach is that the success probability decreases with the system size, requiring many repetitions.
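To make the post-selection mechanism concrete, the following minimal NumPy sketch applies a diagonal, Gutzwiller-like non-unitary factor to a random trial state through a single ancilla and reports the probability that the post-selection succeeds. The penalty function, the suppression parameter and the system sizes are illustrative assumptions and not the specific construction of Ref. [116].

```python
import numpy as np

rng = np.random.default_rng(0)

def penalty(L):
    # number of '11' nearest-neighbour pairs in each of the 2^L basis bit-strings
    x = np.arange(2 ** L)
    bits = (x[:, None] >> np.arange(L)) & 1
    return np.sum(bits * np.roll(bits, -1, axis=1), axis=1)

def post_selection_success(L, gamma=0.5):
    # random normalized trial state on L qubits
    psi = rng.standard_normal(2 ** L) + 1j * rng.standard_normal(2 ** L)
    psi /= np.linalg.norm(psi)
    # diagonal, non-unitary 'Gutzwiller-like' factor with 0 < g(x) <= 1
    g = np.exp(-gamma * penalty(L))
    # Block encoding with one ancilla: |x>|0> -> g(x)|x>|0> + sqrt(1-g(x)^2)|x>|1>.
    # Reading the ancilla in |0> leaves the register in G|psi> / ||G|psi>||,
    # and this happens with probability sum_x |g(x) psi(x)|^2.
    return np.sum(np.abs(g * psi) ** 2)

for L in range(2, 11):
    print(L, round(post_selection_success(L), 4))
```

The printed success probabilities decay with the number of qubits, which is precisely the repetition overhead mentioned above.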
Seki and coworkers [117] also propose a similar approach, based on linear combinations of unitaries, and therefore also affected by a finite success probability. Using a different approach, Mazzola and coworkers [118] implicitly defined a hybrid quantum-classical wavefunction with a Jastrow operator in post-processing. The scalability of the approach was then improved in Ref. [119]. There, a quantum circuit is used as an importance sampler, and the measured configurations undergo post-processing by a neural network. Benfenati and coworkers [120] instead implemented a Jastrow operator by moving it from the wavefunction to the Hamiltonian. This approach also does not require additional circuits compared to a VQE calculation. However, the re-defined Hamiltonian operator features many more Pauli terms to measure. Motta and coworkers devised an imaginary time evolution (QITE) operator without ancillas and post-selection. [121] The original formulation of the method formally incurs an exponential dependence on the correlation length in the general case, because it requires quantum state tomography. However, if truncated, it can generate heuristic trial states for variational calculations, and is still the subject of improvements. [7] Finally, it is interesting to note that the flow of information is not always from the older discipline to the newer one. Some circuit ansatze used in quantum computing can be adapted to VMC. Inspired by the Hamiltonian variational circuit ansatz [8] (see Appendix A), Sorella devised a method called Variational AFQMC capable of obtaining state-of-the-art ground state energies of the Hubbard model for various \(U/t\) parameters and dopings in the thermodynamic limit. [122] ### Variational real-time dynamics and updates in parameter space Variational states are not only used for ground state calculations but they can also be used to study dynamics. The price to pay is that the variational state must be flexible enough to accurately describe excited states as well, and this can be a demanding constraint, while the advantage is the ability to use much shorter circuits compared to those used, for example, for Trotterization. From a classical perspective, this area of research has been very active in recent months, as it allows for countering quantum advantage experiments in the quantum dynamics application space (cf. Sec. III). Obviously, the use of a variational state does not allow for exact evolution, but it is also true that the errors of a NISQ machine do not allow for it either. The balance between classical and quantum advantage for real-time dynamics will be shifted in favor of the latter when fidelity enables the simulation of sufficiently large systems for a sufficiently long time, rendering them inaccessible to classical approximation methods.[23] The subfield of variational real-time dynamics also offers interesting parallels between quantum computing and (time-dependent) VMC[123]. The formalism based on the time-dependent variational principle is the same. In practice, even the fundamental ingredients that allow for the update of variational parameters are the same: the matrix \(S\) defined in Sect. V.2 (cf. Ref. [123] with Ref. [124]). As we have seen in the case of optimization, the fact that the elements of the \(S\) matrix are subject to statistical noise is a common issue in both implementations.
In this case, as well, it is reasonable to expect cross-fertilization between the two techniques, regarding both variational forms and efficient ways to evaluate the \(S\) matrix. The field of variational algorithms for real-time simulations is very active. In this area, concepts borrowed from tensor-network simulations are also useful to shorten the circuit.[125] Generally speaking, variational parameters can be updated using different pseudo-dynamics \(\mathbf{\theta}^{\prime}=\mathbf{\theta}+\delta\mathbf{\theta}\) to achieve various objectives. While pure energy minimization is the most popular goal, and real-time evolution following the time-dependent variational principle is the second, there are other possibilities. Patti et al. [126] devised an iteration scheme to perform Markov chain Monte Carlo in the quantum circuit's parameter space, i.e., to sample from \(p(\mathbf{\theta})\sim\exp\left[-\beta\langle\psi(\mathbf{\theta})|H|\psi(\mathbf{\theta})\rangle\right]\). The resulting equation is a generalization of stochastic gradient descent that ensures detailed balance. This approach could assist in escaping local minima during VQE optimization. Similar ideas have been proposed in the VMC context earlier. Mazzola et al. [78] showed that one can obtain an upper bound for the free energy by sampling from \(p(\mathbf{\theta})\sim\sqrt{|S(\mathbf{\theta})|}\ \exp\left[-\beta\langle\psi(\mathbf{\theta})|H|\psi(\mathbf{\theta})\rangle\right]\). In VMC, this can be achieved either by using a modified Langevin equation for \(\mathbf{\theta}\) or a modified Metropolis acceptance, also known as the "penalty method".[127] In conclusion, manipulating trial states in the presence of statistical noise is a common feature of VMC in all its formulations and scopes. Many ideas have been proposed to achieve stable parameter updates. The VQE community could profit from this established knowledge but also share its own developments and ideas to advance both fields. ## IX Classical Monte Carlo Meets Quantum Computing In this Section, we completely shift our perspective. Not all chemistry problems are quantum many-body ones; for example, understanding protein folding is already a daunting task in its classical force-field formulation. Likewise, not all problems that a quantum computer can solve are genuine quantum mechanical problems. In fact, in many cases, the opposite is true: the most famous quantum algorithms, which have made the field of Quantum Information renowned, are focused on solving "classical" problems. For instance, Shor's algorithm provides exponential speed-up for factoring integers, and Grover's algorithm enables quadratic speed-up for searching in databases.[22] Other examples include algorithms for linear algebra, optimization, and machine learning. Philosophically speaking, solving a purely classical problem with a quantum machine can be even more intellectually rewarding than simulating a quantum system, where the distinction between computation and simulation becomes less clear. Up to this point, we have been exploring whether and how well-known techniques in quantum Monte Carlo can be adapted to quantum computing to simulate many-body quantum systems. Now we ask the opposite question: can a quantum computer be useful in speeding up a classical Monte Carlo algorithm, where the Hamiltonian is defined solely using classical variables, e.g., classical spins? And more specifically, can we achieve this already on NISQ machines?
### Autocorrelation of a Markov chain Markov chain Monte Carlo (MC) algorithms are of fundamental importance in both science and technology to understand models that lack a simple analytical solution.[128; 129] MC methods aim to generate statistically independent, representative configurations \(x_{i}\), belonging to the computational space, distributed as a target Boltzmann distribution, \(\rho(x)=\exp(-\beta V(x))\), at finite inverse temperature \(\beta=1/T\), and where \(V(x)\) is a classical potential energy. A Markov chain MC algorithm sequentially generates these representative configurations through a transition probability matrix \(P(x,x^{\prime})\), which defines which states \(x,x^{\prime}\) can be connected along the chain, and the relative probability of the transition \(x\to x^{\prime}\) (each row of the matrix \(P\) is normalized to one).[130] Among the family of Markov Chain MC algorithms, the Metropolis algorithm is certainly the most popular one.[92] Here the transition process takes the form \(P(x,x^{\prime})=T(x,x^{\prime})A(x,x^{\prime})\), where \(T(x,x^{\prime})\) and \(A(x,x^{\prime})\) are, respectively, the _proposal_ and the _acceptance_ probability matrices. The algorithm works as follows: when at state \(x\), a candidate trial configuration \(x^{\prime}\) is generated from the distribution \(T(x,\cdot)\). The trial configuration is accepted with probability \(A(x,x^{\prime})\). If accepted, the next element of the chain becomes \(x^{\prime}\), otherwise, it remains \(x\). If \(T\) is a symmetric matrix, then the Metropolis acceptance is given by \(A(x,x^{\prime})=\min[e^{-\beta(V(x^{\prime})-V(x))},1]\).[92] It is clear that the efficiency of the algorithm strongly depends on the choice of \(T(x,x^{\prime})\).[131, 132, 133] The efficiency is given by the relaxation or mixing time, which quantifies the speed of convergence towards the equilibrium distribution \(\rho(x)\), and is formally given by the inverse of the gap, \(\delta\), between the largest and the second-largest (in modulus) eigenvalues of \(P\). Two limiting cases exist: the first involves a _local_ update scheme, based for instance on some physical intuition about the system (e.g. a single spin-flip). This usually produces a new configuration \(x^{\prime}\) that is similar to the parent \(x\). This choice increases the acceptance rate, because \(V(x)\sim V(x^{\prime})\), but also results in a long sequence of statistically correlated samples, such that long simulations are needed to thoroughly explore the configuration space. On the contrary, a _non-local_ update scheme is more effective in producing uncorrelated samples, but usually at the expense of a vanishing acceptance rate. Interestingly, it took about 30 years after the invention of the Metropolis algorithm in 1953 before efficient non-local update schemes for lattice models were introduced, namely the Swendsen-Wang[134] and the Wolff[135] algorithms. These _cluster_ updates solved the _critical slowing down_ of MC simulations at phase transitions in ferromagnetic Ising models, but unfortunately are not as effective for frustrated models.[136, 137] ### Sampling from a classical Boltzmann distribution using wavefunction collapses Recently, the use of a quantum computer, digital or NISQ, has been proposed to generate efficient non-local Metropolis updates \(T(x,x^{\prime})\) for spin systems. The theoretical framework was first introduced by Mazzola[138] in 2021.
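As a concrete baseline for the local single-spin-flip scheme described above, the following minimal sketch (Python/NumPy) implements the Metropolis acceptance \(A(x,x^{\prime})=\min[e^{-\beta\Delta V},1]\) for a 1D ferromagnetic Ising chain; the chain length, temperature and number of sweeps are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, J=1.0):
    """One sweep of local single-spin-flip Metropolis updates for a 1D
    ferromagnetic Ising chain, V(x) = -J * sum_k s_k s_{k+1} (periodic)."""
    L = len(spins)
    for _ in range(L):
        k = rng.integers(L)
        # cost of flipping spin k: only its two neighbours contribute
        dV = 2.0 * J * spins[k] * (spins[k - 1] + spins[(k + 1) % L])
        # Metropolis acceptance A(x, x') = min[exp(-beta * dV), 1]
        if dV <= 0 or rng.random() < np.exp(-beta * dV):
            spins[k] = -spins[k]
    return spins

L, beta, n_sweeps = 64, 1.0, 5000
spins = rng.choice([-1, 1], size=L)
magnetisation = []
for _ in range(n_sweeps):
    spins = metropolis_sweep(spins, beta)
    magnetisation.append(spins.mean())

m = np.array(magnetisation)
print("mean |m| =", np.abs(m).mean())
# successive samples are strongly correlated; a crude check is the lag-1 autocorrelation
print("lag-1 autocorrelation of m:", np.corrcoef(m[:-1], m[1:])[0, 1])
```

The strong lag-one autocorrelation of the magnetisation illustrates why local updates require long simulations at low temperature.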
Shortly after, Layden et al. [139] demonstrated a quantum-enhanced Markov chain algorithm on real quantum hardware. Following Ref.[138], the general idea is rooted in the Fokker-Planck formalism of non-equilibrium statistical mechanics in continuous systems [140].

Figure 3: Schematic depiction of one quantum-enhanced Metropolis step described in Sec. IX.2. The system illustrated is a 2D lattice model, with \(L\) sites and a classical spin glass energy cost function, \(H_{1}\). In this notation \(|x\rangle=|\sigma_{1}^{z},\cdots,\sigma_{L}^{z}\rangle=|\vec{\sigma}^{z}\rangle\) is a bit-string basis state of the \(2^{L}\)-dimensional Hilbert space. We start from an initial bit-string \(|\vec{\sigma}_{\rm init}^{z}\rangle\), which undergoes unitary evolution (on digital quantum hardware, this can be implemented by Trotterization) using a full quantum Hamiltonian \(H=H_{1}+H_{2}\). At the end of the evolution, the measurement process collapses the time-evolved state \(|\psi\rangle\) into a single bit-string \(|\vec{\sigma}_{\rm new}^{z}\rangle\). This concludes the proposal step \(T(x,x^{\prime})\). For the transverse field case, the proposal matrix is symmetric, \(T(x,x^{\prime})=T(x^{\prime},x)\). Finally, the acceptance step is performed classically, and the new configuration may or may not be accepted.

The Fokker-Planck operator \(H_{\text{FP}}\) is a parent quantum Hamiltonian of the physical potential \(V(x)\), whose spectrum is closely connected with the number of local minima of \(V(x)\). For instance, in a double-well model, \(H_{\text{FP}}\) has two lowest-lying eigenvalues, with eigenstates corresponding to the symmetric (antisymmetric) combination of the two Gaussian localized states in the two wells, \(|\psi_{L}(x)\rangle,|\psi_{R}(x)\rangle\). This idea can be ported to lattice models. Let us consider for simplicity the ferromagnetic Ising model, defined by \(V=H_{1}\) in Eq. 17, as our classical potential. The task is to sample from the classical Boltzmann distribution \(\exp\left[-\beta H_{1}(x)\right]\). Here, the autocorrelation time of a local spin-flip update scheme is dominated, at low temperatures, by the rate of the _rare-event_ processes that drive the system from the _all-up_ (left) \((\uparrow\uparrow\cdots\uparrow):=|\psi_{L}\rangle\) to the _all-down_ (right) \((\downarrow\downarrow\cdots\downarrow):=|\psi_{R}\rangle\) classical states. These processes are necessarily characterized by a nucleation event, exponentially suppressed with \(\beta\), and the subsequent diffusion of the domain wall separating the \(\uparrow\) and the \(\downarrow\) regions [141]. If, however, one can construct a quantum Hamiltonian \(H\) such that its low-lying eigenstates are mostly localized on the states \((\uparrow\uparrow\cdots\uparrow)\) and \((\downarrow\downarrow\cdots\downarrow)\), one could then prepare and sample configurations from these states with optimal autocorrelation times, through repeated collapses of these wavefunctions. This quantum Hamiltonian could be the quantum transverse field Hamiltonian in Eq. 17, where we add a quantum driver \(H_{2}\) to the classical potential, \(H_{1}\). Here, in the small \(\Gamma\) limit, the two (unnormalized) lowest-lying eigenstates of \(H\) are \(|\psi_{0}\rangle\approx|\psi_{L}\rangle+|\psi_{R}\rangle\) and \(|\psi_{1}\rangle\approx|\psi_{L}\rangle-|\psi_{R}\rangle\).[142] While the gap \(E_{01}\) between these states is exponentially vanishing with the system size, the gap to the rest of the spectrum remains \(\mathcal{O}(1)\).
It is clear that, by sampling configurations \(x\)[143] from either \(|\psi_{0}\rangle\) or \(|\psi_{1}\rangle\), we can achieve optimal autocorrelation times in the large \(\beta\) limit, as the states \(|\psi_{L}\rangle\),\(|\psi_{R}\rangle\) are sampled with equal probability. At intermediate temperatures, the probability distribution obtained via the eigenstate projection is generally different from the classical Boltzmann distribution \(\rho(x;\beta)=e^{-\beta H_{1}(x)}\) one aims to achieve. For this reason, a standard Metropolis acceptance step needs to be performed. One can define a valid Markov chain out of this physical intuition, rooted in (1) the preparation of a localized state, (2) a quantum propagation to prepare a linear combination of the low-energy eigenstates of \(H\), and (3) a measurement to collapse into a new bit-string state. In Ref. [138] it is proposed to use a QPE subroutine to prepare such low-energy eigenstates. Layden et al. [139] simplify the idea and use a Hamiltonian simulation subroutine, \(e^{-iHt}\), with randomized \(t\) and \(\Gamma\) values at each step. Crucially, they observe that such a quantum proposal update is symmetric, thus enabling a fast and practical evaluation of the acceptance step. The algorithm is sketched in Fig. 3. A superquadratic speedup for spin glass instances is observed, i.e. a polynomial speed-up of order 3.6, compared with the best possible classical update strategy. Interestingly, the procedure is demonstrated on hardware, where it is found that hardware errors only impact the efficiency of the chain, while the sampling remains unbiased, exactly due to the existence of a classical acceptance step after the quantum, noisy proposal move. This quantum-enhanced Markov chain MC needs a quantum dynamics subroutine. This, in turn, can be implemented in the fault-tolerant regime, in the NISQ regime, and on analog simulators, subject to their architectural constraints. Clearly, to reach a possible quantum advantage one needs to deal with the fact that, as explained in Sect. II, quantum gate times are much longer than classical CPU clock cycles, and this may cancel a scaling advantage for reasonable system sizes. In particular, lattice MC simulations can also be executed on special-purpose classical hardware, such as FPGAs, as demonstrated in Ref. [144], which may enjoy an even faster logical clock speed. Therefore, more work will be needed to assess whether this idea can bring a real benefit in this application space. ### Quantum walks and quantum Metropolis algorithms For the sake of completeness and clarity, it is important to mention here another family of algorithms, known as quantum walks, which share the same objective as the quantum-enhanced Markov chain algorithms described earlier: accelerating the convergence of classical Markov chains. While the justification for the quantum-enhanced Markov chain algorithm of Sect. IX.2 is based on physical intuition [138], and its potential gains are assessed heuristically, quantum walks come with a more rigorous guarantee: if they can be implemented, they provide a quadratic speed-up in autocorrelation times. This quadratic speed-up is related to concepts such as Amplitude Amplification or Grover's algorithm.[145] There are practical and conceptual difficulties that limit the design of quantized Markov chains, particularly in the acceptance step.
In classical systems, we can always save the current configuration, previously denoted as \(x\), and reuse it if the trial move leading to \(x^{\prime}\) is not accepted. However, in quantum computing, the _no-cloning_ theorem prohibits the direct copying of a quantum state.[146] Furthermore, the acceptance step also involves arithmetic operations that are computationally more demanding in the quantum context. It is easy to imagine that a unitary "walk" operator must include rotations by the following arbitrary angle \[\phi=\arcsin\left(\sqrt{\min[e^{-\beta\Delta},1]}\right), \tag{19}\] where \(\Delta\) is an energy difference.[147] Now, it is important to note that arithmetic is much more expensive in quantum computing because it must be reversible. For instance, the most efficient way to perform addition is still a subject of ongoing research.[148] The most commonly used definition of a quantum walk originates from Szegedy.[149] For the sake of brevity, we refer the reader for details to the comprehensive review[150] or to Ref.[147], where Szegedy's algorithm is revisited from a more practical perspective. In short, for any classical Markov chain defined by the transition matrix \(P(x,x^{\prime})\) (see Sect. IX.1), a quantum walk represents a quantized version of it that offers a quadratic speed-up in mixing time. Formally, it enhances the gap from \(\delta\) to \(\sqrt{\delta}\), such that the mixing time decreases quadratically from \(\mathcal{O}(1/\delta)\) to \(\mathcal{O}(1/\sqrt{\delta})\). Szegedy's walk circumvents the no-cloning constraint using two copies of the graph (or lattice), and postulates the existence of a unitary operation \(W\) of the form \[W\ket{x}\otimes\ket{0}=\sum_{x^{\prime}}\sqrt{P(x,x^{\prime})}\ket{x^{\prime}}\otimes\ket{x}. \tag{20}\] A practical implementation of the walk operator \(W\) would require digital rotations by angles such as Eq. 19, which are in turn evaluated by a sequence of costly arithmetic operators. Lemieux et al. [147] analyze the cost of quantum walks, showing that the quadratic speed-up that they can offer is overshadowed by the cost of implementing the \(W\) operator, even assuming optimistic estimates for the gate time of fault-tolerant hardware. The quantum-enhanced method of Sec. IX.2 completely avoids performing the acceptance step on the quantum hardware, which is the main reason for its hardware feasibility. We note that it is increasingly easy to get confused with the names of the methods and the combinations of words such as "quantum", "Monte Carlo", and "Metropolis". In the traditional literature, as well as in this Perspective article, "quantum Monte Carlo" refers to the family of Monte Carlo algorithms that are executed on conventional computers but aim to solve many-body quantum problems. However, there is a community in quantum computing for which this combination of words indicates a Monte Carlo algorithm executed on quantum hardware, including Montanaro's algorithm for computing expectation values of multidimensional integrals using quantum amplitude estimation[151]. This type of algorithm finds application in finance[34], and, while interesting, it is not discussed in this manuscript. Finally, we also note that a quantum computing-based method to speed up a Monte Carlo simulation, in principle, could be used to accelerate a QMC algorithm as well.
In this case, the leap would be twofold: using quantum hardware to accelerate a classical algorithm, such as a path-integral MC, to simulate quantum Hamiltonians. Two philosophically more interesting algorithms can be mentioned for this purpose. Temme et al.[152] proposed a "quantum Metropolis algorithm" for studying quantum many-body Hamiltonians. This method suggests performing a walk in the eigenstates of the quantum Hamiltonian, thus overcoming the sign problem.[153] The algorithm includes performing and undoing QPE, an ancilla register that stores the energy, and the ability to perform on it measurements that only reveal one bit of information, enabling the acceptance step to overcome the no-cloning principle. Yung and Aspuru-Guzik[154] chose a different strategy for their "quantum-quantum Metropolis algorithm", finding a way to extend Szegedy's walk to quantum Hamiltonians. The runtime performance and scaling of such approaches have yet to be assessed. ### Sampling with quantum annealers or simulators The algorithms presented in Sect. IX.3 require a fault-tolerant computer. However, it cannot be ruled out that an advantage in the sampling problem could come from hardware that falls at the opposite end of the spectrum, namely noisy quantum simulators or quantum annealers.[19] First of all, let us observe that the quantum-enhanced Markov chain method of Sect. IX.2 can be implemented not only using Trotterization but also through real-time dynamics in a quantum simulator.[139] Furthermore, optimization and sampling tasks are closely connected.[155] Special-purpose quantum simulators, called quantum annealers, have been built with the aim of optimizing large-scale spin-glass problems, but they have also been reconsidered as thermal samplers[156] with some specific applications in machine learning.[157, 158] The possibility of using a quantum annealer as a sampler arises from its deviation from adiabaticity. The existence of vanishing gaps during annealing implies that at the end of the experiment, the wave function does not localize in the classical minimum of the cost function but remains delocalized, producing a distribution of read-outs. The presence of hardware noise amplifies this effect even further. This residual distribution could resemble a thermal Boltzmann distribution of some classical Hamiltonian, close to the problem Hamiltonian originally meant to be optimized, and at some effective temperature, which is difficult to determine.[156, 159] However, given all the possible hardware and calibration errors, it is unrealistic to hope that this approach can generate unbiased samples from a target distribution. Recently, Ghamari et al.[160] proposed the use of this annealing process as an importance sampler. Similarly to the quantum-enhanced Markov chain method, detailed balance is restored using a classical acceptance step. In this case, as well, a control parameter is the annealing runtime, which generates a more-or-less-localized final distribution. Finally, Wild et al. [161] proposed an adiabatic state preparation of Gibbs states that can also bring quantum speedup over classical Markov chain algorithms and that could be implemented on NISQ Rydberg-atom devices. ## X Conclusions In this Perspective, we investigate many intersections between quantum algorithms and Monte Carlo methods. We begin with a brief review of quantum computing applications for many-body quantum physics. We outline the consensus that is emerging after these years in which quantum computing has become mainstream.
With the availability of quantum computers with \(\sim 100\) qubits and the ability to implement gate sets, albeit noisy [13], the field is taking on a more practical character, moving beyond the traditional boundaries of quantum information theory. We observe that different hardware platforms imply different gate frequencies, which must be taken into consideration with a view to achieving quantum advantage. Quantum advantage for high-accuracy many-body ground state calculations is likely to be deferred to the fault-tolerant era due to the existence of hardware noise in today's machines and the existence of highly developed classical competitors [89]. Although obtaining quantum advantage through variational methods remains possible, especially in systems where classical methods struggle [74], here we face the additional challenge posed by the presence of quantum measurement shot noise. We then list several points of contact between quantum and classical variational methods. First, we explain the difference between the statistical noise present in conventional QMC algorithms and the noise arising from quantum measurements. Classical QMC methods feature an energy estimator - the _local energy_ - that enjoys the zero-variance property. This, along with stable optimizers [162; 76], enables the optimization of wave functions featuring thousands of parameters. This is not currently possible in variational quantum computing. Even with access to an exact ground state preparation circuit, obtaining the energy with sufficient precision requires a costly number of circuit repetitions [8]. It is clear that this problem arises even before the circuit optimization stage. In the current literature, this aspect is often overlooked, as several new algorithms or circuits are tested without realistic shot noise conditions [67]. We suggest that finding the quantum equivalent of the _local energy_ should be a priority in the development of variational algorithms. Along the same lines, we reviewed attempts and challenges in "quantizing" QMC methods [101; 102; 103; 105]. Other areas where we expect to see cross-fertilization between the quantum and classical worlds include the development of variational forms: classically-inspired circuits for VQE, quantum-inspired ansatze for VMC. Several essential ingredients for variational real-time evolution and parameter optimization under noisy conditions have already been put forward in the VMC community and will be instrumental for their quantum counterparts. Finally, after discussing how knowledge of QMC methods can provide new momentum to the development of quantum algorithms, we take the opposite direction, showing that quantum hardware can bring advantages to Monte Carlo itself. In this space, quantum walks have been present in the literature for several years and achieve a quadratic speed-up in autocorrelation times through the quantization of a classical Markov chain [149]. Their scaling is discussed, as is typical in quantum information, using an oracular formulation, which assumes the existence of key subroutines without delving into the details of their concrete gate-level implementation. Recently, it has been shown that these oracles require fairly long circuits. In their necessary fault-tolerant implementation, this implies absolute runtimes that are still slower than classical Monte Carlo, even for large-scale classical-spin simulations [147].
A more hardware-friendly possibility is represented by a family of methods that use a quantum computer as an importance sampler or to perform only the proposal part of a Metropolis update on the quantum hardware [138; 139; 160]. Physically speaking, one simply leverages the fact that quantum measurements are uncorrelated, making them an efficient engine for sampling. In this case, shot noise is no longer a limitation but rather becomes the computational resource for quantum advantage [138]. Overall, the purpose of this Perspective is to further connect two communities: the quantum algorithms community and the Monte Carlo one. As mentioned, many methods developed in QMC can be repurposed in quantum computing. On the other hand, QMC can be a formidable competitor that can hinder or delay the quantum advantage. This is true for quantum chemistry applications, but also for optimization and beyond. For instance, QMC can reproduce the scaling of quantum annealing machines, for classical optimization purposes, under certain conditions [163; 142; 164]. However, the two communities can be complementary, and we hope that new impactful algorithms, either quantum or classical, will emerge thanks to this interaction, to solve important problems in chemistry and condensed matter. **Acknowledgments.** I acknowledge discussions on these topics in the past years with S. Sorella, G. Carleo, P. Faccioli, A. Zen, K. Nakano, F. Tacchino, P. Ollitrault, S. Woerner, M. Motta, D. Layden, M. Troyer. I dedicate this manuscript to the memory of Sandro Sorella, who invented many techniques mentioned above, and inspired the whole QMC community with his creativity and enthusiasm. I also acknowledge financial support from the Swiss National Science Foundation (grant PCEFFP2_203455). ## Appendix A Hamiltonian variational and Jastrow ansatze **Quantum circuit.** We use the Hamiltonian variational (HV) ansatz [8], a powerful and transferable ansatz inspired by the adiabatic principle. The unitary operator defining the HV ansatz is made of \(d\) blocks, and each block \(i\) is a product of \(\ell\) operators \(\hat{U}_{j}(\theta_{j}^{i})=\exp(\mathrm{i}\theta_{j}^{i}\hat{H}_{j})\), with \(j=1,\ldots,\ell\) indexing the non-commuting terms of the Hamiltonian. For the transverse field Ising model we only need \(\ell=2\), cf. Eq. 17. In this case, the full unitary operator is \[\hat{U}_{\mathrm{HV}}(\theta):=\prod_{i=1}^{d}\hat{U}_{2}(\theta_{2}^{i})\hat{U}_{1}(\theta_{1}^{i}), \tag{A1}\] which can be efficiently decomposed using one- and two-qubit quantum gates, and the final parameterized state is \[|\psi(\theta)\rangle=\hat{U}_{\mathrm{HV}}(\theta)\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)^{\otimes L}, \tag{A2}\] where the initial non-entangled state can be obtained from \(|0\rangle^{\otimes L}\) by placing one Hadamard gate on each qubit. The total number of parameters is \(\ell d\). In our numerical experiment, we use a state-vector emulation of the operator, based on linear algebra operations, and we do not compile our operator into a real circuit. Parameters are optimized using COBYLA and BFGS optimizers. These results are compatible with Ref. [74]. Once the optimized state is obtained, we emulate the noisy energy estimation using the Pauli measurement method. We sample \(M_{1}\) spin configurations from \(|\psi(\theta)|^{2}\) in the computational basis, and \(M_{2}=M_{1}\) in the rotated basis, obtained using the \(\mathrm{H}\otimes\mathrm{H}\otimes\cdots\otimes\mathrm{H}\) operator (cf. [50]).
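For illustration, a dense state-vector emulation of this procedure can be written in a few lines. The sketch below (Python with NumPy/SciPy) assumes \(H_{1}=-\sum_{k}Z_{k}Z_{k+1}\) and \(H_{2}=-\Gamma\sum_{k}X_{k}\) with periodic boundaries, which is one common convention for Eq. 17, and uses arbitrary small values for \(L\), the depth and the parameters.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

rng = np.random.default_rng(1)
L, Gamma, depth = 6, 1.0, 3
I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
Had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def site(op, k):
    # operator `op` acting on qubit k of an L-qubit register
    ops = [I2] * L
    ops[k] = op
    return reduce(np.kron, ops)

H1 = -sum(site(Z, k) @ site(Z, (k + 1) % L) for k in range(L))   # classical ZZ part
H2 = -Gamma * sum(site(X, k) for k in range(L))                  # transverse field

def hv_state(theta):
    # |psi(theta)> = prod_i U2(theta[i,1]) U1(theta[i,0]) |+>^L
    psi = np.ones(2 ** L, dtype=complex) / np.sqrt(2 ** L)
    for t1, t2 in theta:
        psi = expm(1j * t2 * H2) @ (expm(1j * t1 * H1) @ psi)
    return psi

def sampled_energy(psi, M=1000):
    # M shots in the computational (Z) basis for the ZZ term
    pz = np.abs(psi) ** 2; pz /= pz.sum()
    xz = rng.choice(2 ** L, size=M, p=pz)
    s = 1 - 2 * ((xz[:, None] >> np.arange(L)) & 1)               # Z eigenvalues +-1
    e_zz = -np.sum(s * np.roll(s, -1, axis=1), axis=1)
    # M shots in the Hadamard-rotated basis for the field term
    phi = reduce(np.kron, [Had] * L) @ psi
    px = np.abs(phi) ** 2; px /= px.sum()
    xx = rng.choice(2 ** L, size=M, p=px)
    sx = 1 - 2 * ((xx[:, None] >> np.arange(L)) & 1)              # X eigenvalues +-1
    e_x = -Gamma * np.sum(sx, axis=1)
    return np.mean(e_zz) + np.mean(e_x)

theta = 0.1 * rng.standard_normal((depth, 2))
psi = hv_state(theta)
exact = np.real(psi.conj() @ ((H1 + H2) @ psi))
print("exact <H> =", exact, " sampled estimate =", sampled_energy(psi))
```

Even for a fixed state, the sampled estimate fluctuates around the exact expectation value, which is the finite statistical error of the quantum estimator shown in Fig. 2.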
The total cost of the energy evaluation is therefore \(2M_{j}\), where the values of \(M_{j}=M_{1}=M_{2}\) are reported in the text. **Classical ansatz.** For the case of VMC we use a long-range Jastrow state of the form \[\psi(x)=\exp\left(\sum_{r=1}^{L/2}\lambda_{r}\left(\sum_{k=1}^{L}s_{k}^{z}s_{k+r}^{z}\right)\right) \tag{A3}\] with five variational parameters. The global minimum is found when the parameters are \([0.220,0.057,0.030,0.022,0.010]\). This trial state yields a variational energy within \(\sim 0.1\%\) of the exact ground state energy, \(E_{0}\). To generate ansatze of different quality and showcase the zero-variance property of the local energy, we simply act on the first parameter \(\lambda_{1}\), pulling it away from its optimal value towards smaller values (this is done to create a more challenging, delocalized state), while keeping the others fixed. When \(\lambda_{1}=-0.15\), the variational energy is degraded by up to a \(10\%\) systematic error. We extract \(M_{\mathrm{VMC}}\) configurations in the computational basis, since, as the name suggests, we only need this basis to compute the local energy, Eq. 16. To calculate the standard deviation of the estimator we simply repeat the numerical experiment, for each ansatz and \(M\) setup, 100 times.
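A minimal sketch of the corresponding classical calculation, i.e. sampling the local energy of the long-range Jastrow state of Eq. (A3) for the transverse-field Ising chain, is given below. The Hamiltonian convention (ferromagnetic couplings, periodic boundaries, \(\Gamma=1\)) and the Monte Carlo settings are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
L, Gamma = 10, 1.0
lam = np.array([0.220, 0.057, 0.030, 0.022, 0.010])   # parameters quoted in the text

def log_psi(s, lam):
    # log of the long-range Jastrow amplitude, Eq. (A3); s is a +-1 spin array
    return sum(lam[r - 1] * np.sum(s * np.roll(s, -r)) for r in range(1, L // 2 + 1))

def local_energy(s, lam):
    # diagonal (ZZ) part plus off-diagonal (transverse-field) part of E_loc(x)
    e = -np.sum(s * np.roll(s, -1))
    lp = log_psi(s, lam)
    for k in range(L):
        s[k] = -s[k]
        e += -Gamma * np.exp(log_psi(s, lam) - lp)    # psi(x^(k)) / psi(x)
        s[k] = -s[k]
    return e

def metropolis_sample(lam, n_samples, n_therm=500):
    s = rng.choice([-1, 1], size=L)
    lp = log_psi(s, lam)
    energies = []
    for i in range(n_therm + n_samples):
        k = rng.integers(L)
        s[k] = -s[k]
        lp_new = log_psi(s, lam)
        if rng.random() < np.exp(2.0 * (lp_new - lp)):   # |psi(x')|^2 / |psi(x)|^2
            lp = lp_new
        else:
            s[k] = -s[k]                                 # reject: undo the flip
        if i >= n_therm:
            energies.append(local_energy(s, lam))
    return np.array(energies)

E = metropolis_sample(lam, 2000)
print("optimized ansatz : mean E_loc =", E.mean(), " std =", E.std())

# degrade the ansatz as described in the text (lambda_1 pulled towards -0.15)
bad = lam.copy(); bad[0] = -0.15
Ebad = metropolis_sample(bad, 2000)
print("degraded ansatz  : mean E_loc =", Ebad.mean(), " std =", Ebad.std())
```

The standard deviation of \(E_{\rm loc}\) shrinks as the ansatz approaches the exact ground state and grows when \(\lambda_{1}\) is pulled away from its optimum, which is the zero-variance behaviour plotted in Fig. 2.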
2307.09114
BOLD: A Benchmark for Linked Data User Agents and a Simulation Framework for Dynamic Linked Data Environments
The paper presents the BOLD (Buildings on Linked Data) benchmark for Linked Data agents, next to the framework to simulate dynamic Linked Data environments, using which we built BOLD. The BOLD benchmark instantiates the BOLD framework by providing a read-write Linked Data interface to a smart building with simulated time, occupancy movement and sensors and actuators around lighting. On the Linked Data representation of this environment, agents carry out several specified tasks, such as controlling illumination. The simulation environment provides means to check for the correct execution of the tasks and to measure the performance of agents. We conduct measurements on Linked Data agents based on condition-action rules.
Tobias Käfer, Victor Charpenay, Andreas Harth
2023-07-18T10:03:45Z
http://arxiv.org/abs/2307.09114v1
BOLD: A Benchmark for Linked Data User Agents and a Simulation Framework for Dynamic Linked Data Environments ###### Abstract The paper presents the BOLD (Buildings on Linked Data) benchmark for Linked Data agents, next to the framework to simulate dynamic Linked Data environments, using which we built BOLD. The BOLD benchmark instantiates the BOLD framework by providing a read-write Linked Data interface to a smart building with simulated time, occupancy movement and sensors and actuators around lighting. On the Linked Data representation of this environment, agents carry out several specified tasks, such as controlling illumination. The simulation environment provides means to check for the correct execution of the tasks and to measure the performance of agents. We conduct measurements on Linked Data agents based on condition-action rules. * **URL**[https://github.com/bold-benchmark/](https://github.com/bold-benchmark/) **License** GPLv3 ## 1 Introduction Technologies from the Semantic Web stack are nowadays the technologies of choice to provide interoperable interfaces to composite cyber-physical systems. While controlling cyber-physical systems through such interfaces could benefit greatly from agent-oriented programming techniques, a Semantic Web environment has peculiar characteristics usually not found in agent environments. Our goal is to contribute to the convergence of the research fields of agent-oriented programming and Semantic Web [6, 5] by proposing a benchmark to evaluate agents that operate in Semantic Web environments. Several initiatives illustrate the adoption of Semantic Web technologies for cyber-physical systems. In manufacturing, the International Data Space uses Semantic Web technologies, with several use-cases around Industry 4.0 [33]. In building information management, several vocabularies have recently been introduced to represent data in buildings, such as Brick [1] or the work of the Linked Building Data community group1 at the World Wide Web Consortium (W3C). Works orthogonal to sub-fields of cyber-physical systems include W3C Recommendations such as the Semantic Sensor Network ontology (SSN) [19], a widely adopted vocabulary to describe sensor and actuator data, and the Web of Things (WoT) effort's WoT Architecture [29] and Thing Description [24], which allow access to sensor and actuator data from the devices. The Linked Data [2] environment underlying those efforts is characterized by: 1. Hypermedia, i. e. the possibility of following links, allowing agents to discover new system components and possible actions to perform on them, 2. Semantic alignments, allowing agents to bridge between terminologies of different system components, and 3. RESTfulness, so that agents can read/write information from/to various systems Most attention in Linked Data research has been given to perfecting the environment, that is, storing and serving data at scale by servers [37; 21; 27]. Comparatively little work has been done on Linked Data agents, the flip side of the same coin. This gap [12] could be filled in collaboration with the agent research community, which has been developing Multi-Agent System (MAS) architectures for manufacturing and building automation for decades [10; 22; 18]. However, these architectures feature agents that are physically situated in their environment. That is, agents are physically constrained in perceiving and acting [15]. In contrast, Linked Data agents are never strictly situated: 1.
Linked Data servers make sensor data visible to all agents without distinction. Linked Data agents must restrict the scope of their perception to data relevant to their individual goal, e.g. by following certain links only. 2. Servers expose and consume symbolic representations of the physical environment. Different servers use different vocabularies that can be semantically aligned using mappings. Linked Data agents must interpret such alignments. 3. All potential actions may be exposed to Linked Data agents simultaneously. To take full advantage of parallel actions and avoid conflicts, Linked Data agents must restrict the scope of their actions. Those points influence all three high-level phases of an agent's cognitive loop: sensing, reasoning, and action. As a result, the existing benchmarks that already exist for MAS and that assume situated agents (e.g. [40]) are not applicable. In this paper, we introduce a benchmark to specifically evaluate and compare Linked Data agents, their architectures and runtimes. Our benchmark, the Building on Linked Data (BOLD) benchmark, is based on a real-world system configuration in the building automation domain. The key contributions are: * A formal method for modelling dynamic Linked Data environments, and evaluating agent performances over such environments; * A corresponding execution framework; * An instantiation of our method and framework: * A dynamic Linked Data model of a building (building 3 of IBM Research Dublin) with simulated occupancy and lighting; and * A set of corresponding agent evaluation tasks with increasing complexity This paper is structured as follows: in Sec. 2, we provide a small example to illustrate the overall system; in Sec. 3, we provide preliminaries around the benchmark environment; subsequently, in Sec. 4 and 5, we derive the design of the BOLD environment; then, in Sec. 6, we present tasks for user agents to carry out; next, in Sec. 7, we report on applications of BOLD; in Sec. 8, we survey related work; and last, in Sec. 9, we summarise our work. ## 2 Example We now use an example to illustrate the main aspects of the BOLD benchmarking system5. We first characterize the resources the server makes available as part of the environment, in the example the Coffee Dock room of IBM Research Dublin's building 3. Next, we give an overview of how an agent carries out a simple task, in the example switching on the light in the Coffee Dock room. Finally, we illustrate how the server tracks the agent's progress over time. Footnote 5: We assume the reader has a basic understanding of HTTP, RDF and SPARQL. The interested reader can find more details on these technologies e. g. in [20]. We follow the prefix practices recorded in [http://prefix.cc/](http://prefix.cc/), except : as short for http://localhost:8080/, brick: as short for [http://buildsys.org/ontologies/Brick#](http://buildsys.org/ontologies/Brick#), and bf: as short for [http://buildsys.org/ontologies/BrickFrame#](http://buildsys.org/ontologies/BrickFrame#). The resources relevant for the example are: The Coffee Dock Room, :Room_CoffeeDusk, the room's lighting system, :Lighting_System_42GFLCoffeeDock, and the lighting system's on/off property, :property-Lighting_System_42GFLCoffeeDock#it. The description of the building uses the Brick ontology [1], plus terms from SOSA/SSN [19] to describe sensors and actuators, in particular the properties of sensing and actuation systems that agents can read and write. 
For an agent to switch on the light in the Coffee Dock Room, the agent has to first discover the URI of the light. Thus, the agent performs an HTTP GET request on :Room_CoffeeDusk, which leads to a response in RDF that contains a link to the lighting system. The agent next performs a traversal of this link and dereferences the IRI of the lighting system. The graph thus obtained contains a link to the on/off property of the lighting system, which the agent traverses in turn. Finally, to achieve the task, the agent must write the value on to the lighting system property, which is done by an HTTP PUT request with a suitable RDF payload. To test whether an agent succeeds in a task, each task is defined by 'faults' in the environment that the agent must fix (in MAS terms, a norm violation). In the simple task of turning on the light, there is one fault to fix, defined as the Coffee Dock lighting system having the property value off, expressed as a SPARQL ASK query. An agent has succeeded in the task if the query evaluates to false. The simple task of turning on the light illustrates the basic case, which just needs a single agent loop and for which a simple success metric suffices. The benchmark also includes tasks in which the agent has to loop and act continuously, which requires a more elaborate server setup (e.g. to provide dynamic data), together with a fault rate metric to capture the success of agents appropriately. ## 3 Formal Definitions and Execution Framework We now introduce a series of concepts to formally define the tasks and metrics used in BOLD. We build on [11] for the definitions. A task is defined by a set of faults that agents have to fix. Such tasks correspond to the MAS concept of norms, i. e. constraints to be satisfied. The evaluation of a task depends on a simulation environment. We define the simulation environment and metrics for faults and performance. Last, we introduce a corresponding execution framework. ### Dataset and Simulation Run We use the usual abstract notation for RDF. \(I\), \(B\), \(L\) are disjoint sets of IRIs, blank nodes and literals. An RDF quad is an instance of \((I\cup B)\times I\times(I\cup B\cup L)\times I\) and an RDF dataset is a set of RDF quads. \(D\) denotes the set of RDF datasets. Definition 1 (simulation run, simulation environment): A simulation run is a finite sequence of datasets \(\langle d_{0},d_{1},\ldots,d_{k}\rangle\), \(k\in\mathbb{N}\). A simulation environment can be defined as the pair \(\langle d,u\rangle\), where \(d\in D\) and \(u\) is some update function \(u:\ D\mapsto D^{n}\), such that for every possible simulation run \(\langle d_{0},d_{1},\ldots,d_{k}\rangle\) of the environment the following holds: \(d_{0}=d\) and \(\forall t<k:d_{t+1}\in u(d_{t})\) The update function \(u\) corresponds to a (set of) SPARQL \(\mathtt{UPDATE}(s)\). A simulation environment can thus be seen as a transition system that generates a set of simulation runs (of arbitrary size). SPARQL updates may include non-deterministic function calls. The output of \(u\) is therefore a set of RDF datasets. We assume \(n\) to be finite, so that SPARQL updates are only allowed to include a finite set of binary decisions of the form rand() <?threshold. The above definition only defines 'dry runs', during which the environment is updated automatically without agent intervention. Between each environmental update, though, agents can act on the environment by updating a dataset \(d_{i}\).
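A minimal sketch of this read-write interaction, written with the Python requests and rdflib libraries, is shown below; the link-selection heuristics and the payload vocabulary are illustrative assumptions, since the benchmark defines the exact predicates and shapes to use.

```python
import requests
from rdflib import Graph, URIRef

BASE = "http://localhost:8080/"          # the ':' prefix of the example
ROOM = BASE + "Room_CoffeeDusk"

def fetch(iri):
    # dereference a resource and parse the RDF response
    resp = requests.get(iri, headers={"Accept": "text/turtle"})
    resp.raise_for_status()
    g = Graph()
    g.parse(data=resp.text, format="turtle")
    return g

# 1. Dereference the room and follow a link to its lighting system.
#    (Here we simply pick an object IRI that names a lighting system;
#    a real agent would select links by predicate and ontology terms.)
room = fetch(ROOM)
lighting = next(o for _, _, o in room
                if isinstance(o, URIRef) and "Lighting_System" in str(o))

# 2. Dereference the lighting system and follow the link to its on/off property.
system = fetch(str(lighting))
prop = next(o for _, _, o in system
            if isinstance(o, URIRef) and str(o).startswith(BASE + "property-"))

# 3. Write the value 'on' to the property resource.
#    The predicate below is a placeholder; BOLD defines the actual vocabulary.
payload = f'<{prop}> <https://example.org/value> "on" .'
resp = requests.put(str(prop), data=payload,
                    headers={"Content-Type": "text/turtle"})
resp.raise_for_status()
```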
Linked Data interfaces only allow a limited set of operations on Web resources (\(\mathtt{PUT}\), \(\mathtt{POST}\), \(\mathtt{DELETE}\)), as formalized below (\(\mathtt{GET}\) corresponds to \(id\)). Definition 2 (agent operation): Let \(\delta:D\times D\mapsto D\) be the function calculating the symmetric difference between two datasets: \(\delta(d,d^{\prime})=(d\cup d^{\prime})\setminus(d\cap d^{\prime})\). Moreover, let \(\pi\) be a projection function such that \(\pi(d)\) is the set of resource IRIs (graph names) appearing in \(d\). A function \(op:D\mapsto D\) is called an agent operation if \(\pi(\delta(d,op(d)))\) is a singleton. The identity function \(id(d)=d\) is an agent operation. On top of that, we use the following terminology: * if \(\pi(d)\subset\pi(op(d))\), \(op\) is a 'create' operation * if \(\pi(op(d))\subset\pi(d)\), \(op\) is a 'delete' operation * otherwise, \(\pi(d)=\pi(op(d))\) and \(op\) is a 'replace' operation A full benchmark run is thus the interleaving of agent operations and environmental updates, obtained through composition, see \(\circ\) in the following definition. Definition 3 (benchmark run): Let \(\langle d,u\rangle\) be a simulation environment. A benchmark run \(\rho\) is a finite sequence \(\langle d_{0},d_{1},\ldots,d_{k}\rangle\), such that: (1) \(d_{0}=d\), and (2) \(\forall t<k\), there is a finite sequence of agent operations \(\langle op_{1},op_{2},\ldots op_{l}\rangle\), such that \(d_{t+1}\in op_{1}\circ op_{2}\circ\ldots op_{l}\circ u(d_{t})\) Definition 4 (fault sequence): A fault sequence \(\gamma\) is a finite sequence of datasets \(\langle d_{0},d_{1},\ldots,d_{l}\rangle\), \(l\in\mathbb{N}\). A run \(\rho=\langle d_{0},d_{1},\ldots,d_{k}\rangle\) matches a fault sequence \(\gamma=\langle d^{\prime}_{0},d^{\prime}_{1},\ldots,d^{\prime}_{l}\rangle\) at time \(t\) if a) \(l\leq t\leq k\) and b) \(\forall i|t-l\leq i\leq t:d_{i}\cap d^{\prime}_{i}=d^{\prime}_{i}\) Fault sequences can be defined in terms of sequences of SPARQL queries. Several fault sequences may occur in the environment at a given time. In the following, we denote \(\Gamma\) the (possibly infinite) set of fault sequences defined for a task. We also denote \(\Gamma_{\rho,t}\) the set of all \(\gamma\in\Gamma\) such that \(\rho\) matches \(\gamma\) at time \(t\). Every task is associated with potential fault sequences in the environment that agents must fix as quickly as possible. In the Coffee Dock room example, the SPARQL ASK query would be associated with one task (as a fault sequence of size one). Given a run \(\rho\) and a set of fault sequences \(\Gamma\) (defining a task), one can specify performance metrics to evaluate \(\rho\) against \(\Gamma\). Definition 5 (fault rate): Let \(\rho\) be a benchmark run of size \(k\) and \(\Gamma\) a set of fault sequences of equal size \(l\). The fault rate is the number of times at which \(\rho\) matches some \(\gamma\in\Gamma\) divided by the length of \(\rho\): \[\frac{|\{t\mid\Gamma_{\rho,t}\neq\emptyset\}|}{k-l} \tag{1}\] When the task requires a single action loop (see Sec. 6), the fault rate gives an indication of 'how fast' an agent is at solving the task. When the task requires continuous action, the metric gives an indication of 'how efficient' an agent is over the whole duration of the task. Counting faults gives an indication of 'how close' an agent has been to solving the task on average.
Definition 6 (average fault count): Let \(\rho\) be a benchmark run of size \(k\) and \(\Gamma\) be a set of fault sequences of equal size \(l\). The average fault count is the sum of \(|\Gamma_{\rho,t}|\) over \(\rho\), averaged over the time steps at which at least one fault occurs: \[\frac{\sum_{l\leq t\leq k}|\Gamma_{\rho,t}|}{|\{t\mid\Gamma_{\rho,t}\neq\emptyset\}|} \tag{2}\] We do not only want to compare different agent architectures, but also the same architecture _across_ tasks. To this end, we further define a task-independent fault count for a run that is normalized against a certain dry run. Definition 7 (normalized fault count): Let \(\langle u,d\rangle\) be a simulation environment. Let \(\rho\) and \(\overline{\rho}\), both of size \(k\) and both starting with \(d\), be respectively a benchmark run and a simulation run in the environment. For all \(\overline{d_{t}}\) of \(\overline{\rho}\) (\(t<k\)), it holds that \(\overline{d_{t+1}}\in u(\overline{d_{t}})\). The ratio \[\frac{\sum_{l\leq t\leq k}|\Gamma_{\rho,t}|}{\sum_{l\leq t\leq k}|\Gamma_{\overline{\rho},t}|} \tag{3}\] is a normalized fault count if for all \(d_{t}\) of \(\rho\) (\(t<k\)): \(d_{t+1}\cap\overline{d_{t+1}}=u(d_{t})\cap u(\overline{d_{t}})\). The above definition ensures that \(\rho\) and \(\overline{\rho}\) are comparable in how non-deterministic functions behave during their execution, regardless of agent operations. In practice, it is enough to use a pseudo-random number generator with the same seed for all runs that must be compared. The resulting normalized fault count then measures closeness to success for a task regardless of how many potential faults the task defines. While the metrics defined so far relate to minimizing faults, they do not penalize agents that would perform unnecessary operations to reach their goal. To take this aspect into account, we finally introduce read/write ratio as a metric. Definition 8 (read/write ratio): Let \(\rho\) be a benchmark run of size \(k\). For all \(t<k\), let \(\langle op_{t,1},op_{t,2},\ldots op_{t,l_{t}}\rangle\) be a sequence such that \(d_{t+1}\in op_{t,1}\circ op_{t,2}\circ\ldots op_{t,l_{t}}\circ u(d_{t})\). Each operation sequence at time \(t\) may include multiple occurrences of \(id\), capturing read operations performed by agents. We denote \(i_{t}\) the number of occurrences of \(id\) at time \(t\). We thus define the read/write ratio for \(\rho\) as follows: \[\frac{\sum_{t<k}i_{t}}{\sum_{t<k}(l_{t}-i_{t})} \tag{4}\] ### Execution Framework for Dynamic Linked Data Simulations Our execution framework is based on Eclipse RDF4J (formerly: Sesame [9]) and controls the simulation, maintains the dataset, and executes SPARQL UPDATE / ASK queries, using which we implemented the simulation's dynamics / fault checks. We wrote SPARQL user-defined functions to facilitate this simulation. The REST interface is built on Java Servlets and the SPARQL Graph Store Protocol, and runs on Eclipse Jetty. Using .properties files, users can register datasets, initialisation queries, simulation updates, and/or fault checks. This way, users can build a repeatable, standards-based dynamic Linked Data environment. ## 4 Dataset Our benchmark builds on Balaji et al.'s datasets that describe real buildings, which they published using their Brick ontology [1]. Of the datasets in [1], we chose building 3 of IBM Research Dublin, as it has diverse building systems with discrete and continuous state, including lighting, and built a Linked Data version.
### The Brick Ontology

The Brick ontology is a simple model to represent buildings and their associated automation systems [1]. We base our considerations on version 1.0.0 of the Brick ontology, as this is the last version under which building 3 has been published. The main classes we use from the Brick ontology are [8], see Figure 1:

* brick:Equipment refers to any technical equipment as part of some building automation system, including fire safety systems, HVAC (heating, ventilation and air conditioning), and lighting systems
* brick:Point refers to the data points (sensors, commands, set points) to monitor and control building automation systems
* brick:Location refers to any location inside a building (such as floors and rooms) that is of relevance for automation

The Brick ontology also defines properties:

* bf:hasPoint can relate locations and equipment to a brick:Point.
* bf:hasPart and its inverse bf:isPartOf are transitive properties to define equipment and locations in a hierarchical fashion
* bf:isLocatedIn, a transitive property to specify the location of some equipment or a data point
* bf:feeds, a property to specify what locations are covered (fed) by some automation system, either directly or indirectly via other systems.

Figure 1: The Brick Ontology's RDF Properties as UML class diagram. UML class associations between RDFS classes depict domains and ranges of the properties.

### Description of Building 3 in Brick

Building 3 of IBM Research Dublin is a two-storey office building. The building has a rectangular ground plot with the long side oriented from north to south, i. e. most rooms are oriented eastwards or westwards. In the building description6, we see a subdivision into floors and wings, to which the rooms are assigned using bf:hasPart relationships. The rooms and wings have brick:Lighting_Systems assigned. Such a lighting system comes in different forms: Some come with a brick:Occupancy_Sensor that determines whether there are people in its vicinity, others with a brick:Luminance_Command, in other words, a switch to control the lights, and others with a brick:Luminance_Sensor that we consider to be triggered by daylight. We provide basic statistics in Table 1.

Footnote 6: [https://github.com/BuildSysUniformMetadata/GroundTruth/blob/2e48662/building_instances/IBM_B3.ttl](https://github.com/BuildSysUniformMetadata/GroundTruth/blob/2e48662/building_instances/IBM_B3.ttl)

### Building 3 in Read-Write Linked Data

The description of building 3 as found online is one single static RDF graph with no dereferenceable URIs and no entity with a temporal extent (such as sensor measurements). To make it suitable for a Read-Write Linked Data benchmark, we extended this monolithic RDF graph in two ways. First, we scoped every triple in the graph with a dereferenceable resource IRI, such that the result is a proper RDF dataset, as defined in Sec. 3. Second, we extended the definition of the building's data points, in order to provide resources that change over time.

#### Resource Partitioning

As our aim is to provide a benchmark for Web agents, resource IRIs must be dereferenceable. Moreover, we assume that cyber-physical systems with read-write Linked Data interfaces will be composed of large amounts of resources of small size (individually exposed by connected devices). To match this assumption, we partitioned the original RDF graph into smaller RDF graphs, each providing information about one room, one floor, one data point, etc.
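Before we spell out the exact derivation, the following minimal sketch (illustrative only; the released dataset was produced with the generation scripts published alongside the benchmark, and all IRIs below are made up) shows how such a subject- and object-centred partitioning of a triple set can be realized:

```python
# Illustrative sketch: partition a single RDF graph into per-resource graphs by
# scoping each triple with its subject IRI and, if the object is an IRI, its object IRI.

def is_iri(term: str) -> bool:
    # Simplified check; a real implementation would inspect the RDF term type instead.
    return term.startswith("http://") or term.startswith("https://")

def partition(triples):
    """Map each graph name (resource IRI) to the triples exposed under it."""
    graphs = {}
    for s, p, o in triples:
        graphs.setdefault(s, set()).add((s, p, o))
        if is_iri(o):
            graphs.setdefault(o, set()).add((s, p, o))
    return graphs

# A made-up hasPart-style triple ends up in the graphs of both the wing and the room.
example = [("http://example.org/B3/WingA",
            "http://example.org/bf#hasPart",   # hypothetical namespace, for brevity
            "http://example.org/B3/Room101")]
print(partition(example))
```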
Specifically, we created an RDF dataset such that from every triple \(\langle s,p,o\rangle\) from the graph, we derived the quads \(\langle s,p,o,s\rangle\) and \(\langle s,p,o,o\rangle\). We thus derived about 3k dereferenceable IRIs from 25k triples, with graphs of 5-50 triples (see Table 1).

#### Time-varying Resource Definition

Although the original RDF graph includes resources like sensors and actuators, it does not provide means to expose actual sensor measurements (such as occupancy or an illuminance value) nor does it provide means to expose the state of actuators (such as the status of a light switch). To complete the dataset on which BOLD is defined, we included time-varying and writable resources for such sensor and actuator data. To that end, we aligned the Brick ontology with SOSA/SSN as illustrated in Fig. 2.

Figure 2: Mapping from the Brick ontology to the SSN ontology in the form of a UML class diagram. See Figure 1 for an explanation of our notation.

We then extended our dataset with resources defined as instances of ssn:Property. For all data points of the building 3 lighting systems, we created a property resource similar to the example payload of Sec. 2. We only consider lighting systems since all tasks of BOLD are defined according to a lighting scenario (see Sec. 6). As a result, we added \(>500\) dereferenceable IRIs to our Linked Data benchmark, see Table 1. Section 5 describes how these resources vary over time.

## 5 Simulation Environment

The simulation environment is defined by an initial dataset and an update function that non-deterministically updates the dataset exposed to agents. The dataset presented in Sec. 4.2 serves as initial dataset in the simulation. We now present how datasets are updated by the simulation environment and how a dataset is exposed to agents.

### Time

The primary role of the simulation environment is to increment time. To this end, a timer is triggered at fixed intervals. The timer increments a number and progresses an RDF description of the simulated time (in OWL Time [13]). Time is made available to agents under a :sim resource, to which agents can also send an HTTP PUT request to start a simulation run. Time, as exposed to agents, corresponds to simulated time. Real execution time can be parameterized on the simulation server as the period of a timeslot between two successive environmental updates. However, care must be taken that all updates can be computed in less than the duration of a timeslot, to prevent time drift. Our implementation of the BOLD simulation server can compute all environmental updates in \(<100\,\mathrm{ms}\) (avg. 67\(\pm\)28 ms for tasks w/ sunlight and occupancy, see Sec. 6). We obtained all results (Sec. 7) with this configuration.

### Sunlight and Occupancy

Some BOLD tasks rely on sensor measurements that result from two physical processes we simulate: the illumination of the building by the sun and the movements of occupants in the building. Although they take different parameters into account, the simulation of both processes has the following characteristics:

1. non-determinism of transitions so that agents may not optimize using out-of-band information
2. predictable evolution to allow for model-based agent optimization

An example is provided in Fig. 3. In the case of sunlight, outside illuminance (measured in lux) evolves quadratically over time, as the effect of the sun rising (6am), reaching a zenith and then setting (9pm). Illuminance at zenith (i. e. the maximum value) equals 40k lux.
A cloud coverage factor is applied to that baseline, as follows: a random coverage in \([0,1]\) is generated at sunrise and at sunset, then coverage evolves linearly between these two values. Coverage either increases or decreases during the day (illustrated by run 1 and 2 resp. in Fig. 3a).

\begin{table} \begin{tabular}{l r l r} \hline \hline Rooms & 281 & Lighting systems & 278 \\ with occupancy sensors & 66 & with occupancy sensors & 156 \\ with luminance commands & 38 & with luminance commands & 105 \\ with luminance sensors & 20 & with luminance sensors & 48 \\ Floors & 2 & Triples in IBM\_B3.ttl & 24 947 \\ Wings & 3 & Resource IRIs & 3 281 \\ & & Dynamic resources & 551 \\ \hline \hline \end{tabular} \end{table} Table 1: Basic counts for Building 3 and the benchmark.

Inside a building, the intensity of light that is incident on walls and floors is significantly lower than sunlight as measured outside. In BOLD, we randomly assign to each room an 'occlusion factor' in \([0.5,1]/10\) to account for this phenomenon. As a result, the maximum illuminance inside the building is 4,000 lux.

To simulate the movements of occupants, time is divided into 4 periods, see Fig. 3b: At night, the building has no occupant. From 8am, occupants start arriving at random times and occupy one room each. After 12pm, occupants leave the building for lunch for about 1 h, after which they may come back for another 4 h of work. Starting from 4pm, occupants start leaving the building.

Figure 3: Example of (plain) random runs for sunlight and occupancy.

### Linked Data Interface

Agents can access and control the latest state of the simulation via a Linked Data interface that complies with the SPARQL Graph Store Protocol (with direct addressing) [32]. That is, for a dataset \(d_{i}\), agents can send HTTP requests to any resource IRI \(r\in\pi(d_{i})\). All resources in the dataset are readable except the default graph, which holds triples that are used to compute the simulation's update function (e.g. triples stating the existence of occupants and what room is their workplace). On top, instances of sosa:ActuatableProperty are writable. Our implementation is multi-threaded: one thread triggers environmental updates at the end of each timeslot while a thread pool concurrently accesses the dataset to carry out agent operations. All changes to the dataset are recorded, to calculate metrics at the end of a simulation run.

## 6 Tasks and Fault Definitions

Buildings account for 36 % of the global energy use [23] and the top two sinks during operation are HVAC (heating, ventilation, air conditioning) and lighting [39]. The tasks currently defined in BOLD focus on lighting. Agents must find and operate the building's lighting system in a fully automated manner in order to minimize electricity consumption, thus, in MAS terms, fulfilling simple norms. We consider different task types with increasing complexity: single-loop tasks and continuous-loop tasks. All tasks are designed for 24-hour runs. Tasks differ in the number of retrievals and updates they require, as well as in the number of perception-action loops agents must perform. These dimensions directly echo our discussion on the non-situatedness of Linked Data agents (Sec. 1, items 1 and 3). Moreover, certain tasks require that agents perform reasoning to make informed decisions (item 2). Table 2 summarizes the 10 tasks of BOLD.

### Single-loop Tasks

Single-loop tasks require agents to carry out operations only once in the environment.
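To illustrate what such a single pass over the Linked Data interface described above can look like on the wire, the following sketch (illustrative only; the server address, resource IRIs and Turtle payload are made up, and the actual task specifications follow below) reads a handful of luminance command resources once via HTTP GET and writes each of them back at most once via HTTP PUT:

```python
import requests

BASE = "http://localhost:8080"  # hypothetical address of a locally running BOLD server
LIGHTS = [f"{BASE}/Lighting_System_{i}/Luminance_Command" for i in (1, 2, 3)]  # made-up IRIs

OFF_PAYLOAD = """@prefix ex: <http://example.org/> .
<> ex:value 0 .
"""  # hypothetical Turtle representation of a switched-off light

def single_pass():
    """One pass over the environment: read every resource once, write each at most once."""
    for iri in LIGHTS:
        state = requests.get(iri, headers={"Accept": "text/turtle"})
        if "ex:value 0" not in state.text:  # naive textual check, for illustration only
            requests.put(iri, data=OFF_PAYLOAD,
                         headers={"Content-Type": "text/turtle"})

single_pass()  # a single-loop agent never interacts with the environment again afterwards
```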
Formally, a single-loop task defined by some faults \(\Gamma\) on a simulation environment \(\langle d,u\rangle\) has the following property: if \(\Gamma_{\rho,t}=\emptyset\) for some \(t\) and \(d_{t+1}\in u(d_{t})\) (i. e. in the absence of agent operations), then it holds that \(\Gamma_{\rho,t+1}=\emptyset\). As a scenario, we consider the simple control of lights, e. g. as a janitor would trigger it when the whole building closes (TS1) or when they test functionalities of the system (TS2 and TS3).

**TS1**: A light that is on is considered a fault. This task does not require any data point to be read; the agent must merely find all lights and turn them off.

**TS2**: In this task, a light that has not been toggled since the beginning of the run is considered a fault. In contrast to TS1, this task involves perception. The agent must first read the state of a luminance command and then toggle it.

**TS3**: Similar to TS1, but only a subset of the lights has to be switched off, namely those in rooms dedicated to 'personal hygiene' (toilet and shower). Agents must properly classify rooms, which requires that they first read sub-class axioms defined in the environment in a custom RDF vocabulary. These axioms specify, e.g. that 'disabled toilets' are a kind of 'toilets', themselves a kind of 'personal hygiene' rooms. As in TS1, a light in a toilet or shower that is on is considered a fault. To ensure agents correctly classify rooms, any light that has been toggled in other rooms is also considered a fault.

\begin{table} \begin{tabular}{l r r r c} \hline \hline **Task** & **Reads** & **Writes** & **Loops** & **Reasoning** \\ \hline TS1 & 0 & 146 & 1 & \\ TS2 & 146 & 146 & 1 & \\ TS3 & 0 & 6 & 1 & ✓ \\ \hline TC1 & 146 & 146 & 2 & \\ TC2 & 146 & 146 & 2 & ✓ \\ TC3 & 147 & 146 & 2\({}^{*}\) & \\ TC4 & 128 & 64 & 2\({}^{*}\) & \\ TC5 & 128 & 64 & \(\sim\)4\({}^{*\dagger}\) & \\ TC6 & 192 & 64 & \(\sim\)4\({}^{*\dagger}\) & \\ TC7 & 256 & 64 & \(\sim\)4\({}^{*\dagger}\) & \\ \hline \hline \end{tabular} \end{table} Table 2: BOLD tasks with lower bounds for read/write operations and loops (non-deterministic start/end (\(*\)) or loop count (\(\dagger\))) to correctly achieve the task.

### Continuous-loop Tasks

In single-loop tasks, changes in the environment have no effect on success. This is not true of continuous-loop tasks. In continuous-loop tasks, changes in the environment may cause faults to appear; agents must thus continuously monitor the environment. The initial value of all lights is randomized so that the fault rate never equals 0 in the absence of agent operations, regardless of the task. Continuous tasks TC1 and TC2 involve time only. TC3 and TC4 involve illuminance sensors. TC5-TC7 introduce occupancy sensors.

**TC1**: In this scenario, a weather report provides sunrise and sunset times. A light that is off during the day is considered a fault, under the assumption that the building is likely to be occupied. Conversely, lights should be off during the night as no one is expected to be in the building.
An agent needs to retrieve sunrise and sunset timestamps, and turn lights on or off accordingly.

**TC2**: In this scenario, in addition to sunrise and sunset times, parts of the building (floors) expose different opening and closing times beyond which no light should be on. The ground floor is assumed to close later (11pm) than other floors (7pm). All floors open at 8am. When a floor is open, any light on the floor that is off is considered a fault. In this task, the agent must perform automated reasoning to infer which rooms belong to each floor in order to decide whether lights in the room should be on or off at a given point in time.

**TC3**: In task TC3, a light that is off is a fault if outside illuminance is below a certain threshold (10,000 lux). In this scenario, we assume the building is equipped with a weather station mounted on its rooftop that includes a light sensor. By applying a threshold, we want to determine whether the lights in the entire building should be on or off. In contrast to TC1 and TC2, the times at which lights should be on or off are randomly generated to simulate cloud coverage. Yet, agents could anticipate when to perform an action, as illuminance on the surface of the building varies from sunrise until sunset.

**TC4**: In task TC4, a fault is defined as in TC3 but at the level of a single room. We only consider the rooms in the building that are equipped with luminance sensors so as to decide for each room whether lights should be on or off by applying a global threshold (500 lux).

**TC5**: In TC5, a fault is any light that is off while an occupant is detected in the room. In this scenario, we assume that the rooms in the building are equipped with occupancy sensors. Using those sensors only, we want to determine whether the lights should be on or off. The challenge for agents in this task is to continuously monitor occupants coming in and leaving the building in a non-deterministic fashion. Although the simulated environment displays a clear occupancy pattern throughout the day, an agent cannot know it upfront, and can only build a model of occupancy by repeated observation.

**TC6**: This task combines TC5 with the constraints of TC4: a fault is any light that is off while an occupant is detected in the room _and_ illuminance is below 500 lux. In this scenario, we want to raise energy efficiency and only turn on lights in rooms with occupants and low illuminance. An agent faces fewer potential faults but overcoming them requires a more advanced model of the environment (or faster perception-action loops).

**TC7**: TC7, the most challenging task of the benchmark, adds one more constraint to TC6: instead of a global illuminance threshold, agents have to make decisions based on the preferences of occupants, which they can update at any time during the simulation via brick:Setpoint resources. In this scenario, we want to raise the individual comfort of occupants. The environment includes a set point for each illuminance sensor in the building. Thus, the agent must read the value that was last set by the room's occupant before deciding.

Future variations on TC3-TC7 could be based on temperature instead of illumination, to consider more elaborate control with feedback loops. Further versions of the benchmark could also consider incomplete environmental descriptions and defective devices. Agents could e.g. leverage additional topological information to infer probable values for the missing data points.
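To give a flavour of how such fault definitions can be phrased as the SPARQL ASK checks mentioned in Sec. 3, the following snippet holds a hypothetical TC5-style query; the prefix IRIs and the ex:value predicate are placeholders and do not reproduce the exact vocabulary of the released fault checks.

```python
# Hypothetical TC5-style fault check: is there a lighting system whose occupancy
# sensor detects someone while its luminance command is still off?
TC5_FAULT_CHECK = """
PREFIX brick: <http://example.org/Brick#>       # placeholder for the Brick namespace
PREFIX bf:    <http://example.org/BrickFrame#>  # placeholder for the BrickFrame namespace
PREFIX ex:    <http://example.org/>             # hypothetical vocabulary for point values

ASK {
  ?sys a brick:Lighting_System ;
       bf:hasPoint ?occ, ?cmd .
  ?occ a brick:Occupancy_Sensor ;
       ex:value true .
  ?cmd a brick:Luminance_Command ;
       ex:value 0 .
}
"""
# A query along these lines could be registered as a fault check via the
# .properties files of the execution framework described in Sec. 3.
```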
We believe, however, that the ten BOLD tasks provided here represent a significant challenge to Linked Data agents, which must combine fast retrieval with non-trivial decision making.

## 7 Showcase Evaluation

We now provide an evaluation of the tasks introduced in the previous section, and discuss the relevance of the metrics we defined for the benchmark. This showcase evaluation relies on the Linked-Data-Fu engine [38], which can perform HTTP operations, do reasoning over Linked Data, and execute rule-based programs [25]. Linked-Data-Fu is a multi-threaded engine optimized towards inference and communication. However, it does not directly provide support for agent-oriented programming features such as planning or memorization. In the following, we compare the performances of two Linked-Data-Fu agents and show that both features (planning, memorization) are desired for performance improvements.

The two Linked-Data-Fu agents we compare are configured as follows: a first agent (ldfu) is seeded with the building's URI and needs to discover all data points by following links. The second agent (ldfu-prefetch) behaves as if it had prefetched the model of the building, which allows immediate access to data points. Both agents execute a program that encodes condition-action rules, where the condition is a fault, as indicated in Table 2, and the action is an HTTP PUT request sent to the environment as a fix. Both agents repeat program execution as fast as possible until a run is over.

Table 3 summarizes the performance of our two baseline agents against our BOLD simulation server. Experiments were run on a single machine with an Intel Core i5 processor (8 GB RAM, 8 cores shared between agent and server processes). The table features all four metrics presented in Sec. 3: fault rate (FR), average fault count (AFC), normalized fault count (NFC) and read/write ratio (RWR).

As expected, ldfu-prefetch, which is directly provided with information about the building, consistently outperforms ldfu. Both agents show low fault rates for TS1, TS2 and TS3. As Linked-Data-Fu is optimised for such workloads, we regard those rates as a lower bound. Rather, TS1-TS3 can be regarded as test cases to assert the correctness of an implementation. Results for TC1 and TC2 are also satisfactory regarding FR and NFC. However, the number of reads could be significantly reduced by optimising the agents for the task at hand. In those two tasks, agents essentially only synchronize with the environment. Similarly, in TC3, outside illuminance evolves predictably with time. It should thus be straightforward to calculate when a fault will occur and thus reduce the number of reads (see Sec. 6).

Regarding TC4-TC7, it is worth comparing the two agents. Although ldfu-prefetch sends \(>4\times\) fewer requests to the server, it only gains \(<14\%\) performance on fault rate, implying that fetching static data represents a rather low overhead. In contrast, the overhead related to exchanging dynamic readings is much more significant. Both agents perform readings on all data points, regardless of past values. For task TC4, for instance, they perform 128 reads which they complete only after the server has executed 10 iterations, on average. The agent's reaction time is therefore limited because of a high number of reads. Yet, the task only requires a total of \(128\times 2\) reads during the whole run (one for each light after sunrise and before sunset).
\begin{table} \begin{tabular}{l r r r r} \hline \hline \multirow{2}{*}{**Task**} & \multicolumn{4}{c}{ldfu / ldfu-prefetch} \\ & **FR** & **AFC** & **NFC** & **RWR** \\ \hline TS1 & 0.04 / 0.03 & 140 / 98 & 0.96 / 0.67 & 19 / 0 (0) \\ TS2 & 0.09 / 0.05 & 137 / 121 & 0.94 / 0.83 & 19 / 1 (1) \\ TS3 & 0.08 / 0.02 & 6 / 6 & 1 / 1 & 167 / 0 (0) \\ \hline TC1 & 0.12 / 0.03 & 100 / 98 & 0.18 / 0.04 & 458 / 102 (1) \\ TC2 & 0.13 / 0.05 & 66 / 59 & 0.11 / 0.04 & 540 / 85 (1) \\ \hline TC3 & 0.08 / 0.03 & 61 / 58 & 0.08 / 0.03 & 1831 / 282 (1) \\ TC4 & 0.23 / 0.09 & 17 / 16 & 0.11 / 0.05 & 983 / 211 (2) \\ \hline TC5 & 0.42 / 0.29 & 16 / 15 & 0.15 / 0.10 & 395 / 68 (2) \\ TC6 & 0.26 / 0.31 & 11 / 10 & 0.09 / 0.11 & 800 / 105 (2) \\ TC7 & 0.40 / 0.31 & 16 / 12 & 0.20 / 0.11 & 628 / 127 (4) \\ \hline \hline \end{tabular} * FR: fault rate; AFC: average fault count; NFC: normalized fault count; RWR: read/write ratio (ideal ratio in parentheses, calculated from Table 2). \end{table} Table 3: Performance of the two baseline agents on single-loop and continuous-loop tasks.

To reduce agent-server interactions, an agent can remember past readings for statistical inference (e. g. to predict future illuminance levels). Such optimization is even more crucial for TC5-7, as indicated by significantly higher fault rates for ldfu-prefetch despite low fault counts.

## 8 Related Work

Our work is at the intersection of (Read-Write) Linked Data, MAS and building simulation. While we went to lengths to make realistic assumptions for the simulation, we do not claim to compete with commercial building simulation tools. We see building automation mostly as a vehicle for evaluating efficient MAS architectures and not as an end in itself, similarly to the recent ProcTHOR [14].

There is a considerable body of work on benchmarking triple stores with a read-only SPARQL interface, e. g. SP\({}^{2}\)BENCH [37], BSBM [3], LUBM [17], and projects such as LDBC [7] and Hobbit [31]. Yet, BSBM contains a case that writes via SPARQL. A benchmark that sends read-only SPARQL requests to multiple sources is FedBench [36]. Closer to BOLD are approaches that consider the Linked Data interface, i. e. dereferencing of URIs in multiple requests: Hartig et al. [21] turned the dataset of BSBM into dereferenceable URIs, and DLUBM [27] distributes the dataset from LUBM. SolidBench7 serves a static social network via HTTP. None of these works consider state-changing operations that send RDF via HTTP. Thus, to our knowledge, BOLD is the first benchmark to provide a dynamic Read-Write Linked Data environment for benchmarking user agents.

Footnote 7: [https://github.com/SolidBench/SolidBench.js](https://github.com/SolidBench/SolidBench.js)

The intersection of agent technologies and (Semantic) Web technologies includes: JASDL [28] combines Jason with the facility to process semantic data given in description logics (DL) ontologies. Using BOLD, we want to address agent technologies that also consider the other aspects of Semantic Web Technologies, namely interaction via HTTP and hypermedia, and that tone down expressive DL reasoning. REST-A [16] is an agent framework providing an abstraction for perception and actions based on REST operations. Conversely, Ricci et al. built a framework to build environments and agent organizations based on JaCaMo (a mature MAS framework [4]), REST and RDF [34]. Both approaches focus on lowering the engineering effort to design MAS architectures.
MAS engineering tools are complementary to BOLD: REST-A or JaCaMo could serve as the basis for MAS implementations evaluated with BOLD. BOLD does not evaluate inter-agent communication, which is an orthogonal topic with specialised benchmarks [30].

## 9 Conclusion

We have presented BOLD, both an execution framework and a benchmark for measuring the performance of Linked Data agents. BOLD simulates a building, with occupants moving around and changing the lighting systems. BOLD includes 10 tasks for agents: 3 simple tasks, which require agents to carry out operations only once, and 7 complex tasks, which require the agents to continuously monitor the environment and carry out requests depending on changes in the environment. We have provided performance measurements of two baseline agents on all tasks and illustrated how more sophisticated agents could show better performances. We hope that BOLD, even if only through its execution framework, fosters research and development in the area of autonomous agents for Read-Write Linked Data. As such, BOLD has been an environment for ISWC 2021's All-the-Agents Challenge [26], sparked discussions at a recent Dagstuhl seminar [5], and served as the basis for a transportation showcase [35].

_Resource Availability Statement:_ The BOLD server source code, the building data (and scripts for its generation), and the building update and fault queries for the BOLD tasks are available online8. Example rules for agents that perform the BOLD tasks can also be found online9. The runtime for the sample implementation is only available upon request due to its alpha stage10.

Footnote 8: [https://github.com/bold-benchmark/bold-server](https://github.com/bold-benchmark/bold-server)

Footnote 9: [https://github.com/bold-benchmark/bold-agents](https://github.com/bold-benchmark/bold-agents)

Footnote 10: [http://linked-data-fu.github.io/](http://linked-data-fu.github.io/)
2305.16895
UMSE: Unified Multi-scenario Summarization Evaluation
Summarization quality evaluation is a non-trivial task in text summarization. Contemporary methods can be mainly categorized into two scenarios: (1) reference-based: evaluating with human-labeled reference summary; (2) reference-free: evaluating the summary consistency of the document. Recent studies mainly focus on one of these scenarios and explore training neural models built on PLMs to align with human criteria. However, the models from different scenarios are optimized individually, which may result in sub-optimal performance since they neglect the shared knowledge across different scenarios. Besides, designing individual models for each scenario caused inconvenience to the user. Inspired by this, we propose Unified Multi-scenario Summarization Evaluation Model (UMSE). More specifically, we propose a perturbed prefix tuning method to share cross-scenario knowledge between scenarios and use a self-supervised training paradigm to optimize the model without extra human labeling. Our UMSE is the first unified summarization evaluation framework engaged with the ability to be used in three evaluation scenarios. Experimental results across three typical scenarios on the benchmark dataset SummEval indicate that our UMSE can achieve comparable performance with several existing strong methods which are specifically designed for each scenario.
Shen Gao, Zhitao Yao, Chongyang Tao, Xiuying Chen, Pengjie Ren, Zhaochun Ren, Zhumin Chen
2023-05-26T12:54:44Z
http://arxiv.org/abs/2305.16895v1
# UMSE: Unified Multi-scenario Summarization Evaluation

###### Abstract

Summarization quality evaluation is a non-trivial task in text summarization. Contemporary methods can be mainly categorized into two scenarios: (1) _reference-based_: evaluating with human-labeled reference summary; (2) _reference-free_: evaluating the summary consistency of the document. Recent studies mainly focus on one of these scenarios and explore training neural models built on pre-trained language models (PLMs) to align with human criteria. However, the models from different scenarios are optimized individually, which may result in sub-optimal performance since they neglect the shared knowledge across different scenarios. Besides, designing individual models for each scenario caused inconvenience to the user. Inspired by this, we propose **U**nified **M**ulti-scenario **S**ummarization **E**valuation Model (UMSE). More specifically, we propose a perturbed prefix tuning method to share cross-scenario knowledge between scenarios and use a self-supervised training paradigm to optimize the model without extra human labeling. Our UMSE is the first unified summarization evaluation framework engaged with the ability to be used in three evaluation scenarios. Experimental results across three typical scenarios on the benchmark dataset SummEval indicate that our UMSE can achieve comparable performance with several existing strong methods which are specifically designed for each scenario.1

Footnote 1: Code is available at [https://github.com/ZT-Yao/UMSE](https://github.com/ZT-Yao/UMSE).

## 1 Introduction

Quantitatively evaluating the quality of a generated summary is a non-trivial task that can measure the performance of the summarization system Lin (2004); Ng and Abrecht (2015); Zhang et al. (2020); Scialom et al. (2021), and can also be used as a reward model to give an additional training signal for the summarization model Wu and Hu (2018); Narayan et al. (2018); Scialom et al. (2019); Gao et al. (2019, 2020). The dominant evaluation methods are traditional word-overlap-based metrics like ROUGE Lin (2004) and BLEU Papineni et al. (2002). Although these metrics are very easy to use, they cannot evaluate semantic similarity. In recent years, many researchers have focused on semantic-based evaluation tools Ng and Abrecht (2015); Zhang et al. (2020); Zhao et al. (2019). Different from traditional metrics, which only use one score to measure the quality of the summary, Zhong et al. (2022) propose to evaluate summary quality in several dimensions (_e.g.,_ coherence, consistency, and fluency) by calculating the similarity between the generated summary and the human-annotated summary.

Figure 1: Illustration of multi-scenario summarization evaluation.

The summarization evaluation methods can be categorized into two scenarios based on the input data type: (1) **reference-based** methods require the human-annotated summary as input and (2) **reference-free** methods only use the corresponding document. The reference-based methods Lin (2004); Papineni et al. (2002); Banerjee and Lavie (2005); Ng and Abrecht (2015); Zhang et al. (2020); Zhao et al. (2019); Yuan et al. (2021) usually use the human-written summary (_a.k.a.,_ reference summary) as the ground truth and calculate the similarity between the generated and reference summary. With the help of pre-trained language models, these methods have a powerful ability to measure semantic similarity. However, not all real-world application scenarios have human-annotated summaries.
Using the reference-based evaluation method with the human-annotated ground truth summary is labor-consuming. Thus, reference-free methods Wu et al. (2020); Gao et al. (2020); Scialom et al. (2019, 2021) propose to evaluate the summary by modeling the semantic consistency between the generated summary and the document. When evaluating a summarization system, even though we can individually select a proper evaluator condition on whether we have a reference summary, it is not very convenient. Moreover, since human annotation is costly, some summarization methods Wu and Hu (2018); Narayan et al. (2018); Scialom et al. (2019) choose to use the automatic evaluator to provide an additional training signal, instead of relying entirely on human-labeled document-summary pair data. In this type of usage, the evaluator needs to measure the quality of the model-generated summary with _partial_ human-labeled document-summary data. Besides, contemporary trainable evaluation models for different scenarios (with or without reference summary) are built on pre-train language models, which may transfer knowledge across different scenarios and provides a great opportunity to bridge these evaluation scenarios with a better combination of the best of both worlds. Hence, it is valuable to build a unified multi-scenario summarization evaluator that can be used for processing both types of input data. Intuitively, this naturally leads to two questions: (1) _How to build a unified multi-scenario evaluation model regardless of whether we have a reference summary?_ (2) _How to train the evaluator so that it can share knowledge between scenarios and maintain the exclusive knowledge in a specific task?_ In this paper, we propose a unified multi-scenario summarization evaluation method **U**nified **M**ulti-scenario **S**ummarization **E**valuation Model (UMSE). UMSE unifies three typical summary quality evaluation scenarios in one model: (1) **Sum-Ref**: evaluate using reference summary. UMSE measures the similarity between the generated summary and the human-annotated reference summary. (2) **Sum-Doc**: evaluate using document. Since using the reference summary is labor-consuming, UMSE can measure the consistency between generated summary and the original document. (3) **Sum-Doc-Ref**: evaluate using both document and reference summary. This method incorporates the advantages of sum-ref and sum-doc. To process these different types of input, we propose a perturbed prefix method based on the prefix tuning method Li and Liang (2021); Liu et al. (2022, 2021) that shares a unified pre-train language model across three scenarios by using different continuous prefix tokens as input to identify the scenario. Then, we propose \(2\) hard negative sampling strategies to construct a self-supervised dataset to train the UMSE without additional human annotation. Finally, we propose an ensemble paradigm to combine these scenarios into a unified user interface. To sum up, our UMSE can bring the following benefits: \(\bullet\)**One model adaptable to multi-scenario**. UMSE uses only one model to evaluate the generated summary whenever it has a reference summary. \(\bullet\)**Mutually enhanced training**. We propose a perturbed prefix method to transfer knowledge between scenarios, and it can boost the performance of each scenario. \(\bullet\)**Self-supervised**. UMSE can be trained using a fully self-supervised paradigm without requiring any human-labeled data, and it makes UMSE has strong generalization ability. 
To verify the effectiveness of the UMSE, we first compare with several baselines including the reference-based and reference-free methods. Specifically, UMSE outperforms all the strong reference-free evaluation methods by a large margin and achieves comparable performance with the state-of-the-art in a unified model. Ablation studies verify the effectiveness of our proposed perturbed prefix-tuning method. ## 2 Related Work ### Reference-free Metrics Reference-free metrics aim to evaluate the summary quality without the human-labeled ground truth summary as the reference, and these methods can be categorized into two types: trained model and training-free model. For the training-free methods, SUPERT Gao et al. (2020) first extracts salient sentences from the source document to construct the pseudo reference, then computes the semantic similarity to get the evaluation score. Following SUPERT, Chen et al. (2021) propose a centrality-weighted relevance score and a self-referenced redundancy score. While computing the relevance score, the sentences of pseudo reference are weighted by centrality, the importance of each sentence. For the methods which should be trained, LS-Score Wu et al. (2020) is an unsupervised contrastive learning framework consisting of a linguistic quality and a semantic informativeness evaluator. The question-answering paradigm is usually used in evaluating summaries, which evaluates the factual consistency between summary and document with the help of well-trained question-answering models Scialom et al. (2019); Gao et al. (2019); Durmus et al. (2020); Scialom et al. (2021). ### Reference-based Metrics Referenced-based metrics, which evaluate the quality of the summary by measuring the similarity of the summary and human written reference, can be divided into two categories: lexical overlap-based metrics and semantic-based metrics. ROUGE Lin (2004), the most commonly used metric for summary evaluation, measures the number of matching n-grams between the system output and reference summary. Other popular lexical overlap-based metrics are BLEU Papineni et al. (2002) and METEOR Banerjee and Lavie (2005) which are also commonly employed in other text generation tasks (_e.g.,_ machine translation). Since using the lexical overlap to measure the quality is sometimes too strict, many researchers turn to focus on exploring the semantic-based evaluation. ROUGE-WE Ng and Abrecht (2015) improves ROUGE by using Word2Vec Mikolov et al. (2013) embeddings, and S3 Peyrard et al. (2017) takes the ROUGE and ROUGE-WE as input features and is trained on human-annotated datasets. With the prosperity of the pre-training language model (PLM), more and more researchers introduce these models for evaluation. BERTScore Zhang et al. (2020) leverages the contextual embeddings from BERT Devlin et al. (2019) and calculates the cosine similarity between system output and reference sentence. CTC Deng et al. (2021) is based on information alignment from two dimensions: consistency and relevance. UniEval Zhong et al. (2022) is a multi-dimensional evaluator based on T5 Raffel et al. (2020), and it formulates the summary evaluation as a binary question-answering task and evaluates from four dimensions: coherence, consistency, fluency, and relevance. However, existing summarization evaluation models usually focus on measuring the summary quality from multiple aspects and transferring knowledge from PLM, they ignore the shareable knowledge between different scenarios. 
Evaluating the quality of the generated text is a also crucial task in generation tasks. In machine translation evaluation, Wan et al. (2022) proposes UniTE which is a multi-scenario evaluation method. UniTE employs monotonic regional attention to conduct cross-lingual semantic matching and proposes a translation-oriented synthetic training data construction method. However, the summarization task does not have these characteristics and directly applying UniTE to summarization evaluation cannot measure the important aspect of summary (_e.g.,_ coherence and relevance). ## 3 UMSE Model Problem FormulationGiven a model-generated summary \(X=\{x_{1},x_{2},\dots,x_{L_{x}}\}\) with \(L_{x}\) tokens, our goal is to use a unified evaluation model to produce a score \(s\in\mathcal{R}\) for \(X\). For the Sum-Ref scenario, the model uses generated summary \(X\) and ground truth summary \(Y=\{y_{1},y_{2},\dots,y_{L_{y}}\}\) as input. For the Sum-Doc scenario, we evaluate the summary quality by using generated summary \(X\) and document \(D=\{d_{1},d_{2},\dots,d_{L_{d}}\}\) with \(L_{d}\) tokens as input, which does not require any human annotation (_e.g.,_ ground truth summary \(Y\)). For the Sum-Doc-Ref scenario, the model uses generated summary \(X\), ground truth summary \(Y\), and document \(D\) as input. To train the evaluation model, we do not use any human-annotated summary quality dataset and we construct the training dataset by using several self-supervised training strategies. ### Overview In this section, we detail the **U**nified **M**ulti-scenario **S**ummarization **E**valuation **M**odel (UMSE). An overview of UMSE is shown in Figure 2. UMSE has two main parts: (1) **Data construction.** We first construct two self-supervised datasets for coherence and relevance evaluation scenarios. (2) **Unified Model.** To unify the different input data into a unified model, we propose a perturbed prefix-tuning method to train the UMSE. ### Data Construction Employing a human annotator to annotate the quality of generated summary to train the evaluation model is labor-consuming and will lead the evaluation model hard to use. We propose to use the self-supervised tasks to construct the training dataset for the evaluator without using any human annotation. Since measuring the quality of the summary requires two main semantic matching abilities: (1) matching with the reference summary and (2) matching with the document, we propose two self-supervised tasks to construct the training dataset automatically: \(\bullet\)**Summary matching oriented data**: The goal for this task is to construct positive and negative samples which are different in whether the summary contains the salient information. Given a document-summary pair \(D,Y\), the data sample to construct is a summary pair. The positive data pair \((Y,X^{LD3})\) contains the reference summary \(Y\) and a candidate summary \(X^{LD3}\) which contains relevant information. And the negative data pair \((Y,X^{BM})\) contains the reference summary \(Y\) and a candidate summary \(X^{BM}\) which describes similar but not relevant information. Particularly, if the negative data is very hard for the evaluation model to identify (_e.g.,_ requires reasoning ability or is very similar to the positive sample), the evaluation model will achieve better performance than using very simple negative data. Thus, we propose to use the leading three sentences of the corresponding document \(D\) as the candidate summary \(X^{LD3}\). 
For the candidate summary \(X^{BM}\) in the negative data pair, we first use the BM25 retrieval model to retrieve the most similar document \(D^{\prime}\) to \(D\) and obtain the reference summary \(Y^{\prime}\) of \(D^{\prime}\). To make the negative sample harder, we randomly replace a sentence in \(Y^{\prime}\) with one sentence in \(X^{LD3}\) as the final negative summary \(X^{BM}\).

\(\bullet\)**Document matching oriented data**: The golden criterion for evaluating the summary quality is whether the summary describes the main facts of the document. Hence, we construct self-supervised data which aims to train the model to measure the semantic relevance between summary and document. The positive data pair \((D,Y)\) consists of document \(D\) and its reference summary \(Y\). The negative data pair \((D,X^{BM})\) contains the document \(D\) and a false summary \(X^{BM}\) which is similar to \(Y\). We employ the same BM25 retrieval method as in the summary matching oriented data construction to obtain \(Y^{\prime}\) and replace a sentence in \(Y\) with a sentence in \(Y^{\prime}\) as the negative summary \(X^{BM}\). For brevity, we omit the superscript of \(X\) in the following sections.

Figure 2: Illustration of UMSE which tackles the summarization evaluation in three scenarios by a unified model trained with two self-supervised tasks.

### Perturbed Prefix-Tuning

Although the three scenarios have different input types, we can directly concatenate them into a text sequence which can be easily adopted by the pre-trained language model. Following previous work (Zhong et al., 2022), although our evaluation model does not require additional summarization-quality data annotations, human-written summaries are still required to train the estimator. Therefore, reducing the dependence on human-written summaries can improve the applicability of our model in low-resource scenarios. Thus, we employ prefix-tuning to explore the semantic understanding ability of large language models on the summarization evaluation task. Specifically, we append different prefix sequences at the start of each input text sequence according to the scenario: \[\mathbf{H}_{SR} =\text{PLM}([\text{CLS}]P_{SR}X[\text{SEP}]Y),\] \[\mathbf{H}_{SD} =\text{PLM}([\text{CLS}]P_{SD}X[\text{SEP}]D),\] \[\mathbf{H}_{SDR} =\text{PLM}([\text{CLS}]P_{SDR}X[\text{SEP}]D[\text{SEP}]Y),\] where [CLS] and [SEP] are both special tokens of the PLM, \(\mathbf{H}_{SR}\in\mathbb{R}^{(L_{x}+L_{y}+L_{p}+2)\times z}\) denotes the token-level representation for the Sum-Ref pair, and \(z\) is the hidden size of the PLM. \(\mathbf{P}_{*}\in\mathbb{R}^{L_{p}\times z}\) denotes the prefix for each scenario, which is a continuous prompt with length \(L_{p}\). The advantage of using the unified evaluator is that we can use one large language model to conduct three tasks and it will reduce the size of the evaluation toolkit. Although these data scenarios have their own exclusive task characteristics, there are also some shared abilities and knowledge which can be transferred between different scenarios. To model the exclusive characteristics and transfer knowledge using the continuous prefix in a coordinated way, we propose a prefix perturbation method that uses the same tokens in different orders for different scenarios. Take the prefix of the Sum-Doc scenario as an example: \(\mathbf{P_{SD}}\) contains \(L_{p}\) continuous prefix tokens \(\mathbf{P_{SD}}=\{\mathbf{p_{1}},\mathbf{p_{2}},\dots,\mathbf{p_{L_{p}}}\}\).
We perturb \(\mathbf{P_{SD}}\) as \(\{\mathbf{p_{1}},\mathbf{p_{3}},\dots,\mathbf{p_{L_{p}}},\mathbf{p_{2}},\dots,\mathbf{p_{L_{p}-1}}\}\), and use this perturbed prefix as the prefix for Sum-Doc-Ref \(\mathbf{P_{SDR}}\). This prefix perturbation method keeps the prefix used across scenarios to use the same continuous tokens in a different order. Thus, our model can simultaneously transfer knowledge between scenarios and keep the exclusive ability prompted by the different prefixes. To obtain the summary-level overall representation, we conduct a pooling operation on the token-level representation: \[\mathbf{E}_{SR} =\text{Pooling}(\mathbf{H}_{SR}), \tag{1}\] \[\mathbf{E}_{SD} =\text{Pooling}(\mathbf{H}_{SD}),\] (2) \[\mathbf{E}_{SDR} =\text{Pooling}(\mathbf{H}_{SDR}), \tag{3}\] where \(\mathbb{E}_{*}\in\mathbb{R}^{z}\) denotes the summary-level representation. Then we employ a multi-layer perception (MLP) network to conduct a binary classification and obtain the probability \(p\): \[p_{*} =\text{Softmax}(\text{MLP}(\mathbf{E}_{*}))\in\mathcal{R}^{2}, \tag{4}\] \[s_{*} =p_{*}^{+}, \tag{5}\] where \(p_{*}^{+}\in\mathbb{R}\) denotes the probability of positive class in \(p_{*}\). During training, we use cross entropy loss \(\mathcal{L}_{ce}\) to optimize the model parameters to distinguish the positive and negative samples: \[\mathcal{L}_{ce}=-\left[\sum_{i=1}^{n}c_{i}\log p_{i}^{+}+(1-c_{i})\log\left(1 -p_{i}^{+}\right)\right],\] where \(c_{i}\in\{0,1\}\) denotes the label of \(i\)-th training sample which indicates whether this sample is a positive or negative sample. At the inference stage, we take the probability of positive class \(p^{+}\) as the final evaluation score \(s\). ### Variant of Sum-Doc-Ref Evaluation Intuitively, the scenario Sum-Doc-Ref can be seen as a combination of the Sum-Doc and Sum-Ref scenarios. Hence, an intuitive method to conduct the evaluation of the Sum-Doc-Ref scenario is to directly fuse the scores of the Sum-Doc and Sum-Ref scenarios. In this section, we propose a variant implementation to conduct evaluation conditions on the input of Sum-Doc-Ref, named **UMSE(Fusion)**. We combine the score of the Sum-Doc and Sum-Ref scenarios to get the score for the Sum-Doc-Ref: \[s_{SDR}=f(s_{SR},s_{SD}), \tag{6}\] where \(f\) denotes the ensemble strategy, such as min and max. In the experiment, we will analyze the performance of different implementations of \(f\). ## 4 Experiment ### Datasets In the training phase, we construct the positive and negative data pairs using the CNN/DailyMail (Nallapati et al., 2016) dataset. Then the trained evaluators are tested on the meta-evaluation benchmark SummEval (Fabbri et al., 2021) to measure the rank correlation coefficient between the evaluation model and human judgment. **CNN/DailyMail** has \(286,817\) training document-summary pairs, \(13,368\) validation and \(11,487\) test pairs in total. The documents in the training set have \(766\) words and \(29.74\) sentences on average while the reference summaries contain \(53\) words and \(3.72\) sentences. **SummEval** is a meta-evaluation benchmark. To collect the human judgments towards the model-generated summaries, they first randomly select 100 document and reference pairs from the test set of CNN/DailyMail, then generate summaries using 16 neural summarization models. Each summary is annotated by 3 experts and 5 crowd-sourced workers along four dimensions: coherence, consistency, fluency, and relevance. Finally, there is a total of \(12800\) summary-level annotations. 
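For concreteness, the following toy sketch (ours; all function names are made up) shows how the two kinds of training pairs described in Sec. 3.2 can be assembled from such document-summary pairs. It replaces the PyLucene-based BM25 retrieval we use with a crude token-overlap stand-in and assumes documents and references are already split into sentences.

```python
import random

def build_pairs(corpus, doc_id):
    """Toy construction of the two self-supervised pair types of Sec. 3.2.

    corpus maps a doc_id to (document_sentences, reference_sentences); the
    retrieval step is a crude token-overlap stand-in for BM25.
    """
    doc_sents, ref_sents = corpus[doc_id]
    lead3 = doc_sents[:3]  # candidate summary X^LD3 (leading three sentences)

    # Stand-in retrieval: the most lexically similar other document.
    def overlap(a, b):
        return len(set(" ".join(a).split()) & set(" ".join(b).split()))
    other = max((i for i in corpus if i != doc_id),
                key=lambda i: overlap(corpus[i][0], doc_sents))
    retrieved_ref = list(corpus[other][1])  # Y' of the retrieved document D'

    # Summary matching: (Y, X^LD3) positive; (Y, X^BM) negative, where X^BM is Y'
    # with one sentence replaced by a sentence from X^LD3.
    x_bm = list(retrieved_ref)
    x_bm[random.randrange(len(x_bm))] = random.choice(lead3)
    summary_matching = [((ref_sents, lead3), 1), ((ref_sents, x_bm), 0)]

    # Document matching: (D, Y) positive; (D, Y with one sentence swapped in from Y') negative.
    y_neg = list(ref_sents)
    y_neg[random.randrange(len(y_neg))] = random.choice(retrieved_ref)
    document_matching = [((doc_sents, ref_sents), 1), ((doc_sents, y_neg), 0)]

    return summary_matching, document_matching
```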
### Evaluation Metrics

Following previous work Yuan et al. (2021); Zhong et al. (2022), we measure the rank correlation coefficient between the evaluation model and human judgment to represent the performance of the evaluator. In the experiments, we employ the Spearman \((\rho)\) and Kendall-Tau \((\tau)\) correlations between the evaluator output scores and human ratings. The statistical significance of differences observed between the performance of UMSE and the strongest baseline in each scenario is tested using a two-tailed paired t-test and is denoted using \(\blacktriangle\) (or \(\blacktriangledown\)) for strong significance at \(\alpha=0.01\) and \(p<0.05\).

### Comparisons

In the experiment, we compare the proposed UMSE with widely used and strong baselines:

_Reference-based Methods:_ (1) ROUGE Lin (2004) is one of the most popular metrics, and it computes n-gram overlap between the system output and reference summary. We employ ROUGE-1, ROUGE-2, and ROUGE-L in our experiments. (2) BERTScore Zhang et al. (2020) leverages the contextual embedding from the pre-trained language model BERT Devlin et al. (2019) and calculates the cosine similarity between system output and reference. (3) MoverScore Zhao et al. (2019) utilizes the Word Mover's Distance to compute the distance between the embedding of the generated summary and the reference. (4) BARTScore Yuan et al. (2021) uses the weighted log probability of the output of the pre-trained language model BART Lewis et al. (2020) to evaluate the quality of summaries. (5) CTC Deng et al. (2021) is a general evaluation framework for language generation tasks including compression, transduction, and creation tasks. CTC is designed on the concept of information alignment. (6) UniEval Zhong et al. (2022) formulates the summary evaluation as binary question answering and can evaluate the summary from four dimensions: coherence, consistency, fluency, and relevance.

_Reference-free Methods:_ (1) BLANC Vasilyev et al. (2020) is defined as a measure of the helpfulness of a summary to a PLM while the PLM performs the Cloze task on document sentences. Specifically, the final score is the accuracy difference between performing the Cloze task with and without the summary concatenated to the masked sentence. (2) SummaQA Scialom et al. (2019) is a QA-based evaluation metric. It generates questions from documents, answers the questions based on the summary by a QA model, and computes the QA metric as evaluation scores. (3) SUPERT Gao et al. (2020) constructs the pseudo reference by extracting salient sentences from the source document and computes the similarity between the generated summary and the pseudo reference to evaluate the quality of the summary. (4) UniTE Wan et al. (2022) is a unified evaluation model for machine translation in different scenarios: reference-only, source-only and source-reference-combined.

To prove the effectiveness of the perturbed prefix-tuning, we design an ablation model, UMSE-PT (w/o Prefix-Tuning). We remove the prefix from the input and jointly fine-tune one pre-trained language model using the two datasets we constructed.

### Implementation Details

Following Deng et al. (2021), we employ roberta-large Liu et al. (2019) as the backbone of our model. The MLP consists of 3 linear layers with tanh activation and the dimensions of each layer are 3072, 1024, and 2, respectively. Following Wan et al. (2022), the max length of the input sequence (with prompt) is set to 512. We vary the length of the prompt in {8, 16, 32, 64, 128}, and find that 128 is the best choice.
We use AdamW as the optimizer and the learning rate is set to 3.0e-05 selected from {2.0e-05, 3.0e-05, 5.0e-05}. The number of train epochs is set up to 10 epochs and the batch size is set to 8. We fix the random seed always to 12 and trained our model on an NVIDIA GeForce RTX 3090 GPU for 6-7 hours. We use PyLucene to implement the BM25 algorithm to retrieve similar documents. The size of the two training datasets is 30K respectively, and the positive and negative samples are half. ### Evaluation Results We compare our UMSE with strong baselines in Table 1. We can surprisingly find that UMSE (w/ SD) performs comparably to the UMSE (w/ SR) in the Sum-Ref scenario and achieves significant improvement over the existing baselines, which demonstrates that our proposed perturbed prefirstuning can transfer knowledge from other scenarios. BERTScore is the state-of-the-art reference-based single-dimensional evaluation method, and the performance of UMSE increases by 105.63%, 34.93%, and 38.62% compared to BERTScore in terms of Coherence (\(\rho\)), Fluency (\(\tau\)), and Relevance (\(\rho\)) respectively. Compared with the reference-free baselines, UMSE (w/ SD) outperforms SUPERT 144.71%, 29.30%, and 47.09% in terms of Coherence (\(\rho\)), Fluency (\(\tau\)), and Relevance (\(\rho\)) respectively. Although the UMSE achieves slightly lower performance than the baseline in one dimension, the UMSE achieves consistently strong performance in three scenarios which can facilitate \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Coherence**} & \multicolumn{2}{c}{**Consistency**} & \multicolumn{2}{c}{**Fluency**} & \multicolumn{2}{c}{**Relevance**} \\ \cline{2-9} & \(\rho\) & \(\tau\) & \(\rho\) & \(\tau\) & \(\rho\) & \(\tau\) & \(\rho\) & \(\tau\) \\ \hline \multicolumn{9}{c}{_Sum-Ref Methods_} \\ ROUGE\({}^{-}\)1 (Lin, 2004) & 0.1670 & 0.1260 & 0.1600 & 0.1300 & 0.1590 & 0.0940 & 0.3260 & 0.2520 \\ ROUGE\({}^{-}\)2 (Lin, 2004) & 0.1840 & 0.1390 & 0.1870 & 0.1550 & 0.1590 & 0.1280 & 0.2900 & 0.2190 \\ ROUGE\({}^{-}\)1 (Lin, 2004) & 0.1280 & 0.0990 & 0.1150 & 0.0920 & 0.1050 & 0.0840 & 0.3110 & 0.2370 \\ BERTScore (Zhang et al., 2020) & 0.2840 & 0.2110 & 0.1100 & 0.0900 & 0.1930 & 0.1580 & 0.3120 & 0.2430 \\ MOVERScore (Zhao et al., 2019) & 0.1590 & 0.1180 & 0.1570 & 0.1270 & 0.1290 & 0.1050 & 0.3180 & 0.2440 \\ UNITE (w/ SR) (Zhou et al., 2022) & 0.1792 & 0.1362 & 0.0557 & 0.0474 & 0.0614 & 0.02255 & 0.1716 \\ UMSE (w/ SR) & 0.5840\({}^{\star}\) & 0.4443\({}^{\star}\) & 0.2494\({}^{\star}\) & 0.2055\({}^{\star}\) & 0.2601\({}^{\star}\) & 0.2132\({}^{\star}\) & 0.4217\({}^{\star}\) & 0.3189\({}^{\star}\) \\ UniEvalY (Zhou et al., 2022) & 0.4950 & 0.3740 & **0.4350** & **0.3650** & **0.4190** & **0.3460** & **0.4240** & **0.3270** \\ BATTScore (Yuan et al., 2021) & 0.4480 & 0.3420 & 0.3820 & 0.3150 & 0.3560 & 0.2920 & 0.3560 & 0.2730 \\ \hline \multicolumn{9}{c}{_Sum-Doc Methods_} \\ BLANC (Vasilyev et al., 2020) & 0.1219 & 0.0951 & 0.2768 & 0.2307 & 0.1727 & 0.1436 & 0.2574 & 0.1983 \\ SummaQk (Scialton et al., 2019) & 0.1239 & 0.0963 & 0.2540 & 0.2102 & 0.1782 & 0.1457 & 0.2120 & 0.1628 \\ \(\bigtriangledown\) SUPERT (Gao et al., 2020b) & 0.2165 & 0.1716 & 0.3438 & 0.2863 & 0.2509 & 0.2024 & 0.2746 & 0.2132 \\ UNITE (w/ SD)(Wan et al., 2022) & 0.1703 & 0.1327 & 0.1160 & 0.0956 & 0.0871 & 0.0703 & 0.2738 & 0.2084 \\ UMSE (w/ SD) & 0.5298\({}^{\star}\) & 0.4052\({}^{\star}\) & 0.3579\({}^{\star}\) & 0.2961\({}^{\star}\) & 
0.3163\({}^{\star}\) & 0.2617\({}^{\star}\) & 0.4039\({}^{\star}\) & 0.3060\({}^{\star}\) \\ \hline \multicolumn{9}{c}{_Sum-Doc-Ref Methods_} \\ \(\bigtriangledown\) CT (Deng et al., 2021) & 0.4020 & 0.3100 & 0.3660 & 0.3010 & 0.2990 & 0.2450 & 0.4280 & 0.3360 \\ UniTE (w/ SDR) (Wan et al., 2022) & 0.1885 & 0.1453 & 0.1244 & 0.1017 & 0.1076 & 0.0886 & 0.2874 & 0.2232 \\ UMSE (w/ SDR) & 0.4704 & 0.3532 & 0.3413 & 0.2817 & 0.3006 & 0.2451 & 0.3894 & 0.2929 \\ UMSE/fusion (w/ SDR) & **0.5944\({}^{\star}\)** & **0.4515\({}^{\star}\)** & 0.3381 & 0.2813 & 0.3316\({}^{\star}\) & 0.2731\({}^{\star}\) & **0.4358\({}^{\star}\)** & **0.3282\({}^{\star}\)** \\ \hline \multicolumn{9}{c}{_Ablation Methods_} \\ UMSE-PT (w/ SR) & 0.5607 & 0.4246 & 0.2664 & 0.2193 & 0.2552 & 0.2079 & 0.4228 & 0.3155 \\ UMSE-PT (w/ SD) & 0.5007 & 0.3810 & 0.3505 & 0.2905 & 0.3079 & 0.2533 & 0.4276 & 0.3220 \\ UMSE/fusion-PT (w/ SDR) & 0.5757 & 0.4397 & 0.3338 & 0.2751 & 0.3206 & 0.2638 & 0.4375 & 0.3291 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparing with baselines on SummEval dataset. We use the notion “(w/ *)” to denote which data is used as input. \((\rho)\) denotes the Spearman correlations and \((\tau)\) denotes the Kendall-Tau correlations. The row with shaded background denotes the multi-dimensional metrics which output a score for each dimension, and it is unfair for comparing with these methods. The number with underline denotes the max value in the scenario and the **bold-face** denotes the max value over three scenarios. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Coherence**} & \multicolumn{2}{c}{**Consistency**} & \multicolumn{2}{c}{**Fluency**} & \multicolumn{2}{c}{**Relevance**} \\ \cline{2-9} & \(\rho\) & \(\tau\) & \(\rho\) & \(\tau\) & \(\rho\) & \(\tau\) & \(\rho\) & \(\tau\) \\ \hline Single Model (w/ SR) & 0.5019 & 0.3796 & 0.2916 & 0.2391 & 0.3090 & 0.2525 & 0.4153 & 0.3096 \\ UMSE (w/ SR) & 0.5840\({}^{\dagger}\) & 0.4443\({}^{\dagger}\) & 0.2494\({}^{\dagger}\) & 0.2055\({}^{\ddagger}\) & 0.2601\({}^{\ddagger}\) & 0.2132\({}^{\ddagger}\) & 0.4217\({}^{\ddagger}\) & 0.3189\({}^{\dagger}\) \\ \hline Single Model (w/ SD) & 0.4798 & 0.3599 & 0.3132 & 0.2580 & 0.2992 & 0.2454 & 0.3644 & 0.2760 \\ UMSE (w/ SDR) & 0.5298\({}^{\dagger}\) & 0.4052\({}^{\ddagger}\) & 0.3579\({}^{\dagger}\) & 0.2961\({}^{\ddagger}\) & 0.3163\({}^{\ddagger}\) & 0.2617\({}^{\ddagger}\) & 0.4039\({}^{\dagger}\) & 0. users from having to use multiple models. As illustrated in the related work SS 2, some evaluators (_e.g.,_ UniEval and BARTScore) focus on evaluating the summary in multi-dimension which model the specific dimension features and output _multiple scores_. Different from these methods, we focus on an orthogonal aspect that uses a unified model in multiple scenarios, and we only use _one score_ to represent the summary quality. Thus, directly comparing with these multi-dimensional metrics is not fair. Since our unified multi-scenario evaluator is orthogonal to these multi-dimension evaluators, we will combine the multi-dimensional method into UMSE in future work. Similar to our UMSE, UniTE is also a multi-scenario unified evaluation method for machine translation. However, UniTE achieves worse performance than UMSE, which demonstrates our assumption that the matching framework and the data construction method in UniTE are mainly focusing on the characteristic of translation. And we cannot simply use UniTE in the summarization task. 
From the results of UMSE(Fusion) (w/ SDR) and UMSE (w/ SDR), we can find that the fusion model achieves better performance, and we will use the fusion method in the release version of UMSE. An extensive analysis of why the fusion method works better than directly concatenating Sum-Doc-Ref in the input of the PLM is shown in the following section. ### Discussions **Ablation Studies.** To verify the effectiveness of our proposed perturbed prefix-tuning method, we employ an ablation model, UMSE-PT, in three scenarios. In this model, we mix the training datasets we constructed and jointly fine-tune one PLM for _all_ scenarios. From the results shown in Table 1, we can find that UMSE-PT underperforms the UMSE in all scenarios. Although using a shared pre-trained language model can also transfer knowledge among these scenarios, these ablation studies demonstrate that using the shared continuous prefix tokens provides an explicit way to share common matching knowledge, which boosts the performance of the UMSE. Moreover, we employ an intuitive experiment that separately fine-tunes a PLM for _each_ scenario, and the results are shown in Table 3. Although the performance in the Sum-Ref scenario drops slightly in terms of two dimensions, our proposed UMSE boosts the performance in the Sum-Doc scenario significantly. Boosting the performance of the Sum-Doc scenario is more valuable since evaluation in this scenario does not require any human annotation. **Analysis of Sum-Doc-Ref Fusion.** In § 3.4, we propose a variant model for the Sum-Doc-Ref scenario which directly fuses the scores of Sum-Doc and Sum-Ref to produce the score for the Sum-Doc-Ref scenario. In this section, we conduct experiments to explore which fusion method leads to better performance. We employ four different fusion methods: (1) the max method takes the maximum of \(s_{SD}\) and \(s_{SR}\) as \(s_{SDR}\); (2) the min method takes the minimum of \(s_{SD}\) and \(s_{SR}\); (3) geometric mean fusion uses \(\sqrt{s_{SD}s_{SR}}\) as \(s_{SDR}\); and (4) arithmetic mean fusion employs \(\frac{(s_{SD}+s_{SR})}{2}\). From Table 2, we can find that the arithmetic mean achieves the best performance, and we finally use arithmetic mean fusion in UMSE(Fusion). **Analysis of Perturbed Prefix Length.** To verify the effectiveness of our proposed perturbed prefix, we conduct experiments using different lengths of the prefix. \begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **Faithful** & **Factual** \\ \hline ROUGE-1 & 0.197 & 0.125 \\ ROUGE-2 & 0.162 & 0.095 \\ ROUGE-L & 0.162 & 0.113 \\ BERTScore & 0.190 & 0.116 \\ QA & 0.044 & 0.027 \\ UMSE & **0.242** & **0.167** \\ \hline Entailment & 0.431 & 0.264 \\ \hline \hline \end{tabular} \end{table} Table 4: The performance of different models on detecting hallucinations. The evaluation metric is the Spearman correlation. The faithfulness and factuality annotations are released by Maynez et al. (2020). The row with shaded background denotes a model trained on a supervised dataset, making it unfair to compare it with the other methods. Figure 3: Performance across different prefix lengths. From Figure 3, we can find that the performance of our UMSE gradually improves as the prefix length grows. **Analysis of Hallucination Detection.** To analyze the effectiveness of our model in detecting hallucinations, we conducted experiments on the dataset released by Maynez et al. (2020), and the results are shown in Table 4.
According to the Spearman correlations on both faithful and factual, UMSE outperforms baselines, such as ROUGE, BERTScore, and QA, which demonstrates the ability of our proposed model in detecting hallucinations. ## 5 Conclusion In this paper, we propose **U**nified **M**ulti-scenario **S**ummarization **E**valuation Model (UMSE) which is a unified multi-scenario summarization evaluation framework. UMSE can perform the semantic evaluation on three typical evaluation scenarios: (1) Sum-Ref; (2) Sum-Doc and (3) Sum-Doc-Ref using only one unified model. Since these scenarios have different input formats, we propose a perturbed prefix-tuning method that unifies these different scenarios in one model and it can also transfer knowledge between these scenarios. To train the UMSE in a self-supervised manner, we propose two training data construction methods without using any human annotation. Extensive experiments conducted on the benchmark dataset SummEval verify that the UMSE can achieve comparable performance with existing baselines. ## Limitations In this paper, we propose the evaluation model UMSE which can be used to evaluate the summary quality in three typical scenarios. However, in the summarization task, different annotators have different writing styles, and there might exist more than one good summary for one document. Moreover, there can be summaries that concentrate on different aspects of a document (_e.g.,_ describing the location and room of a hotel). In the future, we aim to incorporate more scenarios (_e.g.,_ multi-references and multi-aspects) into our unified evaluation method. ## Ethics Statement In this section, we would like to discuss the ethical concerns of our work. Our proposed method UMSE is a unified model for multi-scenario summarization evaluation and is designed to help humans efficiently evaluate summaries. And the sensitive information is masked while constructing the training data from CNN/DailyMail dataset. ## Acknowledgements We would like to express sincere thanks to the anonymous reviewers for their helpful comments. This research was supported by the Natural Science Foundation of China (T2293773, 62102234, 62272274, 62202271, 61902219, 61972234, 62072279), the National Key R&D Program of China with grant (No.2022YFC3303004, No.2020YFB1406704), the Key Scientific and Technological Innovation Program of Shandong Province (2019JZZY010129), the Tencent WeChat Rhino-Bird Focused Research Program (JR-WXG-2021411), the Fundamental Research Funds of Shandong University.
2308.08510
Autoencoding a Soft Touch to Learn Grasping from On-land to Underwater
Robots play a critical role as the physical agent of human operators in exploring the ocean. However, it remains challenging to grasp objects reliably while fully submerging under a highly pressurized aquatic environment with little visible light, mainly due to the fluidic interference on the tactile mechanics between the finger and object surfaces. This study investigates the transferability of grasping knowledge from on-land to underwater via a vision-based soft robotic finger that learns 6D forces and torques (FT) using a Supervised Variational Autoencoder (SVAE). A high-framerate camera captures the whole-body deformations while a soft robotic finger interacts with physical objects on-land and underwater. Results show that the trained SVAE model learned a series of latent representations of the soft mechanics transferrable from land to water, presenting a superior adaptation to the changing environments against commercial FT sensors. Soft, delicate, and reactive grasping enabled by tactile intelligence enhances the gripper's underwater interaction with improved reliability and robustness at a much-reduced cost, paving the path for learning-based intelligent grasping to support fundamental scientific discoveries in environmental and ocean research.
Ning Guo, Xudong Han, Xiaobo Liu, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Jiansheng Dai, Fang Wan, Chaoyang Song
2023-08-16T17:07:37Z
http://arxiv.org/abs/2308.08510v1
# Autoencoding a Soft Touch to Learn Grasping from On-land to Underwater+ ###### Abstract Robots play a critical role as the physical agent of human operators in exploring the ocean. However, it remains challenging to grasp objects reliably while fully submerging under a highly pressurized aquatic environment with little visible light, mainly due to the fluidic interference on the tactile mechanics between the finger and object surfaces. This study investigates the transferability of grasping knowledge from on-land to underwater via a vision-based soft robotic finger that learns 6D forces and torques (FT) using a Supervised Variational Autoencoder (SVAE). A high-framerate camera captures the whole-body deformations while a soft robotic finger interacts with physical objects on-land and underwater. Results show that the trained SVAE model learned a series of latent representations of the soft mechanics transferable from land to water, presenting a superior adaptation to the changing environments against commercial FT sensors. Soft, delicate, and reactive grasping enabled by tactile intelligence enhances the gripper's underwater interaction with improved reliability and robustness at a much-reduced cost, paving the path for learning-based intelligent grasping to support fundamental scientific discoveries in environmental and ocean research. Underwater Grasping Soft Robotics Tactile Learning ## 1 Introduction Collecting delicate deep-sea specimens of geological or biological interests with robotic grippers and tools is central to supporting fundamental research and scientific discoveries in environmental and ocean research (Feng et al., 2022; Gong et al., 2021). The human fingers are dexterous in object manipulation thanks to the finger's musculoskeletal biomechanics and skin's tactile perception even in harsh environments such as underwater (Billard and Kragic, 2019; Kumar et al., 2019). Much research has been devoted to skilled object manipulation in daily life scenarios (Ciocarlie and Allen, 2009). However, limited research focuses on transferring such capabilities to an underwater environment (Mura et al., 2018). The ambient environment significantly challenges visual and tactile feedback integration while performing object grasping for visual identification under fluidic interference on the surface of physical interaction (Capalbo et al., 2022; Galloway et al., 2016). As a challenging task for humans, designing and developing robotic solutions for reactive and reliable grasping becomes even more complicated when the end-effector is fully submerged underwater (Stuart et al., 2017). ### Design towards Soft Grasping for Ocean Exploration Object grasping is essential for environmental and ocean research to collect in situ specimens, where a trend towards softness in gripper design shows a growing adoption over the years (Licht et al., 2016). Classical research on underwater grasping mainly focused on a direct translation of mechanical grippers made from rigid materials with waterproof design for all components, including the actuators, mechanisms, and sensors, resulting in a bulky design that is usually difficult for system integration (Yuh, 2000). Previous research reports a modular continuum finger for dexterous sub-sea manipulation with force and slip sensing, where the complex integration of a range of mechanical, electrical, and computing subsystems limits the use of this prototype out of a laboratory testing tank (Lane et al., 1999). 
Another submarine gripper was developed as part of the European project TRIDENT (Bemfica et al., 2014), demonstrating dexterity for executing grasping and manipulation activities, but suffered from challenges when interacting with the delicate subsea environment and objects. Recent development in soft robotics adopts a different approach to leverage material softness for grasping (Wang et al., 2023). The advantage of soft grippers for underwater scenarios is a systematic integration of fluidic actuation, motion transmission, and form-closed adaptation enabled by soft, lightweight, low-cost materials and fabrication suited to an aquatic environment, with reduced complexity using simple open-loop control (Gong et al., 2021). These soft grippers have demonstrated successful, compliant interaction with various objects underwater (Wang and Cui, 2021). A recent review shows an emerging research gap in introducing sensory capabilities to soft grippers underwater for closed-loop grasping feedback (Mazzeo et al., 2022). ### The Need for Vision-based Tactile Grasping Underwater Inspired by the tactile perception of human fingers, a wide range of robotic research has been devoted to integrating tactile sensing with object grasping in industrial or daily life settings (Bao et al., 2023). Current research on tactile perception often leverages material softness for skin-like design (Kumar et al., 2019). Recent work in 3D Tactile Tensegrity expanded the adoption of tactile sensing to the underwater environment, presenting a promising direction through the integration of soft self-powered triboelectric nanogenerators and deep-learning-assisted data analytics for underwater exploration (Xu et al., 2023). While recent work gave an exhaustive investigation of tactile sensors and their applications in intelligent systems (Liu et al., 2020), there is also an emerging class of vision-based tactile sensors in robotics that remains under-represented in this field (Li and Peng, 2022). Vision-based tactile perception leverages machine vision to provide multi-modal contact information with detailed spatial resolution (Shimonomura, 2019). The focus is to deploy soft mediums that deform under external forces and infer tactile information from visual observation (Zhang et al., 2022). Sato et al. (Sato et al., 2010) built a linear approximation model to estimate the contact forces by tracking two colored spherical markers arranged at different depths of an elastomer surface. Yamaguchi et al. (Yamaguchi and Atkeson, 2019) presented another low-order approximation model to infer the contact forces from the observed markers' variations in the camera. Unfortunately, the current literature has not yet explored the adoption of vision-based tactile robotics in the underwater scenario. ### Machine Learning for Latent Intelligence in Tactile Robotics The performance of machine learning algorithms heavily depends on the choice of data representation (Bengio et al., 2013). When projecting complex soft robotics deformation into image space (Yuan et al., 2017), a growing trend of research is devoted to treating the representation of a captured image as the latent variables of an appropriate generative model (Doersch et al., 2015).
The generative models are usually highly interpretable in understanding the causal relations of the observations (Kingma and Welling, 2014), making it a potential solution to increase the robustness of vision-based, soft, tactile sensing underwater where the environmental uncertainties are much worse than the daily life or industrial settings, as shown in **Figure 1**. Variational autoencoder (VAE) recently emerged as a powerful generative model that learns the distribution of latent variables and is widely used for visual representation in robot learning (Rezende et al., 2014; Kingma and Welling, 2014; Takahashi et al., 2019). Since the original publication, many variants and extensions of VAE have been proposed. Semi-supervised VAE was proposed to address the problem of unlabeled data training (Kingma et al., 2014). Higgins et al. (Higgins et al., 2017) introduced weight to balance the reconstruction error and regularization of latent variable distribution, enabling learning of a disentangled latent representation. The recent adoption of a supervised VAE model for identifying critical underlying factors for prediction demonstrates the promising potential for application in robotic grasping (Ji et al., 2021). This study investigates the transferability of grasping knowledge from on-land to underwater via a vision-based soft robotic finger that learns 6D forces and torques (FT) using a Supervised Variational Autoencoder (SVAE). Using real-time images collected from an in-finger camera that captures the soft finger's whole-body deformations while interacting with physical objects on-land and underwater, we established a learning-based approach to introduce tactile intelligence for soft, delicate and reactive grasping underwater, making it a promising solution to support scientific discoveries in interdisciplinary research. ## 2 Results ### In-Finger Vision for a Soft Tactile Finger Underwater Here we present the in-finger vision design for tactile sensing compatible with both on-land and underwater scenarios, as shown in Figure 1(a). The finger is based on a soft metamaterial with a shrinking cross-sectional geometry towards the tip, capable of omni-directional adaptation on the finger surface to unknown object geometries, enabling a passive form-closure for robotic grasping (Wan et al., 2022). A monocular RGB camera (120 frames per second) is mounted inside a support frame under the finger to obtain high-framerate images of the finger's adaptive deformations at a resolution of \(640\times 480\) pixels. The support frame is 3D printed with the optically transparent material (Somos(r) WaterShed XC 11122). All electronics inside are waterproofed by dipping the camera board, except the lens, into transparent silicon. We added six LEDs to the camera board for improved lighting conditions, resulting in an integrated design of a water-resistant, soft robotic finger with machine vision from the inside. Figure 1(b) shows the integration of the proposed finger with a Robotiq's Hand-E gripper, which has an ingress protection rating of IP67 for testing in lab tanks. The proposed soft finger exhibits spatial adaptive deformations, conforming to the object's geometry during physical contact and exhibiting both regular and twisted adaptions for enhanced robustness for grasping, as shown in Figure 1(d). For more intensive use in the field, one can directly mount the soft finger to the tip of existing grippers on an underwater vehicle. 
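As a rough illustration of the stated capture settings (640\(\times\)480 pixels at 120 frames per second), the in-finger camera stream could be read as in the sketch below. This is an assumed OpenCV setup rather than the authors' driver code, and the device index is a placeholder.

```python
import cv2

cap = cv2.VideoCapture(0)                    # in-finger USB camera (placeholder index)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 120)

ok, frame = cap.read()                       # one BGR frame of the deformed finger
if ok:
    print(frame.shape)                       # expected: (480, 640, 3)
cap.release()
```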
We demonstrated the effectiveness of using the soft finger by grasping some Yale-CMU-Berkeley (YCB) objects of various shapes and softness underwater or floating on the water surface (Calli et al., 2015). See Movie S1 in the Supplementary Materials for further details. The advantage of the proposed design is a complete separation of the sensory electronics from the soft interaction medium by design, resolving the issues of an enclosed chamber that may suffer from severe surface deformation when used underwater (Li and Peng, 2022). Such a design enables us to collect real-time image streams of the physical interaction between the soft finger and the external object using the in-finger vision, as shown in Figure 1(c), which can be further implemented with generative models, such as the supervised variational autoencoder (SVAE), to provide the tactile perception of grasping interactions, both on land and underwater. Figure 1: **Overview of the soft visual-tactile learning across land and water using SVAE.** (a) Design of the sensorized soft finger where the camera board is sealed with a silicone layer. (b) The integrated amphibious gripper is transformed by replacing the fingertip of a Robotiq Hand-E gripper with the sensorized soft finger with omni-directional grasping adaptations. The Hand-E gripper has an Ingress Protection (IP) rating of IP67, which is suitable for our underwater experiment in a tank without extra waterproofing. (c) The scheme of visual-tactile learning takes an image of the deformed metamaterial as input, reconstructs the image, and simultaneously predicts the force and torque. (d) The amphibious gripper is mounted on a Franka Emika Panda robot arm to execute force control tasks on land and underwater. ### Generative Tactile Learning via Supervised Variational Autoencoder Here we present a generative learning architecture for tactile perception in both on-land and underwater scenarios with latent explanations using a supervised variational autoencoder (SVAE), as shown in **Figure 2**. The generative model is illustrated in Figure 2(a): an encoder \(q_{\phi}(Z|X)\) processes real-time images from the in-finger vision, which are then passed through a latent space operation to estimate a latent distribution of the interactive physics \(Z\sim N(Z_{\mu},Z_{\sigma})\), assuming a normal distribution. Here, we added a force and torque prediction head based on \(Z_{\mu}\) to produce the 6D tactile estimation as an auxiliary output. Finally, through a generative decoder \(p_{\theta}(X|Z)\), our model reproduces images of the tactile interactions based on the learned SVAE model. Note that \(\phi\) and \(\theta\) are the parameters of the encoder and decoder neural networks, respectively, which must be optimized during training. The SVAE model's loss function for training is the combination of the image reconstruction loss, the force/torque prediction loss, and the latent representation regularization loss. Following the detailed formulation of the SVAE model in the Materials and Methods section, for tuning with small datasets, we introduced two hyper-parameters, \(\alpha\) and \(\beta\), to modify the objective function into the following, where the parameter \(\alpha\geq 0\) is used to adjust the relative importance during optimization Figure 2: **Latent deformation learning model for the soft metamaterial.** (a) The architecture of the Supervised Variational Autoencoder (SVAE) model, where a VAE is combined with a supervised regression task for force and torque prediction.
(b) Predicted force/torque versus the ground truth in each of the six dimensions on the test dataset. (c) Distributions of prediction errors in each 6D force/torque dimension over different ranges. between the image reconstruction and force/torque prediction tasks. \[\tilde{L}(\theta,\phi;X,Y)= -\frac{\alpha}{1+\alpha}\|X-\hat{X}\|-\frac{1}{1+\alpha}\|Y-\hat{ Y}\|+\beta D_{KL}[N(Z_{\mu},Z_{\sigma})||N(\mathbf{0},I)], \tag{1}\] Figure 2(b) shows the predicted 6D forces and torques via SVAE against the ground truth. The \(R^{2}\) scores are higher than 0.98 for 6D force and torque predictions, indicating the SVAE model's excellent performance in tactile sensing on the test dataset. We also plot the distributions of prediction errors in each 6D force/torque dimension over different ranges in Figure 2(c). For applied forces ranging between \([0,2)\), \([2,4)\), \([4,6)\), \([6,8)\), and \([8,10)\) N, the standard deviations of the prediction are 0.07, 0.06, 0.09, 0.12, and 0.24 N, respectively. For applied torques ranging between \([0,120)\), \([120,240)\), \([240,360)\), \([360,480)\), and \([480,600)\) N-mm, the standard deviations of the prediction are 4.6, 3.9, 6.0, 9.1, and 20.3 N-mm, respectively. These results follow a Gaussian distribution with a near-zero mean and an increasing standard deviation as the range becomes more considerable. The force sensing errors are comparable in the \(x\) and \(y\) axes and more prominent in the \(z\) axis, while the torque sensing errors are the least in the \(z\) axis. This characteristic is primarily due to the metamaterial's structural design, which is less sensitive to the force along the \(z\) axis. We also conducted a comparative study to evaluate the proposed SVAE model against two baseline models, including a ConvNet model for force and torque prediction only and a VAE model for image reconstruction only, with results summarized in **Table 1**. These two models share the same network architecture as the SVAE but are trained separately. However, the ConvNet model is a deep regression network with a convolutional layer that only takes force/torque prediction loss. And the VAE model is a Variational Autoencoder that only takes image reconstruction and latent representation regularization loss. We used mean square error (MSE) between the original and reconstructed images to evaluate the representation learning task and the coefficient of determination \(R^{2}\) to assess the overall force/torque prediction task. See Methods S1 in the Supplementary Materials for further details on training data collection. The SVAE has shown comparable performance over the vanilla VAE in the representation learning task while \(\alpha\) is approaching infinity. Meanwhile, SVAE outperforms the deep regression model ConvNet in the force/torque prediction task when \(\alpha\leq 1\) and the training is focused more on the prediction task, achieving over \(99.45\%\) on the validation set. Since SVAE is a multi-task learning framework, the hyper-parameter \(\alpha\) is vital in balancing the reconstruction and prediction tasks. Here, \(\alpha=1\) is chosen for all validation tests and real-time experiments. The results show that the co-trained representation learning enhances the force/torque prediction task. See Movie S2 in the Supplementary Materials for a video demonstration of real-time 6D force/torque prediction using the SVAE model in on-land and underwater scenarios. 
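For concreteness, below is a minimal PyTorch-style sketch of the weighted objective in Equation (1), written as a loss to be minimized: the weighted reconstruction and prediction errors plus a \(\beta\)-scaled KL term for a diagonal-Gaussian posterior against \(N(\mathbf{0},I)\). The squared-error choice, tensor shapes, and network outputs are assumptions for illustration, not the authors' implementation.

```python
import torch

def svae_loss(x, x_hat, y, y_hat, z_mu, z_logvar, alpha=1.0, beta=0.1):
    """Weighted SVAE objective: image reconstruction + 6D force/torque
    prediction + KL regularization of the latent distribution."""
    recon = torch.mean((x - x_hat) ** 2)        # image reconstruction error
    pred = torch.mean((y - y_hat) ** 2)         # force/torque prediction error
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, I).
    kl = -0.5 * torch.mean(1 + z_logvar - z_mu.pow(2) - z_logvar.exp())
    return (alpha / (1 + alpha)) * recon + (1 / (1 + alpha)) * pred + beta * kl
```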
### Land2Water Generalization of Tactile Representation We also investigated the generalization of the tactile representations learned via SVAE in a Land2Water skill transfer problem for tactile sensing in **Figure 3**. While implementing the SVAE model, we chose a 32-dimensional latent space as a balanced trade-off between reconstruction error and dimensional complexity in explanatory power. See Methods S2 in the Supplementary Materials for further discussion. Figure 3(a) shows the comparison of the 32D latent space vectors for tactile perception between the on-land (top) and underwater (bottom) scenarios when the soft finger experiences the same deformation delivered by the robotic arm. Five random instances of the in-finger vision are chosen for each scenario and plotted with their corresponding latent variable distributions. We identified a similar distribution between the upper and lower plots for these five random instances. This suggests the transferability of the latent variables' explanatory power in tactile perception between the on-land and underwater scenarios. This is because the learned latent representation could be close to the intrinsic dimension of the soft finger deformation, minimizing the information loss during tactile image encoding. The segment of the soft finger interacting with objects is made from 3D-printed metamaterial without any electronic parts, whose mechanical properties are not affected by the water, indicating the generalization of tactile representation in Land2Water transfer, which is reported for the first time in the vision-based tactile sensing literature. The correlation map plotted in Figure 3(b) suggests that the 32 latent variables learned by our SVAE model are generally uncorrelated, which is a preferred property for representation learning [Bengio et al., 2013]. However, for variable clusters such as \(\{Z_{7},Z_{8},Z_{9}\}\) and \(\{Z_{12},...,Z_{15}\}\), regional correlation is observable at a relatively small scale. Figure 3: **Representation learning of deformations of the proposed soft metamaterial.** (a) The complex deformations of the soft metamaterial on land and underwater are represented in the latent space. (b) Correlation map of learned 32 latent variables. (c) Reconstructed images of varying selected latent variables. (d) The relative correspondences between latent variables and force/torque.
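A correlation map like the one in Figure 3(b) can be obtained directly from the encoded latent means. The sketch below is assumed post-processing (not the authors' code), where Z is an \(N\times 32\) array of latent mean vectors collected over \(N\) tactile images; a near-diagonal result indicates largely uncorrelated latent variables, matching the observation above.

```python
import numpy as np

def latent_correlation_map(Z: np.ndarray) -> np.ndarray:
    """Pairwise Pearson correlations of the 32 latent variables."""
    # np.corrcoef treats rows as variables, so transpose (N, 32) -> (32, N).
    return np.corrcoef(Z.T)                    # shape (32, 32)
```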
We also demonstrate the latent interpolation for the metamaterial's deformation projected in the image plane on the selected dimensions \(\{Z_{1},Z_{3},Z_{15},Z_{20},Z_{28}\}\) in Figure 3(c), which gives an intuitive sense of what the latent variables represent in physical space. For example, we found that \(Z_{1}\) and \(Z_{3}\) are related to pushing right-downward and right-upward when their values go from negative to positive, while \(Z_{15}\) and \(Z_{20}\) are related to moving left-upward and left-downward. Furthermore, \(Z_{28}\) has a prominent horizontal movement. These latent variables are strongly related to representing the complex deformations of the soft metamaterial in terms of image reconstruction but are not disentangled. As shown in Figure 3(d), the correlation between the 6D force/torque and the 32D latent variables is complex and diversified. For example, the latent variable \(Z_{28}\) strongly correlates with \(F_{y}\), which agrees with the reconstructed horizontal movement along the corresponding axis of the camera coordinate frame. ### Land2Water Grasping Knowledge Transfer This section presents two experimental results that apply the Land2Water grasping knowledge obtained through tactile sensing, including one on object grasping against location uncertainties and another on tactile sensorimotor grasping adaptability from on-land to underwater scenarios. #### 2.4.1 Object Grasping against Location Uncertainties This experiment, summarized in **Figure 4**, demonstrates that tactile perception, which is generally acknowledged to increase robustness in on-land conditions, is equally necessary when grasping underwater. Figures 4(a)&(c) show open-loop grasping without force feedback, where the gripper reaches the target grasping point, closes the fingers to a given gripping width, and lifts the object. In the case of closed-loop grasping with force feedback, as illustrated in Figures 4(b)&(d), the gripper adjusts the gripping width according to the force estimation from SVAE until a grasping confirmation signal is triggered, then lifts the object. While modern learning-based methods are effective in performing grasp planning [Morrison et al., 2020, Mahler et al., 2019], here we manually selected the gripper's grasping positions and gripping widths for each test object shown in Figure 4(e) for ease of comparison. Figure 4: **Tactile grasping results with or without SVAE in on-land and underwater scenarios.** (a) Open-loop object grasping on land with a predefined grasping position. (b) Closed-loop grasping on land with contact force feedback. (c) Open-loop object grasping underwater with a predefined grasping position. (d) Closed-loop object grasping underwater with contact force feedback. (e) Test objects with the predefined grasping points marked. (f) The grasp result summary. Figure 4(f) summarizes the ten grasping trials for each object using both methods and reports the average success rate. We added noise with a standard deviation of \(\sigma=5\) mm to simulate the uncertainty in grasping parameter prediction. The average success rate for on-land grasping of the five test objects is \(44\%\) without contact feedback. After adding tactile feedback, the success rate is significantly enhanced to \(100\%\). Our results show a similar enhancement for underwater grasping, where tactile feedback boosts the average success rate from \(30\%\) in open-loop grasping to \(90\%\) in closed-loop grasping.
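The closed-loop grasping procedure described above (narrow the gripper until the SVAE force estimate confirms contact, then lift) can be sketched as follows. The gripper, camera, and svae objects, the 0.5 N threshold, and the 1 mm step are all placeholders for illustration, not values from the paper.

```python
def closed_loop_grasp(gripper, camera, svae, force_threshold=0.5, step_mm=1.0):
    """Close until the estimated contact force confirms a grasp, then lift."""
    width = gripper.max_width_mm
    while width > 0.0:
        fx, fy, fz = svae.predict(camera.read())[:3]   # estimated contact force (N)
        if max(abs(fx), abs(fy), abs(fz)) >= force_threshold:
            break                                      # grasp confirmed by touch
        width -= step_mm
        gripper.move_to(width)                         # close a little further
    gripper.lift()
```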
See Movie S3 in the Supplementary Materials for a video demonstration. #### 2.4.2 Tactile Sensorimotor Grasping Adaptability This experiment demonstrates sensorimotor grasping using tactile perception enabled by the proposed SVAE model, which is transferable from on-land to underwater scenarios. **Figure 5(a)** demonstrates the overall experiment process. Once contact begins, the in-finger vision captures the whole-body deformations of the soft finger and feeds the SVAE model with real-time image streams of the physical interaction at 120 Hz. The 6D forces and torques are predicted for both on-land and underwater scenarios, then compared against a pre-defined threshold for reactive grasping. During this process, the width between the two soft fingers is actively adjusted to accommodate the disturbances in object status, i.e., fluidic disturbances for grasping underwater and sudden collision for on-land grasping. We execute the reactive grasping by sending reference position commands to a position controller in the robot system using a motion generator calculated by the measured gripper position and force error detected on the fingers. Figure 5(b) illustrates the experiment process that tests the gripper system's responsiveness of tactile-reactive grasping, a desirable capability for both on-land and underwater grasping of objects with known properties. After making contact with a slightly rotated tube, fixed, of oval cross-section, we send force commands to the gripper to maintain a contact force at 0.4, 1.6, and 3 N sequentially. Shown in Figures 5(c)&(d) are the recorded force (in blue) against the commanded force (in red) when the experiment was conducted on land and underwater. In both scenarios, the force controller successfully transited and stabilized at the commanded contact force within seconds. See Movie S4 in the Supplementary Materials for a video demonstration. Figure 5(e) illustrates another experiment that tests the gripper's capability to maintain a specified contact force while reacting to disturbances, a preferred but more challenging skill for both on-land and underwater grasping of objects with unknown yet delicate properties. In this experiment, the oval-shaped tube is commanded to rotate clockwise in \(45^{\circ}\) and \(60^{\circ}\) first and then counterclockwise in \(90^{\circ}\) to simulate the changing interaction between the gripper and target object. During the process, the gripper needs to maintain a 0.4 N force for the on-land experiment in Figure 5(f) and a 1.6 N force for the underwater experiment in Figure 5(g). When the target object changes its pose during rotation, the gripper reacts to the shape variation based on the estimated force from SVAE. See Movie S5 in the Supplementary Materials for a video demonstration. We also tested the gripper's reactive grasping under rotational disturbances by turning a cylinder along the \(z\)-axis. In this case, the SVAE model successfully predicted a torque while the fingers started twisting and commanded the gripper to rotate while maintaining a zero torque \(\tau_{z}\) in reactive motion. See Movie S6 in the Supplementary Materials for a video demonstration. ## 3 Discussion It has been a challenge to introduce robotic intelligence into underwater grasping by adding the sense of touch (Mazzeo et al., 2022), which supports delicate and autonomous interactions with the unstructured aquatic environment for scientific activities in environmental, biological, and ocean research. 
Classical solutions usually take a mechanical approach with various sealing technologies to deal with fluidic pressure and corrosive contamination, suffering trade-offs in engineering flexibility and intelligent perception. This work proposes a vision-based approach to achieve high-performing tactile sensing underwater by combining the emerging advancement in soft robotics and machine learning. The simplicity of the design enables a minimum set of mechanical components, avoiding dynamic seals for enhanced robustness underwater. The soft finger's passive adaptation and in-finger vision enable a seamless integration of the proposed Supervised Variational Autoencoder (SVAE) to learn tactile sensing through visual sensing underwater. The latent representations learned from the SVAE algorithm enable a generative solution to infer the 6D forces and torques during physical interactions underwater with explanatory reasoning. As a result, we successfully transferred the tactile intelligence of the proposed gripper system from on-land to underwater. We achieved tactile force prediction accuracy above \(98\%\) along each axis on the test set, using the same hardware with minimal algorithmic parameter adjustment. Real-time grasping experiment results in a lab tank demonstrate the effectiveness of the soft tactile finger for reliable and delicate grasping in both environments. Model explainability and generalization are primarily concerned in machine learning research. Considering the transferability of tactile intelligence from on-land to underwater, we leverage the variational autoencoder model's powerful representation learning capability to express the soft finger's deformation patterns in latent space. Experiment results show that the extracted latent features of the same finger deformation in different environments exhibit a similar distribution. From the statistical inference perspective, learning this low-dimensional deformation pattern is closely related to the dimension reduction problem (Adragni and Cook, 2009) where the learned latent representation corresponds to an approximated sufficient statistics of original data (Joyce and Marjoram, 2008). In contrast with conventional dimension reduction techniques such as Principal Component Analysis, the convolutional neural network usually performs better in finding the low-dimensional representation from image data (Hinton and Salakhutdinov, 2006). Performance degradation of tactile force prediction from on-land to underwater is unavoidable due to the significant change of visual input to the SVAE model. Due to unpredictable fluid dynamics, object grasping underwater is generally Figure 5: **Tactile perception of soft finger for real-time robotic grasping control.** (a) Experiment setup of the force control tasks with soft tactile sensing. The goal of the force control tasks is to maintain the contact force at the required values by controlling the position of the soft finger. (b)-(d) Desired gripping force following experiments. The gripping force is commanded to a series of expected values, and the corresponding gripper adaptation stages are illustrated in different colors. (e)-(g) In-hand object shape adaptation experiments. The grasped object’s shape changes constantly, and the gripper sensitively adjusts its position to maintain the constant gripping force. more challenging than on land, which is the same case with or without tactile feedback, as demonstrated in our experimental results. 
However, adding tactile feedback to the gripper system effectively enhanced the reliability of underwater grasping. The finger's network design cuts the fluids while closing, generally causing fewer disturbances for underwater grasping, a common problem usually suffered by fingers with a rigid structure (Stuart et al., 2017). Tactile perception is generally desired to achieve effective grasping behaviors in underwater environments but is under-explored in research and practice compared with the on-land scenarios (Mazzo et al., 2022; Yan et al., 2021). Operational tasks for underwater robotics are usually associated with a lack of vision, leading to ambiguous recognition of objects (Subad et al., 2021), in which tactile perception plays an important role. Our results in grasping success rates demonstrate the benefit of tactile perception when visual perception is underperforming. Besides, reactive control architecture based on the perception-action cycle can be integrated with our tactile soft finger to achieve more intelligent manipulation underwater. Our presented work has several limitations, which need future research for optimization in structural design and learning algorithms. For example, visual input tends to be corrupted by background noise in an underwater environment, which could be alleviated mechanically by adding a layer of silicone skin on the finger surface (Jiang et al., 2021). We could also enhance the tactile perception using XMem (Cheng and Schwing, 2022) to track the soft finger's deformation from the in-finger vision or use inpainting algorithms (Yu et al., 2023) to use the in-finger vision for visual perception. The proposed underwater grasping system is yet to be tested on Remotely Operated Vehicles (ROVs) in shallow and deep water for further engineering enhancement. ## 4 Materials and Methods ### Formulating the Supervised Variational Autoencoder Accurately deriving the relationship between deformation and force of soft structure can significantly improve the efficacy of visual-tactile sensing (Ma et al., 2019). However, the geometry-dependent deformation of the soft structure is complex to represent. Even though we can discretize the structure with standard node elements using the Finite Element Method, measuring the displacements of corresponding nodes from a monocular camera can be another problem. The standard solution involves a two-step method by first building a force-displacement mapping of soft structure and then solving the partial observable vision problem using a monocular camera (Dong et al., 2022). Here, we leverage the interpretability of latent variables in the original VAE model and constrain these learned factors to image-based features of our soft finger deformation using in-finger vision, where the restored force can be measured during training and act as a supervised signal to guide the learning of latent space. As shown in Figure 2(a), suppose the collected, labeled data pairs \((X,Y)\) are independent and identically distributed, where \(X\) and \(Y\) are images and vectors of force/torque, respectively. The aim is to find an optimal representation \(Z\) of \(X\) containing sufficient information about \(Y\). 
To tackle both representation learning and force/torque prediction tasks, we extend the optimization framework of the original VAE (Kingma and Welling, 2014) to an additional supervised task and maximize the log-likelihood function of marginal probability \(\log p_{\theta}(X,Y)\): \[\log p_{\theta}(X,Y)=L(\theta,\phi;X,Y)+D_{KL}[q_{\phi}(Z|X)||p_{\theta}(Z|X,Y)], \tag{2}\] where \(L(\theta,\phi;X,Y)\) is the evidence lower bound (ELBO) for SVAE, which can be extended as: \[\log p_{\theta}(X,Y) \geq L(\theta,\phi;X,Y) \tag{3}\] \[=E_{Z\sim q_{\phi}(Z|X)}[\log p_{\theta}(X|Z)]+E_{Z\sim q_{\phi}(Z |X)}[\log p_{\theta}(Y|Z)]-D_{KL}[q_{\phi}(Z|X)||p_{\theta}(Z)].\] In Equation 3, for continuous data of image, force, and latent variables \(X,Y,Z\), the prior distribution of the latent variables \(p_{\theta}(Z)\), distribution of probabilistic encoder \(q_{\phi}(Z|X)\) and decoder \(p_{\theta}(X|Z)\), \(p_{\theta}(Y|Z)\) are assumed to follow a normal distribution: \[p_{\theta}(Z) \sim N(\mathbf{0},I), \tag{4}\] \[q_{\phi}(Z|X) \sim N(Z_{\mu}(X,\phi),Z_{\sigma}(X,\phi)),\] \[p_{\theta}(X|Z) \sim N(X_{\mu}(Z,\theta),I),\] \[p_{\theta}(Y|Z) \sim N(Y_{\mu}(Z,\theta),I).\] Maximization of the new ELBO in Equation 3 is equivalent to maximizing the following optimization object, where the outputs from two decoders are denoted as \(\hat{X}\) and \(\hat{Y}\), respectively. \[\tilde{L}(\theta,\phi;X,Y)=-\|X-\hat{X}\|-\|Y-\hat{Y}\|+D_{KL}[N(Z_{\mu},Z_{ \sigma})||N(\mathbf{0},I)]. \tag{5}\] Therefore, we build a hierarchical, convolutional, multi-scale model for the encoder and decoder to model the long-range correlations in image data. We use four residual serial blocks to extract and reconstruct image features in different scales [14]. The first two terms in Equation 5 measure reconstruction errors and force/torque prediction errors, respectively. The third term encourages the approximated posterior \(q_{\phi}(Z|X)\) to match the prior \(p_{\theta}(Z)\), which controls the capacity of latent information bottleneck. Although the derived optimization objective function Equation 5 implicitly balances the three sources of loss, its optimization can be complex in practice. To resolve this issue, we propose the formulation of Equation 1 in Section 2.2 by introducing hyper-parameters \(\alpha\) and \(\beta\) to Equation 5. Introducing parameter \(\beta\geq 0\) ahead of the third term of Equation 1 is inspired by the work of Higgins et al. [14] so that the optimal \(\beta\) can be estimated heuristically in unsupervised scenarios. We tested several choices of \(\beta\) in a candidate set, ranging from \(10^{-4}\) to \(10^{2}\), and fixed \(\beta=0.1\) in our experiment. All networks were trained on a computer with NVIDIA GTX 1080Ti GPU, a batch size 64, and Adam optimizer [14]. Considering the relatively small dataset size, the initial learning rate was set to \(5\times 10^{-5}\) and decreased with the training epoch. ### Tactile Grasping from On-Land to Underwater We conducted object grasping experiments in a lab tank with and without contact force feedback for both on-land and underwater conditions to demonstrate the benefit of tactile learning in reliable object grasping against environmental uncertainties. We tested the grasping success rate using objects of different shapes, sizes, and materials from on-land to underwater. With the adoption of learned tactile perception, two more tasks were tested to demonstrate intelligent grasping behaviors in both on-land and underwater conditions. 
``` 1:Raw image \(I_{raw}\); Measured position \(P_{m}\); Reference force \(F_{ref}\) 2:Reference position \(P_{ref}\) 3:Initialize force tolerance \(\delta\) and control gain \(K\) 4:while\(True\)do 5:Filtered image \(I_{f}\gets ColorThreshold(I_{raw})\) 6:Estimated Force \(F_{est}\gets VisualForceNet(I_{f})\) 7:if\(|F_{ref}-F_{est}|>\delta\)then 8:\(\Delta_{P}=\frac{1}{K}(F_{ref}-F_{est})\)\(\triangleright\) position update rule 9:\(P_{ref}=P_{m}+\Delta_{P}\) 10:elseif\(|F_{ref}-F_{est}|\leq\delta\)then 11:\(P_{ref}=P_{m}\) 12:endif 13:endwhile ``` **Algorithm 1** Reference Motion Generator As is shown in Figure 5(a), to achieve an intelligent closed-loop grasping behavior, it is an essential requirement for the grasping system to maintain a specified contact force while reacting to the varying environment. The industrial gripper can achieve reliable position commands at a high bandwidth due to the built-in low-level position controller. It is our goal to design a high-level position control policy \(u=\pi(P_{m},F_{est},F_{ref})\) with measured gripper position \(P_{m}\) and estimated contact force \(F_{est}\) that achieves the desired contact force \(F_{ref}\). \[\pi(P_{m},F_{est},F_{ref})=\operatorname*{arg\,min}_{u}|F_{est}-F_{ref}|. \tag{6}\] Thanks to the proposed tactile force proprioceptive soft finger, which acts simultaneously as an end-effector and a sensor, a heuristic control policy \(\pi=P_{ref}\) is presented to generate the reference motion command for the inner low-level position control loop, as shown in Algorithm 1. The frequency of tactile perception feedback is determined by the computational time cost of the proposed SVAE model and the frame rate of the USB camera. We used a 1060Ti 6G GPU laptop in all grasping experiments, and the average inferring time was 5 ms. As a result, the force controller frequency is bounded by the camera frame rate at 120 Hz. Note that to estimate the contact force parallel to the gripping direction, modification of SVAE output is necessary. See Methods S3 and Methods S4 in the Supplementary Materials for a detailed derivation of controller design. ## Acknowledgements This work was partly supported by the Ministry of Science and Technology of China [2022YFB4701200], the National Natural Science Foundation of China [62206119], the Science, Technology, and Innovation Commission of Shenzhen Municipality [ZDSYS20220527171403009, JCYJ20220818100417038], Guangdong Provincial Key Laboratory of Human-Augmentation and Rehabilitation Robotics in Universities, and the SUSTech-MIT Joint Centers for Mechanical Engineering Research and Education. ## Data Availability Statement The data that support the findings of this study are openly available on GitHub at: [https://github.com/bionicdl-sustech/AmphibiousSoftFinger](https://github.com/bionicdl-sustech/AmphibiousSoftFinger). ## Supporting Information ### Method S1 on Data Collection for Tactile Learning We set up a collaborative robot arm, the Franka Emika Panda, for the automated data collection on land and the validation experiments underwater. As is shown in **Figure S1**(a), data collection using the robotic platform is guarded by sensors without human invention. We use a 6-axis Force/Torque sensor (Nano17 from ATI) to provide labels for supervised visual-tactile learning, which has a force precision of 0.0125 N and a torque precision of 0.0625 N-mm. 
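Referring back to Algorithm 1, a minimal Python rendering of the reference motion generator is given below. The camera, SVAE, and gripper interfaces, as well as the helper that projects the 6D prediction onto the gripping axis, are assumed placeholders, and the tolerance and gain values are illustrative only.

```python
def reference_motion_generator(camera, svae, gripper, f_ref, delta=0.05, k_gain=2.0):
    """Outer force-tracking loop around the gripper's low-level position control."""
    while True:
        image = color_threshold(camera.read())               # filtered finger image
        f_est = project_gripping_force(svae.predict(image))  # force along gripping axis
        p_m = gripper.measured_position()
        error = f_ref - f_est
        if abs(error) > delta:
            p_ref = p_m + error / k_gain                     # position update rule
        else:
            p_ref = p_m                                      # within tolerance: hold
        gripper.command_position(p_ref)                      # inner position loop tracks p_ref
```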
3D-printed rods with different cross-sectional geometries are used to contact soft fingers to increase the diversity of labeled contact samples, as shown in Figure S1(b). During the collection process, one of the rods was fixed each time while the soft finger was commanded to move and make contact with the rod at random poses. We simultaneously recorded the camera image and 6D force/torque measurement right after reaching the target pose, as highlighted in the yellow shaded area in Figure S1(c). The random contact pose of the soft finger concerning the rod has two degree-of-freedoms (DOFs) in position and one DOF in orientation and is generated by robot movement command \((x,0,z,0,0,\theta)\) where \(x\sim U(0,5)\) cm, \(y\sim U(-5,5)\) cm, and \(\theta\sim U(-\pi,\pi)\), where \(U\) stands for a uniform distribution. The total contact points cover the middle portion of the finger surface as illustrated in Figure S1(d). 30k pairs of samples in total are collected, which are split into training, validation, and test subsets at a ratio of 7:1:2. Figure S1(e) plots the histograms of the measured forces and torques, which have a range of 10 N for forces and 600 N-mm for torques. ### Method S2 on Trade-off Analysis of the Latent Space Vector Dimension A classic trade-off in machine learning scenarios is between model precision and model complexity (Tishby and Zaslavsky, 2015). Before building the SVAE model, we trained several different sizes of latent dimension VAE models and tried to find the best deformation representation for tactile force prediction. As illustrated in **Figure S2**(a), if we try to encode the deformation images using smaller latent codes, a more considerable discrepancy in image reconstruction will be shown, in which case, tactile force prediction will most likely be inaccurate due to the loss of deformation-dependent information. On the other hand, if we approximate the original images with larger latent dimensions, the image reconstruction error is lower. Still, too many deformation-irrelevant details are kept, which leads to poor generalization. Figures S2(b)-(g) show reconstructed images of the soft finger using a 6-dimension to 256-dimension latent space VAE model. Each row of images is generated by selecting a single latent axis and uniformly sampling values in the range of \([-5,5]\), which stand for the response of the reconstruction model with a specific size of latent space to its latent variable. With the increase of latent space size, the details of reconstructed images are more complete, but the response to a single latent dimension becomes less intuitive. In information theory, the complexity of the model is characterized by its coding length, which is proportional to the amount of information between the original data and their new representation (Shwartz-Ziv and Tishby, 2022). A 32-dimensional latent code is finally chosen for tactile force prediction in our paper, as it is simple enough and provides relatively high accuracy. We admit optimal dimension of latent code that balances the trade-off can be sought more systematically. ### Method S3 on Contact Force Estimation Figure 5(a) shows that image pre-processing for background segmentation is necessary to use the soft finger for force inference. Due to the distinct appearance difference between the finger and grasped object, a reliable image thresholding operation based on color is used. 
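The color-based thresholding step mentioned above could look like the following OpenCV sketch; the HSV bounds are placeholders that would need tuning to the actual finger color, and this is not the authors' implementation.

```python
import cv2
import numpy as np

def color_threshold(image_bgr, lower_hsv=(0, 0, 200), upper_hsv=(180, 60, 255)):
    """Keep only the deformed metamaterial before passing the image to the SVAE."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)   # background removed
```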
The output image from the color-based thresholding filter contains only the deformed metamaterial, which is sent to SVAE for force and torque prediction. Although the contact configuration between grasped object and the soft finger is arbitrarily complex, our perception finger can always predict a composite force and torque loaded at the base of the finger, which we denote as \({}^{B}\!F\in R^{G}\), expressed in reference frame \(B\) (\(O_{B}\)-\(Y_{B}Z_{B}\) in Figure 5(a)). To measure the force parallel to the gripping direction \(F_{est}\in R\), it is necessary to transform the spatial force \({}^{B}\!F\) from the finger base reference frame \(B\) to the world reference frame \(W\) (\(O_{W}\)-\(Y_{W}Z_{W}\) in Figure 5(a)) and project along the gripping direction. Denoting the rotation transformation matrix from frame \(B\) to frame \(W\) as \({}^{W}\!R_{B}\in SO(3)\), the estimated contact force along the \(Y\) axis of the world frame can be approximated as: \[F_{est}=\begin{bmatrix}0&0&0\\ 0&1&0\\ 0&0&0\end{bmatrix}\left[\begin{array}{cc}^{W}\!R_{B}&0_{3\times 3}\end{array} \right]\ ^{B}\!F.\] (S7) **Method S4 on Details of Tactile Force Tracking Controller** Algorithm 1 describes the solution to the closed-loop tactile force tracking control policy \(u=\pi(P_{m},F_{est},F_{ref})\) with measured gripper position \(P_{m}\) and estimated contact force \(F_{est}\) that achieves the desired contact force \(F_{ref}\). Here we give design details of the proposed control policy \(\pi=P_{ref}\). Figure 5(a) gives the approximate relationship between contact force and gripper position. \[F(P)=\left\{\begin{array}{cc}0&0\leq P<P_{c}\\ \Psi(P)&P_{c}\leq P<P_{max}\end{array}\right.,\] (S8) where \(P_{c}\) is the distance for the gripper to move from the non-contact state to the contact state. \(P\) is the gripper position. \(P_{max}\) is the maximum allowed position for the gripper to move. Inside the interval \([0,P_{c}]\), the object is not in contact with the gripper, while in the interval \([P_{c},P_{max}]\), the contact force is monotonically increasing with position \(P\), which indicates that: \[\frac{\partial F}{\partial P}=\Lambda\geq 0,\] (S9) where \(\Lambda\) is contact-state-dependent and related to the stiffness of the soft finger. 
**Proposition 1**.: _Given control gain \(K>\frac{\Lambda}{2}\), any feasible reference contact force \(F_{ref}\) can be achieved within specified force tolerance \(\delta>0\), using proposed Algorithm 1 :_ \[|F_{est}-F_{ref}|\leq\delta.\] (S10) Proof.: For every measured gripper position \(P_{m}\), we can get an estimated contact force from SVAE: \[F_{est}=F(P_{m}).\] (S11) The reference motion generator in Algorithm 1 for condition when \(|F_{ref}-F_{est}|>\delta\), is expressed as following: \[P_{ref} =P_{m}+\Delta_{P}\] (S12) \[=P_{m}+\frac{F_{ref}-F(P_{m})}{K}.\] Due to the continuity of function \(\Psi(P)\) in Equation S8 within corresponding interval and monotonicity of Equation S9, for \(t\in(0,1)\), we can see that: \[|F(P_{ref})-F_{ref}| =|F(P_{m}+\Delta_{P})-F_{ref}|\] (S13) \[=|F(P_{m})+\Delta_{P}\frac{\partial F}{\partial P}|_{P=P_{m}+t \Delta_{P}}-F_{ref}|\] \[=|F(P_{m})-F_{ref}+\frac{\Lambda}{K}(F_{ref}-F(P_{m}))|\] \[=|(1-\frac{\Lambda}{K})(F(P_{m})-F_{ref})|\] \[\leq|(1-\frac{\Lambda}{K})||F(P_{m})-F_{ref}|.\] Given the chosen control gain condition in Proposition 1, we have: \[K>\frac{\Lambda}{2}\Rightarrow|(1-\frac{\Lambda}{K})|<1,\] (S14) then, the Equation S13 becomes: \[|F(P_{ref})-F_{ref}|<|F(P_{m})-F_{ref}|=|F_{est}-F_{ref}|.\] (S15) This indicates that the reference motion command generated by Algorithm 1 will always ensure a minor contact force error, eventually satisfying the specified force tolerance condition. ## Supporting Videos * Movie S1. Amphibian Grasping with Visual-Tactile Soft Finger. * Movie S2. Real-time Force/Torque Prediction. * Movie S3. Object Grasping Success Rates Experiments with/without Contact Feedback. * Movie S4. Contact Force Following Experiments. * Movie S5. Object Shape Adaptation Experiments. * Movie S6. Robot End-effector Reaction to Soft Finger Twist.
2310.12702
Benchmarking Function Hook Latency in Cloud-Native Environments
Researchers and engineers are increasingly adopting cloud-native technologies for application development and performance evaluation. While this has improved the reproducibility of benchmarks in the cloud, the complexity of cloud-native environments makes it difficult to run benchmarks reliably. Cloud-native applications are often instrumented or altered at runtime, by dynamically patching or hooking them, which introduces a significant performance overhead. Our work discusses the benchmarking-related pitfalls of the dominant cloud-native technology, Kubernetes, and how they affect performance measurements of dynamically patched or hooked applications. We present recommendations to mitigate these risks and demonstrate how an improper experimental setup can negatively impact latency measurements.
Mario Kahlhofer, Patrick Kern, Sören Henning, Stefan Rass
2023-10-19T12:54:32Z
http://arxiv.org/abs/2310.12702v1
# Benchmarking Function Hook Latency in Cloud-Native Environments ###### Abstract Researchers and engineers are increasingly adopting cloud-native technologies for application development and performance evaluation. While this has improved the reproducibility of benchmarks in the cloud, the complexity of cloud-native environments makes it difficult to run benchmarks reliably. Cloud-native applications are often instrumented or altered at runtime, by dynamically patching or hooking them, which introduces a significant performance overhead. Our work discusses the benchmarking-related pitfalls of the dominant cloud-native technology, Kubernetes, and how they affect performance measurements of dynamically patched or hooked applications. We present recommendations to mitigate these risks and demonstrate how an improper experimental setup can negatively impact latency measurements. ## 1 Introduction Cloud-native technologies aim to build loosely coupled, resilient, observable, and secure systems [3]. Observability and security are typically achieved by dynamically instrumenting or altering already built applications with _function hooks_ [2]. These are small pieces of code added to an application's functions. In particular, security tools need to dynamically modify, redirect, or block specific execution patterns, which often results in significant performance penalties [7]. Careful benchmarking is required to measure the performance impact of such changes. Besides _empirical standards_ for software benchmarking [9] and _methodological principles_ for performance evaluation in cloud computing [6], we address benchmarking-related pitfalls of cloud-native environments with: 1. Recommendations on how to measure the latency of function hooks in cloud-native environments. 2. A demonstration of an improper experimental setup that makes hypothesis testing harder. ## 2 Cloud-Native Benchmark Suite Cloud environments are frequently used to build complete benchmark suites, as they provide a well-reproducible environment [8]. A typical benchmark suite (Figure 1) consists of a _system under test (SUT)_, e.g., the patched application, a _load generator_ sending requests to that application, and a _monitoring tool_ measuring performance metrics. Latency is often measured directly by the load generator. Figure 1: Typical components of a benchmark suite. In Kubernetes, workloads are organized into _pods_ of one or more _containers_ which share storage and networking resources. Physical or virtual machines that run these pods are called _nodes_. **Recommendation 1** When measuring latency, ensure that the load generator and the SUT are in separate containers within the same pod. Otherwise, additional network hops may distort the measurements. **Recommendation 2** If components of the benchmark suite need to be in separate pods, ensure that both pods are deployed on the same physical node, e.g., by specifying _node restrictions_ in Kubernetes. **Recommendation 3** Weigh the benefits of a _service mesh_ against its additional network overhead. Service meshes wrap each application behind a reverse proxy and make it easier to monitor and control inbound and outbound network traffic [10]. **Recommendation 4** Generally avoid benchmarking in _multi-tenancy clusters_, i.e., clusters that are shared across teams, either physically or virtually. ## 3 Function Hook Granularity We distinguish four layers [2, 12] where function hooks or patches can be injected (Figure 2): * **Application-level** hooks use methods implemented by the application's developers, e.g., a plugin system.
Since such systems are not widely available, this layer cannot be used for general-purpose hooks on already built applications. * **Runtime-level** hooks use native capabilities of language runtimes to modify applications, e.g., the JVM Tool Interface (JVM TI), the .NET Profiling API, or Node.js module preloading. * **Library-level** hooks override symbols in shared libraries, e.g., by the "LD_PRELOAD trick" [2]. Footnote 1: [https://man7.org/linux/man-pages/man2/read.2.html](https://man7.org/linux/man-pages/man2/read.2.html) * **Kernel-level** hooks use native capabilities of the operating system to modify application behavior, e.g., kernel modules or eBPF programs. **Recommendation 5** The monitoring tool should be placed as close as possible to the layer where the hook is injected. Testing farther away pollutes measurements with noise from other layers (Section 4.2). To achieve optimal results, hooking and monitoring should be done at the "same layer", i.e., by embedding monitoring functionality into the hook itself, e.g., by recording timestamps before and after the hook is executed, directly in the hook's code. Moving the monitoring tool further away from the hook is justifiable when one wants to more accurately represent real-world behavior instead. **Recommendation 6** Describe if the benchmark measures the specific hooking overhead in isolation (_micro benchmark_), or rather represents a real-world application with a hook injected into it (_macro benchmark_) [1]. Benchmarking both cases and discussing the differences is recommended. ## 4 Demonstration We first build a Java application that simply responds with "Hello World" to any HTTP request. We then implement a _library-level LD_PRELOAD hook_ that blocks all requests that contain specific keywords (Listing 1). The hook changes the read system call by overriding the corresponding symbol in the C standard library. We then measure our application's network performance with and without the hook. Footnote 2: [https://man7.org/linux/man-pages/man2/read.2.html](https://man7.org/linux/man-pages/man2/read.2.html) Footnote 3: The full source code can be found at [https://github.com/dynatrace-research/function-hook-latency-benchmarking](https://github.com/dynatrace-research/function-hook-latency-benchmarking)
```
ssize_t read(int fd, void *buf, size_t count) {
    read_t read_ptr = (read_t)dlsym(RTLD_NEXT, "read");
    ssize_t bytes_read = read_ptr(fd, buf, count);
    if (is_http_socket(fd)) {
        if (contains_keyword(buf, count)) {
            // trace or block call
        }
    }
    return bytes_read;
}
```
Listing 1: A hook on the read symbol of glibc Low-level function hooks carry the risk that the hooked function is used by high-level functionality for a purpose other than the one originally intended. For example, the read call that we override here is also used to read regular files, not just network packets. **Recommendation 7** Therefore, describe how the hooked function is typically used by applications and ensure that the benchmarks reflect its proper use, e.g., with _synthetic micro benchmarks_ [1], but also real-world behavior. Suitable cloud-native, real-world reference applications are TeaStore [4], DeathStarBench [5], or Unguard [11] for security use cases. ### Experimental Setup Our experiment consists of two containers: Locust (a performance testing tool) as the load generator with embedded monitoring, and the SUT.
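As an illustration of this setup, a minimal Locust load generator for the "Hello World" SUT could look as follows; the host address, port, and timing values are assumptions made for illustration, not the configuration used in the experiment:

```
# locustfile.py -- a minimal sketch of the load-generator side of the setup
# described above; addresses and timings are illustrative assumptions.
from locust import HttpUser, task, between

class HelloWorldUser(HttpUser):
    # When load generator and SUT share a pod (Recommendation 1), the SUT is
    # reachable via localhost; the port is an assumption.
    host = "http://localhost:8080"
    wait_time = between(0.001, 0.005)  # small think time between requests

    @task
    def hello(self):
        # Locust records the response time of every request, which serves as the
        # embedded RTT monitoring mentioned in the text.
        self.client.get("/")
```

Such a file can be run headless, e.g. `locust -f locustfile.py --headless -u 1 -r 1 --run-time 5m --csv results`, with Locust's per-request response times providing the latency measurements.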
With containers, we not only represent cloud-native paradigms, but also isolate concerns between the _benchmark owner_ and the _SUT owner_. We compare four conditions: 1. **In Docker** (a popular container runtime): Both containers run on a single server, communicating through the host network. 2. **In Kind** (a tool for running Kubernetes using Docker containers): Both containers run inside a single pod on a local, single-node Kind cluster. The remaining two conditions use the AWS EKS service (a popular enterprise-grade cloud provider). 3. **In EKS pod**: Both containers run inside a single pod in a managed, single-node AWS EKS cluster. 4. **Across EKS nodes**: Both containers run in separate pods, each pod on a different node, in a managed AWS EKS cluster with two nodes. Docker and Kind are running on a 24-core (Intel Xeon E5-2680 v3) Ubuntu 22.04 server with 64 GB memory. EKS nodes are t3.medium EC2 instances (2 vCPUs, Intel Xeon Platinum 8000, 4 GB memory). **Recommendation 8** Ensure that the servers do not hit any resource limits during the experiment to avoid performance degradations due to resource contention. We measure the round-trip time (RTT) of \(50,000\) HTTP request-response interchanges between Locust and the SUT. We empirically observed that the RTT stabilizes under all four conditions after \(\sim 4,000\) warm-up requests (Figure 3). ### Hypothesis Testing and Results Figure 4 shows the RTT distribution per condition, with and without the hook, after warm-up requests. Our function hook must introduce a performance overhead. To test the null hypothesis that the mean RTT is the same with and without the hook, we use an independent two-sample \(t\)-test, assuming equal but unknown variances and equal sample sizes. Let \(\bar{x}_{1}\) and \(\bar{x}_{2}\) be the sample means, \(n\) be the sample size per condition, and \(s_{p}\) the _pooled standard deviation_; then the test statistic is given by: \(t=\left(\bar{x}_{1}-\bar{x}_{2}\right)/\left(s_{p}\sqrt{2/n}\right)\). With \(n=46,000\) samples left per condition after removing warm-up requests, the hooking overhead is significant (\(\alpha=0.05\)) in three of four conditions (all \(p<0.0001\)). The _across EKS nodes_ condition is not significant (\(p=0.2713\)) since we lose a lot of statistical power in EKS due to the higher RTT variance. As expected and in line with related work [6], measurements taken in EKS generally exhibit a much higher variance than measurements on our own server, due to diverse factors that can hardly be controlled for. We expected the _Docker_ condition, being the most minimal setup, to show the lowest variance (\(s_{p}=0.50\)), and the _Kind_ condition, which just adds a few Kubernetes components, to show the second-lowest variance (\(s_{p}=0.84\)). We also expected the network latency _across EKS nodes_ to show the highest variance (\(s_{p}=1.95\); \(3.9\) times higher than in _Docker_). Packets in that condition traverse the origin container, pod, and node, and some intermediate network that interconnects nodes, until they reach the target node, pod, and container. In a multi-cluster environment, a communication path with that many hops is common. Kubernetes _services_, which are widely used abstractions on top of pods, would add even more hops to that route. Perhaps surprising is that the variance of the _EKS pod_ condition was still relatively high (\(s_{p}=1.64\)).
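For reference, the pooled two-sample \(t\)-test described above can be reproduced with a few lines of Python; the normally distributed samples below are synthetic placeholders for the measured RTTs, so the resulting numbers are purely illustrative:

```
# Sketch of the hypothesis test of Section 4.2: an independent two-sample t-test
# with pooled variance. The RTT samples are synthetic stand-ins for measured data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 46_000
rtt_without_hook = rng.normal(loc=2.00, scale=0.5, size=n)  # ms, synthetic
rtt_with_hook    = rng.normal(loc=2.05, scale=0.5, size=n)  # ms, synthetic

# Pooled standard deviation and test statistic t = (x1 - x2) / (s_p * sqrt(2/n))
s_p = np.sqrt((rtt_with_hook.var(ddof=1) + rtt_without_hook.var(ddof=1)) / 2)
t_manual = (rtt_with_hook.mean() - rtt_without_hook.mean()) / (s_p * np.sqrt(2 / n))

# scipy's equal-variance t-test gives the same statistic plus a p-value
t_scipy, p_value = stats.ttest_ind(rtt_with_hook, rtt_without_hook, equal_var=True)
print(f"t = {t_manual:.3f} (scipy: {t_scipy:.3f}), p = {p_value:.4g}, s_p = {s_p:.3f}")
```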
Keeping network communication within the same pod decreased the variance, but it seems that the background noise of our EKS cluster is still relatively high and affecting inter-pod traffic. Recommendation 9 As shown, measurements in cloud-native environments tend to have a higher variance than in local environments [6]. To regain statistical power, the sample size must be increased. Recommendation 10 Conducting experiments in differently configured environments is a general principle [6, P2] that is especially relevant for cloud-native environments. Different cloud providers, service meshes, or network setups help increase diversity. ## 5 Conclusion This work provides 10 practical recommendations for researchers and engineers who benchmark function hook latency in cloud-native environments, but want to reduce the measurement bias introduced by these environments. We have shown that function hook latency measurements can be easily contaminated by noise, without doing anything obviously wrong. We hope to raise awareness while providing practical guidance for similar latency-based benchmarks, as some of our recommendations are also broadly applicable.
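In the spirit of Recommendation 9, the sample size needed to detect a given hooking overhead at a target power can be estimated up front with a standard power analysis; the effect size below (overhead divided by pooled standard deviation) and the target power are illustrative assumptions, not values from this study:

```
# Sketch of an a-priori power analysis for the two-sample t-test; all numbers are
# illustrative assumptions (e.g. a 0.02 ms overhead against s_p = 1.95 ms).
from statsmodels.stats.power import TTestIndPower

effect_size = 0.02 / 1.95   # Cohen's d = mean difference / pooled standard deviation
n_required = TTestIndPower().solve_power(effect_size=effect_size,
                                         alpha=0.05, power=0.8,
                                         alternative="two-sided")
print(f"samples per condition required: {n_required:,.0f}")
```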
2308.07623
On mesh coarsening procedures for the virtual element method
In the context of adaptive remeshing, the virtual element method provides significant advantages over the finite element method. The attractive features of the virtual element method, such as the permission of arbitrary element geometries, and the seamless permission of 'hanging' nodes, have inspired many works concerning error estimation and adaptivity. However, these works have primarily focused on adaptive refinement techniques with little attention paid to adaptive coarsening (i.e. de-refinement) techniques that are required for the development of fully adaptive remeshing procedures. In this work novel indicators are proposed for the identification of patches/clusters of elements to be coarsened, along with a novel procedure to perform the coarsening. The indicators are computed over prospective patches of elements rather than on individual elements to identify the most suitable combinations of elements to coarsen. The coarsening procedure is robust and suitable for meshes of structured and unstructured/Voronoi elements. Numerical results demonstrate the high degree of efficacy of the proposed coarsening procedures and sensible mesh evolution during the coarsening process. It is demonstrated that critical mesh geometries, such as non-convex corners and holes, are preserved during coarsening, and that meshes remain fine in regions of interest to engineers, such as near singularities.
Daniel van Huyssteen, Felipe Lopez Rivarola, Guillermo Etse, Paul Steinmann
2023-08-15T08:18:26Z
http://arxiv.org/abs/2308.07623v1
# On mesh coarsening procedures for the virtual element method ###### Abstract In the context of adaptive remeshing, the virtual element method provides significant advantages over the finite element method. The attractive features of the virtual element method, such as the permission of arbitrary element geometries, and the seamless permission of 'hanging' nodes, have inspired many works concerning error estimation and adaptivity. However, these works have primarily focused on adaptive refinement techniques with little attention paid to adaptive coarsening (i.e. defenfinement) techniques that are required for the development of fully adaptive remeshing procedures. In this work novel indicators are proposed for the identification of patches/clusters of elements to be coarsened, along with a novel procedure to perform the coarsening. The indicators are computed over prospective patches of elements rather than on individual elements to identify the most suitable combinations of elements to coarsen. The coarsening procedure is robust and suitable for meshes of structured and unstructured/Voronoi elements. Numerical results demonstrate the high degree of efficacy of the proposed coarsening procedures and sensible mesh evolution during the coarsening process. It is demonstrated that critical mesh geometries, such as non-convex corners and holes, are preserved during coarsening, and that meshes remain fine in regions of interest to engineers, such as near singularities. Virtual element method Mesh adaptivity Mesh coarsening De-refinement Voronoi meshes Elasticity ## 1 Introduction Adaptive remeshing techniques are a critical tool in engineering analysis that facilitate automatic and localized manipulation of a mesh to improve its approximation properties for a given problem. In contrast to conventional uniform remeshing, adaptive remeshing allows degrees of freedom to concentrate in, and be removed from, the most and least critical regions of a problem domain respectively. Thus, improving the accuracy of numerical simulation methods while reducing their computational load. A typical adaptive procedure has the well-known 'Solve \(\rightarrow\) Estimate \(\rightarrow\) Mark \(\rightarrow\) Remesh' structure. These four main steps involve; generating an approximate numerical solution to a problem using some pre-existing mesh, using the approximate solution to estimate the error, marking/flagging elements to be refined or coarsened/de-refined based on the estimated error, and finally creating an updated mesh. These steps are then performed iteratively until some user-defined termination criteria is met. In the context of the finite element method (FEM) adaptive remeshing techniques are already well-established. There exists a wide range of approaches to _a-posteriori_ error estimation and a variety of tools/packages for the creation of updated meshes. Some of the most widely used approaches to _a-posteriori_ error estimation include residual-based [1, 2] and recovery-based error estimators [3, 4, 5, 6]. In general, there is greater focus on adaptive refinement techniques over adaptive coarsening/de-refinement techniques. However, there exists a range of open-source software libraries capable of performing both adaptive refinement and coarsening of finite element meshes [7, 8]. Performing localized adaptation of finite element meshes is non-trivial as significant manipulation of not only the elements being adapted but also of the surrounding elements is required to preserve the method's conformity. 
In general, coarsening of finite element meshes is more complex than refinement. As such, most coarsening processes performed using finite elements only reverse previously performed refinement to return a mesh to its initially coarser state, see for example [9]. The complexities involved in coarsening finite element meshes mean that true coarsening procedures are rarely used in practical applications. The introduction of the virtual element method (VEM) gave rise to many new opportunities in the context of adaptive remeshing. The VEM is an extension of the FEM that permits arbitrary polygonal and polyhedral element geometries in two- and three-dimensions respectively [10, 11]. A feature of the VEM of particular interest in the context of adaptive remeshing is the permission of arbitrarily many nodes along an element's edge. That is, nodes that would be considered 'hanging' in a finite element context are trivially incorporated into the virtual element formulation [12, 13]. The geometric robustness of the VEM has been demonstrated with the method exhibiting optimal convergence behaviour in cases of challenging, including strongly non-convex, element geometries [14, 15, 16, 17]. Additionally, in cases of distorted, and possibly stretched, element geometries that could arise during adaptive remeshing (particularly during anisotropic remeshing) the VEM stabilization term can be easily tuned to improve the accuracy of the method [18, 19]. Furthermore, the robustness of the VEM under challenging numerical conditions, such as near-incompressibility and near inextensibility, is increasingly well reported [20, 21, 22, 23, 16, 17]. Due to its geometric flexibility, as well as its geometric and numerical robustness, the VEM is an obvious candidate for problems involving adaptive remeshing. However, in contrast to the FEM, the VEM basis/shape functions are not explicitly defined over an element domain. Thus, the typical error estimators used in a FEM context usually cannot be trivially applied in a VEM setting. Additionally, the freedom of element geometry permitted by the VEM requires the development of more versatile mesh refinement and coarsening techniques than those used in FEM applications. Adaptive refinement is currently a very popular topic in the VEM context with a rapidly growing literature focusing on residual-based [24, 25, 26, 27] and recovery-based [28, 29, 30, 31]_a-posteriori_ error estimation. Furthermore, several approaches for localized refinement of the unstructured polygonal element geometries permitted by the VEM have been presented [31, 32, 33]. A comparatively unexplored topic is the development of techniques for adaptive coarsening (i.e. de-refinement) of virtual element meshes. Since a fully adaptive remeshing procedure comprises both refinement and coarsening capabilities, the development of coarsening techniques for the VEM is of great significance. Furthermore, since the VEM permits arbitrarily many nodes along an element's edge, preserving the conformity of the method during coarsening is trivial. Thus, the VEM is particularly well-suited to problems involving truly adaptive coarsening processes. To the best of the authors' knowledge, to date only one contribution exists focusing on adaptive coarsening in a virtual element context [34]. The work [34] uses the element-level displacement gradient error to identify elements to group. Specifically, neighbouring elements whose error estimators fall below a certain threshold are collected into groups. 
The mesh is then coarsened by combining the groups of elements into larger individual elements using a simple edge straightening procedure. The results presented are promising and of great interest. Specifically, the method yields coarser meshes in less critical portions of a problem domain while retaining finer discretizations near singularities. Additionally, the coarsening approach improves the efficiency of the VEM solution. That is, the amount of error per degree of freedom exhibited by the coarsened meshes is lower than that of uniform meshes. The importance of developing coarsening procedures for the VEM to be used in fully adaptive remeshing procedures, and the promising results of the coarsening approach presented in [34], strongly motivate further development and investigation into adaptive coarsening procedures for the VEM. In addition to its importance for fully adaptive remeshing procedures there are numerous other benefits of, even standalone, adaptive coarsening procedures. For example, a simulation using an adaptively coarsened mesh should have a similar level of accuracy to, and should execute significantly faster than, a uniform mesh simulation. This improved efficiency can be exploited to speed up problems that require multiple solutions or solution steps such as: sensitivity analysis, Monte Carlo simulations, solution procedures in non-linear analysis, dynamic problems, and fracture analysis. The improved efficiency offered by adaptively coarsened meshes can, thus, reduce both the computational time and energy consumption of the aforementioned problem types. In this work a novel patch-based approach to computing coarsening indicators for meshes of virtual elements is presented. The indicators are computed over prospective patches of elements to coarsen in contrast to the conventional element-level indicators used in adaptive refinement procedures. The patch-based approach is motivated by seeking to identify the most suitable combinations of elements to coarsen, and can even be used to approximately predict the element-level error after coarsening. This prediction could be used to determine if the coarsening of a particular patch might result in unsatisfactorily high local or global error after coarsening and should not be performed. Two approaches to the computation of the coarsening indicator over a patch are proposed and are based on the displacement field and approximate energy error norm computed using a recovery procedure. Furthermore, a novel approach to determining the geometry of the coarsened element groupings is presented. Specifically, the updated/coarsened element geometry is created by constructing a convex hull around an element patch. To this end, a novel edge straightening procedure is presented and used in creating the geometry of the convex hull. The robustness of the proposed edge straightening procedure is demonstrated through its application to complex groups of edges. The performance of the proposed indicators and coarsening procedures is investigated through a range of benchmark problems of varying complexity. For each problem the mesh evolution during coarsening is analysed along with the behaviour in the \(\mathcal{H}^{1}\) error norm for structured and unstructured/Voronoi meshes with a range of initial uniform mesh densities. Finally, to facilitate a thorough investigation of the proposed coarsening procedures, and critically for brevity, it is chosen in this work to analyse only adaptive coarsening procedures.
The analysis of fully adaptive remeshing procedures thus represents a future contribution. The structure of the rest of this work is as follows. The governing equations of linear elasticity are set out in Section 2. This is followed in Section 3 by a description of the first-order virtual element method. The procedures used to generate and coarsen meshes are presented in Section 4. This is followed, in Section 5, by a description of the various mesh coarsening indicators along with the procedure used to identify patches of elements qualifying for coarsening. Section 6 comprises a set of numerical results through which the performance of the various coarsening procedures is evaluated. Finally, the work concludes in Section 7 with a discussion of the results. ## 2 Governing equations of linear elasticity Consider an arbitrary elastic body occupying a plane, bounded, domain \(\Omega\subset\mathbb{R}^{2}\) subject to a traction \(\bar{\boldsymbol{t}}\) and body force \(\boldsymbol{b}\) (see Figure 1). The boundary \(\partial\Omega\) has an outward facing normal denoted by \(\boldsymbol{n}\) and comprises a non-trivial Dirichlet part \(\Gamma_{D}\) and a Neumann part \(\Gamma_{N}\) such that \(\Gamma_{D}\cap\Gamma_{N}=\emptyset\) and \(\overline{\Gamma_{D}\cup\Gamma_{N}}=\partial\Omega\). In this work small displacements are assumed and the strain-displacement relation is given by \[\boldsymbol{\varepsilon}\left(\boldsymbol{u}\right)=\frac{1}{2}\left[\nabla \,\boldsymbol{u}+\left[\nabla\,\boldsymbol{u}\right]^{T}\right]\,. \tag{1}\] Figure 1: Arbitrary elastic body subject to body force and traction. Here the displacement is denoted by \(\mathbf{u}\), \(\mathbf{\varepsilon}\) is the symmetric infinitesimal strain tensor and \(\nabla\left(\bullet\right)=\frac{\partial\left(\bullet\right)_{i}}{\partial\,x_{ j}}\,\mathbf{e}_{i}\otimes\mathbf{e}_{j}\) is the gradient of a vector quantity. Additionally, linear elasticity is assumed and the stress-strain relation is given by \[\mathbf{\sigma}=\mathbb{C}:\mathbf{\varepsilon}\,. \tag{2}\] Here, \(\mathbf{\sigma}\) is the Cauchy stress tensor and \(\mathbb{C}\) is a fourth-order constitutive tensor. For a linear elastic and isotropic material (2) is given by \[\mathbf{\sigma}=\lambda\operatorname{tr}\left(\mathbf{\varepsilon}\right)\mathbf{I}+2\mu \,\mathbf{\varepsilon}\,, \tag{3}\] where \(\operatorname{tr}\left(\bullet\right)\) denotes the trace, \(\mathbf{I}\) is the second-order identity tensor, and \(\lambda\) and \(\mu\) are the well-known Lamé parameters. For equilibrium it is required that \[\operatorname{div}\,\mathbf{\sigma}+\mathbf{b}=0\,, \tag{4}\] where \(\operatorname{div}\left(\bullet\right)=\frac{\partial\left(\bullet\right)_{ ij}}{\partial\,x_{j}}\,\mathbf{e}_{i}\) is the divergence of a tensor quantity. The Dirichlet and Neumann boundary conditions are given by \[\mathbf{u}=\mathbf{g}\quad\text{on}\,\Gamma_{D}\,,\text{ and} \tag{5}\] \[\mathbf{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}}\quad\text{on}\,\Gamma_{N}\,, \tag{6}\] respectively, with \(\mathbf{g}\) and \(\bar{\mathbf{t}}\) denoting prescribed displacements and tractions respectively. Equations (3)-(6), together with the displacement-strain relationship (1), constitute the boundary-value problem for a linear elastic isotropic body. ### Weak form The space of square-integrable functions on \(\Omega\) is hereinafter denoted by \(\mathcal{L}^{2}\left(\Omega\right)\).
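As a concrete illustration of the strong-form relations (1) and (3) above, the following numpy sketch evaluates the strain and isotropic stress for an arbitrary sample displacement gradient; the standard three-dimensional expression for \(\lambda\) is assumed here, since the later examples specify only \(E\), \(\nu\) and \(\mu\):

```
# Small numpy sketch of Eqs. (1)-(3): infinitesimal strain from a displacement
# gradient and the isotropic linear-elastic stress. The sample gradient and the
# Lame-parameter relation for lambda are illustrative assumptions.
import numpy as np

def strain(grad_u):
    """Symmetric infinitesimal strain, Eq. (1)."""
    return 0.5 * (grad_u + grad_u.T)

def isotropic_stress(eps, lam, mu):
    """Cauchy stress for an isotropic material, Eq. (3)."""
    return lam * np.trace(eps) * np.eye(2) + 2.0 * mu * eps

E, nu = 1.0, 0.3                          # values used in Section 6
lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # assumed standard 3D relation
mu = E / (2 * (1 + nu))

grad_u = np.array([[1.0e-3, 2.0e-4],
                   [5.0e-4, -3.0e-4]])    # arbitrary sample displacement gradient
sigma = isotropic_stress(strain(grad_u), lam, mu)
print(sigma)
```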
The Sobolev space of functions that, together with their first derivatives, are square-integrable on \(\Omega\) is hereinafter denoted by \(\mathcal{H}^{1}\left(\Omega\right)\). Additionally, the function space \(\mathcal{V}\) is introduced and defined such that \[\mathcal{V}=\left[\mathcal{H}_{D}^{1}\left(\Omega\right)\right]^{d}=\left\{ \mathbf{v}\,|\,v_{i}\in\mathcal{H}^{1}\left(\Omega\right)\,,\,\mathbf{v}=\mathbf{0}\text{ on }\Gamma_{D}\right\} \tag{7}\] where \(d=2\) is the dimension. Furthermore, the function \(\mathbf{u}_{g}\in\left[\mathcal{H}^{1}\left(\Omega\right)\right]^{d}\) is introduced satisfying (5) such that \(\mathbf{u}_{g}|_{\Gamma_{D}}=\mathbf{g}\). The bilinear form \(a\left(\cdot,\cdot\right)\), where \(a:\left[\mathcal{H}^{1}\left(\Omega\right)\right]^{d}\times\left[\mathcal{H}^ {1}\left(\Omega\right)\right]^{d}\to\mathbb{R}\), and the linear functional \(\ell\left(\cdot\right)\), where \(\ell:\left[\mathcal{H}^{1}\left(\Omega\right)\right]^{d}\to\mathbb{R}\), are defined respectively by \[a\left(\mathbf{u},\,\mathbf{v}\right)=\int_{\Omega}\mathbf{\sigma}\left(\mathbf{u}\right):\bm {\varepsilon}\left(\mathbf{v}\right)\,dx\,, \tag{8}\] and \[\ell\left(\mathbf{v}\right)=\int_{\Omega}\mathbf{b}\cdot\mathbf{v}\,dx+\int_{\Gamma_{N}} \bar{\mathbf{t}}\cdot\mathbf{v}\,ds-a\left(\mathbf{u}_{g},\,\mathbf{v}\right)\,. \tag{9}\] The weak form of the problem is then: given \(\mathbf{b}\in\left[\mathcal{L}^{2}\left(\Omega\right)\right]^{d}\) and \(\bar{\mathbf{t}}\in\left[\mathcal{L}^{2}\left(\Gamma_{N}\right)\right]^{d}\), find \(\mathbf{U}\in\left[\mathcal{H}^{1}\left(\Omega\right)\right]^{d}\) such that \[\mathbf{U}=\mathbf{u}+\mathbf{u}_{g}\,,\quad\mathbf{u}\in\mathcal{V}\,, \tag{10}\] and \[a\left(\mathbf{u},\,\mathbf{v}\right)=\ell\left(\mathbf{v}\right)\,,\quad\forall\mathbf{v}\in \mathcal{V}\,. \tag{11}\] ## 3 The virtual element method The domain \(\Omega\) is partitioned into a mesh of non-overlapping arbitrary polygonal elements1\(E\) with \(\overline{\cup E}=\overline{\Omega}\). Here \(E\) denotes the element domain and \(\partial E\) its boundary, with \(\overline{\left(\bullet\right)}\) denoting the closure of a set. An example of a typical first-order element is depicted in Figure 2 with edge \(e_{i}\) connecting vertices \(V_{i}\) and \(V_{i+1}\). Here \(i=1,\ldots,n_{\text{v}}\) with \(n_{\text{v}}\) denoting the total number of element vertices. A conforming approximation of order \(k\) is constructed in a space \(\mathcal{V}^{h}\subset\mathcal{V}\) where \(\mathcal{V}^{h}\) is built-up element-wise and comprises vector valued functions \(\boldsymbol{v}_{h}\). The functions \(\boldsymbol{v}_{h}\) are those that are \(\mathcal{C}^{0}\) continuous on the domain \(\Omega\), are polynomials of degree \(\leq\,k\) on element edges, and whose strain gradient divergence is a polynomial of degree \(\leq\,k-2\) on an element (see [35]). For the most general case of an approximation of arbitrary order \(k\) the space \(\mathcal{V}^{h}|_{E}\) is defined as \[\mathcal{V}^{h}|_{E}=\left\{\boldsymbol{v}_{h}\in\mathcal{V}\,|\,\boldsymbol{ v}_{h}\in\left[\mathcal{C}^{0}(E)\right]^{2}\,,\,\nabla^{2}\,\boldsymbol{v}_{h} \in\mathcal{P}_{k-2}\text{ on }E\,,\,\boldsymbol{v}_{h}|_{e}\in\mathcal{P}_{k}(e) \right\}\,. \tag{12}\] Here \(\mathcal{P}_{k}(X)\) is the space of polynomials of degree \(\leq\,k\) on the set \(X\subset\mathbb{R}^{d}\) with \(d=1,\,2\) and \(\nabla^{2}=\nabla\cdot\nabla\) is the Laplacian operator. In this work a first-order, i.e. 
\(k=1\), approximation is considered, thus (12) simplifies to \[\mathcal{V}^{h}|_{E}=\left\{\boldsymbol{v}_{h}\in\mathcal{V}\,|\,\boldsymbol{ v}_{h}\in\left[\mathcal{C}^{0}(E)\right]^{2}\,,\,\nabla^{2}\,\boldsymbol{v}_{h}= \boldsymbol{0}\text{ on }E\,,\,\boldsymbol{v}_{h}|_{e}\in\mathcal{P}_{1}(e) \right\}\,. \tag{13}\] All computations will be performed on element edges and it is convenient to write, for element \(E\), \[\boldsymbol{v}_{h}|_{\partial E}=\boldsymbol{N}\cdot\boldsymbol{d}^{E}\,. \tag{14}\] Here, \(\boldsymbol{N}\) is a matrix of standard linear Lagrangian basis functions and \(\boldsymbol{d}^{E}\) is a \(2n_{\nu}\times 1\) vector of the degrees of freedom associated with \(E\). The virtual basis functions are not known, nor required to be known on \(E\); their traces, however, are known and are simple Lagrangian functions. The virtual element projection for a first-order formulation \(\Pi\,:\,\mathcal{V}^{h}|_{E}\to\mathcal{P}_{0}(E)\) is required to satisfy \[\int_{E}\Pi\,\boldsymbol{v}_{h}\cdot\boldsymbol{\varepsilon}\left(\boldsymbol {p}\right)\,dx=\int_{E}\boldsymbol{\varepsilon}\left(\boldsymbol{v}_{h}\right) \cdot\boldsymbol{\varepsilon}\left(\boldsymbol{p}\right)\,dx\quad\forall \boldsymbol{p}\in\mathcal{P}_{1}\,, \tag{15}\] where \(\Pi\,\boldsymbol{v}_{h}\) represents the \(\mathcal{L}^{2}\) projection of the symmetric gradient of \(\boldsymbol{v}_{h}\) onto constants [35]. Since the projection is constant at element-level, after applying integration by parts to (15), and considering (14), the components of the projection can be computed as \[\left(\Pi\,\boldsymbol{v}_{h}\right)_{ij}=\frac{1}{2}\frac{1}{|E|}\sum_{e\in \partial E}\int_{e}\left[N_{iA}\,d_{A}^{E}\,n_{j}+N_{jA}\,d_{A}^{E}\,n_{i} \right]ds\,, \tag{16}\] where summation is implied over repeated indices. The virtual element approximation of the bilinear form (8) is constructed by writing \[a^{E}\left(\boldsymbol{u},\,\boldsymbol{v}\right): =a\left(\boldsymbol{u},\,\boldsymbol{v}\right)|_{E}=\int_{E} \boldsymbol{\varepsilon}\left(\boldsymbol{v}_{h}\right):\left[\mathbb{C}: \boldsymbol{\varepsilon}\left(\boldsymbol{u}_{h}\right)\right]dx\,, \tag{17}\] where \(a^{E}\left(\cdot,\cdot\right)\) is the contribution of element \(E\) to the bilinear form \(a\left(\cdot,\cdot\right)\). Consideration of (16) allows (17) to be written as (see [23]) \[a^{E}\left(\boldsymbol{u}_{h},\,\boldsymbol{v}_{h}\right)=\underbrace{\int_{E }\Pi\,\boldsymbol{v}_{h}:\left[\mathbb{C}:\Pi\,\boldsymbol{u}_{h}\right]dx}_{ \text{Consistency term}}+\underbrace{\int_{E}\left[\boldsymbol{ \varepsilon}\left(\boldsymbol{v}_{h}\right):\left[\mathbb{C}:\boldsymbol{ \varepsilon}\left(\boldsymbol{u}_{h}\right)\right]-\Pi\,\boldsymbol{v}_{h}: \left[\mathbb{C}:\Pi\,\boldsymbol{u}_{h}\right]\right]dx}_{\text{Stabilization term}}\,, \tag{18}\] where the remainder term is discretized by means of a stabilization. Figure 2: Sample virtual element. ### The consistency term The projection (16), and thus the consistency term, can be computed exactly yielding \[a_{\mathrm{c}}^{E}\left(\mathbf{u}_{h},\,\mathbf{v}_{h}\right)\,=\,\int_{E}\Pi\,\mathbf{v}_{h }:[\mathbb{C}:\Pi\,\mathbf{u}_{h}]\,dx\,=\,\widehat{\mathbf{d}}^{E}\cdot\left[\mathbf{K}_{ \mathrm{c}}^{E}\cdot\mathbf{d}^{E}\right]\,. 
\tag{19}\] Here \(\mathbf{K}_{\mathrm{c}}^{E}\) is the consistency part of the stiffness matrix of element \(E\) with \(\widehat{\mathbf{d}}^{E}\) and \(\mathbf{d}^{E}\) the degrees of freedom of \(\mathbf{v}_{h}\) and \(\mathbf{u}_{h}\) respectively that are associated with element \(E\). ### The stabilization term The remainder term cannot be computed exactly and is approximated by means of a discrete stabilization term [36, 37]. The approximation employed in this work is motivated by seeking to approximate the difference between the element degrees of freedom \(\mathbf{d}^{E}\) and the nodal values of a linear function that is closest to \(\mathbf{d}^{E}\) in some way (see [35, 23]). The nodal values of the linear function are given by \[\widetilde{\mathbf{d}}=\mathbf{\mathcal{D}}\cdot\mathbf{s}\,. \tag{20}\] Here \(\mathbf{s}\) is a vector of the degrees of freedom of the linear function and \(\mathbf{\mathcal{D}}\) is a matrix relating \(\widetilde{\mathbf{d}}\) to \(\mathbf{s}\) with respect to a scaled monomial basis. For the full expression of \(\mathbf{\mathcal{D}}\) see [35, 23]. After some manipulation (see, again, [23]) the stabilization term of the bilinear form can be approximated as \[a_{\text{sab}}^{E}\left(\mathbf{u}_{h},\,\mathbf{v}_{h}\right)\,=\,\int_{E}\left[ \mathbf{\varepsilon}\left(\mathbf{v}_{h}\right):[\mathbb{C}:\mathbf{\varepsilon}\left( \mathbf{u}_{h}\right)]-\Pi\,\mathbf{v}_{h}:[\mathbb{C}:\Pi\,\mathbf{u}_{h}]\right]dx\, \approx\,\widehat{\mathbf{d}}^{E}\cdot\mathbf{K}_{\mathrm{s}}^{E}\cdot\mathbf{d}^{E}\,, \tag{21}\] where \(\mathbf{K}_{\mathrm{s}}^{E}\) is the stabilization part of the stiffness matrix of element \(E\) and is defined as \[\mathbf{K}_{\mathrm{s}}^{E}=\mu\left[\mathbf{I}-\mathbf{\mathcal{D}}\cdot\left[\mathbf{ \mathcal{D}}^{T}\cdot\mathbf{\mathcal{D}}\right]^{-1}\cdot\mathbf{\mathcal{D}}^{T} \right]\,. \tag{22}\] The total element stiffness matrix \(\mathbf{K}^{E}\) is then computed as the sum of the consistency and stabilization matrices. ## 4 Mesh generation and mesh coarsening In this section the procedures used to generate meshes and coarsen patches of elements are described. ### Mesh generation The mesh generation procedure used in this work is identical to that described in [32]. All meshes are created by Voronoi tessellation of a set of seed points. Seed points will be generated in both structured and unstructured sets to create structured and unstructured meshes respectively. In the case of structured meshes seeds points are placed to form a structured grid, while in the case of unstructured/Voronoi meshes seeds are placed arbitrarily within the problem domain. Hereinafter the terms 'unstructured' and 'Voronoi' meshes will be used interchangeably to refer to meshes created from arbitrarily placed seed points. An initial Voronoi tessellation of the seed points is created using PolyMesher [38]. Then, a smoothing algorithm in PolyMesher is used to iteratively modify the locations of the seed points to create a mesh in which all elements have approximately equal areas. The mesh generation procedure is illustrated in Figure 3 where the top and bottom rows depict the generation of structured and unstructured/Voronoi meshes respectively. ### Mesh coarsening The patches of elements qualifying for coarsening are identified using the procedure described in the next section. Once a patch of elements has been marked for coarsening, the coarsening process is performed by grouping the elements into a single larger element formed by a convex hull. 
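A minimal sketch of this grouping step, under the assumption of a simple node/element array representation of the mesh, is given below; it builds the convex hull of a marked patch and absorbs any element whose centroid falls inside it, iterating until no further elements are absorbed (the detailed procedure is described in Section 4.2.1 below):

```
# Sketch of the convex-hull grouping step: nodes is an (N, 2) array of coordinates,
# elements is a list of node-index lists, and marked is an iterable of element ids.
# These data structures are illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.spatial import ConvexHull

def points_in_hull(points, hull, tol=1e-12):
    """Boolean mask: which points lie inside (or on) the convex hull."""
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    return np.all(points @ A.T + b <= tol, axis=1)

def grow_patch(nodes, elements, marked):
    """Add elements whose centroid lies inside the patch's convex hull."""
    marked = set(marked)
    while True:
        patch_nodes = nodes[sorted({n for e in marked for n in elements[e]})]
        hull = ConvexHull(patch_nodes)
        centroids = np.array([nodes[elements[e]].mean(axis=0)
                              for e in range(len(elements))])
        # For simplicity every element is tested here; in practice only the
        # elements surrounding the patch need to be checked.
        absorbed = {e for e in range(len(elements))
                    if e not in marked and points_in_hull(centroids[[e]], hull)[0]}
        if not absorbed:
            return marked, hull
        marked |= absorbed   # recompute the hull with the newly absorbed elements
```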
#### 4.2.1 Overview of coarsening procedure The same coarsening procedure is used for patches of both structured and unstructured elements. An overview of the mesh coarsening procedure is illustrated in Figure 4 for the more general case of unstructured elements. Here the patch of elements marked for coarsening is indicated in grey and the surrounding elements are indicated in white. An element is considered to be surrounding the patch of marked elements if it shares any nodes with any of the marked elements. The first step in the coarsening procedure involves creating a convex hull surrounding the patch of marked elements. The convex hull is indicated by a blue dashed line. Thereafter, the surrounding elements are checked to determine if their centroids (indicated in orange) lie within the convex hull. If any of the centroids of the surrounding elements do lie within the convex hull, those elements are added to the patch of elements to coarsen and an updated convex hull around the new patch of elements is computed. The next phase of the coarsening procedure involves categorising the nodes in the element patch. Firstly, nodes that will not form part of the coarsened element geometry are identified and flagged for deletion. These nodes are those which are completely surrounded by marked elements. That is, the relevant node does not lie on the domain boundary, and every element connected to the node has been marked for coarsening (i.e. is indicated in grey). Nodes that meet these criteria are indicated in red. Secondly, the remaining nodes, i.e. non-red nodes, are divided into two groups; those that lie on the convex hull (indicated in blue), and those that do not lie on the convex hull (indicated in green). The green nodes are used to identify which edges of the element patch require straightening to align with the convex hull. An edge is flagged for straightening if any of its nodes do not lie on the convex hull, i.e. if any of its nodes have been marked as green. The edges flagged for straightening are indicated in green. These edges are then straightened using the procedure described in the next section. After the edges have been straightened the nodes flagged for deletion (red nodes) are removed, the elements marked for coarsening (grey elements) are deleted, and a new element is created from the remaining (blue and green) nodes connected in the conventional counter-clockwise sequence. Figure 3: Mesh generation procedure for structured and unstructured/Voronoi meshes. All remaining nodes (blue and green nodes) are checked to determine if they are necessary. That is, if removing them would alter the geometry or connectivity of the newly created element, or any of the surrounding elements. If all the element edges connected to a particular node are co-linear, then that node is considered as unnecessary. The unnecessary nodes are then flagged for deletion (indicated in red). Finally, the unnecessary nodes are removed. Figure 4: Coarsening procedure overview. #### 4.2.2 Edge straightening procedure The edge straightening procedure is illustrated in Figure 5 where the first step depicted corresponds to the step in Figure 4 before the edge straightening is performed2. The edge straightening procedure is performed by grouping consecutive edges that have been marked for straightening and are not separated by any (blue) nodes lying on the convex hull. Examples of these groups of edges are depicted in Figure 5 and are numbered for clarity. 
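The node classification of Section 4.2.1 can be summarised in code as follows; the incidence maps, the boundary-node set, and the on-hull test are simplified placeholders rather than the data structures used in this work:

```
# Sketch of the node classification in the coarsening overview: nodes completely
# surrounded by marked elements are flagged for deletion, and the remaining patch
# nodes are split into hull ("blue") and off-hull ("green") nodes.
def classify_patch_nodes(patch_elements, elements, node_to_elements,
                         boundary_nodes, on_hull):
    """Return (nodes_to_delete, hull_nodes, off_hull_nodes) for a marked patch."""
    patch = set(patch_elements)
    patch_nodes = {n for e in patch for n in elements[e]}

    nodes_to_delete = {n for n in patch_nodes
                       if n not in boundary_nodes          # not on the domain boundary
                       and node_to_elements[n] <= patch}   # every incident element is marked

    kept = patch_nodes - nodes_to_delete
    hull_nodes = {n for n in kept if on_hull(n)}            # nodes lying on the convex hull
    off_hull_nodes = kept - hull_nodes                      # nodes whose edges need straightening
    return nodes_to_delete, hull_nodes, off_hull_nodes
```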
Footnote 2: It is noted that the element geometries presented in Figure 5 are not from any simulation or actual results. The exaggerated geometric features are presented for illustrative purposes to demonstrate challenging scenarios that could arise during a coarsening procedure. A group of edges to be straightened comprises some number of (green) edges that have been identified for straightening, and two (blue) nodes lying on the convex hull that form a line segment indicated by a blue dashed line. The straightening is performed by first straightening the green edges to form a line that is parallel to the line segment formed by the two blue nodes. This straightened line, together with the nodes along it, is then scaled to the size of the blue dashed line segment. This procedure creates a straightened edge along which the green nodes have the same relative spacing as they did in the unstraightened configuration. Additionally, the procedure prevents flattening of sections of the surrounding elements and has been found to be robust in the presence of challenging and highly non-convex geometries. The edge straightening procedure is illustrated in Figure 5 for each of the groups of edges to be straightened. First, the edges to be straightened, the nodes on the convex hull, and the line segment connecting these nodes are depicted. In step (a) the edges to be straightened and the line segment are separated for illustrative purposes. In step (b) the (green) edges are straightened to form a line parallel to the (blue dashed) line segment. In step (c) the straightened line, together with the nodes along it, is scaled to the size of the (blue dashed) line segment. Finally, step (d) depicts the newly straightened and scaled edge overlaying the (blue dashed) line segment and, thus, depicts the result of the straightening procedure for a particular edge. After all of the edge groups have been straightened, all of the nodes of the surrounding (white) elements are checked to determine if they lie inside the convex hull, these nodes are indicated in yellow. If a node does lie inside the convex hull it is projected to an updated position using mean value coordinates (MVC) [39]. This is performed by finding all other nodes that are directly connected to, i.e. share an edge with, the yellow node and using both their initial/unprojected and projected locations. Using the initial locations a fictitious element is imagined comprising the directly connected nodes. Then, relative weights are computed using MVC for each of the directly connected nodes at the location of the yellow node. This corresponds to evaluating the MVC weight functions for each directly connected node at the position of the yellow node. Finally, the new/projected location of the yellow node is computed as the weighted sum of the updated/projected locations of the directly connected nodes using the previously calculated MVC weights. This step is employed to prevent 'tangling' of elements during the coarsening procedure. It is noted that other approaches to relocating the (yellow) nodes trapped inside the convex hull are possible. For example, the yellow nodes could be projected onto the convex hull using a minimum distance projection. Alternatively, the geometry of the (white) surrounding elements could be trimmed so that there is no overlap with the convex hull and the yellow nodes could then be deleted. 
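One reading of the straightening steps (a)–(d) above is that the intermediate (green) nodes are ultimately redistributed along the segment joining the two (blue) hull nodes while preserving their relative arc-length spacing; this can be sketched compactly as follows (an illustration under that assumption, not the authors' implementation):

```
# Compact sketch of the edge-straightening step of Section 4.2.2: the intermediate
# nodes of one edge group are placed on the straight segment between the two hull
# nodes, keeping their relative arc-length spacing.
import numpy as np

def straighten_edge_group(polyline):
    """polyline: (n, 2) array of nodes from one hull node to the other, in order.
    Returns the straightened coordinates; the two end nodes are left unchanged."""
    polyline = np.asarray(polyline, dtype=float)
    seg_lengths = np.linalg.norm(np.diff(polyline, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg_lengths)))
    t = arc / arc[-1]                    # relative spacing along the edge group
    a, b = polyline[0], polyline[-1]     # the two hull nodes
    return a + np.outer(t, b - a)        # points on the straight segment

# Example: a wiggly group of edges between hull nodes (0, 0) and (4, 0)
group = [(0.0, 0.0), (1.0, 0.8), (2.2, -0.5), (3.0, 0.6), (4.0, 0.0)]
print(straighten_edge_group(group))
```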
The MVC-based approach proposed here is chosen because it does not require the creation or deletion of any nodes, and does not alter the nodal connectivity of any elements. Figure 5: Edge straightening procedure. ## 5 Mesh coarsening indicators In this section the proposed mesh coarsening indicators are presented along with the procedure used to identify the patches of elements qualifying for coarsening. Since the coarsening of a mesh involves combining groups/patches of elements into a single larger element, it is chosen to construct and compute the coarsening indicators over patches of elements. Patches of elements are identified as all of the elements connected to a specific node. The computed coarsening indicator values are then assigned to the relevant node but reflect the behaviour over the element patch. Examples of element patches are depicted in Figure 6 where the node defining patch \(i\), i.e. \(V_{\text{def}}^{\text{p}_{i}}\), is indicated in purple. All of the elements in the patch are indicated in grey and are labelled \(E_{a}^{\text{p}_{i}},\;E_{b}^{\text{p}_{i}},\;\ldots E_{n_{E}}^{\text{p}_{i}}\) where \(n_{E}\) is the number of elements in the patch and the superscript \(\text{p}_{i}\) indicates their association with the \(i\)-th patch. Furthermore, the nodes associated with the \(j\)-th patch are indicated on the figure and are labelled as \(V_{a}^{\text{p}_{j}},\;V_{b}^{\text{p}_{j}},\;\ldots\;V_{n_{\text{v}}}^{\text{p}_{j}}\) where \(n_{\text{v}}\) is the number of nodes associated with the patch. ### Displacement-based indicator Similar to the displacement-based refinement indicator proposed in [40, 32], the displacement-based coarsening indicator is motivated by seeking to quantify the deviation from coplanarity of the nodal values of the displacement \(\mathbf{u}_{h}\) over a patch of elements. To compute the indicator for patch \(i\) a least-squares best fit linear approximation of the displacement field over the patch \(\mathbf{u}_{\text{p}_{i}}\) is computed. The displacement field \(\mathbf{u}_{\text{p}_{i}}\) is computed component-wise with component \(k\) described over patch \(i\) as \[u_{k}^{\text{p}_{i}}=\mathbf{p}\left(x,\,y\right)\,\mathbf{a}_{k}=\left[1\quad x\quad y \right]\begin{bmatrix}a_{k}^{1}\\ a_{k}^{2}\\ a_{k}^{3}\end{bmatrix} \tag{23}\] Here \(\mathbf{a}_{k}\) are the degrees of freedom of \(u_{k}^{\text{p}_{i}}\) and are computed as \[\mathbf{a}_{k}=\mathbf{A}^{-1}\mathbf{b}_{k} \tag{24}\] where \[\mathbf{A}=\sum_{m=1}^{n_{\text{v}}}\mathbf{p}\left(x_{m}^{\text{p}_{i}},\;y_{m}^{ \text{p}_{i}}\right)^{T}\mathbf{p}\left(x_{m}^{\text{p}_{i}},\;y_{m}^{\text{p}_{i }}\right)\quad\text{and}\quad\mathbf{b}_{k}=\sum_{m=1}^{n_{\text{v}}}\mathbf{p}\left(x _{m}^{\text{p}_{i}},\;y_{m}^{\text{p}_{i}}\right)^{T}u_{k}^{\text{h}}\left(x _{m}^{\text{p}_{i}},\;y_{m}^{\text{p}_{i}}\right) \tag{25}\] respectively. Here \(x_{m}^{\text{p}_{i}}\) and \(y_{m}^{\text{p}_{i}}\) are the coordinates of the \(m\)-th node associated with patch \(i\), and \(u_{k}^{\text{h}}\) is the displacement degree of freedom at \(\mathbf{x}_{m}^{\text{p}_{i}}\).
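The least-squares fit of Equations (23)–(25) amounts to a small linear solve per patch and per displacement component; a numpy sketch with arbitrary sample data is given below, and the nodal deviation printed in the last line is the quantity that enters the indicator defined next:

```
# Sketch of the patch-wise least-squares fit in Eqs. (23)-(25) for one displacement
# component. The sample patch coordinates and nodal values are arbitrary.
import numpy as np

def fit_linear_patch(xy, u_k):
    """Return a_k = A^{-1} b_k for one displacement component over a patch."""
    P = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])  # rows p(x_m, y_m)
    A = P.T @ P                                                  # Eq. (25), left
    b = P.T @ u_k                                                # Eq. (25), right
    return np.linalg.solve(A, b)                                 # Eq. (24)

def evaluate_fit(a_k, xy):
    """Evaluate the fitted plane u_k^{p_i} = p(x, y) a_k, Eq. (23)."""
    return a_k[0] + a_k[1] * xy[:, 0] + a_k[2] * xy[:, 1]

# Arbitrary example patch: five nodes and one displacement component at those nodes
xy = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
u_x = np.array([0.000, 0.010, 0.013, 0.002, 0.007])
a = fit_linear_patch(xy, u_x)
print("fit:", a, "nodal deviation:", np.linalg.norm(evaluate_fit(a, xy) - u_x))
```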
The displacement-based coarsening indicator on patch \(i\), denoted by \(\Upsilon_{\text{DB}}^{i}\), is then defined as the \(\mathcal{L}^{2}\) deviation of the nodal values of the displacement \(\mathbf{u}_{h}\) from the least-squares displacement \(\mathbf{u}_{\text{p}_{i}}\) and is computed as \[\Upsilon_{\text{DB}}^{i}=\left[\sum_{j=1}^{n_{\text{v}}}\left[\mathbf{u}_{\text{p }_{i}}\left(\mathbf{x}_{j}^{\text{p}_{i}}\right)-\mathbf{u}_{h}\left(\mathbf{x}_{j}^{\text {p}_{i}}\right)\right]\cdot\left[\mathbf{u}_{\text{p}_{i}}\left(\mathbf{x}_{j}^{\text {p}_{i}}\right)-\mathbf{u}_{h}\left(\mathbf{x}_{j}^{\text{p}_{i}}\right)\right]\right]^ {0.5}. \tag{26}\] Figure 6: Examples of element patches. ### Energy error-based indicator The energy error-based error indicator is inspired by the well-known \(Z^{2}\) error estimator originally presented in [3], and the energy error approximation technique presented in [31] for virtual elements. The energy error-based indicator is motivated by seeking to predict how much coarsening a particular patch of elements would increase the local and global approximations of the energy error. Then, those patches that are identified to increase the error the least are the most suitable for coarsening. The error in the \(\mathcal{H}^{1}\) semi-norm, i.e. the energy error norm, is defined as \[e_{\mathcal{H}^{1}}=\left[\frac{1}{2}\int_{\Omega}\left[\boldsymbol{\sigma}^{ ex}-\boldsymbol{\sigma}^{h}\right]^{T}\mathbb{D}^{-1}\left[\boldsymbol{ \sigma}^{ex}-\boldsymbol{\sigma}^{h}\right]d\Omega\right]^{0.5}\,, \tag{27}\] where \(\boldsymbol{\sigma}^{ex}\) is the exact/analytical stress solution and \(\mathbb{D}\) is the constitutive matrix. In practical applications the exact stress is typically unknown and is replaced with an approximation \(\boldsymbol{\sigma}^{*}\). The stress \(\boldsymbol{\sigma}^{*}\) is usually computed as a higher-order approximation of the element stresses described in terms the displacement basis functions. However, in VEM applications the displacement basis functions are not explicitly defined over the entire domain and it is common to use node-based error approximations. Thus, it is sufficient to compute \(\boldsymbol{\sigma}^{*}\) at the nodal positions. This is most easily done by computing a super-convergent stress at each node using a patch-based recovery technique based on super-convergent sampling points (see [31]). In this work a low-order VEM is considered where the approximation of the stress field is constant at element-level, i.e. piece-wise constant. Thus, the higher-order stress field approximation \(\boldsymbol{\sigma}^{*}\) must be piece-wise linear and is computed at each node via a least-squares linear best fit over a patch of elements. The super-convergent stress at a node is computed by considering the patch of elements connected to the node. The location of the centroids of the elements in the patch are treated as the super-convergent sampling points and the element-level stresses are assigned as the degrees of freedom of the sampling points. Since a linear best fit is required, at least three sampling points are needed in order to determine a unique fit. Thus, in cases where a node is connected to less than three elements the patch is enlarged to increase the number of sampling points. Specifically, the patch is enlarged to include elements that are connected to any of the elements in the original patch. For clarity, a few examples of element patches and sampling points are depicted in Figure 7. 
Here, the node at which the super-convergent stress is to be computed is indicated in purple, the elements in the patch connected to the node are indicated in dark grey, and (if applicable) the elements included in the enlarged patch are indicated in light grey. Additionally, the locations of the sampling points are indicated as red triangles. The super-convergent stress component \(\sigma_{i}^{*}\) computed over a specific patch is given by \[\sigma_{i}^{*}=\boldsymbol{p}\left(x,\,y\right)\,\boldsymbol{a}_{i}=\left[1 \quad x\quad y\right]\begin{bmatrix}a_{i}^{1}\\ a_{i}^{2}\\ a_{i}^{3}\end{bmatrix} \tag{28}\] where \(\boldsymbol{a}_{i}\) are the degrees of freedom of the super-convergent stress component. The degrees of freedom are computed as \[\boldsymbol{a}_{i}=\boldsymbol{A}^{-1}\boldsymbol{b}_{i} \tag{29}\] where \[\boldsymbol{A}=\sum_{k=1}^{n_{\text{sp}}}\boldsymbol{p}\left(x_{k},\,y_{k} \right)^{T}\boldsymbol{p}\left(x_{k},\,y_{k}\right)\quad\text{and}\quad \boldsymbol{b}_{i}=\sum_{k=1}^{n_{\text{sp}}}\boldsymbol{p}\left(x_{k},\,y_{k} \right)^{T}\sigma_{i}^{h}\left(x_{k},\,y_{k}\right) \tag{30}\] respectively. Here \(n_{\text{sp}}\) is the number of sampling points, \(x_{k}\) and \(y_{k}\) are the coordinates of the sampling points, and \(\sigma_{i}^{h}\) is the stress component at the sampling point (computed via (16)). Figure 7: Super-convergent sampling points. Using \(\mathbf{\sigma}^{*}\) a node-based approximation of the energy norm can be computed as \[e_{\mathcal{H}^{1}}\approx\left[\frac{1}{2}\sum_{i=1}^{n_{\text{el}}}\frac{|E_{i }|}{n_{\text{v}}^{i}}\sum_{j=1}^{n_{\text{v}}^{i}}\left[\left[\mathbf{\sigma}^{*} \left(\mathbf{x}_{j}\right)-\mathbf{\sigma}^{h}\left(\mathbf{x}_{j}\right)\right]^{T} \mathbb{D}^{-1}\left[\mathbf{\sigma}^{*}\left(\mathbf{x}_{j}\right)-\mathbf{\sigma}^{h} \left(\mathbf{x}_{j}\right)\right]\right]\right]^{0.5}\,, \tag{31}\] where \(n_{\text{el}}\) is the number of elements in the domain. Using this node-based approach to error approximation, the 'predicted' energy error after coarsening a particular patch of elements, i.e. the energy error-based coarsening indicator, is approximated over patch \(i\) as \[\Upsilon_{\text{FB}}^{i}=\left[\frac{1}{2}\frac{|E_{\text{p}_{i}}|}{n_{\text{v }}}\sum_{j=1}^{n_{\text{v}}}\left[\left[\mathbf{\sigma}^{*}\left(\mathbf{x}_{j} \right)-\bar{\mathbf{\sigma}}_{\text{p}_{i}}^{h}\left(\mathbf{x}_{j}\right)\right]^{T }\mathbb{D}^{-1}\left[\mathbf{\sigma}^{*}\left(\mathbf{x}_{j}\right)-\bar{\mathbf{\sigma} }_{\text{p}_{i}}^{h}\left(\mathbf{x}_{j}\right)\right]\right]\right]^{0.5}\,. \tag{32}\] Here \(|E_{\text{p}_{i}}|\) denotes the area of patch \(i\). Additionally, \(\bar{\mathbf{\sigma}}_{\text{p}_{i}}^{h}\) denotes the 'predicted' stress over the coarsened patch computed as the average of the element stresses on the patch. The 'predicted' energy error is not only useful for adaptive coarsening procedures but offers an interesting advantage to fully adaptive remeshing procedures too. For example, during an adaptive remeshing procedure a user selects a global and/or local error criterion or target. The 'predicted' energy error can then be used to determine whether the coarsening of a particular patch might result in unsatisfactorily high local and/or global errors after coarsening that would not meet the global and/or local error criterion and thus should not be performed. ### Selecting element patches to coarsen The procedure used to select patches of elements to coarsen comprises two steps. 
First, all of the element patches that are eligible for coarsening are identified. Then, from the set of eligible patches, those whose coarsening indicator value falls below a certain threshold are selected for coarsening. #### 5.3.1 Identifying eligible element patches For a patch of elements to be eligible for coarsening the geometry of the element created by the coarsening must not modify the overall geometry of the problem domain. In the case of problems with convex domains this step is trivial. However, if a problem domain is non-convex, or contains a hole, care must be taken to preserve the domain geometry. To determine the eligibility of a patch all of the nodes associated with the patch are considered. The patch's nodes are checked to see if they lie on a domain boundary or corner. If no nodes lie on a boundary or corner then the patch is eligible for coarsening. If any nodes do lie on a boundary or corner then a convex hull is created from the patch's nodes. The positions of the boundary and corner nodes are then checked to see if they are coincident with the boundary of the convex hull. If all boundary and corner nodes are coincident with the boundary of the convex hull then the patch is eligible for coarsening. This process is exemplified in Figure 8 where several nodes that define element patches are considered and are indicated in blue. The resulting convex hulls are indicated in green if the patch is eligible for coarsening and in red if the patch is not eligible. Figure 8: Examples of eligible and ineligible element patches. After the set of eligible patches has been determined it must be resolved to eliminate overlapping patches. This is done by first sorting the nodes defining the patches in ascending order based on their coarsening indicator values. Then, all the nodes associated with the first patch are identified. The rest of sorted node list is then checked to see if any of the remaining nodes in the list are present in the current node patch. Any of the remaining nodes that are present in the current node patch are removed from the node list. This process is then repeated iteratively until the end of the list of eligible nodes is reached. This procedure is detailed in Algorithm 1. ``` \(EligibleNodes\leftarrow\text{sort}(EligibleNodes,\text{ascending})\) \(nNodes\leftarrow\text{length}(EligibleNodes)\) for\(i\in(1,nNodes)\)do \(ThisDefiningNode\gets EligibleNodes(i)\) \(PatchNodes\leftarrow\text{GetPatchNodes}(ThisDefiningNode)\) for\(j\in(i,nNodes)\)do \(NodeToCheck\gets EligibleNodes(j)\) if\(NodeToCheck\) is member of \(PatchNodes\)then Mark \(NodeToCheck\) for deletion endif endfor Delete marked nodes from \(EligibleNodes\) \(nNodes\leftarrow\text{length}(EligibleNodes)\) endfor ``` **Algorithm 1** Remove overlapping element patches #### 5.3.2 Marking element patches The procedure for identifying patches of elements to coarsen is similar to that presented in [40]. A coarsening threshold percentage \(T=X\%\) is introduced from which an allowable threshold value \(T_{\text{val}}\) is determined using the list of resolved eligible nodes. The node \(X\%\) of the way down the resolved node list is found and the value of its coarsening indicator is set as \(T_{\text{val}}\). Then any node (and associated element patch) whose coarsening indicator value is less than or equal to \(T_{\text{val}}\) is marked for coarsening. 
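Taken together, the overlap resolution of Algorithm 1 and the threshold marking of Section 5.3.2 can be sketched as follows; the indicator dictionary and the patch-node map are illustrative assumptions about the data structures rather than those used in this work:

```
# Sketch of patch selection: resolve overlapping eligible patches in ascending
# indicator order (Algorithm 1), then mark all patches whose indicator falls at or
# below the value found T of the way down the resolved list (Section 5.3.2).
def resolve_overlaps(eligible_nodes, indicator, patch_nodes_of):
    """Keep, in ascending indicator order, only patches that do not share nodes."""
    ordered = sorted(eligible_nodes, key=lambda n: indicator[n])
    kept, blocked = [], set()
    for node in ordered:
        if node in blocked:
            continue
        kept.append(node)
        blocked |= patch_nodes_of(node)   # nodes of this patch exclude later patches
    return kept

def mark_for_coarsening(resolved_nodes, indicator, T=0.20):
    """Mark patches whose indicator is <= the threshold value T_val."""
    if not resolved_nodes:
        return []
    cutoff_index = min(int(T * len(resolved_nodes)), len(resolved_nodes) - 1)
    T_val = indicator[resolved_nodes[cutoff_index]]
    return [n for n in resolved_nodes if indicator[n] <= T_val]
```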
## 6 Numerical Results In this section numerical results are presented for a range of example problems of varying complexity to demonstrate the efficacy of the proposed coarsening procedures. The efficacy is evaluated in the \(\mathcal{H}^{1}\) error norm defined by \[||\widetilde{\mathbf{u}}-\mathbf{u}_{h}||_{1}=\left[\int_{\Omega}\left[|\widetilde{\mathbf{u}}-\mathbf{u}_{h}|^{2}+|\nabla\widetilde{\mathbf{u}}-\nabla\mathbf{u}_{h}|^{2}\right]\,d\Omega\right]^{0.5}\,, \tag{33}\] in which integration of \(\mathbf{u}_{h}\) is required over the domain. Since in the case of VEM formulations \(\mathbf{u}_{h}\) is only known on element boundaries, a node-based approximation of (33) is used and is computed as \[\begin{split}||\widetilde{\mathbf{u}}-\mathbf{u}_{h}||_{1}&\approx\left[\sum_{i=1}^{n_{\text{el}}}\frac{|E_{i}|}{n_{\text{v}}^{i}}\sum_{j=1}^{n_{\text{v}}^{i}}\Bigl{[}\left[\widetilde{\mathbf{u}}(\mathbf{x}_{j})-\mathbf{u}_{h}^{i}(\mathbf{x}_{j})\right]\cdot\left[\widetilde{\mathbf{u}}(\mathbf{x}_{j})-\mathbf{u}_{h}^{i}(\mathbf{x}_{j})\right]\right.\\ &\left.+\left[\nabla\widetilde{\mathbf{u}}(\mathbf{x}_{j})-\Pi\mathbf{u}_{h}^{i}(\mathbf{x}_{j})\right]:\left[\nabla\widetilde{\mathbf{u}}(\mathbf{x}_{j})-\Pi\mathbf{u}_{h}^{i}(\mathbf{x}_{j})\right]\right]\right]^{0.5}.\end{split} \tag{34}\] Here \(\widetilde{\mathbf{u}}\) is a reference solution generated using an overkill mesh of biquadratic finite elements, the location of the \(j\)-th vertex is denoted by \(\mathbf{x}_{j}\), and \(\Pi\mathbf{u}_{h}\) is the gradient of \(\mathbf{u}_{h}\) computed via the projection operator (see (16)). In the examples that follow the material is isotropic with a Young's modulus of \(E=1\) Pa, a Poisson's ratio of \(\nu=0.3\), and a shear modulus computed as \(\mu=E/2\left[1+\nu\right]\). ### Punch The punch problem comprises a domain of width \(w=1\) m and height \(h=1\) m into which a punch of width \(w_{\text{p}}=0.2\)\(w\) is driven into the middle of the top edge. The bottom edge of the domain is constrained vertically and the midpoint of the bottom edge is fully constrained. The top edge is constrained horizontally and the punch is modelled as a uniformly distributed load with a magnitude of \(Q_{\mathrm{P}}=0.675\,\frac{\mathrm{N}}{\mathrm{m}}\) (see Figure 9(a)). A sample deformed configuration of the body with a Voronoi mesh is depicted in Figure 9(b) with the vertical displacement \(u_{y}\) plotted on the colour axis. The punch problem has a simple domain geometry which does not introduce any challenges in modelling this problem. While the punch introduces localized deformation only in its vicinity, the rest of the body experiences very little deformation (see Figure 9(b)). Thus, the punch problem is used to provide insight into the efficacy of the proposed coarsening procedures in cases of 'less challenging' problems. The mesh evolution during the coarsening process for the punch problem is depicted in Figure 10 for the case of the displacement-based coarsening procedure with \(T=20\%\) on structured and unstructured/Voronoi meshes. Meshes are shown at various coarsening steps with step 1 corresponding to the initial mesh. Similar mesh evolution is exhibited on both structured and Voronoi meshes. The mesh becomes increasingly coarse at the bottom of the domain while remaining fine around the region of the punch. Furthermore, the mesh density exhibits a smooth and graded transition from the finest region around the punch to the coarser regions further from the punch.
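For reference, the node-based error measure (34) can be evaluated with a short routine along the following lines. This is a sketch under the assumption that nodal values of the reference solution, of the element-wise VEM displacement and of the projected gradient \(\Pi\mathbf{u}_{h}\) are available; the container and function names are illustrative, not from the paper's code.

```python
import numpy as np

def h1_error_node_based(elements, u_ref, grad_u_ref, u_h, pi_grad_u_h):
    """Node-based approximation of the H1 error norm, eq. (34).

    elements          -- list of (area, vertex_ids) for each element E_i
    u_ref(v)          -- reference displacement at vertex v, shape (2,)
    grad_u_ref(v)     -- reference displacement gradient at vertex v, shape (2, 2)
    u_h(i, v)         -- VEM displacement of element i at vertex v, shape (2,)
    pi_grad_u_h(i, v) -- projected VEM gradient of element i at vertex v, shape (2, 2)
    """
    total = 0.0
    for i, (area, verts) in enumerate(elements):
        contrib = 0.0
        for v in verts:
            du = u_ref(v) - u_h(i, v)
            dg = grad_u_ref(v) - pi_grad_u_h(i, v)
            contrib += du @ du + np.sum(dg * dg)   # vector dot + tensor double dot
        total += area / len(verts) * contrib        # |E_i| / n_v^i weighting
    return np.sqrt(total)
```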
In this example problem most of the deformation occurs around the location of the punch. The rest of the body experiences comparatively little deformation with the magnitude of the deformation decreasing with increasing distance from the punch (see Figure 9(b)). This behaviour is reflected in the mesh coarsening process, thus, the mesh evolution is sensible for this problem. Figure 9: Punch problem (a) geometry, and (b) sample deformed configuration of a Voronoi mesh. The distribution of the \(\mathcal{H}^{1}\) error over the domain during the mesh coarsening process for the punch problem is depicted in Figure 11. The \(\mathcal{H}^{1}\) error is depicted in a logarithmic scale on structured and Voronoi meshes for the case of the displacement-based coarsening procedure with \(T=20\%\). The error distribution exhibited in step 1, i.e. Figure 11(a), demonstrates that the mesh evolution illustrated in Figure 10 is indeed sensible, as discussed. The mesh evolution closely reflects the error distribution and the regions with the lowest errors are coarsened most aggressively. Furthermore, the error distribution over the domain becomes increasingly even as the coarsening procedure progresses. Since an optimal mesh would have a perfectly even error distribution, this behaviour demonstrates that the mesh becomes closer to optimal during the coarsening procedure. It is noted that due to the discrete increases in element size during coarsening, a perfectly even error distribution is not possible as it would require precise resizing of individual elements. Nevertheless, the improved error distribution during coarsening demonstrates the high degree of efficacy of the proposed procedure. Figure 10: Mesh coarsening process for the punch problem on structured and Voronoi meshes using the displacement-based coarsening procedure with \(T=20\%\). The convergence behaviour in the \(\mathcal{H}^{1}\) error norm of the VEM for the punch problem using the displacement-based and energy error-based coarsening procedures is depicted in Figure 12 on a logarithmic scale. Here, the convergence behaviour of the displacement-based and energy error-based procedures are presented on the top and bottom rows of figures respectively, with the results generated on structured and Voronoi meshes presented in the left and right columns of figures respectively. In each case results are presented for coarsening thresholds \(T=5\%\) and \(T=20\%\) to demonstrate the effect of the choice of \(T\) on the convergence behaviour. Additionally, uniform initial meshes of various discretization densities are considered. Even though the punch problem is 'less challenging' and can be suitably analysed using uniform meshes the benefit of the coarsening procedures is clear. Both coarsening procedures successfully coarsen the least critical elements in the domain, which eliminates the least important degrees of freedom. The benefit of the coarsening procedures is clear from the path of the \(\mathcal{H}^{1}\) error curves. The curve is initially almost horizontal as the least significant nodes and elements are coarsened and almost no error is introduced. As the procedure continues the coarsening spreads over the domain which inevitably does begin to introduce some error and the gradients of the \(\mathcal{H}^{1}\) error curves increase. However, the efficiency of the coarsened solutions is superior to that of uniform meshes (indicated by the black curve denoting the reference uniform mesh approach). 
That is, for a given number of degrees of freedom, the coarsened meshes yield a lower \(\mathcal{H}^{1}\) error than the uniform meshes. Additionally, similar behaviour is exhibited in the cases of both structured and Voronoi meshes. Interestingly, the behaviour does not appear to be significantly influenced by the choice of \(T\), with nearly identical performance exhibited in the cases of \(T=5\%\) and \(T=20\%\) for both coarsening procedures and on both mesh types. Finally, the efficacy of the coarsening procedures increases with increasing density of the initial uniform mesh. This behaviour is sensible because the more dense the initial mesh is, the more elements and the more degrees of freedom there are in the least critical portions of the domain. These regions can then be efficiently coarsened while introducing the least amount of \(\mathcal{H}^{1}\) error possible. Figure 11: \(\mathcal{H}^{1}\) error distribution during the coarsening process for the punch problem on structured and Voronoi meshes using the displacement-based coarsening procedure with \(T=20\%\). ### Plate with hole The plate with hole problem comprises a domain of width \(w=1\;\mathrm{m}\) and height \(h=1\;\mathrm{m}\) with a centrally located square hole (see Figure 13(a)). The left-hand edge of the plate is constrained horizontally and the bottom left-hand corner is fully constrained. The right-hand edge is subject to a prescribed traction of \(Q_{\mathrm{P}}=0.2\,\frac{\mathrm{N}}{\mathrm{m}}\). A sample deformed configuration of the plate with a Voronoi mesh is depicted in Figure 13(b) with the horizontal displacement \(u_{x}\) plotted on the colour axis. The boundary conditions of the plate with hole problem are simple and themselves do not introduce any challenges in modelling this problem. However, the sharp corners of the hole act as stress concentrators which introduce high stresses and complex deformation. Thus, this problem is used to provide insight into the efficacy of the proposed coarsening procedures in cases of 'more challenging' problems. Figure 12: \(\mathcal{H}^{1}\) error vs \(n_{\mathrm{v}}\) for the punch problem. The mesh evolution during the coarsening process for the plate with hole problem is depicted in Figure 14 for the case of the energy error-based coarsening procedure with \(T=20\%\) on structured and Voronoi meshes. Meshes are shown at various coarsening steps with step 1 corresponding to the initial mesh. The coarsening behaviour is similar on both structured and Voronoi meshes. The mesh remains fine in the regions around the corners of the hole where the deformation is relatively complex. Additionally, the meshes remain fine in the regions undergoing the most severe deformations, i.e. the right-hand edge of the hole, which experiences significant compression, and the right-hand edge of the plate, which experiences significant tension. Furthermore, since the right-hand portion of the domain experiences significantly more deformation than the left-hand side, the regions around the right-hand corners of the hole experience higher stresses and more complex deformation than the regions around the left-hand corners of the hole. This behaviour is reflected in the coarsening process with finer meshes preserved in a larger region surrounding the right-hand corners of the hole than the left-hand corners.
Additionally, meshes are coarsened the most in regions experiencing relatively little and/or simple deformation such as the middle of the left-hand portion of the plate, and the top and bottom right-hand corners of the plate. Thus, the mesh evolution is sensible for this problem. Figure 13: Plate with hole (a) geometry, and (b) deformed configuration of a Voronoi mesh. The distribution of the \(\mathcal{H}^{1}\) error over the domain during the mesh coarsening process for the plate with hole problem is depicted in Figure 15. The \(\mathcal{H}^{1}\) error is depicted in a logarithmic scale on structured and Voronoi meshes for the case of the energy error-based coarsening procedure with \(T=20\%\). As observed in the case of the punch problem, the error distribution exhibited in step 1, i.e. Figure 15(a), demonstrates that the mesh evolution illustrated in Figure 14 is sensible. The mesh evolution reflects the error distribution and the regions with the lowest errors are the most coarsened. During the mesh coarsening process the distribution of error, again, becomes more even over the domain. However, in the case of this'more challenging' problem, the complex regions around the corners of the hole induce much higher and more localized stresses than those of the punch problem. Thus, even though the error is spread more evenly over much of the domain, the complex regions stand out as error hot spots. Figure 14: Mesh coarsening process for the plate with hole problem on structured and Voronoi meshes using the energy error-based coarsening procedure with \(T=20\%\). The convergence behaviour in the \(\mathcal{H}^{1}\) error norm of the VEM for the plate with hole problem using the displacement-based and energy error-based coarsening procedures is depicted in Figure 16 on a logarithmic scale. Here, the convergence behaviour of the displacement-based and energy error-based procedures are presented on the top and bottom rows of figures respectively, with the results generated on structured and Voronoi meshes presented in the left and right columns of figures respectively. In each case results are presented for coarsening thresholds \(T=5\%\) and \(T=20\%\) to demonstrate the effect of the choice of \(T\) on the convergence behaviour. Additionally, uniform initial meshes of various discretization densities are considered. The behaviour exhibited in Figure 16 is qualitatively similar to that observed in Figure 12 for the case of the punch problem. The benefit of the coarsening procedures is, again, clear with the solutions obtained from the coarsened meshes exhibiting higher accuracy than those obtained from uniform meshes comprising the same number of degrees of freedom. Furthermore, similar behaviour is exhibited in the cases of both structured and Voronoi meshes, the behaviour does not appear to be significantly influenced by the choice of \(T\), and the efficacy of the coarsening procedures increases with increasing density of the initial mesh. The most significant difference between the results presented in Figure 12 and Figure 16 is the increased efficacy of the coarsening procedures exhibited in the case of the plate with hole problem compared to the punch problem. The reason for this difference is the relative complexities of the two problems. The plate with hole problem has significantly more localized complexity around the corners of the hole, while the punch problem is comparatively simpler to model. 
Thus, in the plate with hole problem most of the error is localized around the corners of the hole where the fine mesh is preserved. The rest of the domain can be efficiently coarsened while introducing very little error. Thus, this 'more challenging' problem, with its more localized complexity, is better suited to adaptive coarsening and the proposed coarsening procedures exhibit greater efficacy. Figure 15: \(\mathcal{H}^{1}\) error distribution during the coarsening process for the plate with hole problem on structured and Voronoi meshes using the energy error-based coarsening procedure with \(T=20\%\). ### L-shaped domain The L-shaped domain problem comprises a domain of width \(w=1\;\mathrm{m}\) and height \(h=1\;\mathrm{m}\) where the horizontal and vertical thickness of the L are \(\frac{w}{4}\) and \(\frac{h}{4}\) respectively. The bottom and left-hand edges of the domain are constrained vertically and horizontally respectively, with the bottom left corner fully constrained. The upper and right-hand edges are subject to prescribed displacements of \(\bar{u}_{y}=0.5\;\mathrm{m}\) and \(\bar{u}_{x}=0.5\;\mathrm{m}\) respectively, with the displacements of the edges unconstrained in the \(x\)- and \(y\)-directions respectively (see Figure 17(a)). A sample deformed configuration of the L with a Voronoi mesh is depicted in Figure 17(b) with the displacement magnitude \(|\mathbf{u}|\) plotted on the colour axis. The non-convex corner of the L-shaped geometry introduces a strong singularity that provides a significant challenge for numerical analysis techniques. Thus, this problem is used to provide insight into the efficacy of the proposed coarsening procedures in cases of 'very challenging' problems. Figure 16: \(\mathcal{H}^{1}\) error vs \(n_{\text{v}}\) for the plate with hole problem. The mesh evolution during the coarsening process for the L-shaped domain problem is depicted in Figure 18 for the cases of the displacement-based and energy error-based coarsening procedures with \(T=20\%\) on Voronoi meshes. Meshes are shown at various coarsening steps with step 1 corresponding to the initial mesh. The coarsening behaviour is similar for both the displacement-based and energy error-based coarsening procedures. The mesh remains fine around the corner of the L. This is expected as the corner of the L induces a singularity in the solution and, as such, is the most complex region of the domain and requires the finest discretization possible. Conversely, the rest of the domain experiences very simple, almost linear, deformation, particularly in the regions furthest from the corner. Thus, a very low discretization density is required in these regions and they are increasingly coarsened as the number of coarsening steps/iterations performed increases. This dichotomy between fine and coarse mesh regions is expected for this problem, thus, indicating sensible mesh evolution. Furthermore, the meshes presented here generated by coarsening finer uniform initial meshes are qualitatively very similar to the meshes presented in [32] that were generated using adaptive refinement of initially uniform coarse meshes on the same example problem. This similarity exists for all of the example problems presented in this work and further emphasises the efficacy of the proposed coarsening procedures. Figure 17: L-shaped domain (a) geometry and (b) sample deformed configuration.
The distribution of the \(\mathcal{H}^{1}\) error over the domain during the mesh coarsening process for the L-shaped domain problem is depicted in Figure 19. The \(\mathcal{H}^{1}\) error is depicted in a logarithmic scale on Voronoi meshes for the cases of the displacement-based and energy error-based coarsening procedures with \(T=20\%\). It is, again, clear from the error distribution in step 1, i.e. Figure 19(a), that the mesh evolution illustrated in Figure 18 is sensible. As seen in the previous examples, the error distribution over the domain becomes increasingly even during the mesh coarsening process. However, in this 'very challenging' problem the singularity at the corner of the L is a clear error hotspot. In problems containing singularities it is not possible to create a perfectly uniform error distribution. Nevertheless, the improved error distribution over the rest of the problem domain demonstrates the efficacy of the proposed coarsening procedures. Figure 18: Mesh coarsening process for the L-shaped domain problem on Voronoi meshes for the displacement-based and energy error-based coarsening procedures with \(T=20\%\). The convergence behaviour in the \(\mathcal{H}^{1}\) error norm of the VEM for the L-shaped domain problem using the displacement-based and energy error-based coarsening procedures is depicted in Figure 20 on a logarithmic scale. Here, the convergence behaviour of the displacement-based and energy error-based procedures is plotted on the same axes to comparatively assess the performance of the procedures. These comparisons are made for the cases of structured (top row) and Voronoi meshes (bottom row), and for the coarsening thresholds \(T=5\%\) (left column) and \(T=20\%\) (right column). For this 'very challenging' problem the proposed coarsening procedures exhibit a very high degree of efficacy on both structured and Voronoi meshes, and for both choices of \(T\). The coarsening procedures eliminate a significant portion of the number of degrees of freedom while introducing a negligible amount of error. The coarsening procedures exhibit good performance even on coarse initial uniform meshes. Similar levels of efficacy are exhibited by the displacement-based and energy error-based coarsening procedures. Differences are only evident in the very coarse mesh range where the displacement-based procedure slightly outperforms the energy error-based procedure. This is most likely because the accuracy of the recovered/super-convergent stresses, used in the computation of the energy error-based indicator, depends on the accuracy of the global solution. Thus, as the global accuracy decays in the very coarse mesh range the accuracy and efficacy of the energy error-based indicator decay too. The coarsening procedure exhibits very good performance for a large portion of the range until, quite suddenly, the error increases very rapidly. This is because the coarsening procedure ensures that patches of elements are coarsened at every step. For much of the range coarsening is focused at the ends of the L, i.e. far from the corner of the L. Eventually, it is no longer possible to coarsen these regions while preserving the domain geometry, which means that the area around the corner of the L is then coarsened. Since this area coincides with a singularity, the coarsening causes significant error increases. In practice, the coarsening procedure would be terminated before the most critical regions of a domain are coarsened.
However, for investigative purposes, and to test the coarsening procedures as thoroughly as possible, the coarsening procedure is here run until no more coarsening is possible. Somewhat similar behaviour is also observed in the other example problems; however, it is most evident/exaggerated in this example problem due to the strong singularity at the corner of the L and the simple deformation in the rest of the domain. Figure 19: \(\mathcal{H}^{1}\) error distribution during the coarsening process for the L-shaped domain problem on Voronoi meshes using the displacement-based and energy error-based coarsening procedures with \(T=20\%\). The results presented here, and in the other example problems, demonstrate the power of the proposed coarsening procedures in improving the efficiency of a simulation by removing the least critical elements and degrees of freedom while maintaining a high level of solution accuracy. Furthermore, it has been found that the efficacy of the coarsening procedures is not particularly sensitive to the choice of the coarsening threshold \(T\). Thus, higher values of \(T\) can be used to coarsen the mesh using fewer coarsening steps, and significantly faster, than lower values while yielding a similar degree of efficacy. ## 7 Discussion and conclusion In this work two novel mesh coarsening indicators have been proposed that are suitable for virtual element applications. Additionally, a simple procedure for selecting patches of elements qualifying for coarsening has been presented along with a novel mesh coarsening procedure that is suitable for both structured and unstructured/Voronoi meshes. The proposed displacement-based and energy error-based coarsening indicators are computed over patches of elements and are motivated, respectively, by trying to identify groups/patches of elements over which the displacement field is approximately linear, and to predict the approximate error in the energy norm that would result from the coarsening of a particular patch of elements. The mesh coarsening procedure involves constructing the geometry of a coarsened element by creating a convex hull around the patch of elements identified for coarsening. The geometry of the convex hull is created using a novel edge straightening procedure. The proposed mesh coarsening procedures were studied numerically over a range of benchmark problems of varying complexity. For each problem the mesh evolution during coarsening was analysed along with the distribution of error over the problem domain. In terms of the mesh evolution, the efficacy of the proposed coarsening procedures was evident in several ways. Firstly, it was observed that the mesh density was significantly reduced in what were identified as the least critical portions of the domains. Secondly, similar performance was demonstrated in the cases of both structured and Voronoi meshes. Figure 20: \(\mathcal{H}^{1}\) error vs \(n_{\mathrm{v}}\) for the L-shaped domain problem.
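As a small illustration of the hull-based element construction summarised above, the sketch below builds the polygon of a coarsened element from the patch nodes and applies the geometry-preservation test of Section 5.3.1. It assumes SciPy is available, omits the paper's edge-straightening step, and uses illustrative names only.

```python
import numpy as np
from scipy.spatial import ConvexHull

def coarsened_element_polygon(patch_node_coords):
    """Polygon of a coarsened element: the convex hull of the patch's nodes,
    returned in counter-clockwise order."""
    pts = np.asarray(patch_node_coords, dtype=float)
    hull = ConvexHull(pts)
    return pts[hull.vertices]          # hull.vertices is CCW for 2-D input

def patch_preserves_geometry(patch_node_coords, boundary_node_coords, tol=1e-10):
    """Eligibility test of Section 5.3.1: every patch node lying on the domain
    boundary or a corner must also lie on the boundary of the convex hull."""
    pts = np.asarray(patch_node_coords, dtype=float)
    hull_pts = pts[ConvexHull(pts).vertices]

    def on_hull_boundary(p):
        # The point lies on the hull boundary if it lies on one of the hull edges.
        for a, b in zip(hull_pts, np.roll(hull_pts, -1, axis=0)):
            ab, ap = b - a, p - a
            cross = ab[0] * ap[1] - ab[1] * ap[0]
            if abs(cross) < tol and -tol <= ap @ ab <= ab @ ab + tol:
                return True
        return False

    return all(on_hull_boundary(np.asarray(p, dtype=float))
               for p in boundary_node_coords)
```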
2304.11394
Spin Statistics and Field Equations for any Spin
In this article we prove the spin statistics theorem for an arbitrary massive (A, B) field in a representation theoretic manner. General Gamma matrices are introduced, and explicit forms for low spin are calculated. Spin sums and twisted spin sums are introduced to prove spin statistics and derive field equations respectively, and their relations are discussed. The Klein-Gordon equation is just the condition on the 4-momentum. The Dirac equation (or the massive Weyl equations) and the Proca equation are shown to be examples of our general field equation.
Zixuan Feng
2023-04-22T13:08:21Z
http://arxiv.org/abs/2304.11394v1
# Spin Statistics and Field Equations for any Spin ###### Abstract In this article we prove the spin statistics theorem for an arbitrary massive \((A,B)\) field in a representation theoretic manner. General Gamma matrices are introduced, and explicit forms for low spin are calculated. Spin sums and twisted spin sums are introduced to prove spin statistics and derive field equations respectively, and their relations are discussed. The Klein-Gordon equation is just the condition on the 4-momentum. The Dirac equation (or the massive Weyl equations) and the Proca equation are shown to be examples of our general field equation. ###### Contents * 1 Introduction * 2 Gamma matrices * 2.1 General spin * 2.2 Examples * 3 Spin statistics for massive particles * 4 Field equations for massive particles * 4.1 General equation * 4.2 Examples * 5 Conclusions **Keywords:** Quantum field theory; Spin statistics; Gamma matrices; Dirac equation; Proca equation ## 1 Introduction In this article we prove the spin statistics theorem in a representation theoretic manner for an arbitrary massive field of spin \((A,B)\). We also derive a general field equation for massive fields, and show that the Dirac equation and the Proca equation are examples of it. The logic chain of this article follows Weinberg's path ([14]) of deriving quantum field theory. We first have a unitary representation of the Poincaré group on the state space (Fock space), then we define creation and annihilation operators explicitly, and then use them to construct free quantum fields. In this approach the field equation does not serve as an equation that characterizes the field uniquely, but is a property that the field possesses. So the field equation is not of great importance from a certain point of view. Fock space representation \(\longrightarrow\) Creation and annihilation \(\longrightarrow\) Free quantum field \(\longrightarrow\) Field equation (1) First we recall the construction of the free quantum field. We define the annihilation field \(\psi_{l}^{+}(x)\) and the creation field \(\psi_{l}^{-}(x)\): \[\begin{split}\psi_{l}^{+}(x)&=\sum_{\sigma n}\int d^{3}pu_{l}(x;\mathbf{p},\sigma,n)a(\mathbf{p},\sigma,n),\\ \psi_{l}^{-}(x)&=\sum_{\sigma n}\int d^{3}pv_{l}(x;\mathbf{p},\sigma,n)a^{\dagger}(\mathbf{p},\sigma,n)\end{split} \tag{2}\] We require them to satisfy the Lorentz transformation law of the field: 1 Footnote 1: Our convention is different from that of [14].
**Axiom 1**.: (Lorentz transformation of field) We have a finite dimensional representation \(D\) of the universal cover \(SL(2,\mathbb{C})\) of the Lorentz group such that the 'Lorentz transformation of the field' is satisfied: \[\begin{split} U_{0}(\Lambda,a)^{-1}\psi_{l}^{+}(\lambda(\Lambda)x+a)U_{0}(\Lambda,a)&=D_{l\bar{l}}(\Lambda)\psi_{\bar{l}}^{+}(x)\\ U_{0}(\Lambda,a)^{-1}\psi_{l}^{-}(\lambda(\Lambda)x+a)U_{0}(\Lambda,a)&=D_{l\bar{l}}(\Lambda)\psi_{\bar{l}}^{-}(x)\\ &\forall\Lambda\in SL(2,\mathbb{C})\end{split} \tag{3}\] For Axiom 1 to be satisfied, it is equivalent that \[\begin{split} u(x;\mathbf{q})=(2\pi)^{-3/2}e^{ip\cdot x}u(\mathbf{q})\\ v(x;\mathbf{q})=(2\pi)^{-3/2}e^{-ip\cdot x}v(\mathbf{q})\end{split} \tag{4}\] and: \[\begin{split} u(\mathbf{q})&=(m/q^{0})^{1/2}D(L(q))u(\mathbf{0})\\ v(\mathbf{q})&=(m/q^{0})^{1/2}D(L(q))v(\mathbf{0})\end{split} \tag{5}\] and: \[\begin{split} u(\mathbf{0})D^{(j_{n})}(W)&=D(W)u(\mathbf{0})\\ v(\mathbf{0})D^{(j_{n})^{*}}(W)&=D(W)v(\mathbf{0})\end{split} \tag{6}\] \(\forall W\in\mathcal{W}\), where we have used the convention: \(u(q)=u(\mathbf{q})\) denotes the matrix \(\{u_{l}(\mathbf{q},\sigma)\}_{l|\sigma}\), and the multiplication is the matrix multiplication. Considering an irreducible field representation of spin \((A,B)\), we write \[\begin{split}\psi^{AB}_{ab}(x)=&(2\pi)^{-3/2}\sum_{\sigma}\int d^{3}p[e^{ip\cdot x}\kappa^{AB}u^{AB}_{ab}(\mathbf{p},\sigma)a(\mathbf{p},\sigma)+e^{-ip\cdot x}\lambda^{AB}v^{AB}_{ab}(\mathbf{p},\sigma)a^{c\dagger}(\mathbf{p},\sigma)]\end{split} \tag{7}\] with \(\kappa\) and \(\lambda\) arbitrary constants to be determined. The next property we want the quantum field to possess is the microscopic causality: **Axiom 2**.: (Microscopic causality) We should have \[[\psi^{AB}_{ab}(x),\psi^{CD\dagger}_{cd}(y)]_{\mp}=0,\quad(x-y)^{2}>0 \tag{8}\] This requirement leads to the so-called 'spin statistics theorem', which is discussed in Section 3. The central object here is the so-called 'spin sum': \[(2p^{0})^{-1}\pi^{AB,CD}(\mathbf{p})\equiv u^{AB}(\mathbf{p})u^{CD\dagger}(\mathbf{p})=v^{AB}(\mathbf{p})v^{CD\dagger}(\mathbf{p}) \tag{9}\] and we can show that it may be expressed as a covariant polynomial of \(p^{\mu}\) using generalized gamma matrices, which we will discuss in Section 2. As a result we can fix a normalization of \(\kappa,\lambda\): \[\begin{split}\psi^{AB}_{ab}(x)=&(2\pi)^{-3/2}\sum_{\sigma}\int d^{3}p[e^{ip\cdot x}u^{AB}_{ab}(\mathbf{p},\sigma)a(\mathbf{p},\sigma)+e^{-ip\cdot x}(-)^{2B}v^{AB}_{ab}(\mathbf{p},\sigma)a^{c\dagger}(\mathbf{p},\sigma)]\end{split} \tag{10}\] where the factor \((-)^{2B}\) is essential both for spin statistics and for the existence of a field equation in which we combine the two equations for the positive frequency part and the negative frequency part, as discussed in Section 4. In writing down a field equation, we use a modified version of the spin sum, called the 'twisted spin sum' in this article. ## 2 Gamma matrices ### General spin We can prove that for any half integers \(A,B,C,D,K\) satisfying \[\max\{|A-D|,|B-C|\}\leq K\leq\min\{A+D,B+C\} \tag{11}\] with \(2K,2A+2D,2B+2C\) having the same parity, 2 there exists a set of non-zero \((2A+1)(2B+1)\times(2C+1)(2D+1)\) matrices Footnote 2: This parity condition will be implicit when we write down the condition (11) afterwards.
\[\begin{split}\underset{K}{ABCD}T^{\mu_{1}\mu_{2}\cdots\mu_{2K}},\quad\mu_{1},\mu_{2},\cdots,\mu_{2K}=0,1,2,3\end{split} \tag{12}\] with the properties: * \(T\) is symmetric in \(\mu\)'s. * \(T\) is traceless in \(\mu\)'s, i.e., \[g_{\mu_{1}\mu_{2}}T^{\mu_{1}\mu_{2}\cdots\mu_{2K}}=0\] (13) * \(T\) is a tensor in \(\mu\)'s, i.e., \[D^{AB}(\Lambda)T^{\mu_{1}\mu_{2}\cdots\mu_{2K}}D^{CD}(\Lambda)^{\dagger}= \Lambda_{\nu_{1}}{}^{\mu_{1}}\Lambda_{\nu_{2}}{}^{\mu_{2}}\cdots\Lambda_{\nu_ {2K}}{}^{\mu_{2K}}T^{\nu_{1}\nu_{2}\cdots\nu_{2K}}\] (14) where the ordinary matrix multiplication is understood on the LHS. Consider the vector space \(V\) consisting of all complex \((2A+1)(2B+1)\times(2C+1)(2D+1)\)-matrices. They furnish a representation of \(SL(2,\mathbb{C})\) by \[M\mapsto D^{AB}(\Lambda)MD^{CD\dagger}(\Lambda) \tag{15}\] One can see that this is isomorphic to the \((A,B)\otimes(D,C)\) representation, which is because the complex conjugate of \(D^{CD}\) is isomorphic (but not directly equal) to \(D^{DC}\). And we have 3 Footnote 3: From now on we use the convention. \[\begin{split}(A,B)\otimes(D,C)=(A\otimes D,B\otimes C)& =(\bigoplus_{|A-D|\leq K_{1}\leq A+D}K_{1},\bigoplus_{|B-C|\leq K _{2}\leq B+C}K_{2})\\ &=\bigoplus_{|A-D|\leq K_{1}\leq A+D,|B-C|\leq K_{2}\leq B+C}(K _{1},K_{2})\end{split} \tag{16}\] So it contains a \((K,K)\)-subrep. By the fact 4 that \((K,K)\)-representation of \(SL(2,\mathbb{C})\) consists of all symmetric traceless tensors of rank \(2K\), so we define \(T\)'s to be the standard basis in this description and we are done. Footnote 4: See [11] ### Examples Pauli matrices \(\sigma^{\mu}\) are the \(T\) matrices for \((\frac{1}{2},0,\frac{1}{2},0)\), i.e. \(\sigma^{\mu}=\)\({}^{1/2,0,1/2,0}_{1\ /2}T^{\mu}\), where \[\sigma^{0}=I=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\quad\sigma^{1}=X=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\quad\sigma^{2}=Y=\begin{pmatrix}0&-i\\ i&0\end{pmatrix}\quad\sigma^{3}=Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix} \tag{17}\] Indeed we see in the decomposition \((\frac{1}{2},0)\otimes(0,\frac{1}{2})=(\frac{1}{2},\frac{1}{2})\) there is no other subreps, so we only need to show they obey the property \[D^{1/2,0}(\Lambda)\sigma^{\mu}D^{1/2,0\dagger}(\Lambda)=\Lambda_{\nu}{}^{\mu} \sigma^{\nu} \tag{18}\] or its Lie algebra level (where \(J^{\mu\nu}\) denotes the image of Lie algebra basis under representation): \[i[J^{\mu\nu},\sigma^{\rho}]=-g^{\rho\mu}\sigma^{\nu}+g^{\rho\nu}\sigma^{\mu} \tag{19}\] This is by the definition of \((\frac{1}{2},0)\) representation: \[\begin{split} J_{1}&\mapsto\frac{1}{2}X\\ J_{2}&\mapsto\frac{1}{2}Y\\ J_{3}&\mapsto\frac{1}{2}Z\\ K_{1}&\mapsto-\frac{i}{2}X\\ K_{2}&\mapsto-\frac{i}{2}Y\\ K_{3}&\mapsto-\frac{i}{2}Z\end{split} \tag{20}\] Similarly we can show that \(\bar{\sigma}^{\mu}\equiv(I,-\sigma)\) are the \(T\) matrices for \((0,\frac{1}{2},0,\frac{1}{2})\), i.e. \(\bar{\sigma}^{\mu}=\genfrac{}{}{0.0pt}{}{0,1/2,0,1/2}{1/2}\,T^{\mu}\). ## 3 Spin statistics for massive particles The results in this subsection apply only when the field representation is irreducible and the particle is massive. Let's compute \([\psi^{AB}_{ab}(x),\psi^{CD\dagger}_{cd}(y)]_{\mp}\). 
Firstly we have the equality \[\begin{split}[\psi^{AB}_{ab}(x),\psi^{CD\dagger}_{cd}(y)]_{\mp}=&(2\pi)^{-3/2}\int d^{3}p(2p^{0})^{-1}\pi^{AB,CD}_{ab,cd}(\mathbf{p})\\ &\times[\kappa^{AB}\kappa^{CD*}e^{ip\cdot(x-y)}\mp\lambda^{AB}\lambda^{CD*}e^{-ip\cdot(x-y)}]\end{split} \tag{21}\] where **Definition 3.1**.: (Spin sum) Define the spin sum as \[(2p^{0})^{-1}\pi^{AB,CD}(\mathbf{p})\equiv u^{AB}(\mathbf{p})u^{CD\dagger}(\mathbf{p})=v^{AB}(\mathbf{p})v^{CD\dagger}(\mathbf{p}) \tag{22}\] It is a \((2A+1)(2B+1)\times(2C+1)(2D+1)\)-matrix. **Remark 3.1**.: We use the convention that \(u^{AB}(p)\) is a matrix \((u^{AB}_{ab}(p,\sigma))_{ab,\sigma}\), having \((2A+1)(2B+1)\) rows and \(2j+1\) columns. The above is understood as a matrix multiplication. Written explicitly in components, it reads \[(2p^{0})^{-1}\pi^{AB,CD}_{ab,cd}(\mathbf{p})\equiv\sum_{\sigma}u^{AB}_{ab}(\mathbf{p},\sigma)u^{CD*}_{cd}(\mathbf{p},\sigma)=\sum_{\sigma}v^{AB}_{ab}(\mathbf{p},\sigma)v^{CD*}_{cd}(\mathbf{p},\sigma) \tag{23}\] But for convenience, we adopt the more concise notation involving matrix multiplication. We immediately see its Lorentz transformation property: **Proposition 3.1**.: \[\pi^{AB,CD}(\Lambda p)=D^{AB}(\Lambda)\pi^{AB,CD}(p)D^{CD}(\Lambda)^{\dagger}\] (24) Proof.: \[D^{AB}(\Lambda)\pi^{AB,CD}(p)D^{CD\dagger}(\Lambda) =2p^{0}D^{AB}(\Lambda)u^{AB}(p)u^{CD\dagger}(p)D^{CD\dagger}(\Lambda)\] (25) \[=2(\Lambda p)^{0}u^{AB}(\Lambda p)D^{j}(W(\Lambda,p))D^{j\dagger}(W(\Lambda,p))u^{CD\dagger}(\Lambda p)\] \[=2(\Lambda p)^{0}u^{AB}(\Lambda p)u^{CD\dagger}(\Lambda p)\] \[=\pi^{AB,CD}(\Lambda p)\] The major part of the proof of spin statistics is that we can express the spin sum as a polynomial with a good parity property, and it is this polynomial's parity that determines whether the particle is a boson or a fermion. The parity is used in absorbing the minus sign in \(e^{-ip\cdot x}\) of \(\psi^{(-)}\). 5 Footnote 5: Until now we have not assumed the particle to be massive. **Theorem 3.1**.: 6 Footnote 6: Weinberg in [20] proved the special case when \(\mathbf{p}=(0,0,p^{3})\). But it does not directly imply the general case. In the massive case there is a 4-variable polynomial \(P\), whose coefficients are \((2A+1)(2B+1)\times(2C+1)(2D+1)\) matrices, such that on the mass shell its value always equals the spin sum \[\pi^{AB,CD}(\mathbf{p},\sqrt{\mathbf{p}^{2}+m^{2}})=P(\mathbf{p},\sqrt{\mathbf{p}^{2}+m^{2}}) \tag{26}\] and it is either an odd function or an even function determined by the parity of \(2A+2D\): \[P(-\mathbf{p},-p^{0})=(-1)^{2A+2D}P(\mathbf{p},p^{0}) \tag{27}\] for all 4-vectors \(p\). Let's prove the theorem. First we can prove that the initial value \(\pi^{AB,CD}(\mathbf{0})\) can be expressed as a linear combination of the above matrices \[\pi^{AB,CD}(\mathbf{0})=\sum_{K\ \text{satisfying}\ (11)}\xi^{ABCD}_{K}\,{}^{ABCD}_{K}T^{00\cdots 0} \tag{28}\] This is because by (24), \(\pi^{AB,CD}(\mathbf{0})\) is invariant under the representation restricted to \(SU(2)\), so it must lie in the direct sum of 0-subrepresentations of \(SU(2)\) when we decompose \[\begin{split}(A,B)\otimes(D,C)|_{SU(2)}&=\bigoplus_{|A-D|\leq K_{1}\leq A+D,|B-C|\leq K_{2}\leq B+C}(K_{1},K_{2})|_{SU(2)}\\ &=\bigoplus_{|A-D|\leq K_{1}\leq A+D,|B-C|\leq K_{2}\leq B+C}\bigoplus_{|K_{1}-K_{2}|\leq K_{3}\leq K_{1}+K_{2}}K_{3}\end{split} \tag{29}\] We see that each \(0\)-subrep of \(SU(2)\) comes from a \((K,K)\)-subrep of \(SL(2,\mathbb{C})\) where \(K\) satisfies (11).
Also, by (14), \(T^{00\cdots 0}\) is non-zero and invariant under \(SU(2)\), so they form a basis for the \(0\)-subspace of \(SU(2)\). Finally, combining (24), (14) and (28), we have the equality on the mass shell \[\pi^{AB,CD}(\mathbf{p})=\sum_{K\ \text{satisfying}\ (11)}\xi^{ABCD}_{K}\,{}^{ABCD}_{K}T^{\mu_{1}\mu_{2}\cdots\mu_{2K}}m^{-2K}p_{\mu_{1}}p_{\mu_{2}}\cdots p_{\mu_{2K}} \tag{30}\] So we define the polynomial as the RHS. The proof is done. With this theorem we can finally begin the proof for spin statistics. First we show that we need only verify a simpler case: **Lemma 3.1**.: 7 Footnote 7: Weinberg in [20] mentioned and used this result. For \([\psi_{ab}^{AB}(x),\psi_{cd}^{CD\dagger}(y)]_{\mp}\) to vanish for space-like \(x-y\), it is sufficient that it vanishes when \(x^{0}=y^{0}\). Proof.: We can choose a Lorentz transformation \(\Lambda\) such that \((\Lambda x)^{0}=(\Lambda y)^{0}\). Under Lorentz transformation we have \[[\psi^{AB}(x),\psi^{CD\dagger}(y)]_{\mp}=U_{0}(\Lambda)^{-1}D^{AB}(\Lambda^{-1})[\psi^{AB}(\Lambda x),\psi^{CD\dagger}(\Lambda y)]_{\mp}D^{CD\dagger}(\Lambda^{-1})U_{0}(\Lambda) \tag{31}\] So it follows. Now we rewrite the above polynomial: substitute all powers of \(p^{0}\) of degree larger than \(1\) using \((p^{0})^{2}=\mathbf{p}^{2}+m^{2}\). Then the value on the mass shell does not change (but the polynomial changes) and can be written as \[\pi^{AB,CD}(\mathbf{p})=P(\mathbf{p})+2\sqrt{\mathbf{p}^{2}+m^{2}}Q(\mathbf{p}) \tag{32}\] where \(P\) and \(Q\) are \(3\)-variable polynomials in \(\mathbf{p}\) alone, with \[\begin{split} P(-\mathbf{p})&=(-)^{2A+2D}P(\mathbf{p})\\ Q(-\mathbf{p})&=-(-)^{2A+2D}Q(\mathbf{p})\end{split} \tag{33}\] For \(x-y\) space-like, by the above lemma we can adopt a Lorentz frame in which \(x^{0}=y^{0}\), and thus \[\begin{split}[\psi^{AB}_{ab}(x),\psi^{CD\dagger}_{cd}(y)]_{\mp}=&[\kappa^{AB}\kappa^{CD*}\mp(-)^{2A+2D}\lambda^{AB}\lambda^{CD*}]P(-i\nabla)\Delta_{+}(\mathbf{x}-\mathbf{y},0)\\ &+[\kappa^{AB}\kappa^{CD*}\pm(-)^{2A+2D}\lambda^{AB}\lambda^{CD*}]Q(-i\nabla)\delta^{3}(\mathbf{x}-\mathbf{y})\end{split} \tag{34}\] In order that this should vanish when \(\mathbf{x}\neq\mathbf{y}\), we must have \[\kappa^{AB}\kappa^{CD*}=\pm(-1)^{2A+2D}\lambda^{AB}\lambda^{CD*} \tag{35}\] For the case where \(A=C\) and \(B=D\), this becomes \[|\kappa^{AB}|^{2}=\pm(-)^{2A+2B}|\lambda^{AB}|^{2} \tag{36}\] This is possible if and only if \[\pm(-1)^{2A+2B}=+1 \tag{37}\] and \[|\kappa^{AB}|^{2}=|\lambda^{AB}|^{2} \tag{38}\] Returning to the general case, we have \[\frac{\kappa^{AB}}{\kappa^{CD}}=(-1)^{2A-2C}\frac{\lambda^{AB}}{\lambda^{CD}} \tag{39}\] To summarize what we have proved above: **Theorem 3.2**.: (Spin statistics, massive irreducible case) 89 Footnote 8: This result is stated in [11], but there is no rigorous proof. Footnote 9: The factor \((-)^{2B}\) is irrelevant when we just consider one field, but is essential when considering different fields. In order for all fields \(\psi^{AB}(x)\) constructed from a massive particle with spin \(j\) to satisfy Axiom 2, it is equivalent that whether it is a boson or a fermion depends on whether \(2j\) is even or odd, and \(\lambda^{AB}=(-)^{2B}\kappa^{AB}c\), where \(c\) is a constant independent of \(A,B\). **Example 3.1**.: Massive scalar field has \((A,B)=(0,0)\) and \(j=0\), so it describes bosons. Massive vector field has \((A,B)=(\frac{1}{2},\frac{1}{2})\) and \(j=0,1\), so it describes bosons. Massive Weyl field has \((\frac{1}{2},0)\) or \((0,\frac{1}{2})\) and \(j=\frac{1}{2}\), so it describes fermions.
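The bookkeeping behind condition (11) and the statistics assignment of Theorem 3.2, as listed in Example 3.1, can be tabulated with a few lines of Python. This is only an illustrative helper, not part of the paper.

```python
from fractions import Fraction as F

def statistics(j):
    """Theorem 3.2: a massive spin-j particle is a boson iff 2j is even."""
    return "boson" if (2 * F(j)) % 2 == 0 else "fermion"

def allowed_K(A, B, C, D):
    """Half-integers K allowed by condition (11), including the parity rule."""
    A, B, C, D = (F(x) for x in (A, B, C, D))
    if (2 * (A + D)) % 2 != (2 * (B + C)) % 2:
        return []                                   # parities incompatible: no K
    K, hi, out = max(abs(A - D), abs(B - C)), min(A + D, B + C), []
    while K <= hi:
        out.append(K)
        K += 1
    return out

print(statistics(F(1, 2)))                          # fermion
print(allowed_K(F(1, 2), 0, F(1, 2), 0))            # [1/2] -> the Pauli sigma^mu case
```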
So we can choose a normalization for each \((A,B)\) field separately, and for \(c\) we can adjust the phase in the definition of the creation and annihilation operators so that \(c=1\). As a result we have: \[\psi^{AB}_{ab}(x)= (2\pi)^{-3/2}\sum_{\sigma}\int d^{3}p[e^{ip\cdot x}u^{AB}_{ab}({\bf p},\sigma)a({\bf p},\sigma)+e^{-ip\cdot x}(-)^{2B}v^{AB}_{ab}({\bf p},\sigma)a^{c\dagger}({\bf p},\sigma)] \tag{40}\] ## 4 Field equations for massive particles ### General equation For a quantum field \(\psi^{+}_{l}(x)=(2\pi)^{-3/2}\int d^{3}pe^{ip\cdot x}u_{l}({\bf p},\sigma)a({\bf p},\sigma)\), denote by \[\psi^{(+)AB}_{l}(p)=(2\pi)^{-3/2}u^{AB}_{l}({\bf p},\sigma)a({\bf p},\sigma) \tag{41}\] its Fourier transform, and similarly \[\psi_{l}^{(-)AB}(p)=(2\pi)^{-3/2}(-)^{2B}v_{l}^{AB}({\bf p},\sigma)a^{\dagger}({\bf p},\sigma) \tag{42}\] Clearly by unitarity of \(D^{CD}|_{SU(2)}\), we have \(u^{CD\dagger}(0)u^{CD}(0)=I\), so we have \[\begin{split}\psi^{(+)AB}(p)&=\Pi^{AB,CD}(p)\psi^{(+)CD}(p)\\ \psi^{(-)AB}(p)&=(-)^{2B+2D}\Pi^{AB,CD}(p)\psi^{(-)CD}(p)\end{split} \tag{43}\] where we have defined: **Definition 4.1**.: (Twisted spin sum) Define the twisted spin sum \[\begin{split}\Pi^{AB,CD}(p)&\equiv D^{AB}(Lp)u^{AB}({\bf 0})u^{CD\dagger}({\bf 0})D^{CD-1}(Lp)\\ &=D^{AB}(Lp)v^{AB}({\bf 0})v^{CD\dagger}({\bf 0})D^{CD-1}(Lp)\end{split} \tag{44}\] It has a transformation rule just like that of the spin sum: **Proposition 4.1**.: \[\Pi^{AB,CD}(\Lambda p)=D^{AB}(\Lambda)\Pi^{AB,CD}(p)D^{CD-1}(\Lambda)\] (45) Proof.: \[\begin{split}& D^{AB}(\Lambda)\Pi^{AB,CD}(p)D^{CD-1}(\Lambda)=D^{AB}(\Lambda)D^{AB}(Lp)u^{AB}(0)u^{CD\dagger}(0)D^{CD-1}(Lp)D^{CD-1}(\Lambda)\\ &=D^{AB}(L(\Lambda p))D^{AB}(W(\Lambda,p))u^{AB}(0)u^{CD\dagger}(0)D^{CD-1}(W(\Lambda,p))D^{CD-1}(L(\Lambda p))\\ &=D^{AB}(L(\Lambda p))D^{AB}(W(\Lambda,p))u^{AB}(0)u^{CD\dagger}(0)D^{CD\dagger}(W(\Lambda,p))D^{CD-1}(L(\Lambda p))\\ &=D^{AB}(L(\Lambda p))u^{AB}(0)D^{j}(W(\Lambda,p))D^{j\dagger}(W(\Lambda,p))u^{CD\dagger}(0)D^{CD-1}(L(\Lambda p))\\ &=D^{AB}(L(\Lambda p))u^{AB}(0)u^{CD\dagger}(0)D^{CD-1}(L(\Lambda p))\\ &=\Pi^{AB,CD}(\Lambda p)\end{split}\] (46) where we have used the unitarity of \(D^{CD}|_{SU(2)}\) in the third line. So we have the analogue of Theorem 3.1 for the twisted spin sum: **Theorem 4.1**.: In the massive case there is a polynomial \(P\) of \(p^{\mu}\), with coefficients that are \((2A+1)(2B+1)\times(2C+1)(2D+1)\) matrices, whose value on the mass shell always equals the twisted spin sum, \[\Pi^{AB,CD}(\mathbf{p},\sqrt{\mathbf{p}^{2}+m^{2}})=P(\mathbf{p},\sqrt{\mathbf{p}^{2}+m^{2}}) \tag{47}\] and it is either an odd or an even function determined by the parity of \(2A+2C\): \[P(-\mathbf{p},-p^{0})=(-1)^{2A+2C}P(\mathbf{p},p^{0}) \tag{48}\]
At this point we can derive from (43) two field equations: \[\begin{split}\psi^{(+)AB}(x)&=\Pi^{AB,CD}(-i \partial)\psi^{(+)CD}(x)\\ \psi^{(-)AB}(x)&=(-)^{2A+2C}(-)^{2B+2D}\Pi^{AB,CD}( -i\partial)\psi^{(-)CD}(x)\end{split} \tag{50}\] where we have abused the notation to let \(\Pi^{AB,CD}(-i\partial)\) instead of \(P(-i\partial)\) denote replacing \(p_{\mu}\) by \(-i\partial_{\mu}\) in the polynomial \(P\). The two phases in the second line cancel, so we have the following one field equation: **Theorem 4.2**.: (Field equation of any spin, massive case) 1011 Footnote 10: [17] neglect the phase of negative energy part. Footnote 11: In the combination of positive and negative part, the phase \((-)^{2B}\) plays an essential role. The following field equation is satisfied: \[\psi^{AB}(x)=\Pi^{AB,CD}(-i\partial)\psi^{CD}(x) \tag{51}\] So clearly the field equation is not unique: we can adjust the polynomial \(P\) by replacing \((p_{0})^{2}\) with \(\mathbf{p}^{2}+m^{2}\) in any monomial, which do not change the value on the mass shell. But this is just adding a term of Klein-Gordan operator connected by several partial derivative operators. The degrees of monomials in \(P\) satisfy: \[\max\{|A-C|,|B-D|\}\leq d/2\leq\min\{A+C,B+D\} \tag{52}\] As a special case, the degrees of \(\Pi^{j,0;j,0}\) and \(\Pi^{0,j;0,j}\) are all \(0\). So we do not have a nontrivial field equation for \((j,0)\) or \((0,j)\) field with itself from the above procedure. 12 And \(\Pi^{j,0;0,j}\) and \(\Pi^{0,j;j,0}\) must be homogeneous of degree \(2j\). This is the case that we have a homogeneous (except one term) field equation. 13 Footnote 12: This coincides with Weinberg. Footnote 13: This is the result in Weinberg. But in the general case, the polynomial may not be homogeneous. Let's talk about the relation between ordinary spin sum and twisted spin sum. As is used in the proof of the field equation, \(D^{CD-T}\) is isomorphic (but not generally equal) \(D^{CD}\), and \(D^{CD*}\) is isomorphic to \(D^{DC}\). So we may expect that \(\pi^{AB,CD}(p)\) is almost the same as \(\Pi^{AB,DC}(p)\), because their Lorentz transformation rule is under the same representation. 
Define \[\Omega_{CD}:\begin{pmatrix}\mathbb{C}^{(2C+1)(2D+1)}&\rightarrow\mathbb{C}^{ (2C+1)(2D+1)}\\ e_{(c-1)(2D+1)+d}&\mapsto&e_{(d-1)(2C+1)+c}\end{pmatrix} \tag{53}\] This is just a basis transformation \[\Omega_{CD}:\begin{pmatrix}V^{C}\otimes V^{D}&\to V^{D}\otimes V^{C}\\ v\otimes w&\mapsto w\otimes v\end{pmatrix} \tag{54}\] So clearly \[\Omega_{CD}u^{CD}(\mathbf{0})=u^{DC}(\mathbf{0}) \tag{55}\] Also we can show that \(\Omega_{CD}\) is a homomorphism of representations: \[D^{DC-\dagger}(\Lambda)\Omega_{CD}=\Omega_{CD}D^{CD}(\Lambda) \tag{56}\] This is because \(\exp(-ig)^{-\dagger}=\exp(-ig^{\dagger})\) implies that the Lie algebra representation of \(D^{DC-\dagger}\) is: \[\mathbf{J}\mapsto \mathbf{J}^{D}\otimes I^{C}+I^{D}\otimes\mathbf{J}^{C} \tag{57}\] \[\mathbf{K}\mapsto +i(\mathbf{J}^{D}\otimes I^{C}-I^{D}\otimes\mathbf{J}^{C})\] So putting these together we have \[u^{DC\dagger}(\mathbf{0})D^{DC-1}(Lp) =u^{CD\dagger}(\mathbf{0})\Omega_{CD}^{\dagger}\Omega_{CD}^{- \dagger}D^{CD\dagger}(Lp)\Omega_{CD}^{\dagger} \tag{58}\] \[=u^{CD\dagger}(\mathbf{0})D^{CD\dagger}(Lp)\Omega_{CD}^{\dagger}\] Comparing with what we called the spin sum \[\pi^{AB,CD}(\mathbf{p})\equiv(2p^{0})u^{AB}(\mathbf{p})u^{CD\dagger}(\mathbf{ p})=(2m)D^{AB}(Lp)u^{AB}(\mathbf{0})u^{CD\dagger}(\mathbf{0})D^{CD\dagger}(Lp) \tag{59}\] we derive: **Proposition 4.2**.: (Relation between spin sum and twisted spin sum) \[\Pi^{AB,DC}(p)=\frac{1}{2m}\pi^{AB,CD}(p)\Omega_{CD}^{\dagger} \tag{60}\] Now we can formulate the field equation in terms of ordinary spin sum: **Theorem 4.3**.: (Field equation of any spin, massive case) The following field equation is satisfied: \[\psi^{AB}(x)=\frac{1}{2m}\pi^{AB,DC}(-i\partial)\Omega_{DC}^{\dagger}\psi^{CD} (x) \tag{61}\] In the special case where one of \(C,D\) is \(0\), \(\Omega_{CD}\) is the identity matrix, so the twisted spin sum is the same as the spin sum. This is the case in [10]. This can be seen more directly by noticing \[D^{0,j}(\Lambda) =D^{j,0.-\dagger}(\Lambda) \tag{62}\] \[D^{j,0}(\Lambda) =D^{0,j.-\dagger}(\Lambda)\] is not only isomorphic as representations, but directly equal in their standard basis. ### Examples The first example is massive spin-\(\frac{1}{2}\) particles with field rep \((\frac{1}{2},0)\) or \((0,\frac{1}{2})\). The corresponding field equation is what we called Weyl equations or Dirac equation. We can easily see that the zero momentum value coefficients and spin sums are: \[\begin{pmatrix}u^{1/2,0}(\mathbf{0})\\ u^{0,1/2}(\mathbf{0})\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\\ 1&0\\ 0&1\end{pmatrix} \tag{63}\] \[\begin{pmatrix}\pi^{1/2,0,1/2,0}(\mathbf{0})&\pi^{1/2,0,0,1/2}(\mathbf{0})\\ \pi^{0,1/2,1/2,0}(\mathbf{0})&\pi^{0,1/2,0,1/2}(\mathbf{0})\end{pmatrix}=2m \begin{pmatrix}I_{2}&I_{2}\\ I_{2}&I_{2}\end{pmatrix} \tag{64}\] By (11), the the upper-left term and the lower-right term when the momentum varies equal a polynomial of degree 1, while lower-left term and the upper-right term when the momentum varies equal polynomials of degree 0. 
And by the \(T\) matrices for \((1/2,0,1/2,0)\) and \((0,1/2,0,1/2)\) computed in section 2, the expansion of zero momentum spin sums are: \[\pi^{1/2,0,1/2,0}(\mathbf{0})=2m\sigma^{0},\quad\pi^{0,1/2,0,1/2}(\mathbf{0}) =2m\bar{\sigma}^{0} \tag{65}\] so we have \[\begin{pmatrix}\pi^{1/2,0,1/2,0}(p)&\pi^{1/2,0,0,1/2}(p)\\ \pi^{0,1/2,1/2,0}(p)&\pi^{0,1/2,0,1/2}(p)\end{pmatrix}=2\begin{pmatrix}-p_{ \mu}\sigma^{\mu}&mI_{2}\\ mI_{2}&-p_{\mu}\bar{\sigma}^{\mu}\end{pmatrix} \tag{66}\] We abbreviate \(\varphi=\psi^{1/2,0},\chi=\psi^{0,1/2}\), and the field equations (61) write: \[\varphi =\frac{1}{2m}\pi^{1/2,0,0,1/2}(-i\partial)\varphi \tag{67}\] \[\varphi =\frac{1}{2m}\pi^{1/2,0,1/2,0}(-i\partial)\chi\] \[\chi =\frac{1}{2m}\pi^{0,1/2,1/2,0}(-i\partial)\chi\] \[\chi =\frac{1}{2m}\pi^{0,1/2,0,1/2}(-i\partial)\varphi\] The first and the third line are trivial: \[\varphi =\varphi \tag{68}\] \[\chi =\chi\] and the other two lines are: \[m\varphi =i\sigma^{\mu}\partial_{\mu}\chi \tag{69}\] \[m\chi =i\bar{\sigma}^{\mu}\partial_{\mu}\varphi\] They are exactly Weyl equations for massive particles. The second example is massive spin-1 particles with field rep \((\frac{1}{2},\frac{1}{2})\). The corresponding field equation is what we called Proca equation, also called the massive Maxwell equations. The field rep \((\frac{1}{2},\frac{1}{2})\) is not directly equal, but is equivalent to the vector representation (or called canonical representation) \(D(\Lambda)=\Lambda\) of \(SO^{+}(1,3)\). We take this form. The spin-1 rep of \(su(2)\) is defined as: \[J_{z}\mapsto\begin{pmatrix}1&0&0\\ 0&0&0\\ 0&0&-1\end{pmatrix}\qquad\quad J_{+}\mapsto\begin{pmatrix}0&\frac{1}{\sqrt{2} }&0\\ 0&0&\frac{1}{\sqrt{2}}\\ 0&0&0\end{pmatrix}\qquad\quad J_{-}\mapsto\begin{pmatrix}0&0&0\\ \frac{1}{\sqrt{2}}&0&0\\ 0&\frac{1}{\sqrt{2}}&0\end{pmatrix} \tag{70}\] and the vector representation maps everything to itself: \[J_{z}=-i\begin{pmatrix}0&0&0&0\\ 0&0&1&0\\ 0&-1&0&0\\ 0&0&0&0\end{pmatrix}\qquad\quad J_{\pm}=\frac{-i}{\sqrt{2}}\begin{pmatrix}0&0&0 &0\\ 0&0&0&\mp i\\ 0&0&0&1\\ 0&\pm i&-1&0\end{pmatrix} \tag{71}\] Under the vector representation, the eigenvectors of z-axis angular-momentum is: \[e_{0}=\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix}\qquad\quad e_{+}=J_{+}e_{0}=-\frac{1}{\sqrt{2}}\begin{pmatrix}0 \\ 1\\ i\\ 0\end{pmatrix}\qquad\quad e_{-}=J_{-}e_{0}=\frac{1}{\sqrt{2}}\begin{pmatrix}0 \\ 1\\ -i\\ 0\end{pmatrix} \tag{72}\] So the initial value of the coefficients is: \[u(0)=\begin{pmatrix}0&0&0\\ -\frac{1}{\sqrt{2}}&0&\frac{1}{\sqrt{2}}\\ -\frac{i}{\sqrt{2}}&0&-\frac{i}{\sqrt{2}}\\ 0&1&0\end{pmatrix} \tag{73}\] and the initial value of the spin sum is: \[\pi(0)=2m\begin{pmatrix}0&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix} \tag{74}\] We can show that \[{}^{vec,vec}_{1}(T^{\mu\rho})^{\nu\sigma}=g^{\mu\sigma}g^{\nu\rho} \tag{75}\] Indeed we should verify: \[\Lambda^{\mu}{}_{\nu}(T^{ab})^{\nu}{}_{\rho}(\Lambda^{-1})^{\rho}{}_{\sigma}= \Lambda_{\rho}{}^{a}\Lambda_{\nu}{}^{b}(T^{\rho\nu})^{\mu}{}_{\sigma} \tag{76}\] This is equivalent to \[\Lambda^{\mu}{}_{\nu}g^{a\nu}g^{b}{}_{\rho}\Lambda_{\sigma}{}^{\rho}=\Lambda_ {\rho}{}^{a}\Lambda_{\nu}{}^{b}g^{\mu\rho}g^{\nu}{}_{\sigma} \tag{77}\] which is obvious. 
Also clearly we have \[{}^{vec,vec}_{0}T=I_{4} \tag{78}\] So the initial value of the spin sum is decomposed as: \[\pi(0)=2m[I_{4}-\begin{pmatrix}1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}]=2m[^{vec,vec}_{0}(T)^{\circ\circ}-^{vec,vec}_{1}(T^{\mu\rho})^{\circ\circ}] \tag{79}\] So the field equation is \[(I_{4}-\frac{\partial_{\mu}\partial_{\rho}}{m^{2}}{}^{vec,vec}_{1}(T^{\mu\rho})^{\circ\circ})B_{\circ}=B_{\circ} \tag{80}\] It is equivalent to: \[\partial_{\mu}\partial_{\rho}g^{\mu\sigma}g^{\nu\rho}B_{\sigma}=0 \tag{81}\] So the field equation becomes: \[\partial^{\nu}\partial^{\mu}B_{\mu}=0 \tag{82}\] Taking another \(\partial_{\nu}\) and using the Klein-Gordon equation, we have: \[\partial^{\mu}B_{\mu}=0 \tag{83}\] This, combined with the Klein-Gordon equation, is equivalent to the famous Proca equation: \[\partial_{\mu}(\partial^{\mu}B^{\nu}-\partial^{\nu}B^{\mu})+m^{2}B^{\nu}=0 \tag{84}\] **Remark 4.1**.: Notice that for the vector field of massless helicity-1 particles, (83) is a gauge fixing condition we artificially add to the field, while in the massive case it is a field equation that must be satisfied. So the vector field for massive spin-1 particles does not have gauge invariance. ## 5 Conclusions In this article we proved spin statistics for an arbitrary massive \((A,B)\) field. The quantity 'spin sum' \(\pi^{AB}(p)\) is also used to derive the massive field equations, after a little modification. The Klein-Gordon equation is just a condition on the 4-momentum in this context. Explicit calculation shows that the (massive) Weyl equations, the Dirac equation and the Proca equation are all examples of our general field equation. In the case of \((A,B,C,D)=(j,0,0,j)\) or \((A,B,C,D)=(0,j,j,0)\), the polynomial of the twisted spin sum is homogeneous, so the field equation contains a constant term plus a homogeneous part, thanks to the condition on \(K\). But in the general case it may not be homogeneous. In this article we just proved the 'existence' part of the polynomial of the spin sum; what remains to be found is an explicit formula for the coefficients \(\xi^{ABCD}_{K}\). Weinberg in [10] discussed its explicit formula. Also, in the massless case the proof in this article cannot be directly used. One solution is to derive an explicit expansion for the spin sums \(\pi^{j,0}(p)\) and \(\pi^{0,j}(p)\), as in [10], and then pass to the \(m\to 0\) limit. Another is to notice that \((A,B)\) fields can be constructed out of \((j,0)\) and \((0,j)\) fields. Also, the Rarita-Schwinger equation is not discussed in this article.
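As a closing numerical illustration of the covariance property (14) in the Pauli-matrix case of Section 2.2 — the property underlying the polynomial expansions of the spin sums — the snippet below checks that conjugation by a random \(D^{(1/2,0)}(\Lambda)\) acts on the basis \(\sigma^{\mu}\) as a Lorentz matrix. It uses NumPy/SciPy and is an illustrative check only, not code from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices sigma^mu, the T matrices of the (1/2,0,1/2,0) case.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [np.eye(2, dtype=complex), X, Y, Z]
eta = np.diag([1.0, -1.0, -1.0, -1.0])                 # Minkowski metric

# A random element of SL(2,C) built from the (1/2,0) generators of eq. (20).
rng = np.random.default_rng(0)
theta, beta = rng.normal(size=3), rng.normal(size=3)    # rotation / boost parameters
J = [X / 2, Y / 2, Z / 2]
K = [-1j * X / 2, -1j * Y / 2, -1j * Z / 2]
D = expm(-1j * sum(theta[i] * J[i] + beta[i] * K[i] for i in range(3)))

# Expand D sigma^mu D^dagger in the sigma basis: M^mu = sum_nu L[mu, nu] sigma^nu.
L = np.array([[0.5 * np.trace(sigma[nu] @ D @ sigma[mu] @ D.conj().T).real
               for nu in range(4)] for mu in range(4)])

# The induced coefficient matrix preserves the Minkowski metric, i.e. it is Lorentz.
assert np.allclose(L @ eta @ L.T, eta, atol=1e-9)
```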
2310.02511
Prepare Ansatz for VQE with Diffusion Model
The Variational Quantum Eigensolver (VQE) is a quantum algorithm used to find the ground state energy of a given Hamiltonian. The key component of VQE is the ansatz, which is a trial wavefunction that the algorithm uses to approximate the ground state. Designing a good ansatz can significantly improve the performance of the VQE algorithm. Typical ansatz structures include the Unitary Coupled Cluster (UCC) ansatz and the Hardware-Efficient Ansatz (HEA). The primary distinction between these two structures lies in their dependence on the problem and hardware. The UCC ansatz is tailored to the target Hamiltonian, whereas the HEA is determined by the hardware topology. We believe that an intermediate approach could combine the benefits of the UCC ansatz while introducing additional parameters to increase its expressiveness and capability. In this paper, we propose utilizing a diffusion model to facilitate the generation of ansatz. We create a sequence of UCC ansatzes as training data and input this data into the diffusion model. The model then generates quantum circuits that have a similar structure to the input data. These quantum circuits are subsequently tested using a VQE task to evaluate their performance. This approach provides a systematic method for generating ansatzes that maintain a similar structure while incorporating additional parameters, enhancing their expressiveness and capability. We validate on small molecules that the diffusion model can help prepare ansatz circuits for VQE.
Yilin Shen
2023-10-04T01:12:35Z
http://arxiv.org/abs/2310.02511v1
# Prepare Ansatz for VQE with Diffusion Model ###### Abstract The Variational Quantum Eigensolver (VQE) is a quantum algorithm used to find the ground state energy of a given Hamiltonian. The key component of VQE is the ansatz, which is a trial wavefunction that the algorithm uses to approximate the ground state. Designing a good ansatz can significantly improve the performance of the VQE algorithm. Typical ansatz structures include the Unitary Coupled Cluster (UCC) ansatz and the Hardware-Efficient Ansatz (HEA). The primary distinction between these two structures lies in their dependence on the problem and hardware. The UCC ansatz is tailored to the target Hamiltonian, whereas the HEA is determined by the hardware topology. We believe that an intermediate approach could combine the benefits of the UCC ansatz while introducing additional parameters to increase its expressiveness and capability. In this paper, we propose utilizing a diffusion model to facilitate the generation of ansatz. We create a sequence of UCC ansatzes as training data and input this data into the diffusion model. The model then generates quantum circuits that have a similar structure to the input data. These quantum circuits are subsequently tested using a VQE task to evaluate their performance. This approach provides a systematic method for generating ansatzes that maintain a similar structure while incorporating additional parameters, enhancing their expressiveness and capability. We validate on small molecules that the diffusion model can help prepare ansatz circuits for VQE. Variational Quantum Circuit, Quantum Computing, Variational Quantum Eigensolver, Diffusion Model ## I Introduction Quantum computing is a rapidly developing field with the potentials to solve complex problems [2, 8, 4, 15, 22]. As one of the most promising quantum algorithms, the Variational Quantum Eigensolver (VQE) has proven its effectiveness in simulating molecular behavior. VQE efficiently calculates the ground state energies of molecular systems, which is essential for understanding their properties and interactions [21, 11, 16]. In the VQE algorithm, a parameterized quantum circuit (PQC) is employed to approximate the quantum states associated with a molecular system. During the training process, the parameters within the PQC are iteratively updated to minimize the expectation values, which correspond to the ground state energy of the molecule. This technique allows for accurate and efficient approximation of the molecular system's properties and behavior [6, 3, 5]. Consequently, designing an appropriate parameterized quantum circuit, also known as an ansatz, is crucial for improving the performance of the VQE algorithm. The Unitary Coupled Cluster (UCC) [13] and Hardware-Efficient (HEA) [10] ansatz are commonly used structures in quantum computing. UCC incorporates physical information of the molecule, while HEA considers the hardware topology. UCC requires a large number of layers and gates, making it inefficient, while HEA ignores the physical information and can lead to suboptimal results. The Hardware-Efficient Ansatz (HEA) employs single-qubit parameterized gates on all qubits and two-qubit parameterized gates on all possible connections, with most of the gates being parameterized. On the other hand, the Unitary Coupled Cluster (UCC) Ansatz uses trotterization to simulate the exponential of Hamiltonian on qubits to approximate the states of the molecular system. 
The UCC Ansatz involves a lower proportion of parameterized gates compared to the HEA Ansatz. Recently, researchers propose to adopt Neural Architecture Search (NAS) [19] to search for more efficient ansatz structures. The proposed NAS method starts from multi-layer HEA as super-circuits, it samples sub-circuits and uses evolutionary algorithm to search for the better ansatz structure. This paper introduces a machine learning-based approach to create ansatz structures for VQE algorithms. Incorporation of the physical information of the target molecule system brings the advantages of the "gold standard" ansatz UCC over HEA. At the same time, UCC has fewer parameterized gates than HEA. Therefore, we propose to boost the flexibility of the current UCC by inserting some parameterized gates. We propose to use a diffusion model to generate ansatz structures that have a similar structure to UCC. Initially, we generate a set of UCC ansatz and encode them into images, which are then fed into the diffusion model. Once we obtain the generated images from the diffusion model, we decode them into quantum circuits and evaluate their performance on VQE tasks. The key insight is to use diffusion model to generate images that preserve the structures of the original images. We validate that the generated ansatz works for small molecules including \(H_{2},\;LiH\;and\;H_{2}O\). On these VQE tasks, the generated ansatz demonstrates superior performance over randomly generated ansatz. ## II Background ### _VQE and ansatz circuit_ The variational quantum eigensolver (VQE) uses hybrid quantum-classical computation to calculate eigenvalues of Hamiltonians. VQE has demonstrated its efficiency in solving the electronic Schrodinger equation for various small molecules. However, the performance of VQE largely depends on the selection of the variational ansatz that is used to represent the trial wave function. Therefore, constructing an effective ansatz is an active field of research. Once the parameterized circuit or the ansatz is generated, the ansatz parameters are then iteratively updated in a variational approach until the expectation value of the electronic Hamiltonian is minimized. The Hamiltonian of a quantum system can be described as: \[H=\sum_{i}h_{i}P_{i} \tag{1}\] where \(H\) is the Hamiltonian, \(h_{i}\) are coefficients, and \(P_{i}\) are the Pauli operators. Then ansatz circuit is adopted to generate a prepared quantum state: \[|\psi(\vec{\theta})\rangle=U(\vec{\theta})|0\rangle \tag{2}\] where \(\vec{\theta}\) is a vector of parameters, \(U(\vec{\theta})\) is a parameterized unitary operation, and \(|0\rangle\) is the initial quantum state. We can obtain the expectation value of the Hamiltonian: \[\langle H(\vec{\theta})\rangle=\langle\psi(\vec{\theta})|H|\psi(\vec{\theta})\rangle \tag{3}\] The expectation value (ground state energy) is our objective function to minimize: \[E(\vec{\theta})=\min_{\vec{\theta}}\langle H(\vec{\theta})\rangle \tag{4}\] In each iteration, the parameters are updated according to the optimization algorithm: \[\vec{\theta}_{k+1}=\vec{\theta}_{k}-\eta\nabla E(\vec{\theta}_{k}) \tag{5}\] where \(\vec{\theta}_{k}\) is the parameter vector at iteration \(k\), \(\eta\) is the learning rate, and \(\nabla E(\vec{\theta}_{k})\) is the gradient of the objective function with respect to the parameters at iteration \(k\). 
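To make the iteration in Eqs. (1)-(5) concrete, the following is a minimal numerical sketch of a VQE loop on a toy single-qubit Hamiltonian written as a weighted sum of Pauli operators. The Hamiltonian coefficients, the single-parameter \(R_y\) ansatz, and the finite-difference gradient are illustrative assumptions, not the molecular systems or optimiser used later in the paper.

```
import numpy as np

# Toy Hamiltonian H = 0.5*Z + 0.3*X written as a sum of Pauli terms (Eq. 1).
# The coefficients are illustrative only.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * Z + 0.3 * X

def ansatz_state(theta):
    """|psi(theta)> = Ry(theta)|0>, a single-parameter trial state (Eq. 2)."""
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
    return ry @ np.array([1, 0], dtype=complex)

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)> (Eq. 3)."""
    psi = ansatz_state(theta)
    return np.real(np.vdot(psi, H @ psi))

# Gradient-descent update of Eq. (5) with a finite-difference gradient.
theta, eta, eps = 0.1, 0.4, 1e-4
for _ in range(200):
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= eta * grad

print("estimated ground energy:", round(energy(theta), 4))
print("exact ground energy:    ", round(np.linalg.eigvalsh(H)[0], 4))
```

In the paper, the corresponding loop is run with TorchQuantum on molecular Hamiltonians obtained from Qiskit rather than on this toy example.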
The molecular Hamiltonian in its electronic structure form: \[\hat{H}=\hat{H}_{1}+\hat{H}_{2} \tag{6}\] where \(\hat{H}_{1}\) represents the one-electron terms, and \(\hat{H}_{2}\) represents the two-electron terms. \[\hat{H}_{1} =\sum_{p,q}h_{pq}\hat{a}_{p}^{\dagger}\hat{a}_{q} \tag{7}\] \[\hat{H}_{2} =\frac{1}{2}\sum_{p,q,r,s}h_{pqrs}\hat{a}_{p}^{\dagger}\hat{a}_{ q}^{\dagger}\hat{a}_{r}\hat{a}_{s} \tag{8}\] where \(h_{pq}\) are the coefficients and \(\hat{a}_{p}^{\dagger}\) and \(\hat{a}_{q}\) are creation and annihilation operators for electron in molecular orbitals \(p\) and \(q\), respectively. where \(h_{pqrs}\) are the two-electron interaction coefficients, and \(\hat{a}_{p}^{\dagger},\hat{a}_{q}^{\dagger},\hat{a}_{r},\hat{a}_{s}\) are creation and annihilation operators for electrons in molecular orbitals \(p,q,r,s\), respectively. One typical ansatz is the hardware-efficient ansatz, the unitary matrix can be represented : \[U(\vec{\theta},\vec{\phi}) =\bigotimes_{i=1}^{n}R(\theta_{i},\phi_{i})\cdot\prod_{\langle i,j\rangle\in E}CZ_{i,j} \tag{9}\] \[\left|\psi(\vec{\theta},\vec{\phi})\right\rangle =U^{(L)}(\vec{\theta}^{(L)},\vec{\phi}^{(L)})\cdots U^{(1)}(\vec {\theta}^{(1)},\vec{\phi}^{(1)})\left|0\right\rangle^{\otimes n}\] (10) \[C(\vec{\theta},\vec{\phi}) =\langle\psi(\vec{\theta},\vec{\phi})|H|\psi(\vec{\theta},\vec{ \phi})\rangle \tag{11}\] Here, \(n\) represents the number of qubits, \(\vec{\theta}\) and \(\vec{\phi}\) are vectors of the parameters for the rotation gates, and \(E\) represents the set of edges for the entangling gates, \(H\) is the Hamiltonian of the system being studied and \(C(\vec{\theta},\vec{\phi})\) is now the objective function to minimize. An example of hardware-efficient ansatz is given in Figure 1, we can see that the single-qubit parameterized gates are inserted on all qubits and two-qubit parameterized gatesa are inserted on all available connections. Figure 1 demonstrates only one layer of hardware-efficient ansatz, while multiple layers are used in real VQE tasks. Another typical ansatz is the UCCSD ansatz, a chemistry-inspired ansatz. First, we need to define a single excitation operator (\(\hat{T}_{1}\)), and a double excitation operator (\(\hat{T}_{2}\)): \[\hat{T}_{1} =\sum_{i=1}^{Nocc}\sum_{a=1}^{N_{virt}}t_{i}^{a}\hat{a}_{a}^{ \dagger}\hat{a}_{i} \tag{12}\] \[\hat{T}_{2} =\sum_{i,j=1}^{Nocc}\sum_{a,b=1}^{N_{virt}}t_{ij}^{ab}\hat{a}_{a}^ {\dagger}\hat{a}_{b}^{\dagger}\hat{a}_{j}\hat{a}_{i} \tag{13}\] Here, \(N_{occ}\) and \(N_{virt}\) represent the number of occupied and virtual orbitals, respectively, and \(t_{ia}\) are the single excitation amplitudes and \(t_{ij}^{ab}\) are the double excitation amplitudes. \(\hat{a}^{\dagger}a\) and \(\hat{a}i\) are the creation and annihilation operators for the respective orbitals. Then we have the total excitation operator (\(\hat{T}\)) and the UCCSD unitary operator (\(\hat{U}\)): \[\hat{T} =\hat{T}_{1}+\hat{T}_{2} \tag{14}\] \[\hat{U} =e^{\hat{T}-\hat{T}^{\dagger}} \tag{15}\] After the UCCSD ansatz is applied to a reference state \(|\Phi_{0}\rangle\), we again have the objective function \(C(t_{ia},t_{ij}^{ab})\) to minimize. \[|\psi_{UCCSD}\rangle =\hat{U}\left|\Phi_{0}\right\rangle \tag{16}\] \[C(t_{i}^{a},t_{ij}^{ab}) =\langle\psi_{UCCSD}|H|\psi_{UCCSD}\rangle \tag{17}\] Figure 2 demonstrates a simple example of UCC ansatz, we can see that the circuits have multiple gates with only one parameterized \(R_{z}(\theta)\) gate. 
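To make the structure in Figure 2 concrete, the following Qiskit sketch builds the exponentiated-Pauli-string block underlying such a UCC ansatz (basis changes, a CNOT ladder, a single parameterized \(R_z\), and the mirrored un-computation). The example Pauli string, the \(R_x(\pm\pi/2)\) convention for the Y-basis change, and the use of Qiskit rather than the paper's own generation code are assumptions made for illustration.

```
from math import pi
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def pauli_string_block(pauli):
    """Circuit for exp(-i*theta/2 * P) given a Pauli string P, the building
    block of a UCC-style ansatz: basis changes (H for X, an Rx(pi/2)-style
    change for Y), a CNOT ladder, one parameterized Rz, then the mirror."""
    n = len(pauli)
    qc = QuantumCircuit(n)
    theta = Parameter("theta")
    active = [q for q, p in enumerate(pauli) if p != "I"]

    def basis_change(sign):
        for q, p in enumerate(pauli):
            if p == "X":
                qc.h(q)
            elif p == "Y":
                qc.rx(sign * pi / 2, q)

    basis_change(+1)
    for a, b in zip(active, active[1:]):
        qc.cx(a, b)
    qc.rz(theta, active[-1])
    for a, b in reversed(list(zip(active, active[1:]))):
        qc.cx(a, b)
    basis_change(-1)
    return qc

print(pauli_string_block("XXYZ").draw())
```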
The UCCSD ansatz has been considered the gold standard for the design of ansatz circuits. ### _Diffusion model_ The diffusion model is a machine learning model adapted from diffusion probabilistic models, first introduced by Jascha Sohl-Dickstein in 2015 [17]. It is a type of generative model designed to remove Gaussian noise added to a graph while maintaining the graph's structure. Fig. 1: One layer of hardware-efficient ansatz. The single-qubit parameterized gates are inserted on all qubits and two-qubit parameterized gates are inserted on all available connections. This model has demonstrated its ability to preserve the underlying organization of the graph [7, 9, 18]. In this paper, we show that this characteristic can also be adopted to produce high-performance ansatz. Based on the scheme developed by Jonathan Ho [7], the training process for the diffusion model comprises two stages. The initial stage involves the gradual addition of Gaussian noise to an image, referred to as the forward process. Subsequently, the second stage (backward process) trains the parameters, enabling the model to learn noise reversal. For evaluation, the model applies the parameters learned in the backward process to white noise, resulting in a new graph. ``` 1:repeat 2:\(x_{0}\sim q(x_{0})\) 3:\(t\sim\text{Uniform}(1,...,T)\) 4:\(\epsilon\sim N(0,I)\) 5: Take gradient descent step on \[\nabla_{\theta}\|\epsilon-\epsilon_{\theta}(\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}\epsilon,t)\|^{2}\] 6:until converged ``` **Algorithm 1** Jonathan Ho's Algorithm for training [7] ``` 1:\(x_{T}\sim N(0,I)\) 2:for\(t=T,...,1\)do 3:\(z\sim N(0,I)\) if \(t>1\), else \(z=0\) 4:\(x_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}(x_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t))+\sigma_{t}z\) 5:endfor 6:return\(x_{0}\) ``` **Algorithm 2** Jonathan Ho's Algorithm for sampling [7] As shown in Algorithm 1 and Algorithm 2, the diffusion model process can be divided into the training part and the testing part. In the training part, the forward process adds Gaussian noise to the graph using a Markov chain: \[q(x_{t}|x_{t-1})=N(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I)\] with \(x_{t}\) representing the image's state at time \(t\), and \(q(x_{t}|x_{t-1})\) representing the transition from \(x_{t-1}\) to \(x_{t}\). Here \(\beta_{t}\) represents the coefficient of the noise we are adding at time \(t\). Using the notation \(\alpha_{t}=1-\beta_{t}\) and \(\overline{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), the equation becomes: \[q(x_{t}|x_{0})=N(x_{t};\sqrt{\overline{\alpha}_{t}}x_{0},(1-\overline{\alpha}_{t})I)\] Thus the state of the graph at any time \(t\) can be expressed directly in terms of the original state of the graph. In this experiment specifically, we chose a linear schedule of \(\beta\) from \(10^{-4}\) to \(0.01\). For the backward process, we simply take a gradient descent step on \(\nabla_{\theta}\|\epsilon-\epsilon_{\theta}(\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}\epsilon,t)\|^{2}\), where \(x_{t}\) is reparameterized as \(x_{t}(x_{0},\epsilon)=\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}\epsilon\). We used a 2-D U-Net for our training process, which consists of two downsampling layers and two upsampling layers based on [14].
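A compact sketch of how one training step of Algorithm 1 might be implemented is given below (PyTorch). The `model(x_t, t)` interface of the 2-D U-Net noise predictor, the batch handling, and the number of timesteps \(T=1000\) are assumptions; the linear \(\beta\) schedule from \(10^{-4}\) to \(0.01\) follows the text.

```
import torch
import torch.nn.functional as F

# Linear beta schedule from 1e-4 to 0.01, as described above; T is assumed.
T = 1000
betas = torch.linspace(1e-4, 0.01, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def training_step(model, x0, optimizer):
    """One step of Algorithm 1: sample t and eps, form the noisy image
    x_t = sqrt(alpha_bar_t)*x0 + sqrt(1 - alpha_bar_t)*eps, and regress the
    model's noise prediction onto eps with an MSE loss."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps
    loss = F.mse_loss(model(x_t, t), eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```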
In the sampling process, the diffusion model takes random noise as the noisy image after timestep \(T\), and reverses the forward process using the trained parameters at each timestep, until the assumed "original image" is achieved. In our implementation, we take \(\sigma_{t}\) as a result of the parameterization \(\sigma_{t}^{2}=\beta_{t}\). Fig. 3: In our proposed technique’s workflow, we begin by generating ansatz using the Unitary Coupled Cluster (UCC) method, which serves as our dataset. Next, we convert the ansatz circuits into images suitable for image processing and normalization. Afterward, the dataset is input into the diffusion model, and we collect the samples. These samples correspond to ansatz circuits that exhibit similar structures to the input ansatz. Finally, we evaluate these ansatz using the Variational Quantum Eigensolver (VQE) tasks. Fig. 2: Example of a UCC ansatz; we can see from the figure that the proportion of parameterized gates is very limited. We may introduce more flexibility into the ansatz by adding more parameterized gates into it. ## III Methodology ### _Dataset Generation_ In the first step, we need to generate a group of ansatz that will later be transformed into images. We generate the UCC ansatz from a random Pauli string with the scheme developed in [20]. When we encounter a Pauli string like XXYZ, we add Hadamard gates onto the first two qubits and an \(H_{y}\) gate onto the next qubit. We do not need to add an extra gate for the remaining "Z". Then we connect the qubits with CNOT gates and insert an \(R_{z}(\theta)\) gate in the middle as the parameterized gate. The next step is to reverse the aforementioned Hadamard gates and CNOT gates to generate a symmetric form, as demonstrated in Figure 2. Then we need to transform the generated ansatz circuits into images that can be handled by the diffusion model. In this paper, we propose to use different pixel values to represent the different gates inside the ansatz circuits. In this way, we generate a group of images that corresponds to the group of UCC ansatz. An example is given in Figure 4; we can tell from the figures that the UCC ansatz usually preserves a "V" shape. For each figure, the background is set to zero, which is black in images. For different numbers of qubits, we first generate random Pauli strings and their associated ansatz circuits. Next, these circuits are converted into images and undergo a normalization process, in which the images are resized to 28x28 dimensions. For each qubit number, we produce 10,000 images. ### _Image decoder_ Upon acquiring the image dataset derived from the generated UCC ansatz, we feed it into the diffusion model. We then obtain samples from the diffusion model, with the sampled images maintaining the structure of the input dataset, as illustrated in Figure 5. The subsequent step involves decoding these images into quantum circuits, which can serve as ansatz in the VQE task. For the decoder, we first calculate the height and width required for the desired number of qubits, so that each gate can be represented by exactly one pixel. Given the ansatz generation scheme, the height of the graph should always be the number of qubits \(N\), and the width should always be twice the number of non-identity layers plus one. The fraction of identity layers in the generated image is approximated from the fraction of black pixels in each line. Then, we interpret the graph pixel by pixel until we have the entire ansatz.
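The circuit-to-image encoding can be sketched as follows. The specific pixel values chosen for each gate type are hypothetical stand-ins for the mapping of Table I, and the toy layer layout for the Pauli string XXYZ follows the construction described above.

```
import numpy as np

# Hypothetical pixel values per gate type (the paper's Table I mapping is not
# reproduced here); the background/identity value is 0 (black).
GATE_PIXEL = {"H": 0.25, "HY": 0.4, "CTRL": 0.6, "TARGET": 0.75, "RZ": 1.0}

def circuit_to_image(columns, n_qubits):
    """Encode a circuit as an image: one row per qubit, one column per layer,
    each gate drawn as a single pixel whose value identifies the gate type.
    `columns` is a list of {qubit_index: gate_name} dictionaries."""
    img = np.zeros((n_qubits, len(columns)), dtype=np.float32)
    for col, layer in enumerate(columns):
        for qubit, gate in layer.items():
            img[qubit, col] = GATE_PIXEL[gate]
    return img

# Toy encoding of the circuit for the Pauli string XXYZ: basis changes,
# a CNOT ladder, the central Rz(theta), then the mirrored un-computation.
half = [{0: "H", 1: "H", 2: "HY"},
        {0: "CTRL", 1: "TARGET"},
        {1: "CTRL", 2: "TARGET"},
        {2: "CTRL", 3: "TARGET"}]
layers = half + [{3: "RZ"}] + half[::-1]
print(circuit_to_image(layers, n_qubits=4).shape)  # (4, 9): N rows, 2*4+1 columns
```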
This allows us to accurately represent each gate in the ansatz and ensure that the decoder can properly interpret the image. The encoder and decoder are designed in a way that best distinguishes the different types of gates, as illustrated in Table I. Figure 6 illustrates a quantum circuit following the decoding step. However, this quantum circuit does not guarantee optimal performance in VQE tasks. Consequently, we generate a set of ansatz candidates and assess their performance on VQE tasks. ### _VQE evaluation_ After acquiring the images from the diffusion model and decoding them into quantum circuits, we evaluate the performance of the generated ansatz circuits using VQE tasks for the molecules \(H_{2},\;LiH\;and\;H_{2}O\). We adopt the framework TorchQuantum [19] to evaluate the performance of these ansatz circuits. The Hamiltonians of the molecules are obtained from Qiskit [1]. Fig. 4: Examples of the images that are generated from our UCC ansatz circuits. Fig. 5: Examples of the sampled images that are generated from the diffusion model. Fig. 6: Example of a quantum circuit from the decoder. Fig. 7: Example of a randomly generated circuit. We also generate random ansatz for reference. We evaluate the results by comparing them with randomly produced circuits having an equal number of qubits and twice the gate count. Figure 7 presents an example of a randomly generated ansatz. The findings reveal that the minimum energy obtained through random circuits significantly deviates from the one achieved by our devised ansatz. ## IV Results ### _Experimental setup_ Our ansatz is assessed within the TorchQuantum framework on a server equipped with dual Xeon E5-2630 v3 CPUs and 64 GB of RAM. The diffusion model's training and sampling are conducted on a server with an NVIDIA Tesla K40m GPU. For VQE task training, we employ the ADAM optimizer with a maximum of 100 iterations and a learning rate of 0.1. ### _Performance of generated ansatz_ As shown in Figure 8, for the VQE tasks of the molecules \(H_{2}\), \(LiH\), and \(H_{2}O\), the ansatz generated by the diffusion model yields expectation values of -1.873, -8.921, and -52.396, respectively. Meanwhile, the randomly generated ansatz provides expectation values of -1.464, -5.944, and -29.472, respectively. As a result, the diffusion model-generated ansatz returns superior results by preserving the UCC ansatz structures. Such accuracy is achieved without the need for the NAS process proposed in [19], which introduces significant computational overhead. ## V Future work Large language models, such as ChatGPT [12], have had a significant impact recently. There is potential for these advancements in artificial intelligence to further benefit quantum computing research, and it is expected that we will see an increase in the use of AI in quantum computing research. In this paper, we explore the use of AI models in the design of ansatz circuits for variational quantum algorithms. Our ansatz generator adopts a diffusion model and creates ansatz circuits that preserve certain structures of the UCC ansatz. The efficacy of the generated ansatz can be further tested on noisy simulators and NISQ devices. In our technique, the encoder and decoder can be optimized to ensure that the generated circuits possess desirable features. It is also possible to further optimize the generation of ansatz circuits by considering the underlying hardware topology of the quantum devices.
## VI Conclusion The aim of this paper is to introduce the use of a diffusion model in generating ansatz circuits for the variational quantum eigensolver. Our objective is to keep certain structures from the UCC ansatz, while simultaneously inserting additional parameterized gates into the ansatz circuits. To achieve this, we first prepare a substantial set of random UCC ansatz and convert them into images that can be processed by the diffusion model. The diffusion model is then trained and sampled. The sampled images are then decoded into quantum circuits, which serve as our ansatz candidates. We assess the performance of these ansatz circuits on VQE tasks, and demonstrate that they exhibit superior accuracy when compared to random ansatz circuits of larger size. Fig. 8: The expectation values obtained from different ansatz types indicate that those from the diffusion model are more widely spread and, most importantly, can reach the optimal ground state energy of the molecules. In contrast, the randomly generated ansatz is unable to determine the accurate energy of the molecules. The diffusion model-generated ansatz eliminates the need for a multi-layer UCCSD ansatz, as it already includes sufficient parameterized gates to guarantee the expressibility of the ansatz circuits.
2305.18927
Evaluating the feasibility of using Generative Models to generate Chest X-Ray Data
In this paper, we explore the feasibility of using generative models, specifically Progressive Growing GANs (PG-GANs) and Stable Diffusion fine-tuning, to generate synthetic chest X-ray images for medical diagnosis purposes. Due to ethical concerns, obtaining sufficient medical data for machine learning is a challenge, which our approach aims to address by synthesising more data. We utilised the Chest X-ray 14 dataset for our experiments and evaluated the performance of our models through qualitative and quantitative analysis. Our results show that the generated images are visually convincing and can be used to improve the accuracy of classification models. However, further work is needed to address issues such as overfitting and the limited availability of real data for training and testing. The potential of our approach to contribute to more effective medical diagnosis through deep learning is promising, and we believe that continued advancements in image generation technology will lead to even more promising results in the future.
Muhammad Danyal Malik, Danish Humair
2023-05-30T10:36:30Z
http://arxiv.org/abs/2305.18927v1
# Evaluating the feasibility of using Generative Models to generate Chest X-Ray Data ###### Abstract In this paper, we explore the feasibility of using generative models, specifically Progressive Growing GANs (PG-GANs) and Stable Diffusion fine-tuning, to generate synthetic chest X-ray images for medical diagnosis purposes. Due to ethical concerns, obtaining sufficient medical data for machine learning is a challenge, which our approach aims to address by synthesising more data. We utilised the Chest X-ray 14 dataset for our experiments and evaluated the performance of our models through qualitative and quantitative analysis. Our results show that the generated images are visually convincing and can be used to improve the accuracy of classification models. However, further work is needed to address issues such as overfitting and the limited availability of real data for training and testing. The potential of our approach to contribute to more effective medical diagnosis through deep learning is promising, and we believe that continued advancements in image generation technology will lead to even more promising results in the future. _Index Terms_ - Data Synthesis, Disease Detection, Machine Learning, Stable Diffusion ## Introduction Chest X-Rays are by far the most common radiographic procedures used for medical diagnosis of lung disorders. Recently, advances have been made at a rapid pace in the field of Computer Aided Diagnosis (CAD), the automated detection of these diseases [1]. However, a notable issue with this approach is the lack of availability of medical data for these purposes. This is largely due to ethical concerns such as the privacy of the patients [2]. This is the problem we aim to solve with our approach. By using image generation models such as GANs and Stable Diffusion, we hope to find a way to utilise existing datasets to produce more data. This can help large classification models such as convolutional neural networks or vision transformers achieve higher accuracy, especially when there is a lack of data. The source code is available on our official GitHub repository (link provided at the end of the paper). ## Background Generative Adversarial Networks (GANs) consist of two neural networks, a generator G and a discriminator D, that are trained simultaneously. The generator takes random noise z as input and generates an image x, while the discriminator evaluates whether x is real or fake. The training objective is to minimise the following loss function: \[E_{x}[log(D(x))]\ +\ E_{z}[log(1\ -\ D(G(z)))]\] Progressive Growing GANs (PGGANs) gradually increase the resolution of the generated images during training. This is achieved by adding new layers to both the generator and discriminator as the training progresses. PGGANs improve training stability and scalability and have shown impressive results in generating high-quality images [3]. Stable Diffusion, on the other hand, is a generative model that learns to generate high-quality images by iteratively refining a noise vector through a series of diffusion steps. Unlike GANs, it does not require adversarial training, making it more stable and less prone to mode collapse [4]. Stable Diffusion consists of a diffusion process, which is a sequence of T steps; each step applies a diffusive process that smooths out the noise vector. The smoothing is done using a diffusion process that adds Gaussian noise to the input image. The formula below can be used to obtain the noisy image at a specific time step, \(t\). 
\[x_{t}=\sqrt{\bar{\alpha}_{t}}\,x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\varepsilon\] The model then generates an image by passing the final smoothed noise vector through a decoder network. The decoder network maps the smoothed noise vector to an image in pixel space. The training objective of Stable Diffusion is to maximise the log-likelihood of the training data, i.e., to minimise the sum of the negative log-likelihoods of the model's predictions over the training data. The model is trained using a maximum likelihood estimation method, which involves minimising the negative log-likelihood of the training data. The training objective for the reverse diffusion process can be approximated as expressed below. \[L_{simple}=\mathbb{E}_{t,x_{0},\epsilon}\left[\left\|\epsilon-\epsilon_{\theta}(x_{t},t)\right\|^{2}\right]\] Diffusion has shown impressive results in generating high-quality images and has been applied to various image synthesis tasks, including image inpainting and super-resolution. It is a promising alternative to GANs and has the potential to generate more stable and diverse images [6]. ## Methodology ### _Dataset_ The dataset we opted to use for these experiments is the Chest X-ray 14 dataset by the National Institutes of Health [7], the primary research facility for conducting medical research in the United States of America. This is a large chest X-ray dataset available to the public. It comprises the following classes of diseases: No Finding, Atelectasis, Cardiomegaly, Consolidation, Edema, Effusion, Emphysema, Fibrosis, Hernia, Infiltration, Mass, Nodule, Pleural Thickening, Pneumonia and Pneumothorax. ### _Progressive Growing GAN_ PG-GANs (Progressive Growing GANs) were used by Segal et al. [8] to synthesise chest X-rays using the Chest X-ray 14 dataset. Using their implementation and pre-trained model weights, we generated several chest X-rays as a baseline for comparison. These included No Finding (no disease) images along with Pneumonia images. To overcome the issue of not having labels in PG-GANs, the authors used a separate feature-extractor model to generate latent vectors for separate diseases (post-training). Using this approach, they could let the GAN train first and then simply use existing images to generate latent vectors for different diseases. ### _Stable Diffusion Fine-Tuning_ Fine-tuning Stable Diffusion has been made more accessible than ever since the release of the DreamBooth Notebook [9]. We opted to use the code from this notebook to help us fine-tune Stable Diffusion v2.1-512px on the train set of the Chest X-ray 14 dataset. We intentionally avoided the test set so as not to expose our model to it, as it would later be used for evaluation. To prepare our data, we needed to add prompts to our images, as Stable Diffusion consists of both a text encoder and a U-Net. To do this, we needed each image file from the dataset to have a prompt associated with it. In DreamBooth, this can be done by either renaming the files or using text files. In our provided implementation, we have written code to extract the images and each of their labels, rename them based on the diseases found in the image and save them as a new dataset. The implementation and the processed dataset are available on our GitHub Repository. This data was then used to fine-tune the Stable Diffusion models. The different checkpoints were saved to assess possible overfitting and evaluate the model's performance at each checkpoint during the training process.
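As an illustration of the prompt-by-filename preparation described above, the following sketch copies each image into a DreamBooth training folder with the finding labels embedded in the filename. The metadata file name, the column names, and the prompt template are assumptions based on the public Chest X-ray 14 release rather than the exact code in the repository.

```
import csv
import shutil
from pathlib import Path

def prepare_dreambooth_folder(image_dir, label_csv, out_dir):
    """Copy each X-ray into a folder whose filenames encode the prompt,
    e.g. 'chest x-ray with Edema (12).png', so DreamBooth can pair every
    image with a text prompt."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(label_csv, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            # Multi-label findings are assumed to be separated by '|'.
            diseases = row["Finding Labels"].replace("|", " and ")
            prompt = f"chest x-ray with {diseases}"
            src = Path(image_dir) / row["Image Index"]
            if src.exists():
                shutil.copy(src, out / f"{prompt} ({i}){src.suffix}")

# Hypothetical paths, for illustration only.
# prepare_dreambooth_folder("images/", "Data_Entry_2017.csv", "dreambooth_data/")
```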
### _Adding Prompts for Bounding Boxes (Stable Diffusion)_ After fine-tuning the model, we had control over what type of disease would be present in the generated image but not where in the image it would be. To add this extra level of control, we utilised the bounding boxes provided in the dataset. These bounding boxes were only present in the test set, so we were not able to utilise as many images. To achieve this, we used a similar method as in sub-section III. Utilising the DreamBooth framework [9] once again, we prepared the data by adding code to extract not only the disease but also a custom label for the position of the finding based on the x and y coordinates of the bounding boxes. These images were once again saved in a folder to use for training, using the same process as highlighted above. This would allow us to generate images where we can explicitly specify the position of the finding, for example, 'top left'. ## Results ### _I. Real Images from Dataset_ ## Evaluation ### _II. Qualitative Analysis_ From a visual standpoint, all of the images generated look extremely convincing, except for perhaps the position-specific images. This is likely due to the small number of samples available with bounding boxes (\(<\)1000) as compared to the data available without bounding boxes (\(>\)80,000 in the train set). As for the regular stable diffusion model, the results produced have quite a high resolution as compared to the PG-GAN results. However, they do look less convincing when viewed side-by-side. Overall, both the PG-GAN and the Stable Diffusion results show impressive detail and variation in the samples produced. ### _II. Quantitative Analysis_ As an objective analysis of the realism of the generated images, we trained a classification model on a small subset of both real images and real \(+\) synthesised images. We kept the overall size of the subset the same for both tests. The model was evaluated on a subset of the test set with a size of 300. This was done for one particular disease, Edema. Hence, the task was to classify the image as either No Disease or Edema. Please note that this test was only performed for our original fine-tuned stable diffusion model, i.e. Methodology section, sub-section III. The implementation is provided on the GitHub Repository. A similar evaluation was done by Segal et al. in their paper using their PG-GAN results, so we opted not to redo the test for that model [8]. As the results show, the accuracy did fall when using synthesised images as part of the dataset, so the images generated are not quite on par with real images yet. However, it is encouraging to see that the model performed reasonably well despite half the data being synthesised. There could be several reasons for the drop in accuracy, the most obvious one being that the images are simply not accurate enough to the real data. Another potential problem is overfitting. However, we have found little evidence of this happening, as the models were evaluated at several checkpoints and produced widely varying images. Another possible reason for the observed discrepancy in the performance of the classification model on real versus synthesised images could be the limited number of images with diseases in the original dataset. Our generative model had many more No Finding images to train on than those with diseases, so perhaps it was not able to learn the features associated with each disease as well as it could have. 
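A possible way to turn the bounding boxes into the position phrases mentioned above is sketched below; the 3x3 grid, the 1024-pixel image size, and the exact wording of the labels are illustrative assumptions.

```
def position_label(x, y, w, h, img_size=1024):
    """Map a bounding box (x, y, width, height) to a coarse position phrase
    such as 'top left', which is appended to the image's training prompt."""
    cx = (x + w / 2) / img_size   # normalised centre of the finding
    cy = (y + h / 2) / img_size
    horiz = "left" if cx < 1 / 3 else ("right" if cx > 2 / 3 else "center")
    vert = "top" if cy < 1 / 3 else ("bottom" if cy > 2 / 3 else "middle")
    return "center" if (horiz, vert) == ("center", "middle") else f"{vert} {horiz}"

print(position_label(100, 80, 200, 180))   # -> 'top left'
print(position_label(600, 700, 250, 200))  # -> 'bottom right'
```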
The dataset has roughly 75% of its images labelled No Finding and only 25% for all the other classes combined. ## Conclusion This experiment showed that although using Stable Diffusion for the purpose of generating synthetic medical data seems promising, there is still some work to be done. Perhaps the problems faced can be tackled in future work. The results were still promising, however, showing that even with synthetic data, classification models can perform reasonably well at CAD tasks. With the rapid improvement in image generation technology, there will no doubt soon come a time when these methods will be able to greatly increase the effectiveness of medical diagnosis through deep learning. (All the sample outputs, models, etc., can be found on the official GitHub repository.) ## Acknowledgment This paper is written as part of the final project of CS437 (Deep Learning) at the Lahore University of Management Sciences. It is an independent study with no sponsors.
2302.14162
Distributed Fixed-Time Consensus Control for Multiple AUV Systems with Input Saturations
This study proposes a new distributed control method based on an adaptive fuzzy control for multiple collaborative autonomous underwater vehicles (AUVs) to track a desired formation shape within a fixed time. First, a formation control protocol based on a fixed-time backstepping sliding mode control is designed, in which the consensus cooperative tracking errors for each AUV will be formulated. Then, to compensate for the saturated control torques, an adaptive auxiliary variable is introduced. Finally, a fixed-time adaptive fuzzy logic control (FLC) is derived to approximate the unknown dynamics, in which the adaptive laws of the FLC is derived such that the adaptive signals and errors can be convergent within a fixed time. The fixed time convergence is desired in practice because it provides an exciting property that the global convergence of the whole system is independent with the initial states of the AUVs. The computer simulation results for a consensus formation control of four AUVs show that the proposed formation control can provide high tracking performance with lower and smoother control efforts
Mien Van, Yuzhu Sun, Stephen Mcllvanna, Minh-Nhat Nguyen, Federico Zocco, Zhijie Liu, Hsueh-Cheng Wang
2023-02-27T21:49:06Z
http://arxiv.org/abs/2302.14162v1
# Distributed Fixed-Time Consensus Control for Multiple AUV Systems with Input Saturations ###### Abstract This study proposes a new distributed control method based on an adaptive fuzzy control for multiple collaborative autonomous underwater vehicles (AUVs) to track a desired formation shape within a fixed time. First, a formation control protocol based on a fixed-time backstepping sliding mode control is designed, in which the consensus cooperative tracking errors for each AUV will be formulated. Then, to compensate for the saturated control torques, an adaptive auxiliary variable is introduced. Finally, a fixed-time adaptive fuzzy logic control (FLC) is derived to approximate the unknown dynamics, in which the adaptive laws of the FLC is derived such that the adaptive signals and errors can be convergent within a fixed time. The fixed time convergence is desired in practice because it provides an exciting property that the global convergence of the whole system is independent with the initial states of the AUVs. The computer simulation results for a consensus formation control of four AUVs show that the proposed formation control can provide high tracking performance with lower and smoother control efforts. Multiple collaborative AUVs, Control of AUVs, Fixed-time convergence, Fuzzy logic system. ## I Introduction Robotics have been extensively applied for many challenging applications, ranging from manufacturing, agriculture, space and ocean applications [1, 2]. In some applications, the use of single robots or autonomous vehicles has limited efficiency, due to the limitation of sensing, endurance and payload carrying. To increase the efficiency of robots and autonomous vehicles for these applications, a concept of multiple collaborative robotics or swam robotics have been introduced [3]. For underwater environment, multiple collaborative autonomous underwater vehicles (AUVs) have shown their great efficiency for many challenging applications like seabed monitoring, wind turbine inspection, marine debris monitoring and cleaning, etc [4]. However, controlling multiple AUVs working collaboratively is not a trivial task because the effects of nonlinear dynamics, communication delay between AUVs, and the effects of underwater environmental disturbances, i.e., waves, currents, etc., become more severe in underwater environment [5]. Many elegant control methods have been investigated for increasing the tracking accuracy and robustness of multi-agent systems. Optimal controllers using distributed optimization have been developed [6]. A safe optimal controller based on control barrier function (CBF) has been proposed in [7]. Another approach based on reinforcement learning has been developed for the collaborative control of multi-agent systems [8]. Model predictive control (MPC) has been explored for multi-agent systems generally [9, 10], and for multiple AUV systems specifically [11]. Although optimal controllers and MPCs provide a good tracking performance when the full knowledge of the system can be known in advance, it is difficult to handle the disturbances and/or uncertainty components. In order to handle disturbance/uncertainty components, robust controllers have been extensively developed [12]. Robust controllers are particularly efficient for the control system of AUVs due to their robustness against the nonlinear effects of underwater working conditions. 
Due to its strong robustness against disturbances (i.e., matched disturbances), sliding mode control (SMC) techniques have been developed [14, 15]. Despite the advantages of high robustness, SMC generates high chattering, which can cause significant oscillations within the operation of mechanical systems. To reduce the chattering for SMC, a distributed bio-inspired SMC has been proposed for multiple AUVs [16]. However, the conventional SMCs do not provide finite time/fixed time convergence for the systems. To provide a finite time convergence, finite time consensus control methods have been developed for multi-agent systems [17, 18]. To obtain both finite time convergence and higher robustness, finite time sliding mode controllers have also been introduced [19, 20]. Finite time controllers have also been employed for single AUV system [21] and multiple AUV systems [22, 23]. In [24], a terminal SMC has been developed for the formation tracking control of multiple AUVs. The main drawback of the finite time controllers is that the convergence time of the system is dependent on the initial states of the systems. This issue, unfortunately, prevents the applicability of finite time controllers for many practical applications because, in practice, some initial states of some agents are unavailable or unknown. To overcome this drawback, fixed-time controllers have been studied recently [25, 26]. The use of fixed-time controllers can provide a fixed-time convergence, which is independent with the initial states, for the multi-agent systems [27, 28, 29]. One of the issues that reduces the tracking performance of robotic systems and multi-agent systems is the effects of unknown components such as unknown system parameters, friction terms, and faults, etc [30]. This becomes even more severe for AUVs due to the severe effects of external environmental disturbances, especially for multiple collaborative AUV systems [31]. To approximate the unknown components, many learning techniques have been extensively developed. An iterative learning method has been employed for multi-agent systems [32]. An adaptive NN has been developed for multi-agent systems [33, 34], and for multiple AUVs [35]. Adaptive fuzzy logic controllers have also been developed to take the knowledge of human about the dynamic system into the design to increase the approximation performance of the FLC [36, 37]. Adaptive fixed-time FLCs have been developed to preserve the advantages of both fixed-time convergence property and the approximation capacity of FLC in [38, 39]. However, the adaptive laws of the existing fixed-time FLCs do not provide fixed-time convergence for the system. In practice, it is desired that all the adaptive laws of the system can be convergent within a fixed-time to guarantee the global convergence of the system within a fixed-time. This is the main motivation of this paper. Input saturation is another important consideration in the design of practical controllers for single agent and multi-agent systems since, in practice, the control efforts of actuators (i.e., motors) are limited [40, 41]. Many efforts have been spent to find an effective mechanism to mitigate the effects of input saturation. In general, to reduce the effects of saturated control torques, an auxiliary design system can be employed [42, 43]. 
In summary, there are existing research gaps for formation tracking control for multiple AUVs, which will be addressed in this study: (i) the fixed-time convergence of the design of controllers for multiple AUV systems, (ii) the fixed-time convergence of the adaptive laws of the adaptive FLCs, (iii) the input saturation problem needs to be addressed within the design of distributed formation control of multiple AUV systems. To address the research gaps, a new fixed-time distributed formation tracking control for multiple AUV systems is proposed. The distributed fixed-time consensus formation will be derived based on a backstepping SMC method. To approximate the unknown components, an adaptive fixed-time FLC will be developed, in which the adaptive laws of FLC will be derived such that it can be convergent within a fixed-time to guarantee a global fixed-time convergence for the system. Furthermore, an auxiliary adaptive function will be introduced into the fixed-time controller to compensate for the effects of the overhead control efforts. The effectiveness of the new control algorithm will be tested on a consensus formation of four AUVs and compared with the counterpart distributed SMC based on a computer simulation. To highlight the novelties of this paper, we compare the proposed method with the existing approaches as follows: * Unlike the existing distributed consensus formation controllers for multiple AUV systems [15], this paper develops a fixed-time distributed formation algorithm for AUVs using a backstepping SMC method to preserve the merits of Lyapunov stability of the backstepping control, high robustness of SMC and bounded convergence time of the fixed-time control theory. * Unlike the existing adaptive fixed-time fuzzy controllers [38, 39], which do not guarantee a fixed time convergence for the adaptive laws of FLC, this paper develops a new adaptive fixed time fuzzy law to guarantee the fixed time convergence of the adaptive weights of FLC. This ensures a global fixed time convergence of the whole collaborative multiple AUVs system. * Unlike the existing consensus formation controllers for multiple AUVs [4, 22, 15], which do no consider the input saturation issues, this paper incorporates an adaptive auxiliary function into the fixed-time distributed consensus controller to handle the problem of saturated control efforts. ## II Fixed-time stability and convergence, Fuzzy logic, graph theory and problem formulation ### _Fixed-time stability_ A typical nonlinear system can be represented as follows [44]: \[\dot{\xi}(t)=f(\xi(t)),\quad\xi(t_{0})=\xi_{0},\quad\xi\in\Re^{n} \tag{1}\] where \(f(\cdot):\Re^{n}\rightarrow\Re^{n}\) is a possibly discontinuous vector field. The fixed time convergence is determined for system (1) when it is globally finite-time stable and its convergent time is bounded regardless the initial states of the system, i.e., \(\forall\xi_{0}\in\Re^{n}\), \(T(\xi_{0})\leq T_{\text{max}}\) is satisfied, where \(T_{\text{max}}\) is a positive constant. **Lemma 1** ([44]): _If a positive definite continuous function \(V(\xi):\Re^{n}\rightarrow\Re\) for system (1) satisfies \(\dot{V}(\xi)\leq-\chi_{1}V^{\varrho}(\xi)-\chi_{2}V^{\varsigma}(\xi)\) for some \(\chi_{1}>0\), \(\chi_{2}>0\), \(\varrho>1\), and \(0<\varsigma<1\), then system (1) is determined as a globally fixed-time stable system. 
The convergence time can be calculated independently of the initial states of system (1) as follows:_ \[T(\xi_{0})\leq\frac{1}{\chi_{1}(\varrho-1)}+\frac{1}{\chi_{2}(1-\varsigma)}. \tag{2}\] **Lemma 2** ([44]): _If a positive definite continuous function \(V(\xi):\Re^{n}\rightarrow\Re\) for system (1) satisfies \(\dot{V}(\xi)\leq-\chi_{1}V^{p}(\xi)-\chi_{2}V^{q}(\xi)+\varphi\) for some \(\chi_{1}>0\), \(\chi_{2}>0\), \(p>1\), \(0<q<1\), and \(0<\varphi<\infty\), then system (1) is called a practically fixed-time stable system. Furthermore, the solution of system (1) has a residual set:_ \[\lim_{t\to T}\xi\in\left\{\xi:\|\xi\|\leq\min\left\{\chi_{1}^{-\frac{1}{p}}\left(\frac{\varphi}{1-\kappa}\right)^{\frac{1}{p}},\;\chi_{2}^{-\frac{1}{q}}\left(\frac{\varphi}{1-\kappa}\right)^{\frac{1}{q}}\right\}\right\} \tag{3}\] _where \(\kappa\) satisfies \(0<\kappa<1\). The settling time can be calculated independently of the initial states of the system as follows:_ \[T(\xi_{0})\leq\frac{1}{\chi_{1}\kappa(p-1)}+\frac{1}{\chi_{2}\kappa(1-q)}. \tag{4}\] ### _Fuzzy Logic System_ Given a vector of inputs, i.e., \(Z=(z_{1},z_{2},...,z_{n})^{T}\in\Re^{n}\), and an output variable, i.e., \(y=f(Z)\in\Re\), a fuzzy logic system can be used to map from the input to the output. The fuzzy rules of the fuzzy logic system can be described as: \[\text{Rule }j:\text{ If }z_{1}\text{ is }A_{1}^{j}\text{ and }...\text{ and }z_{n}\text{ is }A_{n}^{j}\text{, then }y\text{ is }B^{j} \tag{5}\] where \(A_{1}^{j}\), \(A_{2}^{j}\),..., \(A_{n}^{j}\) and \(B^{j}\) represent fuzzy sets. The fuzzy output can be obtained as: \[y=\frac{\sum\limits_{j=1}^{h}w_{j}\prod\limits_{i=1}^{n}\mu_{A_{i}^{j}}(z_{i})}{\sum\limits_{j=1}^{h}\prod\limits_{i=1}^{n}\mu_{A_{i}^{j}}(z_{i})}=\text{w}^{T}\Psi(Z) \tag{6}\] where \(h\) specifies the number of fuzzy rules used, and \(\mu_{A_{i}^{j}}(z_{i})\) represents the membership function of \(z_{i}\). \(\text{w}=\left[w_{1},w_{2},...,w_{h}\right]^{T}\) represents the fuzzy weights, and \(\Psi(Z)=\left[\Psi_{1}(Z),\Psi_{2}(Z),...,\Psi_{h}(Z)\right]^{T}\) is a fuzzy basis vector, where its elements \(\Psi_{j}(Z)\) can be described as \[\Psi_{j}(Z)=\frac{\prod\limits_{i=1}^{n}\mu_{A_{i}^{j}}(z_{i})}{\sum\limits_{j=1}^{h}\prod\limits_{i=1}^{n}\mu_{A_{i}^{j}}(z_{i})}. \tag{7}\] **Lemma 3** ([38, 39]): _Let \(f(Z)\) be a continuous function on a compact set \(\Omega\subset\Re^{n}\); then there exists a fuzzy logic system, i.e., \(\text{w}^{T}\Psi(Z)\), such that_ \[\sup\limits_{Z\in\Omega}\left|f(Z)-\text{w}^{T}\Psi(Z)\right|\leq\bar{\varrho} \tag{8}\] _where \(\bar{\varrho}\) is the fuzzy minimum approximation error and \(\Psi(Z)=\left[\Psi_{1}(Z),\Psi_{2}(Z),...,\Psi_{h}(Z)\right]^{T}\) is the fuzzy basis function vector._ ### _Graph theory_ A directed graph \(G=\{\Lambda,\Xi\}\) is used to describe the formation shape among a group of AUVs, where \(\Lambda=\{\nu_{1},\nu_{2},...,\nu_{N}\}\) denotes the \(N\) AUV followers and \(\Xi\subseteq\Lambda\times\Lambda\) is the set of edges. \(A=\left[\alpha_{ij}\right]\in\Re^{N\times N}\) denotes the weights of the edges, where \(\alpha_{ij}=\alpha_{ji}>0\) if there is an edge between AUVs \(i\) and \(j\), i.e., \((\nu_{j},\nu_{i})\in\Xi\), and \(\alpha_{ij}=\alpha_{ji}=0\) otherwise.
Let \(B=\text{diag}\{b_{1},...,b_{N}\}\), where \(b_{i}>0\) indicates that the follower AUV \(i\) can receive the direct command signals from the AUV leader; under other conditions \(b_{i}=0\). The Laplacian matrix \(L=[l_{i,j}]\in\Re^{N\times N}\) with \(l_{i,j}=-\alpha_{i,j}\) for \(i\neq j\), and \(l_{i,i}=\sum_{j=1}^{N}\alpha_{i,j}\). It is assumed that the graph \(G\) is undirected and connected, and the desired trajectory information from the virtual leader will be transfered to at least one AUV, and thus not all the elements of B indentify to zero. Therefore, \(L+B>0\). ### _Dynamics of AUVs and Control Objective_ In this paper, a control method that can form the operations of \(N\) AUVs with the dynamics described in (9) in a consensus manner will be derived. \[\dot{\eta}_{i}=J_{i}(\eta_{2,i})v_{i}, \tag{9}\] \[M_{i}\dot{v}_{i}+C_{i}(v_{i})v_{i}+D_{i}(v_{i})v_{i}=u_{i}\left( \tau_{i}(t)\right)+d_{i}(t,\eta_{i},v_{i})\] where \(\eta_{i}=[\eta_{1,i},\eta_{2,i}]^{T}\in\Re^{6\times 1}\), \(\eta_{1,i}=[x_{i},y_{i},z_{i}]^{T}\in\Re^{3\times 1}\), \(\eta_{2,i}=[\phi_{i},\theta_{i},\psi_{i}]^{T}\in\Re^{3\times 1}\) denote the position and orientation of \(i\)-th AUV, respectively. \(v_{i}=[v_{1,i},v_{2,i}]^{T}\in\Re^{6\times 1}\), \(v_{1,i}=[v_{x,i},v_{y,i},v_{z,i}]^{T}\in\Re^{3\times 1}\), \(\upsilon_{2,i}=[\omega_{x,i},\omega_{y,i},\omega_{z,i}]^{T}\in\Re^{3\times 1}\) represents the translational and rotational velocities of \(i\)-th AUV, respectively. \(u_{i}\left(\tau_{i}(t)\right)\in\Re^{6\times 1}\), which will be described in (33), represents the control effort subject to saturation nonlinearity for the \(i\)-th AUV. The description of the inertia matrix \(M_{i}\in\Re^{6\times 6}\), the Coriolis and centripetal matrix \(C_{i}(v_{i})\in\Re^{6\times 6}\), the hydrodynamic matrix \(D_{i}(v_{i})\in\Re^{6\times 6}\) and the Jacobian matrix \(J_{i}(\eta_{2,i})\) can be found in [16]. \(d_{i}(t,\eta_{i},v_{i})\in\Re^{6\times 1}\) denotes the lumped model uncertainty and disturbance component in the system. _Control Objective:_ The objective of a distributed consensus controller is to design an appropriate controller for each AUV with the dynamics (9) so that the group of AUVs can: (i) form a desired formation shape, and (ii) follow a predefined trajectory, which is known as a virtual leader, within a fixed time. The desired formation shape of a group of AUVs can be determined by a specific relative postures, i.e., position and orientation, between AUVs. ## III Fixed-time Backstepping Sliding Mode Control Design for Consensus Formation Tracking Control Let \(\eta_{i}^{d}\), \(\dot{\eta}_{i}^{d}\) and \(\ddot{\eta}_{i}^{d}\) be the desired position, velocity and acceleration of the virtual leader. Define the position and orientation tracking errors between the objective trajectories and the reference trajectory for \(i\)-th AUV \((i\in\Gamma,\Gamma=\{1,...,N\})\) as follows: \[\varepsilon_{1,i} =\sum\limits_{j\in\Gamma}\alpha_{ij}(\eta_{i}-\eta_{j}-\delta_{ij}) +b_{i}(\eta_{i}-\eta_{i}^{d}-\delta_{id}) \tag{10}\] \[\dot{\varepsilon}_{1,i} =\sum\limits_{j\in\Gamma}\alpha_{ij}(\dot{\eta}_{i}-\dot{\eta}_{j}) +b_{i}(\dot{\eta}_{i}-\dot{\eta}_{i}^{d}).\] Here, \(\alpha_{ij}\geq 0\) and \(b_{i}\geq 0\) are defined as in section II.C. \(\delta_{ij}\) indicates the relative position and orientation between \(i\)-th AUV and \(j\)-th AUV \((j\in\Gamma)\). \(\delta_{id}\) denotes the relative posture between the \(i\)-th AUV and the reference trajectory (i.e., the virtual leader). 
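To make the communication-graph quantities of Section II-C and the cooperative error of Eq. (10) concrete, the sketch below builds \(L\) and \(B\) for a small illustrative topology (a ring of four followers with only the first follower receiving the leader's signal, which is an assumption rather than the topology used in the simulations) and evaluates the consensus tracking error for one AUV.

```
import numpy as np

# Adjacency matrix of four followers on a ring; only AUV 0 sees the leader.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
B = np.diag([1., 0., 0., 0.])
L = np.diag(A.sum(axis=1)) - A   # Laplacian: l_ii = sum_j a_ij, l_ij = -a_ij

# With the connectivity assumption above, L + B is positive definite.
print(np.all(np.linalg.eigvalsh(L + B) > 0))

def consensus_error(i, eta, eta_d, delta, delta_d):
    """Cooperative tracking error of Eq. (10) for AUV i. eta is an (N, 6)
    array of follower poses, eta_d the leader pose (6,), delta an (N, N, 6)
    array of desired inter-vehicle offsets and delta_d an (N, 6) array of
    desired offsets to the leader."""
    err = B[i, i] * (eta[i] - eta_d - delta_d[i])
    for j in range(A.shape[0]):
        err += A[i, j] * (eta[i] - eta[j] - delta[i, j])
    return err
```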
All the AUVs are expected to have the same velocity and acceleration as the desired reference trajectory. Differentiating the velocity of tracking error \(\dot{\varepsilon}_{1,i}\) with respect to time, we have: \[\ddot{\varepsilon}_{1,i}=\sum\limits_{j\in\Gamma}\alpha_{ij}(\ddot{\eta}_{i}- \ddot{\eta}_{j})+b_{i}(\ddot{\eta}_{i}-\ddot{\eta}_{i}^{d}) \tag{11}\] where \(\ddot{\eta}_{i}\) and \(\ddot{\eta}_{j}\) represent the acceleration of \(i\)-th AUV and its neighbors \(j\in\Gamma\), respectively. Based on (9), the dynamic model of the \(i\)-th AUV can be expressed as: \[\ddot{\eta}_{i}= \Phi_{i}\left(v_{i},\eta_{i}\right)v_{i}+J_{i}\left(\eta_{2,i} \right)\Pi_{i}u\left(\tau_{i}(t)\right) \tag{12}\] \[+J_{i}(\eta_{2,i})\Pi_{i}d_{i}(t,\eta_{i},v_{i})\] where, \(\Pi_{i}=M_{i}^{-1}\) and \(\Phi_{i}(v_{i},\eta_{i})=\dot{J}_{i}(\eta_{2,i})-J_{i}(\eta_{2,i})\Pi_{i}C_{i}(v_{i})-J_ {i}(\eta_{2,i})\Pi_{i}D_{i}(v_{i})\). For facilitating the design of controllers later, the following matrices are defined: \(\tilde{\Phi}(v,\eta)=\text{diag}\{\Phi_{1}(v_{1},\eta_{1}),...,\Phi_{N}(v_{N}, \eta_{N})\}\), \(\tilde{\Pi}=\text{diag}\{\Pi_{1},...,\Pi_{N}\}\), \(\tilde{J}(\eta_{2})=\text{diag}\{J_{1}(\eta_{2,1}),...,J_{N}(\eta_{2,N})\}\) \(u\left(\tau(t)\right)=\left[u_{1}\left(\tau_{1}(t)\right),u_{2}\left(\tau_{2}(t) \right),...,u_{N}\left(\tau_{N}(t)\right)\right]^{T}\), \(d=\left[d_{1}(t,\eta_{1},v_{1}),d_{2}(t,\eta_{2},v_{2}),...,d_{N}(t,\eta_{N},v_{ N})\right]^{T}\). Therefore, \[\ddot{\eta}=\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_{2})\bar{\Pi}u \left(\tau(t)\right)+\bar{J}(\eta_{2})\bar{\Pi}d. \tag{13}\] **Assumption 1**: _The disturbance term \(J_{i}(\eta_{2,i})\Pi_{i}d_{i}(t,\eta_{i},v_{i})\) is bounded by the positive constant \(\tilde{\lambda}_{i}\):_ \[\|J_{i}(\eta_{2,i})\Pi_{i}d_{i}(t,\eta_{i},v_{i})\|\leq\tilde{\lambda}_{i},i\in\Gamma. \tag{14}\] _The parameter \(\tilde{\lambda}_{i}\) typically depends on the internal model uncertainties and external environmental disturbances (i.e., marine environment) of the vehicles._ _Letting the following variables:_ \[\begin{split}\breve{\varepsilon}_{1}&=[\varepsilon _{1,1},\varepsilon_{1,2},...,\varepsilon_{1,N}]^{T},\\ \breve{\varepsilon}_{2}&=[\dot{\varepsilon}_{1,1},\dot{\varepsilon}_{1,2},...,\dot{\varepsilon}_{1,N}]^{T},\end{split} \tag{15}\] _and_ \[\tilde{\eta}=[\tilde{\eta}_{1},\dot{\eta}_{2},...,\tilde{\eta}_{N}]^{T}. \tag{16}\] _Adding the results in (11), (15) and (16) to form the overall error dynamics as:_ \[\begin{split}\dot{\varepsilon}_{1}&=\varepsilon_{2 },\\ \dot{\varepsilon}_{2}&=\left(L+B\right)\left(\tilde{ \eta}-\mathbf{1}_{N}\otimes\tilde{\eta}^{d}\right),\end{split} \tag{17}\] _where \(\otimes\) denotes the Kronecker product between two matrices. \(\mathbf{1}_{N}\) stands for an \(N\times 1\) vector with unitary elements._ _Then, based on (17), a fixed time backstepping SMC can be designed as follows:_ **Step 1**_: The first sliding surface is selected as:_ \[s_{1}(t)=\bar{\varepsilon}_{1}(t). 
\tag{18}\] _Differentiating (18) yields_ \[\dot{s}_{1}(t)=\alpha_{s}(t), \tag{19}\] _where \(\alpha_{s}(t)=\dot{\varepsilon}_{1}(t)\) is identified as the virtual control of the system (19)._ _To stabilise the sliding surface \(s_{1}(t)\), the following virtual control input is designed:_ \[\alpha_{s}=-\left(k_{1}s_{1}+k_{2}s_{1}^{\gamma}+k_{3}s_{1}^{\varepsilon} \right), \tag{20}\] _where \(k_{1}>0\), \(k_{2}>\) and \(k_{3}>0\), and \(0<\gamma<1\) and \(\iota>1\)._ _Consider a candidate Lyapunov function below:_ \[V_{1}=\frac{1}{2}s_{1}^{T}s_{1}. \tag{21}\] _Adding the result in (_20_) into the derivative of (_21_), we obtain:_ \[\begin{split}\dot{V}_{1}&=s_{1}^{T}\dot{s}_{1}\\ &=-s_{1}^{T}\left(k_{1}s_{1}+k_{2}s_{1}^{\gamma}+k_{3}s_{1}^{ \varepsilon}\right)\\ &=-k_{1}s_{1}^{T}s_{1}-k_{2}\left(s_{1}^{T}s_{1}\right)^{\frac{ \iota+1}{2}}-k_{3}\left(s_{1}^{T}s_{1}\right)^{\frac{\iota+1}{2}}\\ &\leq-k_{2}\left(s_{1}^{T}s_{1}\right)^{\frac{\gamma+1}{2}}-k_{3} \left(s_{1}^{T}s_{1}\right)^{\frac{\iota+1}{2}}\\ &\leq-2^{\frac{\gamma+1}{2}}k_{2}\left(\frac{1}{2}s_{1}^{T}s_{1} \right)^{\frac{\gamma+1}{2}}-2^{\frac{\iota+1}{2}}k_{3}\left(\frac{1}{2}s_{1} ^{T}s_{1}\right)^{\frac{\iota+1}{2}}.\end{split} \tag{22}\] **Step 2**_: Define the second sliding surface:_ \[s_{2}=\bar{\varepsilon}_{2}-\left(L+B\right)\alpha_{s}. \tag{23}\] _The derivative of \(s_{2}\) is:_ \[\begin{split}\dot{s}_{2}&=\dot{\bar{\varepsilon}}_{2 }-\left(L+B\right)\dot{\alpha}_{s}\\ &=\left(L+B\right)\left(\tilde{\eta}-\mathbf{1}_{N}\otimes\tilde{ \eta}^{d}-\dot{\alpha}_{s}\right).\end{split} \tag{24}\] _A candidate Lyapunov function is selected as_ \[V_{2}=\frac{1}{2}s_{2}^{T}s_{2}. \tag{25}\] _Adding the result in (_24_) into the derivative of (_25_), we obtain:_ \[\begin{split}\dot{V}_{2}&=s_{2}^{T}\dot{s}_{2}\\ &=s_{2}^{T}\left(\left(L+B\right)\left(\tilde{\eta}-\mathbf{1}_{N} \otimes\tilde{\eta}^{d}-\dot{\alpha}_{s}\right)\right)\\ &=s_{2}^{T}\left(\left(L+B\right)\left(\bar{\Phi}(\upsilon,\eta) \upsilon+\bar{J}(\eta_{2})\bar{\Pi}u(\tau(t)\right)\right.\\ &\left.+\bar{J}(\eta_{2})\bar{\Pi}d-\mathbf{1}_{N}\otimes\tilde{ \eta}^{d}-\dot{\alpha}_{s}\right)\right).\end{split} \tag{26}\] _Based on (_26_), the backstepping sliding mode controller can be taken as_ \[u(\tau(t))=[\bar{J}(\eta_{2})\bar{\Pi}]^{-1}\left(-\bar{\Phi}(\upsilon,\eta) \upsilon+\mathbf{1}_{N}\otimes\tilde{\eta}^{d}+\dot{\alpha}_{s}+\tau^{{}^{ \prime}}\right), \tag{27}\] _where,_ \[\tau^{{}^{\prime}}=-\beta_{s}\text{sign}(s_{2})-k_{8}s_{2}-k_{9}s_{2}^{\gamma}-k _{10}s_{2}^{\lambda}, \tag{28}\] _where \(k_{8},k_{9},k_{10}\) are positive constants. \(\beta_{s}\) is chosen to be \(\beta_{s}>\tilde{\lambda}\), where \(\tilde{\lambda}=[\tilde{\lambda}_{1},\ldots,\tilde{\lambda}_{N}]^{T}\), which were defined as in Assumption 1._ _Inserting the control input in (_27_) and (_28_) into (_26_), we have:_ \[\begin{split}\dot{V}_{2}&=s_{2}^{T}\dot{s}_{2}\\ &=s_{2}^{T}\left(L+B\right)\left(-\beta_{s}\text{sign}(s_{2})-k_{8 }s_{2}-k_{9}s_{2}^{\gamma+1}\right.\\ &\left.-k_{10}s_{2}^{\iota+1}+\bar{J}(\eta_{2})\bar{\Pi}d\right)\\ &\leq-2^{\frac{\gamma+1}{2}}\left(L+B\right)k_{9}\left(\frac{1}{2}s_ {2}^{T}s_{2}\right)^{\frac{\gamma+1}{2}}\\ &-2^{\frac{\iota+1}{2}}\left(L+B\right)k_{10}\left(\frac{1}{2}s_ {2}^{T}s_{2}\right)^{\frac{\iota+1}{2}}.\end{split} \tag{29}\] **Step 3**_: We define a compounded candidate Lyapunov function:_ \[V=V_{1}+V_{2}. 
\tag{30}\] Differentiating (30) yields: \[\begin{split}\dot{V}&=\dot{V}_{1}+\dot{V}_{2}\\ &\leq-2^{\frac{\gamma+1}{2}}k_{2}\left(\frac{1}{2}s_{1}^{T}s_{1} \right)^{\frac{\gamma+1}{2}}-2^{\frac{\gamma+1}{2}}k_{3}\left(\frac{1}{2}s_{1} ^{T}s_{1}\right)^{\frac{\gamma+1}{2}}\\ &-2^{\frac{\gamma+1}{2}}(L+B)k_{9}\left(\frac{1}{2}s_{2}^{T}s_{2} \right)^{\frac{\gamma+1}{2}}\\ &-2^{\frac{\gamma+1}{2}}(L+B)k_{10}\left(\frac{1}{2}s_{2}^{T}s_{2 }\right)^{\frac{\gamma+1}{2}}\\ &\leq-2^{\frac{\gamma+1}{2}}\zeta_{1}\left(V_{1}^{\frac{\gamma+1 }{2}}+V_{2}^{\frac{\gamma+1}{2}}\right)\\ &-2^{\frac{\gamma+1}{2}}\zeta_{2}\left(V_{1}^{\frac{\gamma+1}{2} }+V_{2}^{\frac{\gamma+1}{2}}\right)\\ &\leq-2^{-\frac{\gamma+1}{2}}\zeta_{1}\left(V^{\frac{\gamma+1}{2 }}\right)-2^{\frac{\gamma+1}{2}}\zeta_{2}\left(V^{\frac{\gamma+1}{2}}\right). \end{split} \tag{31}\] where \(\zeta_{1}=\text{min}\left(k_{2},(L+B)\,k_{9}\right)\) and \(\zeta_{2}=\text{min}\left(k_{3},(L+B)\,k_{10}\right)\). Therefore, thanks to Lemma 1, the global fixed-time convergence can be established for the system (31), and the reaching time can be obtained as: \[T\leq\frac{2}{\zeta_{1}2^{\frac{\gamma+1}{2}}(1-\gamma)}+\frac{2}{2^{\frac{ \gamma+1}{2}}\zeta_{2}(t-1)}. \tag{32}\] ## IV Distributed Backstepping Fuzzy Sliding Mode Controller with Input Saturation The backstepping SMC presented in section III has two main shortcomings: (i) the bigger sliding gain is chosen based on Assumption 1, for which if the disturbance is big, the controller provides a big chattering in the system, (ii) the saturated control torque effects have not been considered. In this section, we introduce an auxiliary variable and a fuzzy approximation to overcome these shortcomings. The designed control input \(\tau_{i}(t)\in\Re^{n}\) is affected by the saturation nonlinearity and can be expressed as [41]: \[u_{i}(\tau_{i}(t))=\begin{cases}\text{sign}(\tau_{i}(t))\tau_{\text{max}_{i}}, &|\tau_{i}(t)|\geq\tau_{\text{max}_{i}},\\ \tau_{i}(t),&|\tau_{i}(t)|<\tau_{\text{max}_{i}},\end{cases} \tag{33}\] where \(\tau_{\text{max}_{i}}\) represents the maximum control torque allowed for joint \(i\). Furthermore, considering the input saturation, the saturated control torque can be approximated by \[u_{i}(\tau_{i})=g_{i}(\tau_{i})+\varsigma_{i}(\tau_{i}), \tag{34}\] where \(g_{i}(\tau_{i})\) is a smooth function. \(\varsigma_{i}(\tau_{i})\) is the bounded approximation error. \(g_{i}(\tau_{i})\) can be chosen as [41]: \[\begin{split} g_{i}(\tau_{i})&=\tau_{\text{max}_{i}} \times\text{tanh}\left(\frac{\tau_{i}}{\tau_{\text{max}_{i}}}\right)\\ &=\tau_{\text{max}},\frac{e^{\tau_{i}/\tau_{\text{max}_{i}}}-e^{- \tau_{i}/\tau_{\text{max}_{i}}}}{e^{\tau_{i}/\tau_{\text{max}_{i}}}+e^{-\tau_ {i}/\tau_{\text{max}_{i}}}}.\end{split} \tag{35}\] The approximation error \(\varsigma_{i}(\tau_{i})\) is bounded by \[|\varsigma_{i}(\tau_{i})|=|u_{i}(\tau_{i})-g_{i}(\tau_{i})|\leq\tau_{\text{ max}_{i}}(1-\text{tanh}(1))=\bar{\Delta}_{i}. \tag{36}\] Let: \(g(\tau(t))=[g_{1}(\tau_{1}(t)),g_{2}(\tau_{2}(t)),...,g_{N}(\tau_{N}(t))]^{T}\), \(\varsigma(\tau(t))=[\varsigma_{1}(\tau_{1}(t)),\varsigma_{2}(\tau_{2}(t)),...,\varsigma_{N}(\tau_{N}(t))]^{T}\). To compensate for the saturated controller, an adaptive auxiliary variable \(\mu\) is introduced as: \[\dot{\mu}=-\mu+\bar{J}(\eta_{2})\bar{\Pi}\left(g(\tau)-\tau\right). \tag{37}\] For this controller, **Step 1** is as in section III. 
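Before re-deriving Step 2, the smooth saturation model of Eqs. (33)-(36) can be checked numerically as below; the torque limit and the sampled torque range are arbitrary illustrative values.

```
import numpy as np

def hard_saturation(tau, tau_max):
    """Saturated control effort u(tau) of Eq. (33)."""
    return np.clip(tau, -tau_max, tau_max)

def smooth_saturation(tau, tau_max):
    """Smooth approximation g(tau) = tau_max * tanh(tau / tau_max) of Eq. (35)."""
    return tau_max * np.tanh(tau / tau_max)

# The approximation error |u - g| never exceeds tau_max * (1 - tanh(1)) (Eq. 36).
tau_max = 20.0
tau = np.linspace(-60.0, 60.0, 2001)
err = np.abs(hard_saturation(tau, tau_max) - smooth_saturation(tau, tau_max))
print(err.max(), tau_max * (1.0 - np.tanh(1.0)))
```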
**Step 2** will be re-designed as follows: **Step 2**: The error variable \(s_{2}\) can be redefined as \[s_{2}=\bar{\varepsilon}_{2}-(L+B)\alpha_{s}-(L+B)\mu. \tag{38}\] The derivative of \(s_{2}\) can be computed as \[\begin{split}\dot{s}_{2}&=\dot{\bar{\varepsilon}}_{2}-(L+B)\dot{\alpha}_{s}-(L+B)\dot{\mu}\\ &=(L+B)\left(\bar{\eta}-\mathbf{1}_{N}\otimes\vec{\eta}^{d}-\dot{\alpha}_{s}-\dot{\mu}\right)\\ &=(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_{2})\bar{\Pi}u(\tau)+\bar{J}(\eta_{2})\bar{\Pi}d-\mathbf{1}_{N}\otimes\vec{\eta}^{d}-\dot{\alpha}_{s}+\mu-\bar{J}(\eta_{2})\bar{\Pi}\left(g(\tau)-\tau\right)\right)\\ &=(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_{2})\bar{\Pi}(\varsigma(\tau)+d)-\mathbf{1}_{N}\otimes\vec{\eta}^{d}+\mu+\bar{J}(\eta_{2})\bar{\Pi}\tau-\dot{\alpha}_{s}\right).\end{split} \tag{39}\] By using a FLC to approximate the lumped uncertainty and disturbance \(\bar{J}(\eta_{2})\bar{\Pi}(\varsigma(\tau)+d)\), the derivative of the error \(s_{2}\) in (39) can be represented as \[\dot{s}_{2}=(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+W^{*T}\Psi(Z)+\epsilon(t)-\mathbf{1}_{N}\otimes\vec{\eta}^{d}+\mu+\bar{J}(\eta_{2})\bar{\Pi}\tau-\dot{\alpha}_{s}\right), \tag{40}\] where \(\epsilon(t)=\bar{J}(\eta_{2})\bar{\Pi}\left(\varsigma(\tau)+d(t,\eta,\upsilon)\right)-W^{*T}\Psi(Z)\). From (14), (36) and Lemma 3, we can obtain \(||\epsilon(t)||\leq\bar{\epsilon}\), where \(\bar{\epsilon}>0\). A candidate Lyapunov function is defined as \[V_{3}(s_{2},\tilde{\theta})=\frac{1}{2}s_{2}^{T}s_{2}+\frac{1}{2}\tilde{\theta}^{T}\tilde{\theta}, \tag{41}\] where \(\tilde{\theta}=\theta-\hat{\theta}\) is the weight approximation error with \(\theta=\max_{h\in H}\|W^{*}\|\), and \(\hat{\theta}\) is the approximation of \(\theta\). The derivative of \(V_{3}\) in (41) can be computed as: \[\begin{split}\dot{V}_{3}&=s_{2}^{T}\dot{s}_{2}-\tilde{\theta}^{T}\dot{\hat{\theta}}\\ &=s_{2}^{T}(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+W^{*T}\Psi(Z)+\epsilon(t)-\mathbf{1}_{N}\otimes\vec{\eta}^{d}+\mu+\bar{J}(\eta_{2})\bar{\Pi}\tau-\dot{\alpha}_{s}\right)-\tilde{\theta}^{T}\dot{\hat{\theta}}.\end{split} \tag{42}\] Let \(F_{sum}=\bar{\Phi}(\upsilon,\eta)\upsilon-\mathbf{1}_{N}\otimes\vec{\eta}^{d}+\mu-\dot{\alpha}_{s}\). Applying Young's inequality, we have \[s_{2}^{T}W^{*T}\Psi(Z)\leq\frac{1}{2}+\frac{1}{2}s_{2}^{2}\theta\Psi(Z)^{T}\Psi(Z). \tag{43}\] Then, \(\dot{V}_{3}\) becomes: \[\dot{V}_{3}\leq s_{2}^{T}(L+B)\left(\bar{J}(\eta_{2})\bar{\Pi}\tau+F_{sum}+\epsilon(t)+\frac{1}{2}s_{2}^{2}\hat{\theta}\Psi(Z)^{T}\Psi(Z)\right)+\frac{1}{2}(L+B)-\tilde{\theta}^{T}\dot{\hat{\theta}}. \tag{44}\] Based on (44), a distributed control input is designed as: \[\begin{split}&\tau=\left(\bar{J}\left(\eta_{2}\right)\bar{\Pi}\right)^{-1}\left(-F_{sum}-\frac{1}{2}s_{2}^{2}\hat{\theta}\Psi(Z)^{T}\Psi(Z)\right.\\ &\left.-\beta_{s}\text{sign}(s_{2})-k_{8}s_{2}-k_{9}s_{2}^{\gamma}-k_{10}s_{2}^{\iota}\right),\end{split} \tag{45}\] where \(\beta_{s}\) is selected such that \(\beta_{s}>\bar{\epsilon}\). The adaptive law of the FLC can be selected as \[\dot{\hat{\theta}}=\frac{1}{2}\left(L+B\right)s_{2}^{2}\Psi(Z)^{T}\Psi(Z)-w_{1}\hat{\theta}^{\gamma}-w_{2}\hat{\theta}^{\iota}.
\tag{46}\] Therefore, \[\begin{split}\dot{V}_{3}&\leq s_{2}^{T}(L+B)\left( -k_{8}s_{2}-k_{9}s_{2}^{7}-k_{10}s_{2}^{t}\right)\\ &+\frac{1}{2}(L+B)+\tilde{\theta}^{T}\left(w_{1}\hat{\theta}^{ \gamma}+w_{2}\hat{\theta}^{i}\right).\end{split} \tag{47}\] Using inequality: \[\tilde{\theta}\tilde{\theta}^{\gamma}\leq l_{1}\theta^{1+\gamma}-l_{2}\tilde{ \theta}^{1+\gamma}, \tag{48}\] \[\tilde{\theta}\tilde{\theta}^{\epsilon}\leq l_{1}\theta^{1+\epsilon}-l_{2} \tilde{\theta}^{1+\epsilon}. \tag{49}\] Taking the results in (47), (48) and (49) together, we have: \[\begin{split}\dot{V}_{3}&\leq(L+B)\left(-k_{8}s_{2 }-k_{9}s_{2}^{\gamma+1}-k_{10}s_{2}^{t+1}\right)\\ &+\frac{1}{2}(L+B)+w_{1}\left(l_{1}\theta^{1+\gamma}-l_{2}\tilde {\theta}^{1+\gamma}\right)\\ &+w_{2}\left(l_{1}\theta^{1+\epsilon}-l_{2}\tilde{\theta}^{1+ \epsilon}\right)\\ &\leq(L+B)\left(-k_{8}s_{2}-k_{9}s_{2}^{\gamma+1}-k_{10}s_{2}^{t +1}\right)\\ &-l_{2}\left(w_{1}\tilde{\theta}^{1+\gamma}+w_{2}\tilde{\theta}^ {1+\epsilon}\right)+\sigma,\end{split} \tag{50}\] where: \[\sigma=\frac{1}{2}\left(L+B\right)+l_{1}\left(w_{1}\theta^{1+\gamma}+w_{2} \theta^{1+\epsilon}\right). \tag{51}\] Therefore, \[\begin{split}\dot{V}_{3}&\leq-k_{9}\left(L+B\right)s _{2}^{\gamma+1}-l_{2}w_{1}\tilde{\theta}^{1+\gamma}\\ &-k_{10}\left(L+B\right)s_{2}^{t+1}-l_{2}w_{2}\tilde{\theta}^{1 +\epsilon}+\sigma\\ &\leq-2^{\frac{\gamma+1}{2}}\nu_{1}V_{3}^{\frac{1+\gamma}{2}}-2 ^{\frac{\gamma+1}{2}}\nu_{2}V_{3}^{\frac{1+\epsilon}{2}}+\sigma.\end{split} \tag{52}\] Here, \[\nu_{1}=\text{min}\left(k_{9}\left(L+B\right),l_{2}w_{1}\right) \tag{53}\] \[\nu_{2}=\text{min}\left(k_{10}\left(L+B\right),l_{2}w_{2}\right) \tag{54}\] Thus, according to Lemma 2, the value \(s_{2}\) and \(\tilde{\theta}\) will converge to zero. The convergence time can be calculated as \(T\leq\frac{2}{\nu_{1}(1-\gamma)}+\frac{2}{\nu_{2}(u-1)}\). **Step 3**: Define a candidate Lyapunov function: \[V=V_{1}+V_{3}. \tag{55}\] The derivative of the above Lyapunov function is \[\begin{split}\dot{V}&\leq-2^{\frac{\gamma+1}{2}} \lambda_{min}\{k_{2},k_{5}\}V_{1}^{\frac{\gamma+1}{2}}-2^{\frac{\gamma+1}{2}} \lambda_{min}\{k_{3},k_{6}\}V_{1}^{\frac{\gamma+1}{2}}\\ &-2^{\frac{\gamma+1}{2}}\nu_{1}V_{3}^{\frac{\gamma+1}{2}}-2^{ \frac{\gamma+1}{2}}\nu_{2}V_{3}^{\frac{\gamma+1}{2}}+\sigma\\ &\leq-2^{\frac{\gamma+1}{2}}\chi_{1}\left(V_{1}^{\frac{\gamma+1}{ 2}}+V_{3}^{\frac{\gamma+1}{2}}\right)\\ &-2^{\frac{\gamma+1}{2}}\chi_{2}\left(V_{1}^{\frac{\gamma+1}{2}}+V _{3}^{\frac{\gamma+1}{2}}\right)+\sigma\\ &\leq-2^{\frac{\gamma+1}{2}}\chi_{1}\left(V^{\frac{\gamma+1}{2}} \right)-2^{\frac{\gamma+1}{2}}\chi_{2}\left(V^{\frac{\gamma+1}{2}}\right)+ \sigma,\end{split} \tag{56}\] where \(\chi_{1}=\text{min}\{\nu_{1},\lambda_{min}\{k_{2},k_{5}\}\}\) and \(\chi_{2}=\text{min}\{\lambda_{min}\{k_{3},k_{6}\},\nu_{2}\}\). Therefore, according to Lemma 2, the global fixed-time convergence of the system is guaranteed. The settling time can be calculated as: \[T\leq\frac{2}{\chi_{1}2^{\frac{\gamma+1}{2}}\kappa(1-\gamma)}+\frac{2}{\chi_{ 2}2^{\frac{\gamma+1}{2}}\kappa(t-1)}. \tag{57}\] **Remark 1**: _The employment of \(sign\) function in (45) generates a chattering in the system. 
In order to reduce the chattering, the controller (45) can be revised as_ \[\begin{split}\tau&=\left(\bar{J}(\eta_{2})\bar{\Pi} \right)^{-1}\left(-F_{sum}-\frac{1}{2}s_{2}^{2}\tilde{\theta}\Psi(Z)^{T}\Psi(Z) \right.\\ &\left.-\beta_{s}(\frac{s_{2}}{||s_{2}||+\epsilon_{1}})-k_{8}s_{2 }-k_{9}s_{2}^{7}-k_{10}s_{2}^{t}\right),\end{split} \tag{58}\] _where \(\epsilon_{1}\) is a small positive number._ ## V Results and Discussions In this section, we validate the performance of the proposed algorithm. The dynamic model of each vehicle is described as in (9), where the parameters are selected as in Table I [16]. Fig. 1 illustrates the connection between AUVs and the virtual leader, and \(\alpha_{12}=a_{21}=\alpha_{23}=\alpha_{32}=\alpha_{34}=\alpha_{43}=1\). As illustrated in Fig. 1, in this considered communication topology, the desired trajectory will be communicated and given to the AUV-1, i.e., \(b_{1}=1\). Therefore, the \(L\) and \(B\) matrices can be calculated as: \[L=\begin{bmatrix}1&-1&0&0\\ -1&2&-1&0\\ 0&-1&2&-1\\ 0&0&-1&1\end{bmatrix},B=\begin{bmatrix}1&0&0&0\\ 0&0&0&0\\ 0&0&0&0&0\end{bmatrix}. \tag{59}\] \begin{table} \begin{tabular}{l l l l} \hline \hline Parameters & \(Value\) & Parameters & \(Value\) \\ \hline \(m_{i}\) & \(20\) & \(I_{s,i}\) & \(20\) \\ \(I_{y,i}\) & \(30\) & \(I_{s,i}\) & \(35\) \\ \(t_{vx,i}\) & \(-8\) & \(\iota_{vy,i}\) & \(-10\) \\ \(t_{vy,i}\) & \(-9\) & \(\iota_{0v,x,i}\) & \(-7\) \\ \(t_{vy,i}\) & \(-8\) & \(\iota_{0v,x,i}\) & \(-6\) \\ \(\iota_{vx,i}\) & \(-0.2\) & \(\iota_{vy,i}\) & \(-0.25\) \\ \(t_{\omega z,i}\) & \(-0.15\) & \(\iota_{0v,x,i}\) & \(-20\) \\ \(\iota_{\omega y,i}\) & \(-30\) & \(\iota_{y,i}\) & \(-35\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Parameters used in the simulation of _i_th AUV (\(i\in\{1,2,3,4\}\)) The moving trajectory of the virtual leader is selected as \(\eta^{d}(t)=[30-30e^{-t},5t,2t,0,0,0]^{T}\). The desired posture between AUVs are given by \(\delta_{12}=[0,10,0]^{T}\), \(\delta_{21}=[0,-10,0]^{T}\), \(\delta_{23}=[-10,0,0]^{T}\), \(\delta_{32}=[10,0,0]^{T}\), \(\delta_{34}=[0,-10,0]^{T}\), and \(\delta_{43}=[0,10,0]^{T}\). All the vehicles have the same orientation. The relative distance between the virtual leader and AUV 1 is \(\delta_{1d}=[20,0,0]^{T}\). It is assumed that the four AUVs will start from the initial positions: \(\eta_{1}(0)=[2,3,3,0.3,0,0.2]^{T}\), \(\eta_{2}(0)=[2.5,3.5,3.0,2.0,0.25]^{T}\), \(\eta_{3}(0)=[2,3,3,0.3,0,0.2]^{T}\), \(\eta_{4}(0)=[3,3,2,0.3,0,0.2]^{T}\), and \(v_{i}=0_{6\times 1},i\in\{1,2,3,4\}\) is set for the initial velocities of AUVs. The disturbance term is assumed to be: \[d_{i}(t,\eta_{i},v_{i})= [2.5\sin(t)-0.5v_{xi}^{2}-0.7\sin(v_{xi}v_{yi}),\] \[2.5\cos(t)+0.1v_{xi}^{2}+0.5\sin(v_{yi}),\] \[2.5\sin(t)+0.7v_{xi}^{2}+0.8\sin(v_{zi}),\] \[0.5\sin(t)+0.2v_{\phi i}^{3}, \tag{60}\] \[0.5\cos(t)-0.2v_{\theta i}^{2},\] \[0.5\sin(t)-0.4v_{\phi i}^{3},]^{T},\] \[(i\in\{1,2,3,4\}).\] Note that the above parameters are selected to be quite similar to the parameters used in [16] to facilite the comparison later. However, in this experiment, the disturbance term (60) is modeled to be more severe to include both environmental disturbances and model uncertainties. The selected parameters, which were chosen based on a trial-and-error procedure, for the proposed controller in this simulation are \(k_{1}=k_{8}=5\), \(k_{2}=k_{3}=k_{9}=k_{10}=0.4\), \(w_{1}=w_{2}=1\), \(\beta_{s}=20\). The parameters \(\gamma=5/7\) and \(\iota=7/5\). 
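As a quick sanity check on the communication topology (illustrative only; it simply reconstructs \(L\) and \(B\) from the adjacency and pinning gains quoted above), the sketch below verifies that \(L\) has zero row sums, as a graph Laplacian should, and that \(L+B\) is symmetric positive definite, which is the property commonly relied upon for the \((L+B)\)-weighted error transformations used in the analysis:

```python
import numpy as np

# Adjacency weights a_ij and pinning gains b_i quoted in Section V
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0, 0.0])       # only AUV-1 receives the leader's trajectory

L = np.diag(A.sum(axis=1)) - A            # graph Laplacian, matches Eq. (59)
B = np.diag(b)

print("L =\n", L)
print("row sums of L:", L.sum(axis=1))                    # all zero
eigs = np.linalg.eigvalsh(L + B)
print("eigenvalues of L+B:", np.round(eigs, 3))           # all strictly positive
assert np.all(eigs > 0)
```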
The control efforts are saturated by \(\tau_{\text{max}}=300\) Nm. The proposed controller uses the membership functions below, which were tuned based on a trial-and-error procedure: \[\begin{array}{l}\mu_{A_{i}^{1}}=\exp\left(-(Z_{i}+7)^{2}/4\right),\ \mu_{A_{i}^{2}}=\exp\left(-(Z_{i}+5)^{2}/4\right),\ \mu_{A_{i}^{3}}=\exp\left(-(Z_{i}+3)^{2}/4\right),\\ \mu_{A_{i}^{4}}=\exp\left(-(Z_{i}+1)^{2}/4\right),\ \mu_{A_{i}^{5}}=\exp\left(-Z_{i}^{2}/4\right),\ \mu_{A_{i}^{6}}=\exp\left(-(Z_{i}-1)^{2}/4\right),\\ \mu_{A_{i}^{7}}=\exp\left(-(Z_{i}-3)^{2}/4\right),\ \mu_{A_{i}^{8}}=\exp\left(-(Z_{i}-5)^{2}/4\right),\ \mu_{A_{i}^{9}}=\exp\left(-(Z_{i}-7)^{2}/4\right).\end{array}\] The input of the FLC is \(Z_{i}=[\eta_{i},\upsilon_{i}]^{T}\). To reduce chattering, the controller (58) is used with \(\epsilon_{1}=0.01\). To highlight the superior performance of the proposed controller, it is compared with the distributed SMC of [16], which is summarised in Appendix A. The sliding gain of the SMC is selected as \(\beta_{0}=200\). Note that the SMC [16] does not consider the effects of input saturation in its design.

The tracking performances of the proposed controller are shown in Figs. 2-5, while those of the SMC are shown in Figs. 6-9. In particular, Fig. 2 shows the formation shape of the four AUVs under the proposed controller. Compared with the formation shape under the SMC controller shown in Fig. 6, the proposed controller provides faster and smoother convergence. Figs. 3 and 4 show the tracking errors \(\varepsilon_{1,i}\) and \(\varepsilon_{2,i}\) \((i=1,2,3,4)\) under the proposed controller, which converge to zero. The corresponding errors under the SMC are shown in Figs. 7 and 8, respectively. Comparing Figs. 7 and 8 with Figs. 3 and 4 shows that the SMC yields faster convergence of the tracking errors; however, this is because the sliding gain of the SMC was chosen to be much larger, which leads to much higher, possibly physically unrealisable, control efforts, as shown in Fig. 9. Fig. 5 shows the control efforts of the proposed controller, which are smoother and bounded by \(\tau_{\text{max}}\).

Fig. 3: Position tracking error \(\varepsilon_{1}=e_{1}\) of the AUVs under the proposed controller. Fig. 4: Velocity tracking error \(\varepsilon_{2}=e_{2}\) of the AUVs under the proposed controller. Fig. 5: Control efforts \(\tau_{i}\) \((i=1,2,3,4)\) of the AUVs under the proposed controller. Fig. 6: The formation shape of four AUVs under the SMC controller.

## VI Conclusions

In this paper, distributed fixed-time formation tracking controllers for multiple AUVs subject to disturbances and input saturation have been developed, in which backstepping sliding mode control and fuzzy approximation have been employed. The computer simulation of formation consensus control for four AUVs demonstrated that the fixed-time controller provides a quick response and stability for the group of AUVs and handles the input saturation problem well. In future works, we will address the multiple-constraints problem (i.e., workspace constraints, state constraints and input constraints simultaneously) and obstacle avoidance for multiple collaborative AUVs. A physical experiment system based on the hardware of BlueROV2 robots is being developed, and the experimental results will be reported in future works.

## Appendix A: Design of the Distributed Sliding Mode Controller

The SMC can be derived as follows [16]. First, the sliding surface is selected as \[s=k_{1}(L+B)\bar{\varepsilon}_{1}+\bar{\varepsilon}_{2}, \tag{61}\] where \(k_{1}\) is a positive constant. The sliding mode control law can be designed as \[\tau=[\bar{J}(\eta_{2})\bar{\Pi}]^{-1}\left(-\bar{\Phi}(\upsilon,\eta)\upsilon+\tau^{\prime}\right), \tag{62}\] where \(\tau^{\prime}\) is designed as \[\tau^{\prime}=-k_{1}\bar{\varepsilon}_{2}+\mathbf{1}_{N}\otimes\bar{\eta}^{d}-\beta_{0}\,\text{sign}(s). \tag{63}\] To reduce the chattering, the switching term in (63) is smoothed, giving \[\tau^{\prime}=-k_{1}\bar{\varepsilon}_{2}+\mathbf{1}_{N}\otimes\bar{\eta}^{d}-\beta_{0}\frac{s}{||s||+\epsilon_{1}}. \tag{64}\] The convergence and stability analysis of the SMC can be found in [16].
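To illustrate the effect of the boundary-layer smoothing in (64) (and of the analogous modification (58) in the main text), the toy simulation below drives a scalar sliding variable with a discontinuous \(\text{sign}(s)\) term versus the smoothed \(s/(|s|+\epsilon_{1})\) term; the gains, disturbance, and time step are arbitrary placeholders rather than the values used in the paper's simulations:

```python
import numpy as np

def simulate(smooth, beta=5.0, eps1=0.01, dt=1e-3, T=2.0):
    # Toy reaching dynamics s_dot = -beta*switch(s) + d(t), with a bounded disturbance |d| <= 1 < beta.
    n = int(T / dt)
    s, u_hist = 1.0, []
    for k in range(n):
        d = np.sin(20 * np.pi * k * dt)
        switch = s / (abs(s) + eps1) if smooth else np.sign(s)
        u = -beta * switch                 # control effort
        s += dt * (u + d)
        u_hist.append(u)
    u_hist = np.array(u_hist)
    # total variation of the control over the last half of the run (after the reaching phase)
    tv = np.abs(np.diff(u_hist[n // 2:])).sum()
    return abs(s), tv

for smooth in (False, True):
    s_end, tv = simulate(smooth)
    label = "s/(|s|+eps1)" if smooth else "sign(s)     "
    print(f"{label}: final |s| = {s_end:.4f}, control variation = {tv:.1f}")
```

Both switching laws keep \(|s|\) small, but the total variation of the control signal drops by orders of magnitude with the smoothed term, which is the chattering reduction the revised controllers are after.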
2306.06091
Testing the light scalar meson as a non-$q\bar q$ state in semileptonic $D$ decays
To distinguish between the normal $q\bar q$ and exotic diquark-antidiqark ($q^2\bar q^2$) contents of the lowest-lying scalar meson ($S_0$), we investigate the semileptonic $D\to S_0 e^+\nu_e, S_0\to M_1 M_2$ decays, where $M_{1(2)}$ represents a pseudoscalar meson. With the form factors extracted from the current data, we calculate ${\cal B}(D_s^+\to \sigma_0 e^+\nu_e,\sigma_0\to\pi^0\pi^0) =(12.9^{+6.3}_{-4.9})\times 10^{-4}$ and $(0.8^{+1.2}_{-0.7})\times 10^{-4}$ for the $q\bar q$ and $q^2\bar q^2$ quark structures, respectively, and compare them to the experimental upper limit: $6.4\times 10^{-4}$. It is clearly seen that $S_0$ prefers to be the $q^2\bar q^2$ bound state. Particularly, ${\cal B}_{q\bar q}(D_s^+\to \sigma_0 e^+\nu_e,\sigma_0\to\pi^+\pi^-) =(25.8^{+12.5}_{-\;\,9.8})\times 10^{-4}$ and ${\cal B}_{q^2\bar q^2}(D_s^+\to \sigma_0 e^+\nu_e,\sigma_0\to\pi^+\pi^-) =(1.5^{+2.4}_{-1.3})\times 10^{-4}$ are predicted to deviate far from each other, useful for a clear experimental investigation.
Yu-Kuo Hsiao, Shu-Qi Yang, Wen-Juan Wei, Bai-Cian Ke
2023-06-09T17:52:08Z
http://arxiv.org/abs/2306.06091v1
# Testing the light scalar meson as a non-\(q\bar{q}\) state ###### Abstract To distinguish between the normal \(q\bar{q}\) and exotic diquark-antidiqark (\(q^{2}\bar{q}^{2}\)) contents of the lowest-lying scalar meson (\(S_{0}\)), we investigate the semileptonic \(D\to S_{0}e^{+}\nu_{e},S_{0}\to M_{1}M_{2}\) decays, where \(M_{1(2)}\) represents a pseudoscalar meson. With the form factors extracted from the current data, we calculate \(\mathcal{B}(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{0}\pi^{0})=( 12.9^{+6.3}_{-4.9})\times 10^{-4}\) and \((0.8^{+1.2}_{-0.7})\times 10^{-4}\) for the \(q\bar{q}\) and \(q^{2}\bar{q}^{2}\) quark structures, respectively, and compare them to the experimental upper limit: \(6.4\times 10^{-4}\). It is clearly seen that \(S_{0}\) prefers to be the \(q^{2}\bar{q}^{2}\) bound state. Particularly, \(\mathcal{B}_{q\bar{q}}(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{ +}\pi^{-})=(25.8^{+12.5}_{-~{}9.8})\times 10^{-4}\) and \(\mathcal{B}_{q^{2}\bar{q}^{2}}(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0} \to\pi^{+}\pi^{-})=(1.5^{+2.4}_{-1.3})\times 10^{-4}\) are predicted to deviate far from each other, useful for a clear experimental investigation. pacs: 12.38.-t, 12.38.Gc, 12.38.Gc, 12.38.Gc Introduction The lowest-lying scalar meson (\(S_{0}\)), such as \(f_{0}\equiv f_{0}(980)\), \(a_{0}\equiv a_{0}(980)\), or \(\sigma_{0}\equiv f_{0}(500)\), has an unclear quark content. One might think of \(S_{0}\) the normal p-wave \(q\bar{q}\) meson [1; 2; 3; 4; 5; 6; 7], the compact diquark-antidiquark (\(q^{2}\bar{q}^{2}\)) tetraquark [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19], or the molecular \(M_{1}M_{2}\) bound state [20; 21; 22; 23; 24; 25; 26], where \(M_{1(2)}\) represents a pseudoscalar meson. The nature of the light scalar meson thus remains a puzzle of the hadronization. Several decay processes have been used to solve the puzzle [27; 28; 29; 30; 31; 32]. Particularly, the semileptonic \(D\to S_{0}e^{+}\nu_{e}\) weak decays involve the \(S_{0}\) formation, and advantageously, there is no QCD effect caused by the lepton pair besides the \(D\to S_{0}\) transition. A clean test-bed for exploring the scalar meson is hence provided. BESIII collaboration has recently measured the semileptonic \(D_{s}^{+}\) decays, whose branching fractions are reported as [33; 34] \[{\cal B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{+}\pi^{-}) = (1.72\pm 0.13\pm 0.10)\times 10^{-3}\,,\] \[{\cal B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{0}\pi^{0}) = (7.9\pm 1.4\pm 0.3)\times 10^{-4}\;,\] \[{\cal B}(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{0} \pi^{0}) < 6.4\times 10^{-4}\;(90\%\ {\rm C.L.})\,. \tag{1}\] As a parameterization of the \(D_{s}^{+}\) to \(f_{0}\) transition [6], the form factor \(f^{+}(p^{2})\) with \(p^{2}=(p_{D_{s}}-p_{f_{0}})^{2}\) plays a key role in interpreting the branching fraction, which has been extracted as \(f^{+}(0)|V_{cs}|=0.504\pm 0.017\pm 0.035\) by BESIII [33] with \(f^{+}(0)=f^{+}(p^{2})\) at \(p^{2}=0\) and \(|V_{cs}|\) the \(c\to s\) Cabibbo-Kobayashi-Maskawa (CKM) matrix element. In addition, the fitted \(f^{+}(0)\) has been used to test the model calculations [33], which is believed to help to investigate the constituents of the scalar meson. In fact, \(f^{+}(0)\) extracted from a single decay of \(D_{s}^{+}\to f_{0}e^{+}\nu,f_{0}\to\pi^{+}\pi^{-}\) can barely be applied to distinguish one quark content for \(S_{0}\) from the other. 
Only when \(f^{+}(0)\) is related to those of the other \(D_{s}^{+}\to S_{0}e^{+}\nu_{e},S_{0}\to M_{1}M_{2}\) decays, it becomes possible to discern the \(q\bar{q}\) or \(q^{2}\bar{q}^{2}\) nature of the scalar meson. The reason is that the light scalar mesons actually mix with each other, and the two possible quark structures have different scenarios for the mixing. Taking the \(f_{0}-\sigma_{0}\) mixing as our example, the mixing angle for \(q\bar{q}\) is commonly studied as \(|\theta|\simeq 20^{\circ}\)[7], whereas the one for \(q^{2}\bar{q}^{2}\) is less than \(10^{\circ}\)[12], which would cause sizable distinction for the branching fractions. In this report, we propose to re-fit \(f^{+}(0)\) in the \(D_{s}^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{+}\pi^{-}\) decay, which needs to agree with the experimental extraction. Then, we will relate \(f^{+}(0)\) to those of the other semileptonic \(D_{s}^{+}\) decays, and calculate the branching fractions for the different quark structures. Likewise, the scalar formation will be investigated in the semileptonic \(D^{+(0)}\) decays. Since the mixing scenarios can cause sizable distinction for the branching fractions, by comparing our calculations to the current data, it is anticipated that one of the quark contents will be preferred. ## II Formalism According to the effective Hamiltonians of the quark-level \(c\to qe^{+}\nu_{e}\) decays [36], the amplitude of \(D\to S_{0}e^{+}\nu_{e}\) is written as \[{\cal M}(D\to S_{0}e^{+}\nu_{e}) = \frac{G_{F}}{\sqrt{2}}V_{cq}\langle S_{0}|\bar{q}\gamma_{\mu}(1- \gamma_{5})c|D\rangle\bar{u}_{\nu}\gamma_{\mu}(1-\gamma_{5})v_{e}\,, \tag{2}\] where \(G_{F}\) is the Fermi constant, \(V_{cq}\) the CKM matrix element, and \(q\) denotes \(s\) or \(d\). Specifically, we explore \(S_{0}\) as \(a_{0}\), \(f_{0}\), and \(\sigma_{0}\). In Eq. (2), one parameterizes the matrix elements of the \(D\) to \(S_{0}\) transition as [37] \[\langle S_{0}|(\bar{q}c)|D\rangle = i(p_{D}+p_{S_{0}})_{\mu}f^{+}(p^{2})+i\frac{m_{D}^{2}-m_{S_{0}}^ {2}}{p^{2}}p_{\mu}[f^{0}(p^{2})-f^{+}(p^{2})]\,, \tag{3}\] where \((\bar{q}c)=\bar{q}\gamma_{\mu}(1-\gamma_{5})c\), \(p_{\mu}=(p_{D}-p_{S_{0}})_{\mu}\), and \(f^{+,0}(p^{2})\) are the form factors. For the momentum dependence, \(f^{+,0}(p^{2})\) are presented with the double and single pole representations, respectively, given by [33; 37] \[f^{+}(p^{2})=\frac{f(0)}{(1-p^{2}/m_{\rm A}^{2})^{2}}\,,\;f^{0}(p^{2})=\frac{ f(0)}{1-p^{2}/m_{\rm B}^{2}}\,, \tag{4}\] with \(f(0)\) at \(p^{2}=0\) and the pole masses \(m_{\rm A}\) and \(m_{\rm B}\). The scalar mesons appear as resonances in \(D\to S_{0}e^{+}\nu_{e},S_{0}\to M_{1}M_{2}\). We follow Ref. [38] to use the Flatte formula for the two-channel resonances of \(f_{0}\) and \(a_{0}\), whereas the resonance of \(\sigma_{0}\) is by the Breit-Wigner model [35]. The resonant amplitudes of \(S_{0}\to M_{1}M_{2}\) are thus presented as \[{\cal R}_{f_{0}} \equiv {\cal R}(f_{0}\to\pi\pi)=\frac{C_{f_{0}\to\pi\pi}}{m_{f_{0}}^{2} -t-im_{f_{0}}(g_{f_{0}\pi\pi}\rho_{\pi\pi}+g_{f_{0}KK}\rho_{KK})}\,,\] \[{\cal R}_{a_{0}} \equiv {\cal R}(a_{0}\to\pi\eta)=\frac{C_{a_{0}\to\pi\eta}}{m_{a_{0}}^{ 2}-t-i(g_{a_{0}\pi\eta}^{2}\rho_{\pi\eta}+g_{a_{0}KK}^{2}\rho_{KK})}\,,\] \[{\cal R}_{\sigma_{0}} \equiv {\cal R}(\sigma_{0}\to\pi\pi)=\frac{C_{\sigma_{0}\to\pi\pi}}{m_{ \sigma_{0}}^{2}-t-i\,m_{\sigma_{0}}\Gamma_{\sigma_{0}}}\,, \tag{5}\] with the denominators from [39; 40], [41; 42], and [35; 43], respectively. 
Moreover, we define \(t\equiv(p_{M_{1}}+p_{M_{2}})^{2}\), \(\rho_{M_{1}M_{2}}\equiv[(1-m_{+}^{2}/t)(1-m_{-}^{2}/t)]^{1/2}\) with \(m_{\pm}=m_{M_{1}}\pm m_{M_{2}}\), and \(\Gamma_{\sigma_{0}}\equiv(\rho_{\pi\pi}/\bar{\rho}_{\pi\pi})\Gamma_{\sigma_{0}}^ {0}\) with \(\bar{\rho}_{\pi\pi}=\rho_{\pi\pi}(t)\) at \(t=m_{\sigma_{0}}^{2}\), while \(\langle M_{1}M_{2}|S_{0}\rangle\equiv C_{S_{0}\to M_{1}M_{2}}\) explains the \(S_{0}\to M_{1}M_{2}\) strong decay. Subsequently, the amplitude of \(D\to S_{0}e^{+}\nu_{e},S_{0}\to M_{1}M_{2}\) decay is given by \({\cal M}(D\to S_{0}e^{+}\nu,S_{0}\to M_{1}M_{2})={\cal R}_{S_{0}}{\cal M}(D \to S_{0}e^{+}\nu_{e})\). The two possible quark contents of \(S_{0}\) need the investigation, written as \[|f_{0}\rangle=\cos\theta_{I}|s\bar{s}\rangle+\sin\theta_{I}|n\bar{ n}\rangle\,,\] \[|\sigma_{0}\rangle=-\sin\theta_{I}|s\bar{s}\rangle+\cos\theta_{I} |n\bar{n}\rangle\,,\] \[|a_{0}^{0}\rangle=|\sqrt{1/2}(u\bar{u}-d\bar{d})\rangle\,,|a_{0}^ {-}\rangle=|d\bar{u}\rangle\,, \tag{6}\] for the \(q\bar{q}\) structures [12; 28], and \[|f_{0}\rangle=\cos\theta_{II}|n\bar{n}s\bar{s}\rangle+\sin\theta _{II}|u\bar{u}d\bar{d}\rangle\,,\] \[|\sigma_{0}\rangle=-\sin\theta_{II}|n\bar{n}s\bar{s}\rangle+\cos \theta_{II}|u\bar{u}d\bar{d}\rangle\,,\] \[|a_{0}^{0}\rangle=|\sqrt{1/2}(u\bar{u}-d\bar{d})s\bar{s}\rangle \,,|a_{0}^{-}\rangle=|d\bar{u}s\bar{s}\rangle\,, \tag{7}\] for the \(q^{2}\bar{q}^{2}\) structures [12], where \(|n\bar{n}\rangle\equiv|\sqrt{1/2}(u\bar{u}+d\bar{d})\rangle\) and \(|n\bar{n}s\bar{s}\rangle\equiv|\sqrt{1/2}(u\bar{u}+d\bar{d})s\bar{s}\rangle\), and \(\theta_{I,II}\) are the \(f_{0}-\sigma_{0}\) mixing angles. For the \(q\bar{q}\) structures, we derive that \[\langle f_{0},\sigma_{0}|(\bar{d}c)|D^{+}\rangle=(\sin\theta_{I}, \cos\theta_{I})\times\langle f_{n}|(\bar{d}c)|D^{+}\rangle\,,\] \[\langle a_{0}^{0}|(\bar{d}c)|D^{+}\rangle=\sqrt{1/2}\langle a_{0}^ {-}|(\bar{d}c)|D^{0}\rangle=-\langle f_{n}|(\bar{d}c)|D^{+}\rangle\,,\] \[\langle f_{0},\sigma_{0},a_{0}^{0}|(\bar{s}c)|D_{s}^{+}\rangle=( \cos\theta_{I},-\sin\theta_{I},0)\times\langle f_{s}|(\bar{s}c)|D_{s}^{+} \rangle\,, \tag{8}\] with \(f_{n}\equiv|n\bar{n}\rangle\) and \(f_{s}\equiv|s\bar{s}\rangle\). The matrix elements of the \(D\to S_{0}\) transition for the \(q^{2}\bar{q}^{2}\) structures can be a little bit more complicated, given by \[\langle f_{0}|(\bar{d}c)|D^{+}\rangle=\cos\theta_{II}\langle F_{ ns}|(\bar{d}c)|D^{+}\rangle+\sin\theta_{II}\langle F_{ud}|(\bar{d}c)|D^{+} \rangle\,,\] \[\langle\sigma_{0}|(\bar{d}c)|D^{+}\rangle=-\sin\theta_{II}\langle F _{ns}|(\bar{d}c)|D^{+}\rangle+\cos\theta_{II}\langle F_{ud}|(\bar{d}c)|D^{+} \rangle\,,\] \[\langle a_{0}^{0}|(\bar{d}c)|D^{+}\rangle=\sqrt{1/2}\langle a_{0}^ {-}|(\bar{d}c)|D^{0}\rangle=-\langle F_{ns}|(\bar{d}c)|D^{+}\rangle\,,\] \[\langle f_{0},\sigma_{0},a_{0}^{0}|(\bar{s}c)|D_{s}^{+}\rangle=( \cos\theta_{II},-\sin\theta_{II},0)\times\langle F_{ns}|(\bar{s}c)|D_{s}^{+} \rangle\,, \tag{9}\] with \(F_{ns}\equiv|n\bar{n}s\bar{s}\rangle\) and \(F_{ud}\equiv|u\bar{u}d\bar{d}\rangle\). By taking \(S_{0}\) as the tetraquark, the \(D^{+}\to d\bar{d}\to F_{ns(ud)}\) transition needs an additional quark pair from \(g\to s\bar{s}(u\bar{u})\), where \(g\) denotes a gluon occurring from the vacuum. In the \(D_{s}^{+}\to s\bar{s}\to F_{ns}\) transition, \(g\to u\bar{u}\) or \(g\to d\bar{d}\) is needed. 
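For readers who wish to reproduce the line shapes entering Eq. (5), the sketch below is an illustration only: the normalisation constants \(C_{S_{0}\to M_{1}M_{2}}\) are set to unity, \(\rho_{KK}\) is simply set to zero below the \(KK\) threshold rather than analytically continued, and the numerical inputs are the values quoted below in Eq. (12). It codes the two-channel Flatte denominator used for \(f_{0}\) and the energy-dependent Breit-Wigner used for \(\sigma_{0}\):

```python
import numpy as np

def rho(t, m1, m2):
    # Two-body phase-space factor rho_{M1 M2}(t) defined in the text (clipped to zero below threshold).
    mp, mm = m1 + m2, m1 - m2
    return np.sqrt(np.clip((1 - mp**2 / t) * (1 - mm**2 / t), 0.0, None))

def R_f0_flatte(t, m_f0, g_pipi, g_KK, m_pi, m_K):
    # Flatte amplitude for f0 -> pi pi with the KK channel, cf. Eq. (5), with C set to 1.
    denom = m_f0**2 - t - 1j * m_f0 * (g_pipi * rho(t, m_pi, m_pi) + g_KK * rho(t, m_K, m_K))
    return 1.0 / denom

def R_sigma_bw(t, m_sig, gamma0, m_pi):
    # Breit-Wigner amplitude for sigma0 -> pi pi with an energy-dependent width, C set to 1.
    gamma = gamma0 * rho(t, m_pi, m_pi) / rho(m_sig**2, m_pi, m_pi)
    return 1.0 / (m_sig**2 - t - 1j * m_sig * gamma)

# quick look at |R|^2 across the pi pi invariant mass squared (GeV units, inputs from Eq. (12))
m_pi, m_K = 0.1396, 0.4937
t = np.linspace(0.3, 1.2, 5)**2
print(np.abs(R_f0_flatte(t, 0.940, 0.199, 3.0 * 0.199, m_pi, m_K))**2)
print(np.abs(R_sigma_bw(t, 0.500, 0.500, m_pi))**2)
```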
For the resonant four-body \(D(p_{D})\to S_{0}(p_{S_{0}})e^{+}(p_{e})\nu(p_{\nu}),S_{0}(p_{S_{0}})\to M_{1}(p_{1 })M_{2}(p_{2})\) decay, one has \(s\equiv(p_{e}+p_{\nu})^{2}\equiv m_{e\nu}^{2}\), \(t\), and the angular variables \((\theta_{\bf M},\theta_{\bf L},\phi)\) in the phase space [44; 45; 46]. We illustrate the angular variables in Fig. 1, where \(\theta_{\bf M(L)}\) is the angle between \(\vec{p}_{1}\) (\(\vec{p}_{e}\)) in the \(M_{1}M_{2}\) (\(e\nu\)) rest frame and the resonant \(S_{0}\) moving direction (the line of flight of \(e\nu\) system) in the \(D\) meson rest frame, while \(\phi\) is the angle between the \(M_{1}M_{2}\) and \(e\nu\) planes, formed by the momenta of the \(M_{1}M_{2}\) and \(e\nu\) systems, respectively, in the \(D\) meson rest frame. The partial decay width thus reads [47; 48] \[d\Gamma=\frac{|\vec{\cal M}|^{2}}{4(4\pi)^{6}m_{D}^{3}}X\alpha_{\bf M}\alpha_{ \bf L}\,ds\,dt\,d\!\cos\theta_{\bf M}\,d\!\cos\theta_{\bf L}\,d\phi\,, \tag{10}\] where \(X=[(m_{D}^{2}-s-t)^{2}/4-st]^{1/2}\), \(\alpha_{\bf M}=\lambda^{1/2}(t,m_{1}^{2},m_{2}^{2})/t\), and \(\alpha_{\bf L}=\lambda^{1/2}(s,m_{e}^{2},m_{\nu}^{2})/s\), with \(\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2ab-2bc-2ca\). The allowed regions of the variables are \((m_{e}+m_{\nu})^{2}\leq s\leq(m_{D}-\sqrt{t})^{2}\), \((m_{1}+m_{2})^{2}\leq t\leq(m_{D}-m_{e}-m_{\nu})^{2}\), \(0\leq\theta_{\bf M,L}\leq\pi\), and \(0\leq\phi\leq 2\pi\). ## III Numerical results In the numerical analysis, we use \(V_{cs}=1-\lambda^{2}\) and \(V_{cd}=-\lambda\) with \(\lambda=0.22453\pm 0.00044\) in the Wolfenstein parameterization [35], and the mixing angles in the \(q\bar{q}\) and \(q^{2}\bar{q}^{2}\) pictures are given by [7; 12] \[\theta_{I}=(158.7\pm 4.0)^{\circ}\,,\ \theta_{II}=(174.6^{+3.4}_{-3.2})^{ \circ}\,. \tag{11}\] We follow Refs. [35; 37] to use \((m_{\rm A},m_{\rm B})=(2.46,2.32)\) GeV and \((2.42,2.34)\) GeV for the \(D_{s}^{+}\to S_{0}\) and \(D^{+(0)}\to S_{0}\) transitions, respectively. In Eq. (5), the denominators involve the resonant masses (decay width) and the coupling constants, which are fitted to agree with the partial distributions of \(D\to S_{0}e^{+}\nu,S_{0}\to M_{1}M_{2}\) in the \(M_{1}M_{2}\) invariant mass spectra. The fitted values of \(f_{0}\), \(a_{0}\), and \(\sigma_{0}\) can be found in Refs. [39; 40], [49; 50; 51], and [35; 40], respectively, given by \[(m_{f_{0}},g_{f_{0}\pi\pi})=(940,199\pm 30)\ {\rm MeV}\,,\ g_{f_{0} KK}=(3.0\pm 0.3)g_{f_{0}\pi\pi}\,,\] \[(m_{a_{0}},g_{a_{0}\pi\eta})=(999,324\pm 15)\ {\rm MeV}\,,\ g_{a_{0} KK}^{2}=(1.03\pm 0.14)g_{a_{0}\pi\eta}^{2}\,,\] \[(m_{\sigma_{0}},\Gamma^{0}_{\sigma_{0}})=(500,500)\ {\rm MeV}\,. \tag{12}\] One determines \(C_{S_{0}\to M_{1}M_{2}}\) to be [42; 52] \[C_{f_{0}\to\pi^{+}\pi^{-}}=\sqrt{2}C_{f_{0}\to\pi^{0}\pi^{0}}=(1.5\pm 0.1)\ {\rm GeV}\,,\] \[C_{a_{0}^{-(0)}\to\pi^{-(0)}\eta}=(2.5\pm 0.2)\ {\rm GeV}\,,\] \[C_{\sigma_{0}\to\pi^{+}\pi^{-}}=\sqrt{2}C_{\sigma_{0}\to\pi^{0} \pi^{0}}=(3.9\pm 0.1)\ {\rm GeV}\,. \tag{13}\] Using the measured \({\cal B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{+}\pi^{-})\) in Table 1 and the parameters in Eqs. (11, 13), and (12), we extract \(F^{D_{s}^{+}\to f_{0}}\equiv f(0)\) for \(D_{s}^{+}\to f_{0}\). Likewise, we determine \(F^{D^{0}\to a_{0}^{-}}\) from \({\cal B}(D^{0}\to a_{0}^{-}e^{+}\nu_{e},a_{0}^{-}\to\pi^{-}\eta)\). 
The fitted form factors are given by \[F^{D_{s}^{+}\to f_{0}}=0.55\pm 0.03\,,\ F^{D^{0}\to a_{0}^{-}}=0.46\pm 0.06\,, \tag{14}\] \begin{table} \begin{tabular}{l l l} \hline decay channel & our work: \((q\bar{q},q^{2}\bar{q}^{2})\) & experimental data \\ \hline \hline \(10^{3}{\cal B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{+}\pi^{-})\) & (input, input) & \(1.7\pm 0.2\)[33] \\ \(10^{4}{\cal B}(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{+}\pi^{-})\) & \((25.8^{+12.5}_{-9.8},1.5^{+2.4}_{-1.3})\) & — \\ \(10^{4}{\cal B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{0}\pi^{0})\) & \((8.3\pm 1.5,8.3\pm 1.5)\) & \(7.9\pm 1.4\)[34] \\ \(10^{4}{\cal B}(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{0}\pi^{0})\) & \((12.9^{+6.3}_{-4.9},0.8^{+1.2}_{-0.7})\) & \(<6.4\)[34] \\ \hline \(10^{4}{\cal B}(D^{0}\to a_{0}^{-}e^{+}\nu_{e},a_{0}^{-}\to\pi^{-}\eta)\) & (input, input) & \(1.3\pm 0.3\)[35] \\ \(10^{4}{\cal B}(D^{+}\to a_{0}^{0}e^{+}\nu_{e},a_{0}^{0}\to\pi^{0}\eta)\) & \((1.6\pm 0.5,1.6\pm 0.5)\) & \(2.0\pm 0.9\)[35] \\ \(10^{5}{\cal B}(D^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{+}\pi^{-})\) & \((0.5\pm 0.2,3.8\pm 1.2)\) & \(<2.8\)[53] \\ \(10^{4}{\cal B}(D^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{+}\pi^{-})\) & \((3.7^{+1.1}_{-0.9},8.4^{+2.4}_{-2.1})\) & \(6.3\pm 0.5\)[35] \\ \hline \end{tabular} \end{table} Table 1: Branching fractions of the resonant \(D\to S_{0}e^{+}\nu_{e},S_{0}\to M_{1}M_{2}\) decays in the \(q\bar{q}\) and \(q^{2}\bar{q}^{2}\) structures to be compared to the data. The error receives the uncertainties from the mixing angle, form factor, and strong coupling constants. where the errors come from the data. In terms of the mixing relations in Eqs. (8, 9), we relate \(F^{D_{s}^{+}\to f_{0}}\) and \(F^{D^{0}\to a_{0}^{-}}\) to those of the other decays, and calculate the branching fractions in Table 1 in comparison with the data. ## IV Discussions and conclusions We consider the four-body \(D\to S_{0}e^{+}\nu,S_{0}\to M_{1}M_{2}\) decay for the branching fraction, instead of using \({\cal B}(D\to S_{0}e^{+}\nu,S_{0}\to M_{1}M_{2})\simeq{\cal B}(D\to S_{0}e^{+} \nu){\cal B}(S_{0}\to M_{1}M_{2})\), such that the \(S_{0}\to M_{1}M_{2}\) resonant effect is taken into account. In Eq. (14), \(F^{D_{s}^{+}\to f_{0}}=0.55\pm 0.03\) being close to the value of \(0.52\pm 0.04\) by BESIII justifies our determination, with which we obtain \(F^{D_{s}^{+}\to\sigma_{0}}_{(q\bar{q},q^{2}\bar{q}^{2})}=-(\tan\theta_{I},\tan \theta_{II})\times F^{D_{s}^{+}\to f_{0}}=(0.22\pm 0.01,0.052\pm 0.003)\) for the \(q\bar{q}\) and \(q^{2}\bar{q}^{2}\) structures, respectively. We thus calculate the branching fractions as \[{\cal B}^{D_{s}^{+}\to f_{0}\to\pi^{0}\pi^{0}}_{(q\bar{q},q^{2}\bar{q}^{2})}= (8.3\pm 1.5,8.3\pm 1.5)\times 10^{-4}\,,\] \[{\cal B}^{D_{s}^{+}\to\sigma_{0}\to\pi^{0}\pi^{0}}_{(q\bar{q},q^{2} \bar{q}^{2})}=(12.9^{+6.3}_{-4.9},0.8^{+1.2}_{-0.7})\times 10^{-4}\,, \tag{15}\] with \({\cal B}^{D\to S_{0}\to M_{1}M_{2}}_{(q\bar{q},q^{2}\bar{q}^{2})}\equiv{\cal B }_{(q\bar{q},q^{2}\bar{q}^{2})}(D\to S_{0}e^{+}\nu_{e},S_{0}\to M_{1}M_{2})\), where both \({\cal B}^{D_{s}^{+}\to f_{0}\to\pi^{0}\pi^{0}}_{(q\bar{q},q^{2}\bar{q}^{2})}\) are consistent with the data. By comparing \({\cal B}^{D_{s}^{+}\to\sigma_{0}\to\pi^{0}\pi^{0}}_{(q\bar{q},q^{2}\bar{q}^{2})}\) with the experimental upper bound in Table 1: \({\cal B}_{ex}<6.4\times 10^{-4}\), it is clearly seen that the \(S_{0}\) meson favors to be the tetraquark. 
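The gap between the two \(\sigma_{0}\) predictions follows almost entirely from the mixing angles, since Eqs. (8) and (9) give \(F^{D_{s}^{+}\to\sigma_{0}}=-\tan\theta\times F^{D_{s}^{+}\to f_{0}}\) in either picture. The short sketch below (central values only, ignoring correlated uncertainties) reproduces the resulting form factors and the \(\tan^{2}\theta\) suppression factors:

```python
import numpy as np

theta_I, theta_II = np.radians(158.7), np.radians(174.6)   # mixing angles, Eq. (11)
F_Ds_to_f0 = 0.55                                          # fitted form factor, Eq. (14), central value

for name, th in (("q qbar", theta_I), ("q^2 qbar^2", theta_II)):
    F_sigma = -np.tan(th) * F_Ds_to_f0      # mixing relation from Eqs. (8)-(9)
    print(f"{name:10s}: F(Ds -> sigma0) = {F_sigma:.3f}, "
          f"rate suppression ~ tan^2(theta) = {np.tan(th)**2:.3f}")
# central values come out near 0.21 and 0.052, close to the 0.22 and 0.052 quoted in the text;
# the full branching fractions also involve the different sigma0/f0 line shapes and phase space.
```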
In the semileptonic \(D^{+}\) decays, it is given that \(F^{D^{+}\to a_{0}^{0}}_{q\bar{q}}=F^{D^{+}\to a_{0}^{0}}_{q^{2}\bar{q}^{2}}=- \sqrt{1/2}F^{D^{0}\to a_{0}^{-}}\) with the value of \(F^{D^{0}\to a_{0}^{-}}\) in Eq. (14). We hence obtain \({\cal B}^{D^{+}\to a_{0}^{0}\to\pi^{0}\eta}_{(q\bar{q},q^{2}\bar{q}^{2})}=(1.6 \pm 0.5,1.6\pm 0.5)\times 10^{-4}\) interpreting the data. We also obtain \[{\cal B}^{D^{+}\to f_{0}\to\pi^{+}\pi^{-}}_{(q\bar{q},q^{2}\bar{q}^{2})}=(0.5 \pm 0.2,3.8\pm 1.2)\times 10^{-5}\,,\] \[{\cal B}^{D^{+}\to\sigma_{0}\to\pi^{+}\pi^{-}}_{(q\bar{q},q^{2} \bar{q}^{2})}=(3.7^{+1.1}_{-0.9},8.4^{+2.4}_{-2.1})\times 10^{-4}\,, \tag{16}\] where we have used \((F^{D^{+}\to f_{0}}_{q\bar{q}},F^{D^{+}\to\sigma_{0}}_{q\bar{q}})=(\sin\theta_{ I},\cos\theta_{I})\times F^{D^{+}\to a_{0}^{0}}\) and \(F^{D^{+}\to f_{0}}_{q^{2}\bar{q}^{2}}=-\cos\theta_{II}\)\(\times F^{D^{+}\to a_{0}^{0}}\) with \(\sin\theta_{II}\simeq 0\) being safely neglected, whereas \(F^{D^{+}\to\sigma_{0}}_{q^{2}\bar{q}^{2}}\) cannot be simply related to \(F^{D^{+}\to a_{0}^{0}}\), due to that \(\sigma_{0}=\cos\theta_{II}F_{ud}\) and \(a_{0}^{0}=F_{ns}\) involve the different quark pairs from \(g\to u\bar{u}\) and \(g\to s\bar{s}\), respectively, in the \(D^{+}\to d\bar{d}\to S_{0}\) transitions. To estimate \({\cal B}^{D^{+}\to\sigma_{0}\to\pi^{+}\pi^{-}}_{(q^{2}\bar{q}^{2})}\), the \(SU(3)\) flavor \([SU(3)_{f}]\) symmetry has been applied, such that \(g\to u\bar{u}\) is not distinguished from \(g\to s\bar{s}\). We hence assume \(F_{ud}=\sqrt{2}F_{ns}\), which leads to \(F^{D^{+}\to\sigma_{0}}=-\cos\theta_{II}\sqrt{2}F^{D^{+}\to a_{0}^{0}}\). In Eq. (16), \({\cal B}^{D^{+}\to f_{0}\to\pi^{+}\pi^{-}}_{(q\bar{q})}=(0.5\pm 0.2)\times 10^{-5}\) corresponds to \(\sin^{2}\theta_{I}\) as small as 0.13, reflecting the fact that \(f_{0}\) is mainly a \(s\bar{s}\) bound state in the \(q\bar{q}\) structure. It seems that \({\cal B}_{(q^{2}\bar{q}^{2})}^{D^{+}\to f_{0}\to\pi^{+}\pi^{-}}=3.8\times 10^{-5}\) deviates from \({\cal B}_{ex}(D^{+}\to f_{0}e^{+}\nu_{e},f_{0}\to\pi^{+}\pi^{-})<2.8\times 10^{-5}\), which is, however, still allowed to be within the upper limit by considering the uncertainty. For the \(\sigma_{0}\) decay modes, \({\cal B}_{(q\bar{q})}^{D^{+}\to\sigma_{0}\to\pi^{+}\pi^{-}}=(3.7^{+1.1}_{-0.9} )\times 10^{-4}\) is in tension with \({\cal B}_{ex}=(6.3\pm 0.5)\times 10^{-4}\), whereas \({\cal B}_{(q^{2}\bar{q}^{2})}^{D^{+}\to\sigma_{0}\to\pi^{+}\pi^{-}}=(8.4^{+2.4 }_{-2.1})\times 10^{-4}\) shows its consistency. Therefore, we conclude that it is too early to say which of the two quark constituents can interpret the semileptonic \(D^{+}\) decays, while the calculations have not presented decisive disagreements with the data. Since \(\bar{B}_{s}^{0}\to J/\Psi(f_{0},\sigma_{0})\) and \(\bar{B}^{0}\to J/\Psi(f_{0},\sigma_{0})\), respectively, proceed through the \(\bar{B}_{s}^{0}\to s\bar{s}\to(f_{0},\sigma_{0})\) and \(\bar{B}^{0}\to d\bar{d}\to(f_{0},\sigma_{0})\) transitions, their relative decay rates have been proposed to distinguish between the quark content of the scalars being quark-antiquark or tetraquark [28]. Indeed, the following measurement of \(\bar{B}_{s}^{0}\to J/\Psi f_{0},f_{0}\to\pi^{+}\pi^{-}\) by LHCb suggests that the mixing angle is less than \(7.7^{\circ}\)[54], consistent with the tetraquark interpretation [12]. 
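A similar bookkeeping exercise (sketch below; central values only, and relying on the \(SU(3)_{f}\) assumption \(F_{ud}=\sqrt{2}F_{ns}\) stated above) shows that the ratio of the two \(\sigma_{0}\) predictions for \(D^{+}\) tracks the squared mixing-angle coefficients, and that \(\sin^{2}\theta_{I}\simeq 0.13\) as quoted for the \(q\bar{q}\) picture:

```python
import numpy as np

theta_I, theta_II = np.radians(158.7), np.radians(174.6)    # mixing angles, Eq. (11)

# squared coefficients of |F(D+ -> sigma0)| relative to |F(D+ -> a0^0)|, from Eqs. (8)-(9)
c_sigma_qq = np.cos(theta_I)**2            # q qbar:      |F_sigma| = |cos(theta_I)| * |F_a0|
c_sigma_4q = 2.0 * np.cos(theta_II)**2     # q^2 qbar^2:  |F_sigma| = sqrt(2)|cos(theta_II)| * |F_a0| (SU(3)_f)

print("sin^2(theta_I) =", round(np.sin(theta_I)**2, 2))                     # ~0.13, the f0 (q qbar) factor
print("coefficient ratio (4q / qq) =", round(c_sigma_4q / c_sigma_qq, 2))   # ~2.3
print("quoted B ratio 8.4e-4 / 3.7e-4 =", round(8.4 / 3.7, 2))              # ~2.3, consistent
```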
We demonstrate that the semileptonic \(D^{+}\) decays are not as good as the \(D_{s}^{+}\) ones to discern the \(q\bar{q}\) or \(q^{2}\bar{q}^{2}\) structures for \(S_{0}\). One possible reason is that the resonant peaks from \(D^{+}\to\rho^{0}\to e^{+}\nu,\rho^{0}\to\pi^{+}\pi^{-}\) and \(D^{+}\to S_{0}\to e^{+}\nu,S_{0}\to\pi^{+}\pi^{-}\) might overlap in the \(\pi^{+}\pi^{-}\) invariant mass spectrum, such that the interference could cause a less accurate analysis. In contrast, the semileptonic \(D_{s}^{+}\) decays of \(D_{s}^{+}\to\sigma_{0}\ell^{+}\nu_{\ell},\sigma_{0}\to\pi^{+(0)}\pi^{+(0)}\) with \(\ell=(e,\mu)\) are able to avoid the resonant peak from \(\rho^{0}\). Particularly, \({\cal B}_{(q\bar{q})}^{D_{s}^{+}\to\sigma_{0}\to\pi^{+}\pi^{-}}=(25.8^{+12.5}_ {-~{}9.8})\times 10^{-4}\) and \({\cal B}_{(q^{2}\bar{q}^{2})}^{D_{s}^{+}\to\sigma_{0}\to\pi^{+}\pi^{-}}=(1.5^{ +2.4}_{-1.3})\times 10^{-4}\) are predicted to be far from each other, benefiting an experimental clarification. In summary, we have studied the semileptonic \(D^{+,0},D_{s}^{+}\to S_{0}e^{+}\nu_{e},S_{0}\to M_{1}M_{2}\) decays, where the resonant effect from the broad decay width of \(S_{0}\) has been considered. To calculate the branching fractions, we have used the current data to extract the form factors of the \(D_{s}^{+}\to f_{0}\) and \(D^{0}\to a_{0}^{-}\) transitions, and have related them to those of the other semileptonic \(D_{(s)}^{+}\) decays. We have hence obtained \({\cal B}(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{0}\pi^{0})=(12.9 ^{+6.3}_{-4.9})\times 10^{-4}\) and \((0.8^{+1.2}_{-0.7})\times 10^{-4}\) for the \(q\bar{q}\) and \(q^{2}\bar{q}^{2}\) structures, respectively, compared to the experimental upper limit of \(6.4\times 10^{-4}\). Clearly, it has been shown that \(S_{0}\) prefers to be the tetraquark state. We have studied the \(D_{s}^{+}\to\sigma_{0}e^{+}\nu_{e},\sigma_{0}\to\pi^{+}\pi^{-}\) with \({\cal B}_{q\bar{q}}=(25.8^{+12.5}_{-~{}9.8})\times 10^{-4}\) and \({\cal B}_{q^{2}\bar{q}^{2}}(1.5^{+2.4}_{-1.3})\times 10^{-4}\). With the two predicted numbers deviating far from each other, it indicated that future measurements can make a clarification for the nature of the scalar meson. ###### Acknowledgements. YKH was supported in parts by NSFC (Grants No. 11675030 and No. 12175128). BCK was supported in parts by NSFC (Grants No. 11875054 and No. 12192263) and Joint Large-Scale Scientific Facility Fund of the NSFC and CAS (Grant No. U2032104).
2301.02250
The Colorado Ultraviolet Transit Experiment (CUTE) Mission Overview
Atmospheric escape is a fundamental process that affects the structure, composition, and evolution of many planets. The signatures of escape are detectable on close-in, gaseous exoplanets orbiting bright stars, owing to the high levels of extreme-ultraviolet irradiation from their parent stars. The Colorado Ultraviolet Transit Experiment (CUTE) is a CubeSat mission designed to take advantage of the near-ultraviolet stellar brightness distribution to conduct a survey of the extended atmospheres of nearby close-in planets. The CUTE payload is a magnifying NUV (2479~--~3306 Ang) spectrograph fed by a rectangular Cassegrain telescope (206mm x 84mm); the spectrogram is recorded on a back-illuminated, UV-enhanced CCD. The science payload is integrated into a 6U Blue Canyon Technology XB1 bus. CUTE was launched into a polar, low-Earth orbit on 27 September 2021 and has been conducting this transit spectroscopy survey following an on-orbit commissioning period. This paper presents the mission motivation, development path, and demonstrates the potential for small satellites to conduct this type of science by presenting initial on-orbit science observations. The primary science mission is being conducted in 2022~--~2023, with a publicly available data archive coming on line in 2023.
Kevin France, Brian Fleming, Arika Egan, Jean-Michel Desert, Luca Fossati, Tommi T. Koskinen, Nicholas Nell, Pascal Petit, Aline A. Vidotto, Matthew Beasley, Nicholas DeCicco, Aickara Gopinathan Sreejith, Ambily Suresh, Jared Baumert, P. Wilson Cauley, Carolina Villarreal DAngelo, Keri Hoadley, Robert Kane, Richard Kohnert, Julian Lambert, Stefan Ulrich
2023-01-05T19:00:00Z
http://arxiv.org/abs/2301.02250v1
# The Colorado Ultraviolet Transit Experiment (\(CUTE\)) Mission Overview

###### Abstract

Atmospheric escape is a fundamental process that affects the structure, composition, and evolution of many planets. The signatures of escape are detectable on close-in, gaseous exoplanets orbiting bright stars, owing to the high levels of extreme-ultraviolet irradiation from their parent stars. The _Colorado Ultraviolet Transit Experiment_ (\(CUTE\)) is a CubeSat mission designed to take advantage of the near-ultraviolet stellar brightness distribution to conduct a survey of the extended atmospheres of nearby close-in planets. The \(CUTE\) payload is a magnifying NUV (2479 - 3306 A) spectrograph fed by a rectangular Cassegrain telescope (206mm \(\times\) 84mm); the spectrogram is recorded on a back-illuminated, UV-enhanced CCD. The science payload is integrated into a 6U Blue Canyon Technology XB1 bus. \(CUTE\) was launched into a polar, low-Earth orbit on 27 September 2021 and has been conducting this transit spectroscopy survey following an on-orbit commissioning period. This paper presents the mission motivation and development path, and demonstrates the potential for small satellites to conduct this type of science by presenting initial on-orbit science observations. The primary science mission is being conducted in 2022 - 2023, with a publicly available data archive coming on line in 2023.

## 1 Introduction

The history of observational astronomy has been marked by the push to ever larger and more capable telescopes and instruments. The 2010s witnessed the development of a new generation of large astronomical observatories. Both on the ground and in space, facilities 2 - 3 times the primary mirror diameter of the previous state-of-the-art were brought closer to fruition for implementation in the 2020s, 2030s, and 2040s, including the _James Webb Space Telescope_ (Gardner et al., 2006; Rigby et al., 2022), thirty-meter class ground-based telescopes (e.g., Simard et al., 2016), and advanced ultraviolet/optical (UV/O) facilities such as the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR; LUVOIR Final Report 2019). The large mission studies conducted ahead of the 2020 Decadal Survey on Astronomy and Astrophysics drove the recommendation for NASA's suite of Future Great Observatories, a series of probe- and flagship-class missions offering order-of-magnitude gains in scientific grasp across numerous areas of astrophysics. In parallel with this large observatory development, numerous small telescope arrays have come on-line or have been expanded, and NASA's science divisions made significant new investments in small satellites covering a range of scientific topics.
Small telescopes at ground-based sites have excelled at detecting and characterizing new objects in the time-variable sky, including supernovae eruptions (Dong et al., 2016) and tidal disruption events (Hammerstein et al., 2022) from the Zwicky Transient Facility (Masci et al., 2019) and All Sky Automated Survey for SuperNovae (ASAS-SN) (Holoien et al., 2017). The impact of small telescopes has also been powerful for the detection of extrasolar planets, including many Jovian-sized planets from Wide-Angle Search for Planets (WASP) (Pollacco et al., 2006) and the Kilodegree Extremely Little Telescope (KELT) (Pepper et al., 2007), and some of the most promising rocky planets for study with \(JWST\) from the MEarth (Charbonneau et al., 2009), Transiting Planets and Planetesimals Small Telescope (TRAPPIST) (Gillon et al., 2011), and Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS) (Burdanov et al., 2018) facilities. The recent decadal survey highlighted the power of small space-based telescopes, astronomical CubeSats and smallsats, for "monitoring of sources for weeks or months at time, and at wavelengths not accessible from the ground", complementing the _Hubble Space Telescope_'s surveys in areas of transmission spectroscopy (Sing et al., 2019; Cubillos et al., 2020) and exoplanet host star radiation fields (France et al., 2016; Loyd et al., 2018; Ramiaramanantsoa et al., 2021). NASA has embraced this opportunity with a dedicated funding line for astrophysics CubeSats (full mission life cycle cost \(<\) $10M) and the Pioneers program (mission cost $10M - $20M). In this paper, we present an overview of NASA's first UV astronomy CubeSat and the first grant-funded small satellite dedicated to the characterization of extrasolar planetary atmospheres, the _Colorado Ultraviolet Transit Experiment_ (CUTE). _CUTE_ conducts transit spectroscopy of short-period, giant planets in the near-UV (2479 - 3306 A) bandpass to access strong atomic transitions tracing atmospheric escape and the near-UV spectral slope of giant planet atmospheres that provide constraints on their composition. This paper presents the science background for and the technical implementation of the mission. The manuscript is laid out as follows: the scientific motivation for \(CUTE\) and its science objectives are presented in Section 2. Because \(CUTE\) is one of the first astronomy missions to be developed in a CubeSat framework, we present a description of the mission development and implementation path in Section 3. Section 4 presents the instrument design and high-level performance specifications (see also Fleming et al., 2018 for a description of \(CUTE\)'s science payload). Section 5 describes \(CUTE\)'s mission operations and we present early-release examples of the mission's on-orbit science data in Section 6. We conclude with a brief summary in Section 7. A detailed description of \(CUTE\)'s science instrument and on-orbit performance is presented in a companion paper by A. Egan. Mission operations and on-orbit commissioning (Suresh et al. - in prep), \(CUTE\)'s on-orbit data pipeline (Sreejith et al. - in press), and Early Release Science results (Egan et al. - in prep; Sreejith et al. - in prep) will be described in forthcoming papers. ## 2 CUTE Science Objectives Planetary escape processes play a key role in determining the chemical and physical state of planets both within and beyond our solar system. 
Atmospheric escape is thought to create the radius gap observed in the distribution of short-period exoplanets (Fulton and Petigura, 2018), likely driven by a combination of photoevaporative (Owen and Wu, 2017) and core-powered (Ginzburg et al., 2018) mass loss. Escape is also a fundamental process in the evolution of terrestrial worlds. For a planet to be habitable, our current view is that it must lose its primordial hydrogen atmosphere and acquire/generate (and retain) a secondary atmosphere (Lammer et al., 2018). Atmospheric escape is known to have shaped the early atmospheres of Venus, Earth, and Mars, which subsequently followed different evolutionary paths. The rapid hydrodynamic escape that is believed to have affected Venus, Earth and Mars in the past no longer takes place on any planet in the solar system. Therefore, we turn to short-period extrasolar planets as laboratories on which to study vigorous atmospheric loss. The first detection of exoplanet atmospheric escape was achieved by Vidal-Madjar et al. (2003) who used HI Ly\(\alpha\) transit observations in the far-ultraviolet (FUV) to observe the extended atmosphere of the Hot Jupiter HD209458b. This was followed by the detection of O i, C ii, Si iii and Mg i on the same planet (Vidal-Madjar et al., 2004, 2013; Linsky et al., 2010). These initial observations inspired several independent groups to develop 1D and 3D models to study both the physical characteristics of the upper atmospheres of close-in planets and the escaping gas and plasma surrounding them (e.g., Koskinen et al., 2007; Murray-Clay et al., 2009; Koskinen et al., 2013, 2013; Bourrier and Lecavelier des Etangs, 2013; Bourrier et al., 2016; Villarreal D'Angelo et al., 2018; Carolan et al., 2021). The interpretation of FUV transit measurements has often been controversial (see Fossati et al., 2015 for a discussion). Recently, several atmospheric escape studies have shifted to the near-ultraviolet (NUV), where the stellar flux is much higher than in the FUV and the light curves are measured against a better-understood intensity distribution from the stellar photosphere (e.g.,Haswell et al., 2012; Llama & Shkolnik, 2015). The NUV includes the Fe ii complexes near 2400 and 2600 A, the Mg ii doublet at 2796/2803 A, the Mg i line at 2852 A, some of which have been detected on the Hot Jupiters WASP-12b, HD209458b, and WASP-121b (Fossati et al., 2010; Sing et al., 2019; Cubillos et al., 2020). We note that the Fe ii and Mg ii resonance lines in the near-UV trace the highly extended (and potentially escaping) exoplanet atmosphere, whereas optical band metal line detections made with ground-based telescopes trace the lower, bound atmospheric layers (Hoeijmakers et al., 2019; Casasayas-Barris et al., 2019; Cauley et al., 2019; Turner et al., 2020; Hoeijmakers et al., 2020; Casasayas-Barris et al., 2021; Deibert et al., 2021). The NUV also contains a pseudo-continuum that can probe scattering by high altitude clouds and gas phase silicon and magnesium (Lothringer et al., 2022), as well as the \(A\) - \(X\) bands of OH (3100 A). Furthermore, NUV transmission spectra give the unique opportunity to constrain the composition of the aerosols lying in the lower atmospheres (Cubillos et al., 2020). 
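To put the NUV line list above in instrument terms, the simple conversion below (arithmetic only, not an instrument model) expresses the Mg ii doublet separation and the Mg ii-Mg i spacing in Doppler-velocity units; the doublet separation of roughly 750 km s\({}^{-1}\) is comfortably larger than the \(\Delta v<300\) km s\({}^{-1}\) resolution quoted for \(CUTE\) later in the text:

```python
# Convert NUV line separations quoted in the text into Doppler-velocity units.
C_KM_S = 2.99792458e5   # speed of light, km/s

lines = {
    "Mg II doublet (2796 vs 2803 A)": (2796.0, 2803.0),
    "Mg II 2796 vs Mg I 2852 A":      (2796.0, 2852.0),
}
for label, (l1, l2) in lines.items():
    dv = C_KM_S * (l2 - l1) / l1
    print(f"{label}: separation ~ {dv:.0f} km/s")
# ~750 km/s for the doublet: cleanly separated at a <300 km/s resolution element.
```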
Depending on the temperature profile in the atmosphere, species like Si, Mg, and Fe are expected to condense to form clouds in the lower atmosphere, however, the calculations indicate that strong mixing, either by turbulence or global circulation, can inhibit cloud formation or allow for these species to be present in the upper atmosphere where they can escape (Koskinen et al., 2013; Cubillos et al., 2020; Koskinen et al., 2022). The comparison of continuum and atomic line absorption therefore acts as a diagnostic of cloud formation, elemental abundances and mass loss on close-in exoplanets (Lothringer et al., 2020; Cubillos et al., 2020). Model outputs can be used to translate observed planetary transit light curves into global mass-loss rates: the depth and shape of the light curves directly relate to the atmospheric parameters. Finally, UV transits with \(HST\) have provided evidence for time-variability, potentially arising from changing stellar high-energy input, orbital timescale changes in the planet's atmosphere, or variation in the star-planet magnetic environment. Lecavelier des Etangs et al. (2012) observed time-variable neutral hydrogen absorption in FUV transit observations of HD 189733b, possible due to the influence of high-energy stellar flares. NUV transit observations of the close-in giant planet WASP-12b by Fossati et al. (2010) found that the transit light curve of WASP-12b presents both an early ingress when compared to its optical transit and excess absorption during the transit (see also Haswell et al., 2012; Nichols et al., 2015). Possible explanations include atmospheric hydrodynamic mass-loss supporting a shock upstream of the planet's orbit or generating Figure 1: The \(CUTE\) instrument development from concept (instrument schematic, top) to telescope characterization (\(CUTE\) flight telescope in the test facilities at the University of Colorado, middle), to pre-delivery in-band spectral resolution test data (bottom). an accretion stream that produces an early ingress (Lai et al., 2010; Bisikalo et al., 2013; Turner et al., 2016) and a magnetically supported bow-shock 4 - 5 planetary radii upstream of the planet's orbital motion (analogous to the Earth-Sun system; Vidotto et al., 2010; Llama et al., 2011). \(CUTE\)'s primary science goal is to provide new constraints on the physics and chemistry of hot, Jovian-size exoplanets. The \(CUTE\) mission addresses this goal with the following observing program: 1. Measure NUV transmission spectra for a small survey of approximately 10 short period planets 2. Infer atmospheric escape rates and constrain the composition of the upper atmospheres of hot giant planets 3. Measure temporal variability in UV transit light curves by observing 6 - 10 transit observations per planet 4. Measure out-of-transit baseline fluxes to better characterize the stellar inputs to the planet's atmosphere and to capture light curve asymmetries \(CUTE\)'s instrument design and mission implementation was developed to enable the four key goals of the observing program. The spectral coverage and resolution of the \(CUTE\) (\(\Delta\)\(v\)\(<\) 300 km s\({}^{-1}\)) spectrograph provides ample separation of the relevant atomic, molecular, and continuum bands in this range (see, e.g., Figure 8 of Sing et al., 2019). \(CUTE\)'s mission design complements the instrument to meet the science goals of the mission. 
(1) We couple observations of the NUV continuum opacity, individual ionic tracers (Fe ii, Mg ii) with atmospheric chemistry, and hydrodynamic escape models to determine mass loss rates for \(CUTE\)'s targets. The sample size is driven by a combination of mission lifetime and instrumental sensitivity considerations. (2) \(CUTE\) measures the amplitude and slope of the NUV transmission curve to provide constraints on the chemistry and structure of the escaping atmosphere. The instrumental effective area was specified to enable multiple, wavelength resolved, NUV bands with sufficient photometric precision to distinguish the NUV transit radius from the white-light radius of the planet on all 10 targets and isolate transit spectra of the strongest absorption lines (e.g., Fe ii and Mg ii) on the brightest targets (addressing goals 1 and 2). The target sample was defined by estimating the detectability of excess NUV absorption; a combination of stellar brightness (V-magnitude), spectral type (A- and F-type stars have spectral energy distributions peaked in the NUV), planetary radius, effective planetary surface temperature, and gravity (hotter, lower-mass planets being more likely to exhibit extended atmospheres). (3) \(CUTE\)'s point-stare-repeat concept of operations is designed to make numerous visits to the same planet over the course of 4 to 8 weeks, building signal-to-noise for fainter targets and enabling measurements of light curve variability for brighter targets. (4) The same point-stare-repeat observing mode provides a wide stellar baseline to measure changes in the Mg ii activity and the increased dispersion of the photospheric and chromospheric continuum flux that indicate variability in the star's escape-driving XUV output. ## 3 Mission Implementation Path \(CUTE\) is NASA's first grant-funded UV/ Optical/ Infrared small satellite and first dedicated exoplanet spectroscopy mission. Given the novelty of this mission format for astrophysics science missions, we present a brief overview of the process, schedule, and cost of the mission here. The initial motivation for \(CUTE\) was discussed at a Keck/KISS workshop on exoplanet magnetic fields in August 2013, with the final science and measurement concept in place by the summer of 2015 following numerous informal discussions at science conferences. Fall 2015 was spent on science measurement definition and the development of the \(CUTE\) instrument design. \(CUTE\) was proposed as a four-year program through NASA's ROSES2015 call (submitted in March 2016), at an initial cost-to-launch of $3.3M, comparable to an astrophysics sounding rocket proposal but considerably lower cost than a stratospheric balloon program. \(CUTE\) was proposed and selected prior to the initiation of dedicated funding for astrophysics CubeSats, leading to a long delay between proposal submission and the start of funding (approximately 16 months; there was no Phase A or Concept Study period for \(CUTE\)). Long lead-time items such as the \(CUTE\) spacecraft bus, the rectangular telescope, holographically-ruled diffraction grating, and NUV-optimized CCD detectors were ordered a few months after selection in fall 2017. The \(CUTE\) spacecraft (Blue Canyon Technology, BCT) costs increased relative to the quote provided for the proposal. 
To accommodate the cost increases in a mission class without reserves, several descopes were implemented, including scaling back the spacecraft's attitude control system to a single star-tracker and eliminating engineering model radios. In 2018 and 2019, we developed the hardware test facilities that complemented the University of Colorado's existing UV vacuum calibration facilities (France et al., 2016) and conducted component-level characterization (e.g., groove efficiency of the diffraction gratings, trade study of Al vs. Al+MgF\({}_{2}\) grating coatings). Instrument assembly and characterization, integration into the spacecraft, and pre-delivery environmental testing (e.g., vibration testing, comprehensive performance testing, thermal vacuum testing, etc.) were completed in 2020 and 2021. The duration from the start of \(CUTE\) funding to delivery of the completed observatory was almost exactly four years, although approximately 10 months of schedule were lost to the COVID-19 pandemic. \(CUTE\) proposed to NASA's CubeSat Launch Initiative (CSLI) for launch support in fall 2017 and was selected for flight. The proposed spacecraft orbit, including the initial \(CUTE\) mission requirement documentation submitted to CSLI, requested a dawn-dusk (terminator), sun-synchronous orbit to enable uninterrupted orbital phase coverage of transiting planets and to minimize day/night thermal variations. Orbital altitude (450 - 600km) was a secondary consideration driven by desired mission lifetime. CSLI was unable to accommodate the requested sun-synchronous orbit within the time-period covered by the mission funding and \(CUTE\) was instead manifested in November 2019 as a secondary payload on NASA's Landsat 9 mission. The Landsat 9 launch was scheduled 8 months after \(CUTE\)'s targeted launch window. As a result, NASA provided a 6 month, $0.5M extension to the \(CUTE\) program in Fall 2019. Starting in Spring 2020, COVID delayed the Landsat 9 launch by another 9 months. COVID also impeded \(CUTE\)'s development timescale owing to supply-chain delays and the challenges of getting students, scientists, and engineers into \(CUTE\)'s labs for continued testing and development. The \(CUTE\) mission submitted a follow-on, competed APRA proposal in December 2020 to conduct mission operations and carry out the science program (bringing \(CUTE\)'s cost to complete the full science mission to approximately $5M). Of the 18 CubeSat missions originally manifested with Landsat 9, only four (including \(CUTE\)) would ultimately deliver and be flown on the mission. The integrated and tested observatory was delivered to NASA at Vandenberg Space Force Base (VSFB) in July 2021, and the \(CUTE\) team supported installation into the CubeSat dispenser on the ESPA ring. The mission was launched on September 27 2021 into a sun-synchronous orbit (\(\approx\)98\({}^{\circ}\), 10am Local Time of Ascending Node; LTAN) with a 560km apogee. \(CUTE\) deployed from the dispenser approximately two hours after launch; solar arrays deployed and the communication beacon started approximately 30 minutes later. \(CUTE\)'s beacons were identified by the amateur RF community on the first orbit and communications were established with the ground station at the University of Colorado on September 28 2021. We refer the reader to Section 4 and Suresh et al. (in preparation) for a description of the \(CUTE\) ground segment and commissioning program. We refer the reader to the \(SmallSat\) Conference proceeding by Egan et al. 
2022 for a discussion of lessons learned during \(CUTE\) development and early on-orbit operations. \(CUTE\) is part of NASA's suborbital program, where student training and early-career mentorship are key ingredients to the definition of mission success. \(CUTE\)'s approach was built off of the framework of the NASA Sounding Rocket Program, which has a long history in the professional development of NASA's space scientist workforce. The core science and instrument team (defined as those working with \(CUTE\) for more than 2 years of the implementation phase) included two Ph.D. candidates (in astrophysics and aerospace engineering), four undergraduates, two postdoctoral researchers, two early career engineers (\(CUTE\) was the first job post-bachelors degree for the mission's lead mechanical and electrical engineers), and the early-career project scientist (Dr. Brian Fleming, who became the PI of a NASA sounding rocket program and the SPRITE CubeSat (Fleming, 2022) mission during the course of the \(CUTE\) development phase). Over the course of \(CUTE\)'s component-level and instrument test phase, the project employed another six undergraduate students in various laboratory and science program development tasks (e.g., target field checking for crowded fields). In addition to this, the operations team for \(CUTE\) (see Section 5) included an additional two undergraduate students, two graduate students, and one flight software undergraduate student. Taken as a whole, \(CUTE\) supported the mentoring and training for over 20 early-career scientists and engineers through the completion of the on-orbit commissioning phase. ## 4 Implementation: the \(Cute\) Science Payload The \(CUTE\) payload is a magnifying NUV spectrograph fed by a rectangular Cassegrain telescope. The spectrogram is recorded on a back-illuminated, UV-enhanced e2v CCD42-10 that is maintained at a nominal operating temperature (\(-15\) - \(-5^{\circ}\) C) by passive cooling through a radiator panel. \(CUTE\) employs the BCT XB1 bus to provide critical subsystems including power, command and data handling, communications, and attitude control (ADCS). Figure 1 shows an instrument schematic, optical testing of the telescope, and an in-band calibration spectrum from the flight instrument prior to instrument integration into the BCT chassis. The \(CUTE\) instrument is housed in 4U of the 6U spacecraft. The \(CUTE\) aperture is a 206 \(\times\) 84 mm, f/0.75 (in the cross-dispersion axis) primary mirror that is part of the f/2.6 Cassegrain telescope. The rectangular shape of the primary is matched to the long axis of the 6U CubeSat chassis and allows for 3 times more throughput than a 1U circular aperture (Fleming et al., 2018). The hyperbolic secondary mirror is cantilevered off of the primary mirror, which serves as the bench for the optical instrument, by means of an Invar tower (see Figure 1). A 15 \(\times\) 6 mm fold mirror redirects the beam 90\({}^{\circ}\) through a 141 \(\mu\)m \(\times\) 3.5 mm (60\({}^{\prime\prime}\)\(\times\) 1400\({}^{\prime\prime}\) projected) slit at the Cassegrain focus. The slit, manufactured by OSH Stencils, was polished on the incident side and angled 45\({}^{\circ}\) about the slit axis to redirect the field to an aspect camera for use in telescope performance testing and alignment with the BCT spacecraft during integration. 
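As a quick back-of-the-envelope check of the throughput comparison quoted above, the sketch below compares the rectangular collecting area with a circular aperture sized to a single CubeSat unit; the 84 mm circular-aperture diameter is an illustrative assumption, not a value from the text.

```python
# Rough arithmetic for the "~3x more throughput than a 1U circular aperture"
# statement; the 84 mm circular-aperture diameter is assumed for illustration.
import math

rect_area = 20.6 * 8.4                  # cm^2, 206 mm x 84 mm rectangular primary
circ_area = math.pi * (8.4 / 2.0) ** 2  # cm^2, circular aperture fitting one CubeSat unit
print(rect_area / circ_area)            # ~3.1
```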
The rectangular telescope design optimizes collecting area within the mass-volume constraints of the cubesat form factor, while the large sky field-of-view, increased cost, and mechanical stress at the primary mirror-secondary tower interface add design complications. Once through the slit, the starlight is diffracted, redirected, and magnified by a spherical, R = 86.1mm radius, 1714 gr mm\({}^{-1}\) aberration correcting, ion-etched holographic grating fabricated by Horiba Jobin-Yvon (Horiba J-Y). The holographic grating design was adopted to minimize scattered light in the system. A second fold mirror with an R\({}_{x}\) = 300 mm radius of curvature about the cross-dispersion dimension provides additional aberration corrections before the beam reaches the CCD. The final beam focal ratio is f/5.5 in the cross-dispersion axis, with a detector plate scale of 186\({}^{\prime\prime}\) mm\({}^{-1}\). The detector and custom avionics were tested and flight ruggedized by the \(CUTE\) team in their on-campus laboratories at the University of Colorado (Nell et al., 2021). The telescope was delivered fully assembled to CU by Nu-Tek Precision Optical. All mirrors are coated with MgF\({}_{2}\) + Al to prevent the formation of an oxide layer (AlO\({}_{3}\)). We elected to receive flight and flight-backup gratings coated in bare Al and MgF\({}_{2}\) + Al, respectively (coated by Horiba J-Y), to control for a potential efficiency anomaly similar to that seen on the COS NUV gratings (Wilkinson, 2002). Detailed pre-flight efficiency and environmental testing showed better performance with the bare Al grating, without a measurable loss in efficiency over time. As a result, the instrument team elected to fly the bare Al-coated mirror on the flight instrument. The design-prediction flight instrument performance curves are presented in Fleming et al. (2018) and the on-orbit instrument performance of the \(CUTE\) payload is presented in Egan et al. (2022 - this volume); we provide a brief summary of the key performance metrics in the following subsection. ### Instrument Specifications The final bandpass recorded by the CCD detector is 2479 - 3306 A (see Table 1), which is a slight change from the pre-flight projection owing to shifts in the optical system during ascent. The exact bandpass also varies by several A depending on the alignment of the stellar point spread function (PSF) in the spectrograph slit. The spectral resolving power of the instrument is \(\approx\) 750 (\(\Delta\lambda\)\(\approx\) 3.3 - 4.5 A across the bandpass, including the effects of spacecraft pointing jitter). Figure 2 shows a representative calibration spectrum from the \(CUTE\) on-orbit commissioning program. \begin{table} \begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ Instrument Metric} & On-orbit Value \\ \hline Bandpass & 2479 – 3306 Å \\ Spectral Resolution\({}^{a}\) & 3.9 Å \\ Cross-Dispersion Resolution\({}^{b}\) & \(\approx\) 30\({}^{\prime\prime}\) \\ Peak A\({}_{eff}\) & 27.5 cm\({}^{2}\) at 2500 Å \\ Background Flux Limit & 5 \(\times\) 10\({}^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) Å\({}^{-1}\) \\ in 300s\({}^{b}\) & \\ \hline \end{tabular} \({}^{a}\)Average resolution over the bandpass, including spacecraft jitter. \({}^{b}\)Evaluated at 3000 Å. \end{table} Table 1: \(CUTE\) Instrument Specifications Figure 2: \(CUTE\) calibration observations of the O4 supergiant \(\zeta\) Puppis. 
The top plot shows the full-frame (2048 \(\times\) 515 pixel) calibration image and the bottom plot shows the extracted one-dimensional spectrum (flux calibrated against archival \(HST\) and \(IUE\) spectra). \(\zeta\) Puppis was selected as a calibration target because of the wealth of archival NUV spectra and because its high photospheric temperature ensures that the iron and magnesium lines in the spectrum are narrow, interstellar features. The system effective area is a function of the reflection efficiency of the optics (\(R\)), efficiency of the grating (\(\epsilon_{g}\)), and quantum efficiency of the detector (DQE), multiplied by the geometric collecting area of the telescope (A\({}_{geo}\)): \[A_{eff}(\lambda)=A_{geo}R^{5}(\lambda)\epsilon_{g}(\lambda)DQE(\lambda). \tag{1}\] The on-orbit effective area was measured by comparing \(CUTE\)'s observations (in units of electrons s\({}^{-1}\) A\({}^{-1}\)) with flux-calibrated observations from the \(IUE\) and \(HST\) archives. We measured \(A_{eff}=27.5\) - 19.0 cm\({}^{2}\) across the \(CUTE\) spectral range, with a peak at approximately 2500 A. Component-level efficiencies were measured prior to instrument assembly in the UV calibration facilities at the University of Colorado (France et al., 2016; Egan et al., 2020). The component-based, pre-flight \(A_{eff}\) estimate was about 12% higher than the median effective area subsequently measured on-orbit (Egan et al. - this volume). We attribute the loss of sensitivity to two possible causes: particulate contamination during the failure of \(CUTE\)'s thermoelectric cooling system (which occurred during thermal vacuum testing) and contamination during the \(\sim\)2 months that \(CUTE\) sat in the CubeSat dispenser at VSFB prior to launch. A dry nitrogen purge was requested in order to minimize optical degradation following dispenser integration, but was not made available. The difference in effective area does not have a significant impact on target selection and detectability; however, the larger and more variable thermal environment resulting from the loss of the active cooling system removes from the accessible sample most stars fainter than those on the nominal target list. Combining the \(CUTE\) effective area with the on-orbit instrumental background level and the nominal 300 second exposure time for \(CUTE\)'s exoplanet surveys, we calculate the typical dispersion in the residual flux following background subtraction. This sets the minimum flux level that can be detected above the noise in a 300 second spectrum, which we refer to as the background flux limit. We measure a background flux limit of \(\approx\) 5 \(\times\) 10\({}^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) A\({}^{-1}\) at 3000 A. ## 5 \(Cute\) Mission Operations The \(CUTE\) spacecraft includes a UHF (437.25 MHz) antenna with both transmission and receiving capabilities. The UHF link is used for uploading commands to the spacecraft and monitoring real-time telemetry during ground passes. \(CUTE\) also has an S-band (2402 MHz) downlink-only mode for primary science data transmission. The mission operations and ground station for \(CUTE\) are located at the Laboratory for Atmospheric and Space Physics in Boulder, Colorado. \(CUTE\) typically has 1 - 2 high-elevation (\(>\) 50\({}^{\circ}\)) passes and 1 - 2 low-elevation passes per day over the Boulder ground station, resulting in approximately 10 minutes per day of optimal downlink time.
Figure 3 presents an illustration of the \(CUTE\) science operations observing mode, including science data acquisition, approximately monthly calibration activities, and data downlinks over the Boulder ground station. Figure 3: Schematic description of \(CUTE\) science and calibration observations. The CubeSat Operations Center at LASP utilizes the LASP ground station initially built for CSSWE and MinXSS, and the recently completed CSIM CubeSat mission for NASA's heliophysics division (Mason et al., 2016), using a combination of HYDRA and OASIS-CC (Flynn et al., 2021) for command and control. The mission operations are conducted by a team of professionals with experience from larger NASA flight missions (e.g., \(Kepler\), \(IXPE\), and several heliophysics missions) and a dedicated student operations group. The undergraduate and graduate student operators perform mission planning, operations, and health and status monitoring for the spacecraft; the science team has developed a graphical user interface tool to determine optimal target visibility and viewing conditions. The output of the science planning tool is processed into \(CUTE\)'s weekly operations plan to define the observing, charging, and communication activities for the week. To maximize operational simplicity, \(CUTE\) conducts single-target campaigns: we schedule multiple transit observations in a single command block (typically lasting 3 - 7 days) that is uploaded to the spacecraft. Each command block includes calibration exposures, science exposures, and data downlink periods. We repeat this exercise until 6 - 10 transits of a given planet have been executed. Interleaved with the transit observing blocks are dedicated downlink periods, typically between 3 and 5 days, to re-transmit science and calibration data that experienced low data-completion fractions in the initial downlink or were lost to spacecraft resets. The need to execute 1 - 3 additional data downlinks per transit campaign, driven by the frequent loss of fine-pointing control and resets on the spacecraft bus (1 - 2 events per week), is the limiting factor to \(CUTE\)'s operational efficiency. Pre-flight observation planning predicted 3 transits per week could be successfully executed and downlinked, which would complete 10 transits per target of the 10 target sample in approximately 8 months. The realized mission efficiency is projected to complete an average of 6 transits per target over a science mission lifetime of \(\sim\) 15 months, or approximately a factor of 3 reduction in efficiency compared to pre-flight estimates. A summary of the \(CUTE\) on-orbit operations and payload commissioning will be presented in a forthcoming paper (A. Suresh et al. - in prep.). ## 6 \(Cute\) Science Data Example In this section we present representative samples of \(CUTE\)'s individual science data products and a preliminary reduced transit light curve from the Early Release Science program. Detailed analyses of the wavelength dependent transit depths, interpretation and quantification of atmospheric composition and escape rates, and inter-comparison between different transit visits for the Early Release Data program will be presented in upcoming works (Egan et al. and Sreejith et al. 2022 - in prep). The goal here is to present flight data of exoplanets and their host stars to illustrate the features (and limitations) of \(CUTE\) observations as revealed by the first two targets of the Early Release Science program. 
Figure 4 (\(top\)) presents a standard spectral data product that \(CUTE\) transmits over the S-band downlink. These are "TRIM2D" data products, 2048 \(\times\) 100 pixel two-dimensional spectra with a 5 minute exposure time. The images are trimmed to reduce downlink volume. The 5 minute exposure time is typical of all \(CUTE\) spectra and is a balance of signal-to-noise for our target brightness, number of exposures possible per orbital night, and simplicity of operational planning. Each transit visit is buffered by a number of bias and dark exposures taken at similar celestial pointing, orbital position (latitude and longitude, and therefore similar temperature and illumination conditions), and elevation angle of the telescope with respect to the Earth limb. These calibration files are used to remove thermal and readout noise effects, as described in Egan et al. (2022). Data processing beyond the downlink of the TRIM2D data products occurs on the ground. The two-dimensional data are collapsed along a diagonal extraction region; the spectra are wavelength and flux calibrated using observations from the on-orbit commissioning phase. Figure 4 (\(bottom\)) shows calibrated one-dimensional spectra of WASP-189 and KELT-20, taken outside of transit. The spectra are typical of NUV observations of A-type stars in the \(IUE\) archive, with the most prominent feature being Mg ii absorption in the photosphere of these intermediate temperature stars. The reader will also notice the defocus seen in the cross-dispersion direction, manifest as the double-lobe structure that increases toward shorter wavelengths across the band. This defocus was introduced during an additional payload vibration test that was not part of the original test specifications but was later required by NASA just prior to the delivery of the spacecraft. The defocus was then exacerbated during the powered ascent; there is no detectable "breathing" of the focus with orbital location, although the background levels are strongly driven by the spacecraft thermal and illumination conditions. \(CUTE\)'s two-dimensional spectra are calibrated with master bias frames before cosmic ray correction using the LAcosmic algorithm (van Dokkum, 2001). We extract the one-dimensional spectra from this corrected image as described in Sreejith et al. (2022 - in prep). The background is then subtracted from the one-dimensional spectra, which are subsequently wavelength calibrated. The spectra are integrated over the wavelength region of \(\approx\)2540A to \(\approx\)3300A to create a light curve point. These light curves can be created down to a wavelength resolution limit of \(\sim\) 4 A per bin, but for initial demonstration purposes we display a broad NUV bandpass. Figure 5 presents \(CUTE\)'s approximately 2540 - 3300 A light curves for 3 different visits of the ultra-hot Jupiter WASP-189b. The best-fit transit model (shown in gray), which takes into account wavelength-dependent stellar limb darkening, will be presented in detail in Sreejith et al. (2022 - in prep). The optical transit light curve from \(CHEOPS\) (Lendl et al., 2020) is shown for comparison and suggests excess transit absorption at UV wavelengths compared with the broadband geometric size of the planet. We demonstrate self-consistent transit depth recoveries of \(\approx\) 1.0 - 1.1 % over three separate transit observations of WASP-189b separated by several weeks.
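To make the data flow above concrete, the following schematic sketch collapses one TRIM2D frame into a single band-integrated light-curve point. The array names, the simple median background estimate, and the synthetic data are illustrative assumptions only; the mission pipeline (master biases, LAcosmic cosmic-ray cleaning, a diagonal extraction region, and flux calibration) is described in Egan et al. (2022) and Sreejith et al. (2022 - in prep).

```python
# A schematic sketch, with invented array names, of collapsing one TRIM2D frame
# into a single band-integrated light-curve point. The median background
# estimate and synthetic data below are placeholders, not CUTE's actual code.
import numpy as np

def light_curve_point(frame_2d, master_bias, wavelengths, trace_rows, bkg_rows,
                      band=(2540.0, 3300.0)):
    """Collapse one 2D spectral frame into one band-integrated flux value."""
    img = frame_2d - master_bias                        # remove bias/readout structure
    # (cosmic-ray cleaning, e.g. LAcosmic, would be applied to `img` here)
    spectrum = img[trace_rows, :].sum(axis=0)           # extract along the spectral trace
    background = np.median(img[bkg_rows, :], axis=0) * len(trace_rows)
    spectrum = spectrum - background                    # per-column background subtraction
    in_band = (wavelengths >= band[0]) & (wavelengths <= band[1])
    return spectrum[in_band].sum()                      # one NUV light-curve point

# Example with synthetic data: a 100 x 2048 frame with a bright trace in rows 45-54.
rng = np.random.default_rng(42)
frame = rng.normal(100.0, 5.0, size=(100, 2048))
frame[45:55, :] += 50.0
wl = np.linspace(2479.0, 3306.0, 2048)                  # approximate wavelength solution
print(light_curve_point(frame, np.full((100, 2048), 100.0), wl,
                        trace_rows=np.arange(45, 55), bkg_rows=np.arange(0, 20)))
```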
Excess planetary absorption at NUV wavelengths is consistent with previous \(HST\) observations of ultra-hot Jupiters (Sing et al., 2019; Cubillos et al., 2020; Lothringer et al., 2022). A transit depth of 1% in WASP-189b would indicate that the NUV transit observations are probing the extended upper atmosphere of the planet that is subject to stellar high-energy radiation and escape. This is because the 1 microbar level, used here as a rough proxy for the base of the thermosphere, has a radius of about 1.1 R\({}_{p}\) and a transit depth of 0.6%, based on an effective temperature of 2410 K and an atmosphere with solar abundances. In contrast, a transit depth of 1% corresponds to a larger radius of 1.4 R\({}_{p}\). If we assume a temperature of 8000 K in the thermosphere, the pressure at 1.4 R\({}_{p}\) would be between 0.1 and 1 nbar. Given that this pressure is too low for significant clouds and hazes, a pseudo-continuum by these absorbers is unlikely, and the broadband NUV transit depth likely arises from a forest of metal ion lines (e.g., Fossati et al., 2010; Sing et al., 2019). The individual absorption lines responsible would have to extend to much higher radii than 1.4 R\({}_{p}\) in transit to be detectable in the broad NUV band. We note that the preliminary light curves show significant scatter beyond the photon noise limit at this stage of the reduction. Work is ongoing to model the temperature- and orbital position-dependent background to reduce the observed dispersion in the light curves. ## 7 Conclusions The \(CUTE\) CubeSat mission was launched in September 2021 and is currently carrying out its primary science mission to collect NUV spectroscopy of transiting planets. \(CUTE\) has successfully completed spacecraft and instrument commissioning and has completed initial science observations on a number of exoplanetary systems. Optimal science targets are short-period, Jovian-sized planets orbiting bright (\(V<\) 8) F and A stars. The science instrument has demonstrated sensitivity to NUV white-light transit depths of \(<\) 1% and to a wavelength-dependent increase in exoplanet atmospheric opacity at \(\lambda\lesssim\) 3300 A. We have presented the motivation for the \(CUTE\) mission, its design, and its on-orbit characteristics. The companion paper by Egan et al. presents the details of the on-orbit instrument performance of \(CUTE\), and future papers will present mission science results as well as information about the \(CUTE\) ground-segment data pipeline, commissioning, and operations of the mission. \(CUTE\) has supported mentoring and training for over 20 early-career scientists and engineers. Mission operations are planned to continue until June 2023 (currently limited by project funding) and the initial release of \(CUTE\) data will be delivered to the NexSci archive in 2023.

Figure 4: \(CUTE\) spectral observations of WASP-189 and KELT-20, A-type host stars of \(CUTE\)'s first Early Release Science targets. The top plots show "TRIM2D" 2048 \(\times\) 100 pixel two-dimensional data products (exposure time = 5 minutes). The bottom plots show the one-dimensional spectral collapse including wavelength and flux calibration. A 10 pixel boxcar smooth has been applied to the one-dimensional spectra for display purposes.

Acknowledgments: \(CUTE\) was developed and operated with the support of two NASA/APRA awards to the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder, NNX17AI84G and 80NSSC21K1667. A. G. S.
was supported by a Schrödinger Fellowship through the Austrian Science Fund (FWF) [J 4596-N]. A. G. S. and L.F. acknowledge financial support from the Austrian Forschungsförderungsgesellschaft FFG projects 859718 and 865968. AAV acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 817540, ASTROFLOW). K.F. acknowledges the numerous and invaluable discussions with colleagues excited about ultraviolet transit science and the potential to do science with small satellites. The \(CUTE\) team wishes to specifically recognize the amateur radio operator community, and the SatNOGS network in particular, for hosting numerous telemetry tracking tools that have improved the mission's ability to recover from faults and understand long-term spacecraft trends much more efficiently than would have been otherwise possible.

Figure 5: Initial \(CUTE\) light curves of WASP-189b, showing three independent NUV (approximately 2540 – 3300 Å) light curves (black points) and the best-fit transit models in gray. The plots compare the NUV band light curves with the optical light curve (in red) from \(CHEOPS\) (Lendl et al., 2020). The NUV transits are significantly deeper than their broadband optical counterparts, indicating an effective planetary radius increase of R\({}_{P,NUV}\approx\) 1.5 R\({}_{P,opt}\).
2308.11041
Bayesian Prevalence Estimation from Pooled and Individual Data
Pooled and individual disease testing are common methods for determining the population prevalences of diseases. Recently, researchers have used Monte Carlo Markov Chain methods to estimate population prevalence from the combined streams of these two types of testing data. We propose an analytical solution for estimating population prevalence from combined individual and pooled binary sampling data. We also use simulated sampling data to characterize these posterior distributions under a variety of sampling conditions, including a range of true prevalences, variable numbers of pooled and individual tests, variable number of individual samples per pooled sample, and a range of values for test sensitivity and specificity.
Matthew Ritch, Charles Copley
2023-08-21T21:02:00Z
http://arxiv.org/abs/2308.11041v1
# Bayesian Prevalence Estimation from Pooled and Individual Data ###### Abstract Pooled and individual disease testing are common methods for determining the population prevalences of diseases. Recently, researchers have used Monte Carlo Markov Chain methods to estimate population prevalence from the combined streams of these two types of testing data. We propose an analytical solution for estimating population prevalence from combined individual and pooled binary sampling data. We also use simulated sampling data to characterize these posterior distributions under a variety of sampling conditions, including a range of true prevalences, variable numbers of pooled and individual tests, variable number of individual samples per pooled sample, and a range of values for test sensitivity and specificity. ## 1 Introduction Binary tests are frequently used to measure disease prevalence in a large population. When test kits are scarce or expensive, multiple individual samples can be combined for a pooled or group test to save resources [1]. Recently, researchers have investigated using Bayesian data fusion [2] methods for using pooled and individual test data together to estimate population prevalences via Monte Carlo Markov Chain (MCMC) [3][4][5]. We have developed an analytical Bayesian method for combining data from both individual and pooled tests into a single analytical posterior distribution for the true population prevalence, obviating the need for MCMC methods and thereby saving compute time and resources. As we will show, pooled testing is more useful at low true prevalences. We present simulation results for a variety of sampling conditions. These results can be used to inform sampling program design given a test budget and a preliminary estimate of disease prevalence. ## 2 The Posterior Probability Distribution for Population Prevalence ### Definitions Define \(P\) as a random variable for population prevalence. We will use \(p\) to denote outcome values of that random variable. Define \(m\) as the number of individual tests conducted. Define \(Y_{1},Y_{2},...Y_{m}\) as random variables for the binary results of these individual tests. Define \(y\) as the number of observed positive individual tests. Define \(n\) as the number of pooled tests conducted. Define \(Z_{1},Z_{2},...Z_{n}\) as random variables for the binary results of these pooled tests. Define \(z\) as the number of observed positive pooled tests. Define \(\pi_{q}\) as the probability of a pooled sample testing positive when \(q\) individuals are pooled. We assume that each individual sample's disease status is identically and independently distributed and is positive with probability \(P\). This allows us to derive simple expressions for the probability of a positive pooled test and the joint probability of our individual and pooled testing results. This assumption is also made in Hoegh et al. 2021 [3], where they explain that "Implicitly the calculation of \(\pi\) in Equation 1 assumes that the samples are independent. For the applications we are primarily focused on, viral surveillance in wildlife populations where individual samples can be randomly assigned to pools, this is usually a reasonable assumption." The probability of a pooled sample testing positive when \(q\) individuals are pooled, \(\pi_{q}\), is equivalent to the probability that at least one of the samples which are combined into the pooled sample is positive. 
\[\pi_{q}=1-Pr(\text{all of the samples are negative})\] so \[\pi_{q}=1-(1-p)^{q}\] The \(Y_{i}\) and \(Z_{i}\) are i.i.d. Bernoulli random variables: \[Y_{i}\sim Bernoulli(p)\] \[Z_{i}\sim Bernoulli(\pi_{q})\] Define random variables \(Y\) and \(Z\) as \[Y=\sum_{i=1}^{m}Y_{i}\] \[Z=\sum_{i=1}^{n}Z_{i}\] Then \(Y\) and \(Z\) follow binomial distributions and are conditionally independent given \(P\). \[Y\sim Binomial(m,p)\] \[Z\sim Binomial(n,\pi_{q})\] so \[Pr(Y=y|P=p)={m\choose y}(p)^{y}(1-p)^{m-y} \tag{1}\] \[Pr(Z=z|P=p)={n\choose z}(\pi_{q})^{z}(1-\pi_{q})^{n-z} \tag{2}\] In general, we assume a beta prior distribution for \(P\), \[P\sim Beta(\alpha,\beta)\] so \[Pr(P=p)=\frac{p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)} \tag{3}\] where \(B(\alpha,\beta)\) is the beta function of \(\alpha\) and \(\beta\). For the simulation study, we use the uniform prior \[\alpha=1,\beta=1\] Bayes' Theorem can be stated for this problem as \[Pr(P=p|Y=y,Z=z)=\frac{Pr(Y=y,Z=z|P=p)Pr(P=p)}{Pr(Y=y,Z=z)} \tag{4}\] We will solve for \(Pr(P=p|Y=y,Z=z)\). ### \(Pr(Y=y,Z=z|P=p)Pr(P=p)\) Because we have assumed that each sample is i.i.d. \(Bernoulli(P)\), \[Pr(Y=y,Z=z|P=p)=Pr(Y=y|P=p)Pr(Z=z|P=p)\] Substituting in equations 1 and 2: \[=\binom{m}{y}\binom{n}{z}(p)^{y}(1-p)^{m-y}(\pi_{q})^{z}(1-\pi_{q})^{n-z}\] Recalling our assumed prior for \(P\), equation 3, \[Pr(P=p)=\frac{p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)}\] we have \[Pr(Y=y,Z=z|P=p)Pr(P=p)=\frac{\binom{m}{y}\binom{n}{z}(p)^{y}(1-p)^{m-y}(\pi_{q})^{z}(1-\pi_{q})^{n-z}p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)} \tag{5}\] Combining terms, \[Pr(Y=y,Z=z|P=p)Pr(P=p)=\frac{\binom{m}{y}\binom{n}{z}(p)^{y+\alpha-1}(1-p)^{m-y+\beta-1}(\pi_{q})^{z}(1-\pi_{q})^{n-z}}{B(\alpha,\beta)} \tag{6}\] Recall that \[\pi_{q}=1-(1-p)^{q}\] so \[(1-\pi_{q})^{n-z}=((1-p)^{q})^{n-z}=(1-p)^{qn-qz} \tag{7}\] and, by binomial expansion, \[\pi_{q}^{z}=[1-(1-p)^{q}]^{z}=\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}(1-p)^{qi} \tag{8}\] Substituting the expressions for \((1-\pi_{q})^{n-z}\) and \(\pi_{q}^{z}\) in equations 7 and 8 into equation 6 gives \[Pr(Y=y,Z=z|P=p)Pr(P=p)=\frac{\binom{m}{y}\binom{n}{z}(p)^{y+\alpha-1}(1-p)^{m-y+\beta-1+qn-qz}\left[\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}(1-p)^{qi}\right]}{B(\alpha,\beta)}\] Moving terms inside the summation and combining exponents, we find that \[Pr(Y=y,Z=z|P=p)Pr(P=p)=\frac{\binom{m}{y}\binom{n}{z}}{B(\alpha,\beta)}\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}p^{y+\alpha-1}(1-p)^{m-y+\beta-1+qn-qz+qi} \tag{9}\] ### \(Pr(Y=y,Z=z)\) The joint probability \(Pr(Y=y,Z=z,P=p)\) equals \(Pr(Y=y,Z=z|P=p)Pr(P=p)\), so \(Pr(Y=y,Z=z)\) can be expressed as \[Pr(Y=y,Z=z)=\int_{0}^{1}Pr(Y=y,Z=z,P=p)dp\] To simplify notation, define \[\gamma=y+\alpha,\qquad\delta=m-y+\beta+q(n-z)\] so that equation 9 becomes \[Pr(Y=y,Z=z|P=p)Pr(P=p)=\frac{\binom{m}{y}\binom{n}{z}}{B(\alpha,\beta)}\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}p^{\gamma-1}(1-p)^{\delta+qi-1}\] Integrating each term over \(p\) and recognizing each integral as a beta function gives \[Pr(Y=y,Z=z)=\frac{\binom{m}{y}\binom{n}{z}}{B(\alpha,\beta)}\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi) \tag{10}\] Recall Bayes' Theorem, \[Pr(P=p|Y=y,Z=z)=\frac{Pr(Y=y,Z=z|P=p)Pr(P=p)}{Pr(Y=y,Z=z)}\] (4 revisited) Dividing equation 9 (in its \(\gamma\), \(\delta\) form) by equation 10, the common factor \(\binom{m}{y}\binom{n}{z}/B(\alpha,\beta)\) cancels, so \[Pr(P=p|Y=y,Z=z)=\frac{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}p^{\gamma-1}(1-p)^{\delta+qi-1}}{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)} \tag{11}\] This is an analytical form of the posterior probability distribution for the population prevalence \(P\). Now let \(f(p,\gamma,\delta+qi)\) be the PDF of the beta distribution with parameters \(\gamma\) and \(\delta+qi\), evaluated at \(p\).
\[f(p,\gamma,\delta+qi)=\frac{p^{\gamma-1}(1-p)^{\delta+qi-1}}{B(\gamma,\delta+ qi)}\] So we can also write the posterior probability distribution for \(P\) in this form: \[Pr(P=p|Y=y,Z=z)=\frac{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)f( p,\gamma,\delta+qi)}{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)} \tag{12}\] This form is less useful computationally, but it demonstrates that the posterior distribution for \(P\) can be viewed as a weighted sum of \(1+z\) beta distributions. ### Posterior Distribution for \(P\) with Point Estimates for Sensitivity and Specificity Now we will consider what happens to the posterior distribution when our binary testing is sometimes incorrect. We define sensitivity \(s_{e}\) and specificity \(s_{p}\) as \[s_{e}=Pr(test\ is\ positive\mid individual\ is\ positive)\] \[s_{p}=Pr(test\ is\ negative\mid individual\ is\ negative)\] For simplicity, we will assume without loss of generality that these values are the same for pooled tests. Thus, \[Pr(pooled\ test\ is\ positive\mid at\ least\ one\ pooled\ individual\ is\ positive)=s_{e}\] \[Pr(pooled\ test\ is\ negative\mid all\ pooled\ individuals\ are\ negative)=s_{p}\] We will also assume that we know the true values of \(s_{e}\) and \(s_{p}\). We can now modify our previous expression for \(Pr(Y=y,Z=z|P=p)Pr(P=p)\). Again, \[Pr(Y=y,Z=z|P=p)=Pr(Y=y|P=p)Pr(Z=z|P=p)\] Now, we will modify 1 and 2 to account for sensitivity and specificity. \[Pr(Y=y|P=p)=\binom{m}{y}[s_{e}p+(1-s_{p})(1-p)]^{y}[(1-s_{e})p+s_{p}(1-p)]^{m-y} \tag{13}\] \[Pr(Z=z|P=p)=\binom{n}{z}[s_{e}\pi_{q}+(1-s_{p})(1-\pi_{q})]^{z}[(1-s_{e})\pi_{ q}+s_{p}(1-\pi_{q})]^{n-z} \tag{14}\] As before, we will use the binomial theorem to make these expressions more amenable to the integration needed to find an analytical posterior distribution for \(P\). First, look at 13. \[[s_{e}p+(1-s_{p})(1-p)]^{y}=\sum_{i=0}^{y}\binom{y}{i}[s_{e}p]^{i}[(1-s_{p})( 1-p)]^{y-i}=\sum_{i=0}^{y}\binom{y}{i}s_{e}^{\ i}(1-s_{p})^{y-i}p^{i}(1-p)^{y-i}\] and \[[(1-s_{e})p+s_{p}(1-p)]^{m-y}=\sum_{j=0}^{m-y}\binom{m-y}{j}[(1-s_{e})p]^{j}[s _{p}(1-p)]^{m-y-j}=\sum_{j=0}^{m-y}\binom{m-y}{j}(1-s_{e})^{j}s_{p}^{\ m-y-j}p^{j}(1-p)^{m-y-j}\] so \[Pr(Y=y|P=p)=\binom{m}{y}\sum_{i=0}^{y}\sum_{j=0}^{m-y}\binom{y}{i}\binom{m-y} {j}s_{e}^{\ i}(1-s_{e})^{j}s_{p}^{\ m-y-j}(1-s_{p})^{y-i}p^{i+j}(1-p)^{m-i-j} \tag{15}\] Similarly, 14 can be modified to \[Pr(Z=z|P=p)=\binom{n}{z}\sum_{k=0}^{z}\sum_{l=0}^{n-z}\binom{z}{k}\binom{n-z} {l}s_{e}^{\ k}(1-s_{e})^{l}s_{p}^{\ n-z-l}(1-s_{p})^{z-k}\pi_{q}^{\ k+l}(1-\pi_{q})^{ n-k-l} \tag{16}\] Define \(g(i,j,k,l)\) as \[g(i,j,k,l)=\binom{y}{i}\binom{m-y}{j}\binom{z}{k}\binom{n-z}{l}s_{e}^{\ i+k}(1-s_{e})^{j+l}s_{p}^{\ m-y-j+n-z-l}(1-s_{p})^{y-i+z-k} \tag{17}\] Combining 15 and 16 with our prior 3, \[Pr(Y=y|P=p)Pr(Z=z|P=p)Pr(P=p)=\] \[\frac{\binom{m}{y}\binom{n}{z}}{B(\alpha,\beta)}\sum_{i=0}^{y}\sum_{j=0}^{m-y} \sum_{k=0}^{z}\sum_{l=0}^{n-z}g(i,j,k,l)p^{i+j+\alpha-1}(1-p)^{m-i-j+\beta-1} \pi_{q}^{\;k+l}(1-\pi_{q})^{n-k-l} \tag{18}\] To continue, we can express \((1-\pi_{q})^{m-k-l}\) and \(\pi_{q}^{k+l}\) in terms of \(p\), as we did in equations 7 and 8. 
Recall that \[\pi_{q}=1-(1-p)^{q}\] so \[(1-\pi_{q})^{n-k-l}=((1-p)^{q})^{n-k-l}=(1-p)^{nq-kq-lq}\] and \[\pi_{q}^{\;k+l}=\sum_{r=0}^{k+l}\binom{k+l}{r}(-1)^{r}(1-p)^{rq}\] Substituting these into 18 and combining like terms, we find an equation which is analogous to 9: \[Pr(Y=y|P=p)Pr(Z=z|P=p)Pr(P=p)=\] \[\frac{\binom{m}{y}\binom{n}{z}}{B(\alpha,\beta)}\sum_{i=0}^{y}\sum_{j=0}^{m-y}\sum_{k=0}^{z}\sum_{l=0}^{n-z}g(i,j,k,l)\sum_{r=0}^{k+l}\binom{k+l}{r}(-1)^{r}p^{i+j+\alpha-1}(1-p)^{m-i-j+\beta-1+nq-kq-lq+rq} \tag{19}\] Now, by marginalizing out \(p\), we can find an equation which is analogous to 10: \[Pr(Y=y,Z=z)=\int_{0}^{1}Pr(Y=y|P=p)Pr(Z=z|P=p)Pr(P=p)dp\] \[=\frac{\binom{m}{y}\binom{n}{z}}{B(\alpha,\beta)}\sum_{i=0}^{y}\sum_{j=0}^{m-y}\sum_{k=0}^{z}\sum_{l=0}^{n-z}g(i,j,k,l)\sum_{r=0}^{k+l}\binom{k+l}{r}(-1)^{r}\int_{0}^{1}p^{i+j+\alpha-1}(1-p)^{m-i-j+\beta-1+nq-kq-lq+rq}dp\] \[=\frac{\binom{m}{y}\binom{n}{z}}{B(\alpha,\beta)}\sum_{i=0}^{y}\sum_{j=0}^{m-y}\sum_{k=0}^{z}\sum_{l=0}^{n-z}g(i,j,k,l)\sum_{r=0}^{k+l}\binom{k+l}{r}(-1)^{r}B(i+j+\alpha,m-i-j+\beta+nq-kq-lq+rq) \tag{20}\] Using Bayes' Theorem 4, \[Pr(P=p|Y=y,Z=z)\] \[=\frac{\sum_{i=0}^{y}\sum_{j=0}^{m-y}\sum_{k=0}^{z}\sum_{l=0}^{n-z}g(i,j,k,l)\sum_{r=0}^{k+l}\binom{k+l}{r}(-1)^{r}p^{i+j+\alpha-1}(1-p)^{m-i-j+\beta-1+nq-kq-lq+rq}}{\sum_{i=0}^{y}\sum_{j=0}^{m-y}\sum_{k=0}^{z}\sum_{l=0}^{n-z}g(i,j,k,l)\sum_{r=0}^{k+l}\binom{k+l}{r}(-1)^{r}B(i+j+\alpha,m-i-j+\beta+nq-kq-lq+rq)} \tag{21}\] A similar approach can be used to derive an analytical posterior distribution for \(P\) under beta priors on \(s_{e}\) and \(s_{p}\). ## 3 Characterizing the Posterior Distribution for \(P\) In this section, we will consider only the case where \(s_{e}=s_{p}=1\). ### Moments of the Posterior Distribution for \(P\) The moments of the posterior distribution for \(P\) can be calculated as follows. We index the raw moments from zero, writing \(\mu_{n}=\int x^{n+1}P(x)dx\), where \(x\) is a random variable and \(P(x)\) is its PDF, so that \(\mu_{0}\) is the mean. Hence, for our application, \[\mu_{n}=\int_{0}^{1}\frac{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)f(p,\gamma,\delta+qi)}{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)}p^{n+1}dp\] where, as before, \(f(p,\gamma,\delta+qi)\) is the PDF of the beta distribution with parameters \(\gamma\) and \(\delta+qi\), evaluated at \(p\): \[f(p,\gamma,\delta+qi)=\frac{p^{\gamma-1}(1-p)^{\delta+qi-1}}{B(\gamma,\delta+qi)}\] Using linearity of integration: \[\mu_{n}=\frac{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)\int_{0}^{1}p^{n+1}f(p,\gamma,\delta+qi)dp}{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)}\] But \[\int_{0}^{1}p^{n+1}f(p,\gamma,\delta+qi)dp\] is simply the corresponding raw moment of the Beta distribution \(f(p,\gamma,\delta+qi)\), which is known to be \[\mu_{n}^{i}=\prod_{j=0}^{n}\frac{\gamma+j}{\gamma+\delta+qi+j} \tag{22}\] So the moments of the posterior distribution for \(P\) can be found using the equation \[\mu_{n}=\frac{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)\mu_{n}^{i}}{\sum_{i=0}^{z}\binom{z}{i}(-1)^{i}B(\gamma,\delta+qi)} \tag{23}\] Thus each raw moment of the posterior distribution for \(P\) is just a weighted sum of the corresponding raw moments of its constituent beta distributions.
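A small worked sketch of Equations (22) and (23) follows; the function names are chosen for illustration, and it assumes integer prior parameters so that every beta function is an exact rational number, which sidesteps the precision issues discussed in the Implementation Notes below.

```python
# A minimal sketch (names are mine) of Eqs. (22)-(23). With integer alpha and
# beta every B(gamma, delta + q*i) is rational, so exact fractions avoid the
# cancellation in the alternating sums. Moments are indexed from zero here, so
# mu_0 is the posterior mean and mu_1 the second raw moment.
from fractions import Fraction
from math import comb, factorial

def B(a, b):
    """Beta function for positive integer arguments, as an exact Fraction."""
    return Fraction(factorial(a - 1) * factorial(b - 1), factorial(a + b - 1))

def posterior_moment(n_idx, y, z, m, n, q, alpha=1, beta=1):
    g = y + alpha
    d = m - y + beta + q * (n - z)
    num = Fraction(0)
    den = Fraction(0)
    for i in range(z + 1):
        w = Fraction(comb(z, i) * (-1) ** i) * B(g, d + q * i)
        mom = Fraction(1)
        for j in range(n_idx + 1):              # Eq. (22): product over j = 0 .. n_idx
            mom *= Fraction(g + j, g + d + q * i + j)
        num += w * mom
        den += w
    return num / den                            # Eq. (23)

# Example: 3 of 50 individual tests positive, 4 of 20 pools of 5 positive.
mean = posterior_moment(0, y=3, z=4, m=50, n=20, q=5)
second = posterior_moment(1, y=3, z=4, m=50, n=20, q=5)
print(float(mean), float(second - mean ** 2))   # posterior mean and variance
```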
### The Posterior Distribution for \(P\) is not a Beta Distribution Since the posterior distribution for \(P\) is a weighted sum of Beta distributions, it is natural to wonder if the posterior distribution for \(P\) is itself another Beta distribution. We can construct a simple counterexample to show that the posterior distribution for \(P\) is not in general a beta distribution. Take the case where \(m=1,y=0,n=1,z=1,\alpha=1,\beta=1\) and \(q=3\). With these values, \(\gamma=1\) and \(\delta=2\). Plug these all into equation 12. \[Pr(P=p|Y=0,Z=1)=\frac{\sum_{i=0}^{1}\binom{1}{i}(-1)^{i}B(1,2+3i)f(p,1,2+3i)}{ \sum_{i=0}^{1}\binom{1}{i}(-1)^{i}B(1,2+3i)}\] \[=\frac{B(1,2)f(p,1,2)-B(1,5)f(p,1,5)}{B(1,2)-B(1,5)}\] Note that \(B(1,2)=\frac{1}{2}\) and \(B(1,5)=\frac{1}{5}\), so \[Pr(P=p|Y=0,Z=1)=\frac{.5f(p,1,2)-.2f(p,1,5)}{.3}=\frac{5f(p,1,2)-2f(p,1,5)}{3} \tag{24}\] Beta distributions are determined by their moments, so if this posterior distribution were a beta distribution, there would be some parameters \(a\) and \(b\) such that the moments of \(f(p,a,b)\) match the moments of the posterior distribution for \(P\). We will match the first three moments of \(f(p,a,b)\) to those of the posterior distribution for \(P\) to show that there is no solution for \(a\) and \(b\). First, we find the first three raw moments of the posterior distribution for \(P\): \(\mu_{0}\), \(\mu_{1}\), and \(\mu_{2}\). We will do this using equation 23. From equation 22, the moments of \(f(p,1,2)\) are \[\mu_{0}^{0}=\frac{1}{1+2}=\frac{1}{3}\] \[\mu_{1}^{0}=(\frac{1}{1+2})(\frac{1+1}{1+2+1})=\frac{1}{6}\] \[\mu_{2}^{0}=(\frac{1}{1+2})(\frac{1+1}{1+2+1})(\frac{1+2}{1+2+2})=\frac{1}{10}\] Similarly, the moments of \(f(p,1,5)\) are \[\mu_{0}^{1}=\frac{1}{1+5}=\frac{1}{6}\] \[\mu_{1}^{1}=(\frac{1}{1+5})(\frac{1+1}{1+5+1})=\frac{1}{21}\] \[\mu_{2}^{1}=(\frac{1}{1+5})(\frac{1+1}{1+5+1})(\frac{1+2}{1+5+2})=\frac{1}{56}\] Putting these six moments together with equations 23 and 24, we get \[\mu_{0}=\frac{5\mu_{0}^{0}-2\mu_{0}^{1}}{3}=\frac{\frac{1}{5}5-\frac{1}{6}2}{3} =\frac{4}{9}\] \[\mu_{1}=\frac{5\mu_{1}^{0}-2\mu_{1}^{1}}{3}=\frac{\frac{1}{6}5-\frac{1}{21}2}{ 3}=\frac{31}{126}\] \[\mu_{2}=\frac{5\mu_{2}^{0}-2\mu_{2}^{1}}{3}=\frac{\frac{1}{10}5-\frac{1}{56}2 }{3}=\frac{13}{84}\] So the posterior distribution for \(P\)'s first three moments are \(\frac{4}{9}\), \(\frac{31}{126}\), and \(\frac{13}{84}\). If the posterior distribution for \(P\) were equivalent to a standard beta distribution \(f(p,a,b)\), its first three moments could be calculated in terms of \(a\) and \(b\) using the equation for the \(n\)-th moment of the beta distribution. Set these equations for the first three moments of \(f(p,a,b)\) equal to the moments we just calculated. \[\mu_{0}^{*}=\frac{a}{a+b}=\frac{4}{9}\] \[\mu_{1}^{*}=(\frac{a}{a+b})(\frac{a+1}{a+b+1})=\frac{31}{126}\] \[\mu_{2}^{*}=(\frac{a}{a+b})(\frac{a+1}{a+b+1})(\frac{a+2}{a+b+2})=\frac{13}{84}\] There is no solution \((a,b)\) which satisfies these equations, so the posterior distribution for \(P\) is not in general a beta distribution. ### Implementation Notes The coefficients of the summations can be calculated using a computer program. Some of the terms in both the numerator and denominator sums are very large or very small. Therefore, computer implementations of this distribution must use floating point algebra of sufficient precision. The python decimal package with 200 places of precision is usually precise enough to accurately compute confidence intervals. 
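The sketch below, with invented function names, shows one way to carry out such a high-precision computation: the posterior CDF is the same weighted combination over the \(1+z\) beta terms, evaluated here with mpmath (rather than the decimal module) at 200 decimal digits, and the 95% interval endpoints are found by bisection.

```python
# A minimal sketch (not the authors' implementation) of a high-precision 95%
# interval for P. The alternating weights are the terms of Eq. (11); working at
# 200 decimal digits keeps the cancellation under control.
import mpmath as mp

mp.mp.dps = 200  # working precision in decimal digits

def posterior_cdf(p, y, z, m, n, q, alpha=1, beta=1):
    g = y + alpha
    d = m - y + beta + q * (n - z)
    num = mp.mpf(0)
    den = mp.mpf(0)
    for i in range(z + 1):
        w = mp.binomial(z, i) * (-1) ** i
        num += w * mp.betainc(g, d + q * i, 0, p)   # unregularized incomplete beta
        den += w * mp.beta(g, d + q * i)
    return num / den

def credible_interval(y, z, m, n, q, levels=(0.025, 0.975)):
    def quantile(target):
        a, b = mp.mpf("1e-12"), 1 - mp.mpf("1e-12")
        for _ in range(60):                         # simple bisection on the CDF
            mid = (a + b) / 2
            if posterior_cdf(mid, y, z, m, n, q) < target:
                a = mid
            else:
                b = mid
        return (a + b) / 2
    return tuple(quantile(t) for t in levels)

# Example: 3 of 50 individual tests positive, 4 of 20 pools of 5 positive.
lo, hi = credible_interval(y=3, z=4, m=50, n=20, q=5)
print(float(lo), float(hi))
```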
The CDF can also be calculated by computer: it is simply a weighted sum of the CDFs of the \(1+z\) beta distributions, and the same is true for the MGF of this distribution. The posterior may be approximated by another beta distribution in some cases, but it is not a beta distribution in general. The approximation can be made by assuming the posterior is equivalent to a beta distribution and using the method of moments, but this is only a good approximation if the true \(P\) is small. ### Sample Posterior Probability Distributions Figure 1 shows example posterior probability distributions for \(P\), computed following the equation above. The parameter settings used to create each curve are displayed above the graphs.

Figure 1: Example Posterior Distributions for \(P\)

## 4 Simulation 1: \(s_{e}=s_{p}=1\) ### Simulation Design For Simulation 1, we assume that \(s_{e}=s_{p}=1\). We wrote a computer program to simulate \(m\) individual tests and \(n\) pooled tests with \(q\) individuals per pooled test and variable true population prevalence \(P\). We can use the results of these simulated tests to construct the analytical posterior probability distribution for \(P\), calculate 95% confidence intervals for \(P\), and compute various properties of the posterior distribution. To assess the method's performance across a range of true prevalence values, we varied the true \(P\) between \(0.01\) and \(0.99\) in increments of approximately \(0.05\), i.e., \(P\in[0.01,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,0.99]\). To assess the method's performance across a range of numbers of individual and pooled tests, we varied \(m\), the number of individual tests, between \(0\) and \(200\) in increments of \(20\), i.e., \(m\in[0,20,40,60,80,100,120,140,160,180,200]\). For each trial, the number of pooled tests \(n\) was \(200-m\), so that we always simulated \(200\) tests in total. We varied \(q\), the number of individuals per pooled test, between \(3\) and \(6\) inclusive in increments of \(1\). We ran 100 trials per experimental condition, or combination of \((m,n,P,q)\), for a total of 92400 trials. For each trial, we recorded: 1. whether the true \(P\) was inside our 95% confidence interval, 2. the width of the confidence interval, and 3. the expected value of the posterior distribution for \(P\). The results for each of these measurements were then aggregated for each experimental condition \((m,n,P,q)\). ### Simulation Results #### 4.2.1 Confidence Interval Accuracy If our derivation is correct, the 95% confidence interval derived from the posterior distribution for \(P\) will contain the true value of \(P\) about 95% of the time. The data show that this holds. We calculate the confidence interval accuracy for all experiments: we ran 100 simulated trials for each parameter setting, and each point in this graph shows the proportion of trials at a single experimental condition in which the 95% confidence interval contained the true value of \(P\). We separate the figures by value of \(q\) for legibility.

Figure 2: Confidence Interval Accuracies

#### 4.2.2 Confidence Interval Width We define \(f(p)\) as the posterior distribution for \(P\) and \(F(p)\) as the CDF of this distribution. We define the 95% confidence interval width as \[CI_{width}=F^{-1}(0.975)-F^{-1}(0.025)\] \(CI_{width}\) decreases if we have increased confidence in our estimate of the true value of \(P\).
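As a concrete illustration of one simulated trial and its CI width, the sketch below draws \(y\) and \(z\) for a chosen true prevalence and then inverts the posterior CDF numerically. The helper names are invented, and instead of the alternating closed form it evaluates the unexpanded likelihood-times-prior on a grid in log space; this grid evaluation is a simplification for illustration, not the paper's program, but it is numerically robust in double precision.

```python
# A rough sketch (assumed helper names) of one Simulation 1 trial and its 95%
# CI width. The posterior is evaluated on a grid from the unexpanded form
# p^(y+a-1) (1-p)^(m-y+b-1) (1 - (1-p)^q)^z ((1-p)^q)^(n-z), which has no
# alternating signs and so no cancellation problem.
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_counts(p_true, m, n, q):
    y = rng.binomial(m, p_true)                   # positive individual tests
    z = rng.binomial(n, 1 - (1 - p_true) ** q)    # positive pooled tests
    return y, z

def ci_width(y, z, m, n, q, alpha=1, beta=1, grid=200001):
    p = np.linspace(0.0, 1.0, grid)[1:-1]         # open interval (0, 1)
    logpost = ((y + alpha - 1) * np.log(p)
               + (m - y + beta - 1 + q * (n - z)) * np.log1p(-p)
               + z * np.log1p(-(1 - p) ** q))
    w = np.exp(logpost - logpost.max())           # unnormalized posterior on the grid
    cdf = np.cumsum(w) / w.sum()
    lo = p[np.searchsorted(cdf, 0.025)]
    hi = p[np.searchsorted(cdf, 0.975)]
    return hi - lo                                # CI_width = F^-1(0.975) - F^-1(0.025)

y, z = simulate_counts(p_true=0.05, m=100, n=100, q=4)
print(y, z, round(ci_width(y, z, m=100, n=100, q=4), 4))
```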
We calculate the 95% confidence interval widths for all experiments. We ran 100 simulated trials for each parameter setting. Each point in these figures shows the mean and standard deviation of the CI width, aggregated over the 100 trials run for each parameter setting. We separate the figures by value of \(q\) for legibility.

Figure 3: Confidence Interval Widths

Figure 4: Confidence Interval Widths for multiple values of \(q\)

#### 4.2.3 Expected Value of the Posterior Distribution for \(P\) By definition of expectation, \[E[P|Y=y,Z=z]=\int_{0}^{1}p\,Pr(P=p|Y=y,Z=z)\,dp\] We now calculate \(E[P|Y=y,Z=z]\) for all experimental trials. We ran 100 simulated trials for each parameter setting. Each point in these figures shows the mean and standard deviation of \(E[P|Y=y,Z=z]\), aggregated over the 100 trials run for each parameter setting.

Figure 5: Confidence Interval Widths By Sampling Design

Figure 6: Confidence Interval Width aggregated across all true \(P\)

We calculate the percent error of the expectation as follows: \[\%Error=\frac{|E[P|Y=y,Z=z]-P_{true}|}{P_{true}}\] where \(P_{true}\) is the true population prevalence. ## 6 Conclusion We have proposed an analytical Bayesian method for estimating population prevalence from combined individual and pooled binary testing data, and we have used simulated sampling data to characterize the resulting posterior distributions across a range of true prevalences, numbers of pooled and individual tests, pool sizes, and test sensitivities and specificities.
## 5 Simulation 2: \(s_{e}=s_{p}<1\)

### Simulation Design

For Simulation 2, we used the results of Section 2.5 to compute analytical posterior distributions for \(P\) under conditions where testing sensitivity and specificity are not equal to 1. We wrote a computer program to simulate \(m\) individual tests and \(n\) pooled tests, with \(q\) individuals per pooled test and variable true population prevalence \(P\).
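A minimal sketch of such a simulation is shown below. It assumes the usual pooled-testing model, in which a pool is truly positive whenever at least one of its \(q\) members is infected and the imperfect test (sensitivity \(s_{e}\), specificity \(s_{p}\)) is then applied to the pool; the function and variable names are ours for illustration and are not taken from the authors' program.

```python
import numpy as np

def simulate_tests(P, m, n, q, se, sp, rng=None):
    """Simulate m individual tests and n pooled tests (q individuals per pool)
    at true prevalence P, with test sensitivity se and specificity sp.
    Returns (y, z): the number of positive individual and pooled tests."""
    rng = np.random.default_rng() if rng is None else rng

    # Individual tests: each subject is infected with probability P;
    # an infected subject tests positive with probability se,
    # an uninfected subject with probability 1 - sp.
    infected = rng.random(m) < P
    p_positive = np.where(infected, se, 1 - sp)
    y = int(np.sum(rng.random(m) < p_positive))

    # Pooled tests: a pool is truly positive if at least one of its
    # q members is infected; the same imperfect test is applied to the pool.
    pool_infected = (rng.random((n, q)) < P).any(axis=1)
    p_pool_positive = np.where(pool_infected, se, 1 - sp)
    z = int(np.sum(rng.random(n) < p_pool_positive))
    return y, z

# Example: one trial from the grid used in Simulation 2.
y, z = simulate_tests(P=0.3, m=15, n=15, q=3, se=0.9, sp=0.9)
```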
We can use the results of these simulated tests to construct the analytical posterior probability distribution for \(P\), calculate 95% confidence intervals for \(P\), and compute various properties of the posterior distribution for \(P\). We simulated a smaller number of tests for each condition and a reduced variety of conditions because of the increased computational costs of inference under imperfect testing.

To assess the method's performance across a range of true prevalence values, we varied the true \(P\) between 0.01 and 0.99 in increments of approximately 0.05, i.e. \(P\in\{0.01,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,0.99\}\). To assess the method's performance across a range of numbers of individual and pooled tests, we varied \(m\), the number of individual tests, between 0 and 30 in increments of 5, i.e. \(m\in\{0,5,10,15,20,25,30\}\). For each trial, the number of pooled tests was \(n=30-m\), so that we always simulated 30 tests in total. We assumed that the true values of \(s_{e}\) and \(s_{p}\) are known and varied them over \(s_{e}=s_{p}=1\), \(s_{e}=s_{p}=0.95\), \(s_{e}=s_{p}=0.9\), and \(s_{e}=s_{p}=0.8\). We used \(q=3\) for all testing conditions. We ran 100 trials per experimental condition, i.e. per combination of \((m,n,P,q,s_{e},s_{p})\), for a total of 58,800 trials. For each trial, we recorded:

1. whether the true \(P\) was inside our 95% confidence interval,
2. the width of the confidence interval, and
3. the expected value of the posterior distribution for \(P\).

The results for each of these measurements were then aggregated for each experimental condition \((m,n,P,q,s_{e},s_{p})\).

Figure 8: Percent error of \(E[P=p|Y=y,Z=z]\) aggregated for each \(q\) and number of individual tests \((m)\)

### Simulation Results

#### 5.2.1 Confidence Interval Accuracy

If our derivation is correct, the 95% confidence interval obtained from the posterior distribution for \(P\) should contain the true value of \(P\) about 95% of the time. The data show that this holds true. We calculated the confidence interval accuracy for all experiments, running 100 simulated trials for each parameter setting. Each point in the corresponding figure shows the proportion of trials at a single experimental condition in which the 95% confidence interval for \(P\), derived from the posterior distribution, contained the true value of \(P\). We separate the figures by the values of \(s_{e}\) and \(s_{p}\) for legibility.

#### 5.2.2 Confidence Interval Width

We define \(f(p)\) as the posterior distribution for \(P\) and \(F(p)\) as the CDF of this distribution. We define the 95% confidence interval width as

\[CI_{width}=F^{-1}(0.975)-F^{-1}(0.025)\]

\(CI_{width}\) decreases as our confidence in the estimate of the true value of \(P\) increases. We calculated the 95% confidence interval widths for all experiments, running 100 simulated trials for each parameter setting. Each point in the corresponding figure shows the mean and standard deviation of the CI width, aggregated over the 100 trials run for each parameter setting. We separate the figures by the values of \(s_{e}\) and \(s_{p}\) for legibility.
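The confidence interval width and the posterior summaries used below can be computed from the analytical posterior evaluated on a grid. The paper's derivation of that posterior is not reproduced in this section, so the sketch below assumes a flat prior and the standard misclassification likelihood, with an individual test positive with probability \(s_{e}p+(1-s_{p})(1-p)\) and a \(q\)-sample pool positive with probability \(\pi_{q}=s_{e}(1-(1-p)^{q})+(1-s_{p})(1-p)^{q}\); treat this as an illustration under those assumptions rather than the authors' exact implementation, and note that the function name is ours.

```python
import numpy as np
from scipy.stats import binom

def posterior_summaries(y, z, m, n, q, se, sp, grid_size=2001):
    """Posterior over P on a uniform grid under a flat prior.
    y, z: observed numbers of positive individual and pooled tests.
    Returns the posterior mean and the 2.5% / 97.5% quantiles of P."""
    p = np.linspace(0.0, 1.0, grid_size)
    theta_ind = se * p + (1 - sp) * (1 - p)                          # P(individual test +)
    theta_pool = se * (1 - (1 - p) ** q) + (1 - sp) * (1 - p) ** q   # P(pool of q tests +)

    # Unnormalised posterior = individual likelihood * pooled likelihood.
    post = binom.pmf(y, m, theta_ind) * binom.pmf(z, n, theta_pool)
    post /= np.trapz(post, p)                # normalise the density

    cdf = np.cumsum(post) * (p[1] - p[0])    # approximate CDF F(p)
    cdf /= cdf[-1]                           # guard against discretisation drift
    lo = p[np.searchsorted(cdf, 0.025)]      # F^{-1}(0.025)
    hi = p[np.searchsorted(cdf, 0.975)]      # F^{-1}(0.975)
    expected = np.trapz(p * post, p)         # E[P | Y=y, Z=z]
    return expected, lo, hi
```

The 95% confidence interval width is then `hi - lo`, matching \(F^{-1}(0.975)-F^{-1}(0.025)\) above, and `expected` is the posterior mean examined in Section 5.2.3.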
Figure 9: Confidence Interval Accuracies

Figure 10: Confidence Interval Widths

Figure 11: Confidence Interval Widths for multiple values of \(s_{e}\) and \(s_{p}\)

#### 5.2.3 Expected Value of the Posterior Distribution for \(P\)

Again, by the definition of expectation,

\[E[P=p|Y=y,Z=z]=\int_{0}^{1}p\,Pr(P=p|Y=y,Z=z)\,dp\]

We now calculate \(E[P=p|Y=y,Z=z]\) for all experimental trials. We ran 100 simulated trials for each parameter setting. Each point in the corresponding figure shows the mean and standard deviation of the expected values aggregated over the 100 trials run for each parameter setting. We separate the figures by the values of \(s_{e}\) and \(s_{p}\) for legibility. We calculate the percent error of the expectation as \(\%Error=\frac{|E[P=p|Y=y,Z=z]-P_{true}|}{P_{true}}\), where \(P_{true}\) is the true population prevalence.

Figure 12: Confidence Interval Widths By Sampling Design

Figure 13: Confidence Interval Width aggregated across all true \(P\) and values of \(s_{e}\) and \(s_{p}\)

Figure 14: \(E[P=p|Y=y,Z=z]\)

Figure 15: Percent error of \(E[P=p|Y=y,Z=z]\) aggregated for each value of \(s_{e}\) and \(s_{p}\) and number of individual tests \((m)\)

## 6 Discussion

We have presented an analytical method for estimating population prevalence from combined individual and pooled binary sampling data. We have also conducted simulations to characterize these posterior distributions under a variety of sampling conditions, including a range of true prevalences, variable numbers of pooled and individual tests, a variable number of individual samples per pooled sample, and a range of values for test sensitivity and specificity.

We computed the proportion of trials with the true \(P\) falling within the confidence interval to see whether, in general, the posterior probability distribution for \(P\) accurately captures the true \(P\). With \(n_{trials}\) total trials conducted, the number of trials in which the true \(P\) falls within the 95% confidence interval should be distributed as \(Binomial(n_{trials},0.95)\). Under almost all parameter settings, the proportion of trials with the true \(P\) falling within the 95% confidence interval for \(P\) is consistent with this expectation. The exceptions are some of the trials conducted with many pooled tests and a high true \(P\). Pooled tests conducted at a high true \(P\) are almost guaranteed to be positive, so they yield very little information. The increased inaccuracy in this high true \(P\), high numbers of pooled tests region is due to this "washing out" effect. In general, performance degrades when the true \(P\) is greater than 0.95 or less than 0.05. Performance also degrades as sensitivity and specificity decrease, but our posterior distribution for \(P\) still captures the truth under many parameter settings.

We computed the expected value of the posterior distribution for \(P\) to assess whether the posterior distribution can furnish an adequate point estimate of \(P\). The comparisons of the true \(P\) versus the expected value of the posterior distribution for \(P\) (predicted \(P\)) follow trends similar to those we saw with the confidence intervals. Predicted \(P\) accuracy and precision are very good in almost all cases other than trials run with a high true \(P\), high numbers of pooled tests, and high \(q\). Our results suggest that with larger \(q\), \(s_{e}\), and \(s_{p}\), the expected value will overestimate the true \(P\) when its true value is close to 0 and underestimate \(P\) when its true value is close to 1. We computed the confidence interval width to assess the precision of the posterior distribution for \(P\).
A wider confidence interval can be interpreted as greater uncertainty in the estimate of \(P\). Using more pooled tests and more individuals per pooled test yields narrower confidence interval widths at low population prevalences but wider confidence interval widths at high population prevalences. In addition, the individual and pooled sample data both follow binomial distributions and are therefore at their highest variance around \(P=0.5\) or \(\pi_{q}=0.5\). Thus, as true \(P\) increases, CI width increases to a maximum and then decreases. Overall, these results show that this method is performant in all but extreme sampling conditions.
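For completeness, the experimental sweep described in the Simulation Design can be reproduced, under the same assumptions as the earlier sketches, with a loop of roughly the following shape, reusing the illustrative `simulate_tests` and `posterior_summaries` functions defined above; aggregating `records` by condition then yields the coverage proportions, mean CI widths, and expected values summarised in the figures.

```python
# Sweep the Simulation 2 grid: 21 prevalences x 7 splits of 30 tests
# x 4 (s_e = s_p) settings x 100 trials = 58,800 trials in total.
P_grid = [0.01] + [round(0.05 * k, 2) for k in range(1, 20)] + [0.99]
m_grid = range(0, 31, 5)              # individual tests; pooled tests n = 30 - m
se_sp_grid = [1.0, 0.95, 0.9, 0.8]
q = 3
records = []

for se_sp in se_sp_grid:
    for P_true in P_grid:
        for m in m_grid:
            n = 30 - m
            for _ in range(100):
                y, z = simulate_tests(P_true, m, n, q, se_sp, se_sp)
                expected, lo, hi = posterior_summaries(y, z, m, n, q, se_sp, se_sp)
                records.append({
                    "se_sp": se_sp, "P": P_true, "m": m,
                    "covered": lo <= P_true <= hi,   # true P inside the 95% CI?
                    "ci_width": hi - lo,
                    "expected": expected,
                })
```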
2303.13524
Talking Abortion (Mis)information with ChatGPT on TikTok
In this study, we tested users' perception of accuracy and engagement with TikTok videos in which ChatGPT responded to prompts about "at-home" abortion remedies. The chatbot's responses, though somewhat vague and confusing, nonetheless recommended consulting with health professionals before attempting an "at-home" abortion. We used ChatGPT to create two TikTok video variants - one where users can see ChatGPT explicitly typing back a response, and one where the text response is presented without any notion to the chatbot. We randomly exposed 100 participants to each variant and found that the group of participants unaware of ChatGPT's text synthetization was more inclined to believe the responses were misinformation. Under the same impression, TikTok itself attached misinformation warning labels ("Get the facts about abortion") to all videos after we collected our initial results. We then decided to test the videos again with another set of 50 participants and found that the labels did not affect the perceptions of abortion misinformation except in the case where ChatGPT explicitly responded to a prompt for a lyrical output. We also found that more than 60% of the participants expressed negative or hesitant opinions about chatbots as sources of credible health information.
Filipo Sharevski, Jennifer Vander Loop, Peter Jachim, Amy Devine, Emma Pieroni
2023-02-23T17:35:27Z
http://arxiv.org/abs/2303.13524v1
# Talking Abortion (Mis)information with ChatGPT on TikTok ###### Abstract In this study, we tested users' perception of accuracy and engagement with TikTok videos in which ChatGPT responded to prompts about "at-home" abortion remedies. The chatbot's responses, though somewhat vague and confusing, nonetheless recommended consulting with health professionals before attempting an "at-home" abortion. We used ChatGPT to create two TikTok video variants - one where users can see ChatGPT explicitly typing back a response, and one where the text response is presented without any notion to the chatbot. We randomly exposed 100 participants to each variant and found that the group of participants unaware of ChatGPT's text synthetization was more inclined to believe the responses were misinformation. Under the same impression, TikTok itself attached misinformation warning labels ("_Get the facts about abortion"_) to all videos after we collected our initial results. We then decided to test the videos again with another set of 50 participants and found that the labels did not affect the perceptions of abortion misinformation except in the case where ChatGPT explicitly responded to a prompt for a lyrical output. We also found that more than 60% of the participants expressed negative or hesitant opinions about chatbots as sources of credible health information. ## 1 Introduction Large language modeling of human-like dialog is a reality with several free "chatbots" available for people, researchers, coders, and test cheaters to use and experiment with [10]. Understandably, such a discursive sophistication and sensitivity attracts attention both in the usability and abusibility of these "chatbots." Essentially Large-scale Language Models (LLM), chatbots are touted in their ability to fix bugs in programs [72], give sound financial advice [83], and write impressive prose [43]. But chatbots and LLMs could be triggered to produce malicious semantics for conveying offensive language [58], extract personally identifiable data [15], and inject adversarial instructions to reproduce social biases and reinforce stereotypes [13]. On the social engineering side, chatbots and LLMs have been successfully used to generate fake personas' resumes on popular social media sites [51], write phishing emails and social media posts [18, 67], and produce a honeypot by issuing system commands [47]. As chatbots' objective is to produce persuasive language, they naturally make an attractive fit for an automated supply of misinformation and false narratives [19]. One could plausible generate a "spin" narrative against a person or topic of interest [5], emulate the QAnon-style conspiratorial narratives [14], generate fake news [89], and misuse rhetorical appeals to make a misinformation argument sound and credible [61]. Evidence suggests that people have difficulties distinguishing between chatbots and human-generated misinformation on political topics [37, 41], so it is question of time when the manual "troll farms" will be substituted with fully automated disinformation mills [30, 29]. Chatbot-generated political misinformation certainly warrants close attention [28], but of equal, if not more pressing, importance is chatbot-generated health misinformation. Human-generated health misinformation, in the past, flooded social media _en masse_ with narratives that append a fear of either _undesirable_, _uncontrollable_, and _unknown_ health consequences. 
A prime example is the COVID-19 misinformation "infodemic," seeded in part with previous disinformation about the Ebola and Zika viruses as well false information about the MMR vaccines [60, 63, 20, 64]. Presently, human-generated health misinformation, appending the lack of _desirable_, _known_ and _accessible_ health practices, is increasingly proliferated on social media in regards unproven treatments and "at-home" remedies such as alternative abortion treatments [68]. It is entirely plausible that in near future chatbots might replace humans in generating health misinformation, given the proliferation of chatbots for answering personal healthcare concerns [53] and writing treatments on input diagnoses [32]. Because misleading health information leads to vaccine hesitancy [44], consumption of unproven remedies [75], and attempts of unsafe procedures [35], it is an imperative to explore, first, how chatbots respond to health misinformation prompts, and second, how people respond to the chatbot answers to these prompts. We took upon this imperative and conducted a study on the topic of abortion misinformation with 150 participants. Inspired by the skyrocketing popularity of ChatGPT [56] we queried this chatbot on four distinct prompts of "at-home" abortifacient herbs. We were particularly interested in abortion remedies because social media was abruptly flooded with questionable information about "at-home" attempts to induce miscarriage, following the US Supreme Court decision to strike down the legal right to abortion [79]. We recorded the interaction with ChatGPT and posted the videos on TikTok, as previous research suggests that this platform in particular is a go-to place for social support exchange and questionable abortion information [68, 6]. We used two variants of the ChatGPT responses to each of our four "at home" abortion remedy prompts: (i) one where we recorded ChatGPT actually typing an response; and (ii) one in which the response of ChatGPT is superimposed verbatim over on a image as a static text (both variants are popular ways of formatting TikTok videos [74]). We randomly exposed 50 of the participants to four videos from the first variant and the other 50 to four videos from the second variant, asking their perception of accuracy of the statements in the videos. After we completed the main data collection of our study, we noticed that TikTok labeled all eight videos with a misinformation warning, urging the viewers to "_get the facts about abortion_." Given one in three users ignore the abortion misinformation labels on TikTok [68], we followed up with a second round of data collection with 50 new participants now with an explicit directive by TikTok that the videos might not be factually correct in regards "at-home" abortion remedies. We prompted ChatGPT to respond about abortifacient herbs, to say something about herbs used for abortion, to say the facts about using herbs for abortion, and to write lyrics about these herbs (the prompts and responses are given in the Appendix). The ChatGPT's responses, expectedly, did not contain explicit misinformation though they hinted that herbs could be used for inducing abortions. Each response presented a rather encyclopedic and general answer to the prompts and at urged the users to consult with a medical professional before using any of the abortifacient herbs for inducing abortion on their own. 
The lyrical answer extolled the virtue of a particular abortifacient herb, pointing its use for menstrual ramps, but did not mention its explicit use for abortion (the menstrual cramps benefit is the main justification for recommending the use of this herb as an abortifacient "at-home" remedy [54]). We found that participants who weren't aware that the videos were created by ChatGPT were more inclined to perceive the textual response as misinformation for all for prompts. The participants perceiving the ChatGPT responses as factually inaccurate pointed out that the chatbot omits references on toxicity and dosage therefore spreading "_misinformation by omission_." The impression of misinformation was also justified in the formatting of the responses that, in the words of the participants, "_were biased towards selling you products, not informing you about it_." The misinformation label, perhaps overcautious and algorithmically assigned by TikTok, made little change in the perception of accuracy except for the lyrical prompt where it nudged a higher number of participants to deem the videos as misinformation. To report the findings from our study, we reviewed the prior work on misinformation relative to LLMs, and generative health (mis)information in Section 2. Section 3 provides the broader context of health and abortion misinformation narratives on social media. Section 4 covers the methodological details of our study. Section 5 elaborates how participants assessed and engaged with abortion misinformation, and Section 6 the receptivity of our participants to chatbot generated abortion responses, respectively. We draw on our findings in Section 7 to discuss the implications for "neural" abortion (mis)information as well as the relative content moderation on social media. Finally, Section 8 concludes the paper. ## 2 Synthetic Misinformation ### LLMs and Neural Fake News The synthetic or "neural" fake news, early on, were seen a serious threat to the constrictive discourse online [82, 73]. Highlighting the plausibility of this threat, Zellers et al. showed that a synthetic propaganda is perceived as more believable than human-generated propaganda among human readers [89]. Similarly, Newhouse et al. demonstrated that LLMs are able to generate text with an ideological consistency from any extremist view used an input [1]. The ability to generate such a trolling content _en masse_, thus, is naturally appealing for state sponsored and any other malicious trolls that so far had to manually manufacture fake news, rumors, and conspiracy theories around polarising issues [31]. Goldstein et al. argue that "neural" fake news will (a) drive down the costs of operating any troll farms, allowing for new ones to quickly appear; (b) enable fast scaling and cross-platform testing; and (c) improve the linguistic or cultural inconsistencies innate to the human-generated misinformation [26]. This, in turn, will lower the existing barriers and complicate detection of state sponsored trolling, given that they can fuse the synthetic misinformation with their proven abilities to pose as authentic, culturally competent personas (e.g. the so-called "Jenna Abrams" accounts [85]) or vocal supporters of hashtag activism movements (e.g. BlackToLive in #BlackLivesMatter [77]). An early proof of this concerning scenario is the LLM trained on 4chan - dubbed GPT-4chan - to produce offensive and hateful yet very believable synthetic trolling narratives [52]. 
Compared to the past influence operations [19], the GPT-4chan trolling is not just comparably cruel in sentiment, but dangerously more powerful in volume output [45]. Measuring the toxicity of chatbots' content, Si et al. found that popular chatbots like BlenderBot and TwitterBot [49] are not just prone to providing toxic responses - offensive language that involves hateful or violent content - when fed toxic queries (e.g. commentary from 4chan and Reddit), but non-toxic prompts can trigger such responses too [70]. Such synthetic responses, Schuster et al. show, complicate the detection of online trolling and fake news as they are harder to detect due to the excellent stylometric obfuscation [66]. ### Chatbots and Health (Mis)information Chatbots, facilitated by language modeling, became the go-to way of addressing accessibility issues with traditional in-person healthcare with the proliferation of smartphones and broadband internet access [2, 7]. Healthcare conversational agents provide answers relative to mental health [22], cancer [71], viruses [50], substance abuse [39], and even help to dispel COVID-19 misinformation [86]. A convincing evidence on how the new and powerful chatbots like ChatGPT perform in regards providing general medical advice is still absent, though ChatGPT already showed a good performance when taking the medical licensing exam in the US [23, 38]. The generative capabilities of ChatGPT were sufficient in writing factually correct radiology reports [32], correctly answer cirrhosis and hepatocellular carcinoma questions [87], provide accurate pediatric diagnosis [8], and convincingly arguing for using AI-based chatbots in providing care to patients [17, 36, 40]. Though promising aiding healthcare decision-making, the latest generation of chatbots are yet to be evaluated relative to generating falsehoods, misleading claims, and speculative treatment. Such capabilities for synthetic health misinformation are deliberately restricted in the design and development of these chatbots [73], however, they could be skifully triggered to "hallucinate" i.e. fabricate a credible but incorrect medical advice [64]. Additionally, the ChatGPT and the likes are currently only evaluated by highly experienced healthcare and LLM researchers [41], without any evidence on how ordinary users and patients perceive, incorporate, and act upon the synthesized medical advice. ## 3 Abortion Misinformation So far, ChatGPT has not been formally tested in providing abortion information as a medical advice, though the chatbot could persuasively argue for legalizing abortion in US [28]. This particular capability of ChatGPT draws relevance, perhaps in part, to the polarized abortion discourse online abruptly amplified in the immediate aftermath of the US Supreme Court decision to strike down the constitutional right for abortion [79]. The inability to obtain a legal abortion turned people to search engines and social media to learn how to manage their reproductive decisions and perform safe abortions [69]. Unfortunately, not all information aligned with the National Library of Medicine's description of abortion and recommendations for safe practices [3]. ### Post _Roe vs Wade_ "At-Home" Remedies Unlike the vaccination misinformation narratives, driven by a fear mongering conspiratorial narratives targeting vaccine _hesitancy_[81], the abortion misinformation is driven by a reproductive _resolution_ to try untested "at-home" abortion remedies [76]. 
In the post Roe vs Wade discourse, for example, many questionable "at-home" practices including pills, oils, and herbs for inducing abortion flooded social media, both as claims and as an advertisements in users' feeds [69]. As prior evidence shows that 70.1% of women obtain information regarding abortion from the Internet [42], it is likely that these misleading claims will not just show in many users' feeds, but that some users will pursue the relative treatments. Reports, anecdotally, already show that women have been admitted in emergency rooms seeking critical lifesaving treatment following failed "at-home" attempts to induce abortion [65]. Abortion misinformation online, literature shows, takes many forms and users generally have difficulties discerning inaccuracies in the related alternative treatments [59]. The inability to spot falsehoods relative to the safety, infertility, mental health risk, and legality of abortion [9] is a cause for serious concern as reports indicated that abortion misinformation specifically related to an "abortion reversal pill" increased on Facebook from 20 interactions on June 23 to 3,500 interactions on June 24 2022, the day after the Supreme Court decision to overturn _Roe v. Wade_. [35]. The momentum of concern is even more evident as the Spanish-language abortion misinformation was deliberately designed to galvanize voters in Latino communities across the US, following the Supreme Court ruling [24]. Abortifacient herbs - purportedly providing the ability to induce a spontaneous miscarriage - form the majority of post-_Roe v Wade_ misinformation [11]. The toxicity of abortifacient herbs like has been widely studied, alas, without of an explicit effect in inducing "at-home" abortions. [33]. Most existing studies were done in countries other than the US, where abortion was not legal until recently. Abortion did not become legal in Uruguay until 2012 [46], for example, and a 2003 study found that the Montevideo Poison Centre had 86 cases of ingestion of herbal infusions with abortive intent from 1986 to 1999 [16]. In the United States, misinformation surrounding "herbal abortions" has increased dramatically on social media after the legal abortion was overturned, especially in viral videos on TikTok [80]. ### Social Media Handling Platforms used diverse strategies to mitigate abortion misinformation: YouTube added "context labels" to such abortion content [88], Twitter decided to promote authoritative abor tion information in its Twitter Moments and Events [35], and Meta purportedly blocked questionable abortion treatment advertisements [48]. TikTok also stated it removed and labeled videos with abortion misinformation [34], but many of the questionable home practices aimed to "cause a miscarriage" still appeared in users' personal streams [12]. Debunking of abortion misinformation on TikTok followed up [76], but the slow-in-nature checking and verifying of health-related facts was no match for the rapid spread of videos recommending dangerous abortion remedies. TikTok - deemed the "New Google" for Gen-Z [27] - draws special attention relative to abortion misinformation, pressing reproductive decisions are particularly interesting to the majority of users on this platform. TikTok's status as a platform for social support exchange [6] further exacerbates the immediate danger of abortion remedies as supportive communication adds to "stickiness" and internalization of such content among adolescents and young adults [21]. 
Studies focused on abortion misinformation on TikTok already show that this danger is real as roughly 30% of the users believed in "at-home" remedies' safety and efficacy, despite those being already scientifically debunked [68]. Worse, the explicitly debunking label attached to a misleading abortion video by TikTok about the harms of "at-home" did not help a third of the users in the study to dismiss a video about self-administering abortion as misinformation. ## 4 Study Design ### Research Questions As fully LLM powered chatbots like ChatGPT will inevitably appear in many health-related conversations and online searches [26], it is an imperative to learn how the _users_ perceive, assess and engage with synthesized responses. We took upon this imperative in the context of abortion "at-home" remedies because internalizing the related misinformation has immediate dangers to the well-being of the users, whom abruptly were restricted access to reproductive healthcare by the US Supreme Court decision. Therefore, we set to answer the following research questions in our study: * **RQ1a:**_Assessment_: How do TikTok users assess ChatGPT responses relative to "at-home" abortion remedies prompts in videos explicitly showing the interaction with the chatbot? * **RQ1b:**_Assessment_: How do TikTok users assess ChatGPT responses relative to "at-home" abortion remedies prompts in videos showing only the textual response? * **RQ2:**_Engagement_: What strategies users employ in assessing and responding to "at-home" abortion videos created with ChatGPT on TikTok? * **RQ3:**_Reception_: What is users' general reception of information generated by ChatGPT and language models? ### Sample We obtained an IRB approval for fielding an anonymous, exploratory study where ordinary users directly interacted on TikTok with short videos either showing an interaction with ChatGPT or the chatbot's response to "at-home" abortion remedies' prompts. After the interaction, users answered series of questions, given in the Appendix, relative to the accuracy of the information in the videos, experience with chatbots, and their engagement strategies with misinformation. We sampled TikTok users ages 18 and above in the United States and used Prolific for recruitment. Our participants were allowed to skip any question they were uncomfortable answering, taking around 15 minutes to complete interact with a random selection of the videos and complete the survey. Participants were offered a compensation rate of $3 each. Following a preliminary power analysis and a data consolidation, we ended with an initial sample of 100 participants. The sample's demographic distribution is given in Table 1. After we completed the main data collection of our study, we noticed that TikTok labeled all eight videos with a misinformation warning to "_get the facts about abortion._" According to TikTok's safety policy, a proactive detection program substantiated through fact-checking flags new and evolving claims in regarding the most popular topics of misinformation - vaccines, abortion, and voting [34]. As the abortion misinformation labels on TikTok haven't yielded the anticipated dispelling effect with 30% of users in a previous test [68], we initiated a second round of data collection with 50 new participants to collect their impressions of abortion advice given by ChatGPT, now with an explicit directive by TikTok that this advice might not be factually correct in regards "at-home" remedies. 
Using the same data collection setup, we recruited an additional sample of 50 TikTok users, with a demographic distribution is given in Table 2. \begin{table} \begin{tabular}{|c c c c|} \hline \multicolumn{4}{|c|}{**Gender**} \\ \hline **Female** & **Male** & **Non-cisgender** \\ 70 (70\%) & 24 (24\%) & 6 (6\%) \\ \hline \multicolumn{4}{|c|}{**Age**} \\ \hline **[18-20]** & **[21-30]** & **[31-40]** & **[41-50]** & **[51-60]** & **[61+]** \\ 9 (9\%) & 53 (53\%) & 23 (23\%) & 5 (5\%) & 6 (6\%) & 4 (4\%) \\ \hline \multicolumn{4}{|c|}{**Political leanings**} \\ \hline **Left** & **Moderate** & **Right** & **Apolitical** \\ 58 (58\%) & 25 (25\%) & 12 (12\%) & 5 (5\%) \\ \hline \multicolumn{4}{|c|}{**Highest Level of Education Completed**} \\ \hline **High school** & **College** & **Graduate** \\ 19 (19\%) & 71 (71\%) & 10 (10\%) \\ \hline \end{tabular} \end{table} Table 1: Initial Sample Demographic Distribution \(N=100\) ### Method and Analysis Participants were provided an open ended qualitative survey through Qualtrics that provided a list of questions and a predetermined set of TikTok videos. We used prompts in ChatGPT to generate text regarding herbal abortions in general and pennyroyal in particular. Various herbs are offered as "at-home" abortion remedies with claims that they have "abortificaient" effect i.e. induce miscarriage, such as blue/black cohosh, eastern daisy fleebane, mugworth, parsley, pennyroyal and rue [33]. Three independent members of the research team spent an extensive time prompting ChatGPT with each of the abovementioned abortificaient herbs, but only pennyroyal was the one that came up each time regardless of what prompt text, device (e.g. computer, smartphone), browser, time of the day, or location was used. The prompts and ChatGPT responses we selected to use in our study are provided in the Appendix. We then used the conversational results to create a total of eight short TikTok videos. Four of the videos showed a prompt being entered into ChatGPT and the response being generated on the website as depicted in the screenshot in Figure 0(a). The other four show only the response that was generated through ChatGPT overlaying an image of the herb pennyroyal and did not indicate that the text had been created using a chatbot, as depicted in Figure 0(b). The videos were posted from an innocuous account without an avatar or any history of posting other videos to avoid any confounding effects. Participants were randomly assigned into two groups of 50 each. In the first group, participants were each shown four randomly selected videos explicitly showing the conversational interaction with ChatGPT. The second group of participants were also shown four randomly selected videos that contained only the text that had been generated by ChatGPT. We randomly selected the exposure to the videos in both groups to avoid habituation and question order bias in evaluation of the content. After the initial survey concluded, TikTok added misinformation labels on the videos in the mobile application version of TikTok only, as shown in Figure 0(c), linked to the National Library of Medicine's MedlinePlus webpage on abortion [55]. For the additional data collection we stipulated a use the mobile application which showed them the TikTok misinformation label. We then randomly divided the participants into two groups of 25. The first group viewed the set of four videos showing ChatGPT generating the text and the second group viewed the videos that only showed the overlaying text. 
Two independent researchers coded and analyzed the results using the codebook in [68] and achieving a strong level of inter-coder agreement (Cohen's \(\kappa=.87\)). We utilized a thematic analysis methodology to identify the assessment themes most saliently emerging from the responses in our sample. The themes were summarized to describe the subjective assessment of the ChatGPT's responses, the relative engagement actions taken (e.g. scroll past, fact-check, block, report, reply, like), and the general experience with chatbots. In reporting the results, we utilized as much as possible verbatim quotation of participants' answers, emphasized in "_italics_" and with a reference to the participant as either **PXYZ#** or **[PXYZ#]**, where **P** denotes **participant**, **X** denotes the **number** of the participant in the sample (ordered by the time of participation), **Y** denotes their **gender** identity (**F** - female, **M** - male, **NC** - non-cisgender), **Z** denotes their **political** identity (**L** - left-leaning, **M** - moderate, **R** - right-leaning; **A** - apolitical), and **#** denotes the upper bound of their **age bracket**. For example, **P16FL30** refers to **participant 16**, **female**, **left-leaning**, **age bracket [21-30]**. ## 5 Results: Assessment and Engagement ### Abortificaier herbs The first set of TikTok videos, shown in Appendix 8.1, were created using the prompt "_Abortificaier herbs._" As shown in Table 3, 25 participants were randomly selected to view the explicit ChatGPT video and 26 to view the implicit ChatGPT video. Thirteen participants from the explicit ChatGPT group (52%) stated they "_don't know for sure if the video contains any misinformation because these herbs could make an abortion happen_" **[P24FL30]**, but they "_try not to take anything as fact on TikTok_" **[P31FL40]**. Seventeen of participants in the implicit group (65.38%) were "_not sure if the post is misinformation but [they]] will give them credit that they push the reader to go to their healthcare provider_" **[P82FL30]**. Nine participants from the explicit ChatGPT group (36%) stated that "_it doesn't look like there is misinformation about it_" **[P2FL40]** added they felt this way "_because of the examples provided_" **[P7FL20]**. Three (12%) participants from the explicit ChatGPT group the video "_contains misinformation, giving medical advice about abortion, is very dangerous_" **[P45FM40]** and four (of participants from the implicit ChatGPT group 15.38%) state they "_believe it has misinformation since it is from a random account and not a doctor/scienti \begin{table} \begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Gender**} \\ \hline **Female** & **Male** & **Non-cisgender** \\ 35 (70\%) & 13 (26\%) & 2 (4\%) \\ \hline \multicolumn{3}{|c|}{**Age**} \\ \hline **[18-20]** & **[21-30]** & **[31-40]** & **[41-50]** & **[51-60]** & **[61+]** \\ 8 (16\%) & 25 (50\%) & 11 (22\%) & 3 (6\%) & 2 (4\%) & 1 (2\%) \\ \hline \multicolumn{3}{|c|}{**Political leanings**} \\ \hline **Left** & **Moderate** & **Right** & **Apolitical** \\ 30 (60\%) & 12 (24\%) & 5 (10\%) & 3 (4\%) \\ \hline \multicolumn{3}{|c|}{**Highest Level of Education Completed**} \\ \hline **High school** & **College** & **Graduate** \\ 5 (10\%) & 39 (78\%) & 6 (12\%) \\ \hline \end{tabular} \end{table} Table 2: Additional Sample Demographic Distribution \(N=50\) who would know this information_" [**P83MM30**]. 
While the majority of the explicit ChatGPT group participants were inclined to scroll past the first video, as shown in Table 4, 16% or four of them were keen on fact-checking ChatGPT's response, saying they "_would take the time to research the info [they] found before taking any action_" [**P3FL30**]. Two other participants were keen on replying and reporting the explicit video because "_giving medical advice about abortion on TikTok is very dangerous_" [**P45FM40**]. Participants in the implicit ChatGPT group were more likely to take action on the first video as seven of them (26.92%) pointed they would perform an "_educated research and if severe risks were at stake I may have been inclined to comment or even possibly provide a reputable link counter this information_" [**P92FL40**]. Two participants said they would block the video account and another two were keen to "_flag it for dangerous information_" [**P90FM20**]. One participant stated they would comment because "_this seems irrational or misinformation and it should not be spread on TikTok giving girls/women ideas of a dangerous act_" [**P70FM40**] and one liked the video because "_it is addressing what the herbs are while also educating people about how harmful they are_" [**P62FL30**]. When the first videos were labeled as misinformation by TikTok and shown as such to the follow-up set of participants, there was a slight increase in perceiving the content as misinformation in both groups, as shown in Table 5. While seven participants in the the explicit ChatGPT group (28%) said they were "_not too knowledgeable on those herbs, but it seems accurate since it is ChatGPT_" [**P10SFM30**], four (16%) said they "_don't believe AI is the most reliable as it pulls from the information of the internet, which is full of misinformation; So, yes, in a way I believe this post could contain misinformation_" [**P103FM20**]. Thirteen of the participants in the implicit ChatGPT group (52%) said they "_cannot say with certainty that this post contains misinformation, but [they] would be very suspicious of it because of the lack of sources provided and the general language used_" [**P13SFL30**] and six (24%) said "_this post contains misinformation because those herbs are not scientifically proven to induce safe abortion, and the video is exposing audiences to unsafe methods_" [**P127FL20**]. Though the majority of the participants were keen on scrolling pass the video ignoring of the misinformation label, as indicated in Table 6, three participants in the explicit ChatGPT group (12%) said they "_would want to look up more information about abortifacienti \begin{table} \begin{tabular}{|c c c c c c|} \hline \multicolumn{6}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\ 19 (76\%) & 4 (16\%) & 0 (0\%) & 1 (4\%) & 1 (4\%) & 0 (0\%) \\ \hline \multicolumn{6}{|c|}{**Implicit ChatGPT Group [viewed: 26 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\ 13 (50\%) & 7 (26.92\%) & 2 (7.69\%) & 2 (7.69\%) & 1 (3.85\%) & 1 (3.85\%) \\ \hline \end{tabular} \end{table} Table 4: What action would you take on Video #1? 
Figure 1: TikTok videos used in the study: Screenshots \begin{table} \begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\ \hline **Yes** & **No** & **Unsure** \\ 3 (12\%) & 9 (36\%) & 13 (52\%) \\ \hline \multicolumn{3}{|c|}{**Implicit ChatGPT Group [viewed: 26 participants]**} \\ \hline **Yes** & **No** & **Unsure** \\ 4 (15.38\%) & 6 (23.08\%) & 16 (61.54\%) \\ \hline \end{tabular} \end{table} Table 3: Is Video #1 Misinformation? search or scientific studies on their effect on the human body_" [**P102FL30**]. Two participants (8%) said they "_would respond by commenting that ChatGPT isn't to be considered a medical or scientific source_" [**P115ML30**] and one participant said they would "_maybe block the account_" [**P111MM30**]. Five participants in the implicit ChatGPT group (20%) said they "_would go on Google to verify if there are herbs that cause abortion_" [**P147FL30**]. The remaining participants were evenly split on their actions. Two said they "_would report the video and move on_" [**P127FL20**], two said they "_would leave a comment to inform the person who posted it and others who might watch the video about how harmful the contents of the post are_" [**P146FM20**], and two said they "_would probably like it_" [**P141FL40**]. ### Tell me about herbs for abortion The second set of TikTok videos, shown in Appendix 8.2, were created using the prompt "_Tell me about herbs for abortion_". As indicated in Table 7, most participants in both groups felt that these videos were not misinformation. Seventeen participants in the explicit ChatGPT group (65.38%) stated the information in the video seems to be factual because "_it at least encourages consulting trained medical professionals rather than the internet, which I think is the correct response in that type of situation_" [**P23FM40**]. Eleven of the participants in the implicit ChatGPT group (45.83%) felt the video was not misinformation, adding that "_it's just somebody posting some FY_" [**P97FM40**]. Eight of participants in the explicit ChatGPT group (30.77%) stated they "_can't elaborate on the validity of the chat bot due to it just being good at writing and not realizing what it is actually writing_" [**P44ML20**]. Nine of the implicit ChatGPT group participants (37.5%) also were unsure, but felt the information "_would be better coming from someone that was an expert_" [**P66FM20**]. Only one participant from the explicit ChatGPT group thought the video "_looks as though it contains misinformation regarding abortions in this form_" [**P34MR30**]. Four of the participants in the implicit ChatGPT group (16.67%) thought the video contained misinformation because "_this TikTok is an advertisement for a product and is therefore biased toward selling vs informing_" [**P92FL40**]. Again, the majority of the participants in both groups, as shown in Table 8, stated they "_would honestly probably scroll past it, although the functionality of ChatGPT is impressive_" [**P4MM30**]. Four of the participants in the explicit ChatGPT group (15.38%) said they would "_like the post because answers the question, however it also explains how dangerous it would be to use_." [**P41FL30**]. Three of participants in the explicit ChatGPT group (11.54%) said they "_would probably read the comments to see what others said_" [**P36FL30**], one said they "_would share it_" [**P22FM40**], and one said they "_would respond with skepticism and confusion_" [**P35FL30**]. 
Five of participants in the implicit ChatGPT group (20.83%) said they "_would research for themselves what herbs can be used for abortion_" [**P61FL30**], one said "_this is a good video that I would feel comfortable sharing_" [**P67FL40**] and one said "_if this post came on my feed I would flag it_" [**P81ML40**]. The misinformation label did not cause any noticeable effect in the perception of the videos for both groups, as indicated in Table 9. Nine participants in the explicit ChatGPT group (36%) said the video "_seems like helpful information without any misinformation_" [**P105FM30**] but two (8%) said they felt it was misinformation because "_some of the herbs don't cause abortion_" [**P12AFM40**]. Eleven of the participants in the implicit ChatGPT group said they "_don't necessarily think they are spreading misinformation, but [they] don't feel they are giving enough truth in their post either_" \begin{table} \begin{tabular}{|c c c c c c|} \hline \hline \multicolumn{4}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\ 18 (72\%) & 3 (12\%) & 1 (4\%) & 0 (0\%) & 2 (8\%) & 1 (4\%) \\ \hline \multicolumn{4}{|c|}{**Implicit ChatGPT Group [viewed: 25 participants]**} \\ \hline \hline \multicolumn{4}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\ 14 (56\%) & 5 (20\%) & 0 (0\%) & 2 (8\%) & 2 (8\%) & 2 (8\%) \\ \hline \end{tabular} \end{table} Table 6: What action would you take on Video #1? [Labeled] \begin{table} \begin{tabular}{|c c c c c c|} \hline \hline \multicolumn{4}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Share** & **Report** & **Reply** & **Like** \\ 17 (65.38\%) & 3 (11.54\%) & 1 (3.85\%) & 0 (0\%) & 1 (3.85\%) & 4 (15.38\%) \\ \hline \multicolumn{4}{|c|}{**Implicit ChatGPT Group [viewed: 24 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Share** & **Report** & **Reply** & **Like** \\ 16 (66.66\%) & 5 (20.83\%) & 1 (4.17\%) & 1 (4.17\%) & 1 (4.17\%) & 0 (0\%) \\ \hline \hline \end{tabular} \end{table} Table 8: What action would you take on Video #2? \begin{table} \begin{tabular}{|c c c c c|} \hline \hline \multicolumn{4}{|c|}{**Explicit ChatGPT Group [viewed: 26 participants]**} \\ \hline **Yes** & **No** & **Unsure** \\ 1 (3.85\%) & 17 (65.38\%) & 8 (30.77\%) \\ \hline **Implicit ChatGPT Group [viewed: 24 participants]** & \\ \hline \hline \multicolumn{4}{|c|}{**Implicit ChatGPT Group [viewed: 24 participants]**} \\ \hline \hline \multicolumn{4}{|c|}{**Explicit ChatGPT Group [viewed: 26 participants]**} \\ \hline **Yes** & **No** & **Unsure** \\ 4 (16.67\%) & 11 (45.83\%) & 9 (37.5\%) \\ \hline \end{tabular} \end{table} Table 7: Is Video #2 Misinformation? 
\begin{table} \begin{tabular}{|c c c c c c|} \hline \hline \multicolumn{4}{|c|}{**Explicit ChatGPT Group [viewed: 26 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Share** & **Report** & **Reply** & **Like** \\ 17 (65.38\%) & 3 (11.54\%) & 1 (3.85\%) & 0 (0\%) & 1 (3.85\%) & 4 (15.38\%) \\ \hline \multicolumn{4}{|c|}{**Implicit ChatGPT Group [viewed: 24 participants]**} \\ \hline **Scroll Past** & **Fact-check** & **Share** & **Report** & **Reply** & **Like** \\ 16 (66.66\%) & 5 (20.83\%) & 1 (4.17\%) & 1 (4.17\%) & 1 (4.17\%) & 0 (0\%) \\ \hline \hline \end{tabular} \end{table} Table 8: What action would you take on Video #2? **[P128FL50]**. Ten participants in this group (40%) said "_the post does not include any misinformation and is accurate_" **[P147FL30]**, and four (16%) said they "_believe this post does contain misinformation because herbs are not a very common form of abortion healthcare_" **[P138FL30]**. Watching the video and scrolling past was still the main engagement strategy for both groups, as shown in Table 10, though three participants in the explicit ChatGPT group (12%) said they "_would take in the information and read the comments and likely look up more about those types of herbs and their practice in the past for use in at home abortions_" **[P102FL30]**. Another three (12%) said they "_would respond by commenting that ChatGPT first paragraph should be taken seriously, since the use of such herbs could be fatal or very damaging_" **[P115ML30]**. One participant actually pointed to the misinformation label attached by TikTok, saying they "_like that the TikTok stresses how dangerous they are though, so I might like the post if I saw it_" **[P123FL30]**. In the implicit ChatGPT group, four of the participants (16%) said they "_would like the post since it does not include any misinformation and is accurate_" **[P147FL30]**. ### Tell me all of the facts about pennyroyal The third set of videos, shown in Appendix 8.3, were created using the prompt "_Tell me all of the facts about pennyroyal_." As shown in Table 11, 16 (66.67%) of participants in the explicit ChatGPT group stated they "_do not know enough about Pennyroyal to know if it is misinformation but I do know not to trust what ChatGPT says_" **[P9FL30]**. Six participants in this group (25%) stated "_it is unlikely the information is misinformation as it appears to have pulled up a definition from the internet_" **[P19ML30]** and two (8.33%) "_stated that feel that leaving information out of the description is a form of misinformation and it should have been included for someone to have a full picture of the facts_" **[P13FL30]**. The implicit ChatGPT group participants were mostly split between saying they "_have no idea if this is misinformation_" **[P8SFMNR]** and believing "_the information is probably correct list the potential risks associated with the herb_" **[P52NCL30]**. The remaining four participants in this group (16%) thought "_this post contains misinformation, as people sometimes just post information they know from their grandmas for example, or from someone else who doesn't have any studies in life, and that can make people feel confused_" **[P63FM30]**. About half of the participants in each group shown in Table 12 said they would watch and scroll past the videos. Seven of the participants in the explicit ChatGPT group (29.17%) said they "_would do more research about this herb because it sounds interesting and helpful_" **[P24FL30]**. 
Two participants said they would reply to "_comment that pennyroyal should not be ingested... ever_" **[P18FL50]**. The remaining participant said they "_may attempt to report it as misinformation_" **[P13FL30]**. In the implicit ChatGPT group, eight participants (32%) said they "_would go to a legitimate resource on herbs and their medicinal effects if I had an interest in the information here_" **[P91NCL60]** and two said they "_would block the account to prevent any more posts from appearing on my for you page again_" **[P82FL30]**. One participant was keen on "_reporting it_" **[P74FM30]**, and one said they would "_maybe like it so content like it could come on my feed_" **[P52NCL30]**.

\begin{table}
\begin{tabular}{|c c c c c c|}
\hline
\multicolumn{6}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
18 (72\%) & 3 (12\%) & 0 (0\%) & 0 (0\%) & 3 (12\%) & 1 (4\%) \\
\hline
\multicolumn{6}{|c|}{**Implicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
18 (72\%) & 1 (4\%) & 1 (4\%) & 1 (4\%) & 0 (0\%) & 4 (16\%) \\
\hline
\end{tabular}
\end{table}
Table 10: What action would you take on Video #2? [Labeled]

\begin{table}
\begin{tabular}{|c c c|}
\hline
\multicolumn{3}{|c|}{**Explicit ChatGPT Group [viewed: 24 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
2 (8.33\%) & 6 (25\%) & 16 (66.67\%) \\
\hline
\multicolumn{3}{|c|}{**Implicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
4 (16\%) & 10 (40\%) & 11 (44\%) \\
\hline
\end{tabular}
\end{table}
Table 11: Is Video #3 Misinformation?

\begin{table}
\begin{tabular}{|c c c c c c|}
\hline
\multicolumn{6}{|c|}{**Explicit ChatGPT Group [viewed: 24 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
14 (58.33\%) & 7 (29.17\%) & 0 (0\%) & 1 (4.17\%) & 2 (8.33\%) & 0 (0\%) \\
\hline
\multicolumn{6}{|c|}{**Implicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
13 (52\%) & 8 (32\%) & 2 (8\%) & 1 (4\%) & 0 (0\%) & 1 (4\%) \\
\hline
\end{tabular}
\end{table}
Table 12: What action would you take on Video #3?

The misinformation label for the third set of videos also did not have any effect on the perceived accuracy of both the explicit and implicit ChatGPT responses, as shown in Table 13. In fact, more than half of the participants (56%) in the explicit ChatGPT group thought that the video was not misinformation because the "_information was very detailed and specific which leads me to believe it's likely accurate_" [**P101FL20**].
Participant **P108MR30** added that "_it seems like this info came straight from Wikipedia though so my guess is it's likely accurate._" Nine of the participants in this group (36%) were uncertain as they "_know little about pennryrovals_" [**P122MM30**] and two (8%) said "_the post contains misinformation with regards to the fact that pennryoyal is toxic regardless of dosage and adverse effects such as vomiting and dizziness can occur after ingestion of doses less than 10 milliliters_" [**P119NCL30**]. Fifteen of the participants from the implicit ChatGPT group (60%) said they are "_not sure if it contains misinformation but it sounds more knowledgeable than the others so I would look into it; The more detail there are, the more convincing it is_" [**P138FL30**]. Nine participants in this group (36%) said "_the idea of misinformation being in this is extremely low all thanks to the fact that the creator made it seem like they did their research_" [**P136MR30**].

As shown in Table 14, beyond the usual "scroll past," five participants in the explicit ChatGPT group (20%) said they "_would probably look up the herb and read about it_" [**P113FM40**]. Two participants (8%) said they "_might like the post, but I wouldn't share it because I would be worried about someone trying to do something unsafe with the information If they were desperate_" [**P123FL30**], and one said they "_would respond by not believing the information_" [**P121FL40**]. In the implicit ChatGPT group, similarly, nine of the participants (36%) said they "_would read it and maybe look up that herb to see if that is true_" [**P143FL30**], and three (12%) said they "_would like the post_" [**P146FM20**].

### Write a rap about pennyroyal

The fourth set of videos, shown in Appendix 8.4, were created using the prompt "_Write a rap about pennyroyal_". As shown in Table 15, most participants were unsure if this video was misinformation. Sixteen of the participants in the explicit ChatGPT group (61.54%) stated they "_can't tell if the post has any misinformation as it is a rap and most songs aren't made for educational purposes_" [**P35FL30**]. Participant **P23FM40** added that "_some of the information in it might be untrue or misleading, but since it is presented in the sort of goofy rap form, I'm not sure this is meant to be taken seriously_." One of the participants, **P9FL30**, pointed out that "_ChatGPT is not a trusted source_". Nine participants in the explicit ChatGPT group (34.62%) said "_it doesn't appear there is any misinformation; It looked like it was just a made up rap song_" [**P21FR50**], and one said they "_don't believe it technically contains misinformation as poetry isn't exactly making claims and could be fictional_" [**P38MR30**]. In the implicit ChatGPT group, 12 participants (52.17%) said they "_don't know if this TikTok has any misinformation_" [**P63FM30**] and three (13.04%) said "_the information didn't express the dangers of the herb strongly enough, so it is a misinformation through omission_" [**P65FL61+**]. Eight participants (34.78%) said "_it has very bad grammar and etiquette which leads me to believe it contains misinformation_" [**P70FM40**] and commented that the "_lyrics sound childish, does not seem accurate._" [**P84FL30**]. As indicated in Table 16, 17 participants in the explicit ChatGPT group (65.38%) said they "_would scroll past this_" [**P31FL40**] and six (23.08%) "_would be intrigued to validate the claims and research it externally_" [**P38MR30**].
One of the participants said they "_would report the video for encouraging the use of an herb without providing any actual support for why it would work_" [**P25FM30**]. One participant said they would "_probably comment on it and send it to a friend because it's interesting that AI generates that_" [**P34MR30**], and one said they "_would like the video_" [**P21FR50**]. In the implicit ChatGPT group, 16 participants (69.57%) said they "_would not trust something in a rhyme, as it seems like a spell casting, its creepy_" [**P78FL20**]. Four participants (17.39%) said they "_would comment on how they know what they are implying by the caption even if the poem doesn't explicitly state it and it can harm someone; I'd make a Facebook post telling people not to listen to this; It can harm_" [**P100NC40**]. Two participants said they "_probably would be interested in finding out more about the herb and start [their] own research_" [**P57FA30**] and one said they "_would probably report this video because it is most certainly a scam_" **[P76MM30]**.

\begin{table}
\begin{tabular}{|c c c|}
\hline
\multicolumn{3}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
2 (8\%) & 14 (56\%) & 9 (36\%) \\
\hline
\multicolumn{3}{|c|}{**Implicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
1 (4\%) & 9 (36\%) & 15 (60\%) \\
\hline
\end{tabular}
\end{table}
Table 13: Is Video #3 Misinformation? [Labeled]

\begin{table}
\begin{tabular}{|c c c c c c|}
\hline
\multicolumn{6}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
17 (68\%) & 5 (20\%) & 0 (0\%) & 0 (0\%) & 1 (4\%) & 2 (8\%) \\
\hline
\multicolumn{6}{|c|}{**Implicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
13 (52\%) & 9 (36\%) & 0 (0\%) & 0 (0\%) & 0 (0\%) & 3 (12\%) \\
\hline
\end{tabular}
\end{table}
Table 14: What action would you take on Video #3? [Labeled]

\begin{table}
\begin{tabular}{|c c c|}
\hline
\multicolumn{3}{|c|}{**Explicit ChatGPT Group [viewed: 26 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
1 (3.84\%) & 9 (34.62\%) & 16 (61.54\%) \\
\hline
\multicolumn{3}{|c|}{**Implicit ChatGPT Group [viewed: 23 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
8 (34.78\%) & 3 (13.04\%) & 12 (52.17\%) \\
\hline
\end{tabular}
\end{table}
Table 15: Is Video #4 Misinformation?

The label paired with the mobile-only access of the videos made more participants in the explicit ChatGPT group believe the rap lyrics were misinformation, as shown in Table 17. Their impression was that "_the rap could be dangerous since it encourages its use even if it's toxic, without recommending talking to a health care professional_" **[P105FM30]**, feeling that "_it would make [them] smile a bit, but it can be seen as misinformation purely because the rap makes it seem like some crazy cool herb that'll make you feel great and it doesn't warn against taking too much_" **[P118FL30]**. In the implicit ChatGPT group, 11 participants (44%) said they "_would be suspicious of this post; I don't like that it's written like an ad because it seems like it's trying to sell me something rather than to inform me; It could contain misinformation for that reason_" **[P140FL20]**.
Eight of the participants in this group (32%) said "_the post doesn't make much sense, so I wouldn't trust it; I'd assume that there would be misinformation in the post due to it not making sense_" **[P148FL40]**. Six participants (24%) said "_the rhyme didn't seem to contain any real info so no misinformation to my eye_" **[P126ML40]**.

As shown in Table 18, most of the explicit ChatGPT group participants had mixed feelings about ChatGPT creating lyrical content, commenting that "_it's kind of funny to see AI make a rap about abortion_" **[P108MR30]**. The remaining two participants were either going to "_research then block if false_" **[P106MM40]** or said they "_would respond by not believing the information_" **[P121FL40]**. Nineteen of the implicit ChatGPT group (76%) said they "_think this post is odd_" **[P121FL40]**. The remaining two participants in this group said they "_would do my own research via Google_" **[P130FA60]** and comment on the video to "_encourage others to do their research before trying pennyroyal_" **[P146FM20]**.

## 6 Results: Reception

After viewing the videos, participants in both groups in each study were asked about their prior experience with chatbots or language models and their reception of them. Most participants indicated they had very limited experience with chatbots other than using them for online customer service, but had heard about them in the news. We categorized participants' opinions of chatbots and language models based on the tone of their response, namely positive, negative, hesitant, or absent of an opinion. Participants in the explicit ChatGPT group generally had a more positive view of chatbots, as indicated in Table 19, and lauded their "_general knowledgeability_" **[P118FL30]**. Fifteen participants overall had a negative opinion of chatbots, including noting that they "_don't trust bots to give them information and would want to research from credible sources before trusting a bot_" **[P40FL30]**. These participants thought that "_chatbots or language models can be dangerous and are not reliable resources for information because they seemingly provide people with a sense of entertainment_" **[P13FL30]**. The participants with a hesitant response pointed to the credibility of chatbots' responses, stating "_they can be useful for a quick glimpse into information, but should not be taken as an absolute; They facilitate a start in research, but are not and should not be taken as scientific proof_" **[P19ML30]**. Participant **P119NCL30** commented that "_language models are very intriguing and can be useful as a resource, but I am wary of the information that it returns when given certain prompts; Chatbots can output fabricated information such as citing nonexistent studies, which can be very dangerous if taken at face value and spread to other people_".
Another participant, **P125MR30**, felt chatbots "_are very interesting and a big part of the future but can also be very dangerous because at the same time it is still programmed and could be set to give out any kind of information whether it be misinformation or not._"

\begin{table}
\begin{tabular}{|c c c c c c|}
\hline
\multicolumn{6}{|c|}{**Explicit ChatGPT Group [viewed: 26 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
17 (65.38\%) & 6 (23.08\%) & 0 (0\%) & 1 (3.85\%) & 1 (3.85\%) & 1 (3.85\%) \\
\hline
\multicolumn{6}{|c|}{**Implicit ChatGPT Group [viewed: 23 participants]**} \\
\hline
**Scroll Past** & **Fact-check** & **Block** & **Report** & **Reply** & **Like** \\
16 (69.57\%) & 2 (8.7\%) & 0 (0\%) & 1 (4.35\%) & 4 (17.39\%) & 0 (0\%) \\
\hline
\end{tabular}
\end{table}
Table 16: What action would you take on Video #4?

\begin{table}
\begin{tabular}{|c c c|}
\hline
\multicolumn{3}{|c|}{**Explicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
7 (28\%) & 7 (28\%) & 11 (44\%) \\
\hline
\multicolumn{3}{|c|}{**Implicit ChatGPT Group [viewed: 25 participants]**} \\
\hline
**Yes** & **No** & **Unsure** \\
8 (32\%) & 6 (24\%) & 11 (44\%) \\
\hline
\end{tabular}
\end{table}
Table 17: Is Video #4 Misinformation? [Labeled]

Some participants indicated they felt "_indifferent with no preference for or against chatbots_" [**P112FL30**] and some were reserved but acknowledged "_how crazy advanced AI is that we now have chatbots that are able to assist_" [**P37FM30**]. Participants in the implicit ChatGPT group were unaware that the videos they viewed had text created by a language model and were much more likely to respond that they were unsure of their opinion on chatbots and language models. As indicated in Table 19, many of the participants in this group based their opinion of chatbots on the reliability of the information produced, noting that they "_believe that they are the future of improving the lives of everyone and improving productivity exponentially_" [**P69ML30**]. Some of the participants said that chatbots "_feel cheap and show a lack of effort from the creator; They are also impersonal_" [**P80FM40**], commenting that they "_think chatbots are overblown and hyped up way too much; They still have humans shadow working on them constantly to make sure they preform as expected, feed them data and correct them_" [**P95FL40**]. Participants also had a hesitant opinion of chatbots as "_they can be a great source of information, or a dangerous source of misinformation_" [**P127FL20**].

## 7 Discussion

### Implications

Aware of the obvious threat of causing physical harm by synthesizing speculative health responses, ChatGPT was designed to opine with a default recommendation for consulting a health professional before attempting to consume the "at-home abortion" herbs. We did not attempt to circumvent these settings, but it is becoming increasingly easy to do so, as evidence shows that "do anything now" exploits of ChatGPT prompts can produce intentionally manipulative and harmful responses [57]. The problems don't stop here, as our results suggest that even the default, socially responsible answers from ChatGPT are perceived as incomplete, lacking credibility, dangerous, unsafe, and scientifically unproven.
Generating plausibly sounding but incorrect information, even when trained on factual data, is an acknowledged and open issue for LLMs and chatbots [13], and hopefully the accuracy of the generated information will improve with time. But another open issue is the users' confidence in generative health and abortion information. Despite the good performance chatbots show on medical licensing tests or when writing basic diagnoses [23, 38], our findings show that a considerable number of users feel the chatbot simply pulls information off the Internet about "at-home" abortion herbs and already know not to trust what ChatGPT says. True, one could argue that we did not expose our participants to direct interaction with ChatGPT, but they nonetheless evaluated the accuracy of the chatbot's responses and noticed that the chatbot avoids doubling down on the harms and attempts to respond in a balanced if somewhat grammatically incorrect manner. It is reassuring that a non-negligible portion of the entire sample pointed out they would fact-check and do their own search about what is a safe abortion practice, now that the legal rights have been revoked in the United States. Some of our participants noted they would comment on the elements of ChatGPT's response they believed are truthful and the elements they believed are misleading, to help other TikTok users. This is a commendable engagement strategy and reinforces the previous evidence suggesting that TikTok facilitates social support exchange on thorny societal and health issues [6, 74].

While the precarious labeling of our posts as misinformation by TikTok might be seen as overly intrusive, it might be a result of an increased algorithmic moderation in response to the criticisms that the platform pumps dangerous abortion content to young users [12]. Like before, the misinformation labels mattered little and the participants largely ignored them [68], except in the case where ChatGPT explicitly produces a rap about the abortifacient herb pennyroyal. We are of the opinion that the perceived oddity of the lyrical response, coupled with the smartphone interface, might have been the main confounding effect, but we are nonetheless content that the participants distanced themselves from the videos.

How TikTok and other social media platforms will handle generative health information remains to be seen, especially when it comes to moderating health and abortion misinformation. Currently, all mainstream platforms either apply "context labels" to human-generated abortion content (YouTube and TikTok [34, 88]), promote authoritative abortion information (Twitter [35]) or simply block questionable abortion treatment advertisements (Meta [48]). Using chatbots and language models in strictly medical settings certainly involves a much lower volume of generated content and is likely supervised by health professionals who have to sign off on treatments or diagnoses, but on a social media scale this is impractical. Abusing chatbots for synthesized health misinformation, coupled with the capabilities for generative propaganda and rumors, as in the case of GPT-4chan, is a real threat [52]. In our opinion, this threat won't be entirely mitigated by the users' early distrust in chatbots signaled in our study, and "neural" fakes have the potential to spur a social media infodemic comparable to the chaos of COVID-19 misinformation [31].
### Ethical Considerations

While we debriefed our participants about the dangers of "at-home" abortion remedies, we acknowledge that there might be a potential risk of repeated exposure to abortion misinformation, i.e. an "implied truth effect" [62], as each participant saw four of the videos. To mitigate this risk we explicitly pointed to the debunked information for the related "at-home" remedies and their associated harms. Another risk that stems from our study is the possibility that participants might attempt or promote prompting ChatGPT and other chatbots about many different types of health information and untested medical practices. We warned that the study does not promote nor advocate for using generative recommendations of treatments or diagnoses without consulting a healthcare professional. We are aware that studies with generative health and abortion (mis)information might risk oversimplification or misinterpretation of the findings; therefore, we deliberately avoided providing definitive numbers beyond the participants' self-reported age, gender, and political leanings in our reporting. Abortion misinformation could have dangerous consequences and is regularly used in polarized political discourses revolving around the legality and availability of safe abortion treatments. For example, #OpJane is the latest online operation launched against the state of Texas for enacting the anti-abortion Bill 8 that allows an "abortion bounty" for anyone who will investigate and report abortion [25]. Because the operation calls for "fighting misinformation with enough plausible and difficult to disprove misinformation" to make any data these bounty hunters gather useless [4], we exercise caution in exploiting ChatGPT or other chatbots with "do anything now" jailbreak prompts to facilitate this call.

### Limitations

Our research was limited in its scope to U.S. TikTok users and the state of the generative capabilities of ChatGPT regarding at-home abortion remedies at the time of the study. Inasmuch as we attempted to use generic abortion information prompts as far as possible, other prompts to ChatGPT or to another chatbot and LLM might produce different responses than the ones in our study. A limitation also comes from the sampling method and the use of Prolific as a participant recruitment provider, as other users and other samples might provide results that differ from the ones we obtained. We did not measure the efficacy of users' assessments and engagement strategies for human-generated abortion misinformation content, nor did we ask how users dealt with other abortion misinformation on other social media platforms. Short-form videos are a relatively new way of persuasive communication appealing to younger users, and generative use of images for creating "neural memes" or abortion misinformation "deep fakes" might provoke different responses for a wider population of users [78]. Therefore, we are careful to avoid any predictive use of our findings due to the malleability of "neural" multimedia misinformation synthesis.

## 8 Conclusion

Self-induced terminations of unwanted pregnancies in a post-_Roe v Wade_ America will undoubtedly drive many interested users to try chatbots for advice regarding "at-home" abortion. Whether the generative responses will propagate on social media platforms verbatim or will be modified by humans to fit particular narratives is yet to be seen, but the barriers are already low for any questionable content on abortion to reach wide audiences.
Our findings are reassuring in that users on TikTok are able to distance themselves from generative responses and assume more of a "better-safe-than-sorry" position when engaging with them. The misinformation moderation strategies employed by the platforms and the availability of scientifically debunked abortion claims certainly affect this position, and we hope that our study adds to the effort to prevent harmful outcomes of abortion misinformation.
2301.12043
Parsimonious System Identification from Fragmented Quantized Measurements
Quantization is the process of mapping an input signal from an infinite continuous set to a countable set with a finite number of elements. It is a non-linear irreversible process, which makes the traditional methods of system identification no longer applicable. In this work, we propose a method for parsimonious linear time invariant system identification when only quantized observations, discerned from noisy data, are available. More formally, given a priori information on the system, represented by a compact set containing the poles of the system, and quantized realizations, our algorithm aims at identifying the least order system that is compatible with the available information. The proposed approach takes also into account that the available data can be subject to fragmentation. Our proposed algorithm relies on an ADMM approach to solve a $\ell_{p},(0<p<1),$ quasi-norm objective problem. Numerical results highlight the performance of the proposed approach when compared to the $\ell_{1}$ minimization in terms of the sparsity of the induced solution.
Omar M. Sleem, Constantino M. Lagoa
2023-01-28T01:31:44Z
http://arxiv.org/abs/2301.12043v1
# Parsimonious System Identification from Fragmented Quantized Measurements ###### Abstract Quantization is the process of mapping an input signal from an infinite continuous set to a countable set with a finite number of elements. It is a non-linear irreversible process, which makes the traditional methods of system identification no longer applicable. In this work, we propose a method for parsimonious linear time invariant system identification when only quantized observations, discerned from noisy data, are available. More formally, given a priori information on the system, represented by a compact set containing the poles of the system, and quantized realizations, our algorithm aims at identifying the least order system that is compatible with the available information. The proposed approach takes also into account that the available data can be subject to fragmentation. Our proposed algorithm relies on an ADMM approach to solve a \(\ell_{p},(0<p<1)\), quasi-norm objective problem. Numerical results highlight the performance of the proposed approach when compared to the \(\ell_{1}\) minimization in terms of the sparsity of the induced solution. System identification, Sparsity, Quantization, ADMM ## I Introduction ### _Motivation_ Quantization is the division of a quantity into a discrete number of small parts, often assumed to be integral multiple of a common quantity [1, 2]. A classical example of quantization by rounding off, for the application of estimating densities of histograms, was analyzed in [3]. Since the processing of signals, i.e. speech and image, requires a digital environment, quantization plays an important role in bridging the analog and digital worlds [4]. On one hand, quantization led to a new research area in control theory called network controlled systems (NCS) [5]. NCS deals with the idea of controlling a process when the input and output signals are transmitted via a communication channel. On the other hand, it revealed the incompetence of the classical theory of system identification in considering quantized measurements [5]. When a signal is subject to quantization, the quantization noise can no longer be modeled as a filtered white (zero mean and independent over time) noise and is signal dependent. Hence, in [6, 7] and references therein, the traditional theory of system identification was suggested to be modified to tackle the fact that the measurements are subject to quantization. Moreover, from [8] (section 10.1), the classical identification procedures are not suitable for robust identification, when the signal is subject to quantization, because they identify a set of parameters of a fixed mathematical structure, where a fixed system order must be assumed. Inspired by this, various works -which will be discussed in the next section in more detail-explored the problem of system identification given quantized realizations. However, in this paper we aim to present a new approach to the problem of (Linear Time Invariant) LTI system identification from quantized outputs. This approach allows for the use of a priori information on the system and fragmented measurements of the output. In addition, our approach aims to recover the least order system that is compatible with the data by minimizing an \(\ell_{p}\), \((0<p<1)\), quasi-norm objective. The paper is organized as follows; in the remaining of the introduction, we provide a comprehensive discussion of the previous related work and our contribution. 
Section II introduces the notations that are used throughout the paper. In section III, we thoroughly describe the system model used. The parsimonious system identification problem is formally provided in section IV. The proposed (Alternating Direction Method of Multipliers) ADMM algorithm based on \(\ell_{p}\) quasi-norm approximation is described in section V. We validate our approach with an extensive suite of numerical simulations in section VI. Finally, the paper is concluded in section VII. ### _Related work_ The problem of simple representation of signals using quantization dates back to the 1940's and is one of the main threads of information theory [9]. However, rigorous analysis did not begin until the 1980's. In [10, 11], considering digital feedback control systems, the authors proposed a way in which one can specify system structures that alleviate the adverse effects of quantization. The works in [12, 13] demonstrate that quantization can induce a chaotic behavior in digital feedback systems. The results in [14, 15] are recognized as a quantum leap because the author was able to analyze the behavior of control systems in detail. The circumstances under which a discrete unstable LTI system can be stabilized, by choosing feedback control that depends on the quantized measurements, are studied. In [16], the authors proposed a control design methodology, assuming a quantizer with variable sensitivity along with system state, that stabilizes LTI control systems with quantized measurements. In [17], the coarsest quantizer that stabilizes a single input LTI system is shown to be a logarithmic one and can be obtained by solving a linear quadratic regulator problem. Abundant other works investigated the problem of the stabilization of NCS in different situations, e.g., [18, 19, 20]. Despite that ample research activity in the stabilization and state estimation, quantization in system identification problems was still not properly pondered [21]. In [22], the authors studied the effect of quantization on I/O data used for system identification in a controlled plant whose parameters may change during the operation. They derived the optimal quantization scheme and showed that it is coarse near the origin of the signals and dense at a distance from it. This result is opposite to the case of stabilization in [17] and reveals duality between system identification and stabilization. Similar properties of the optimal quantizer were concluded in [23], where the author considered a least square error objective function -for parameter estimation- subject to a constraint on the number of subsections of the quantized signals or the expectation of the optimal code length for either high or low resolution. In [24], the problem of system identification using uniformly quantized realizations was considered, where, the proposed formulation is a least square minimization of the difference equation errors over all time samples with the system parameters as optimization variables. Regardless of the high accuracy in the estimation of the unknown information in the I/O data, the proposed method stills suffers the drawback of high computational complexity and noise neglection. The work in [25] aimed to solve these drawbacks by exploiting statistical properties instead of deterministic treatment. In particular, an identification method for a linear system based on quantized measurements was derived. 
Using a traditional equi-spaced quantizer, an instrumental level identification approach was proposed to enhance the estimation accuracy. The authors of [26] took this approach a step further by considering a variation of the equi-spaced quantizer. They showed that using a generalized noise shaping code improves the accuracy of the estimates. Another line of research considers identification using a general class of quantized observations that allows the segmentation of the output range into a collection of subsets that may have unequal fixed lengths or even be design variables, as in quantization design for communication systems and NCS [27]. This helps in understanding the capability of systems with limited sensor information, which in turn bridges the gap between resource limitations and identification complexity in sensor and communication networks. In particular, the work in [28] considered the identification of a gain system by exploiting the information from a multiple-threshold sensor and the convex combination of these thresholds. The results were extended to the case of a noisy communication channel through which the sensor output information is transmitted. The authors prove that their estimator is asymptotically efficient, achieving the Cramer-Rao lower bound. Furthermore, the results were extended to finite impulse response and transfer function models for periodic bounded input signals. In [29], the authors focused on relationships between the identification space and time complexities. They showed that the asymptotic efficiency of empirical measure based algorithms yields a tight bound on identification accuracy. This in turn aids in deriving a separation principle for the time and space complexities. The gained insights aim to provide a feasible approach for optimally utilizing communication bandwidth resources to improve identification accuracy. The role of dithering noise at the sensor (adding artificial noise to the observed signal before quantization in order to mitigate the effects of quantization) was studied in [30]. The authors asserted that tailored dithering noise can considerably simplify the derivation of optimal estimators at the expense of a decreased signal-to-noise ratio.

### _Contributions_

The different methods reviewed in the previous part aim to either stabilize the system, find an optimal quantization scheme or solve a system identification problem. In this work, we focus on the latter problem where, to the best of our knowledge, none of the proposed methods address the problem of identifying the system of least order that is compatible with the collected information. The problem of identifying systems using collected measurements can also involve several other challenges, including: 1) one can be faced with fragmented data due to the misplacement of sensors or external disturbances that can possibly make the collected data unreliable; 2) the ability to handle prior information on the system, e.g., constraints on the locations of the poles. In this paper, we aim to develop an algorithm that tackles the challenges mentioned above. More precisely, we consider a system in which the a priori information can be described by constraining the locations of the system's poles to be in a known compact set. Then, by exploiting "simple representations" of transfer functions, we develop an efficient algorithm that aims at finding the lowest order system that is compatible with fragmented quantized output measurements.
This algorithm is based on an ADMM approach to the problem of \(\ell_{p}\) quasi-norm optimization. To validate our analysis, we consider two different numerical examples: 1) system identification with a randomly generated data set; 2) identification with actual data collected from the motion of a flexible robotic arm [31]. The numerical results in both examples show that our method is competitive against the \(\ell_{1}\) convex relaxation objective in terms of both the detected system order and the accuracy of the recovered realizations. A preliminary version of part of this work was presented in [32]. This journal version includes a generalized formulation where a non-continuous input data stream can be handled (input data is composed of independent chunks). Moreover, we do not assume a continuous measurement of the chunks' outputs from the quantizer, i.e., the output data is subject to fragmentation. Unlike [32], we assume a generalized quantizer whose input is prone to noise and show that it plays an important role in the sparsity of the induced solution. We provide a new experiment to demonstrate the superior performance of our method in a more practical scenario.

## II Notations

Unless otherwise specified, we denote scalars with non-boldface letters, e.g., \(x\), vectors with lowercase boldface letters, e.g., \(\mathbf{x}\), with \(i\)-th entry \(x_{i}\), while matrices are in uppercase, e.g., \(\mathbf{X}\), with \((i,j)\)-th entry \(x_{i,j}\). \(\mathbf{X}_{j,:}\) specifies the \(j\)-th row of the matrix \(\mathbf{X}\). \(\mathbb{R}\) and \(\mathbb{C}\) are the sets of real and complex numbers respectively. For a vector \(\mathbf{x}\) and matrix \(\mathbf{X}\), \(|.|\) is the element-wise absolute value of its argument. However, for a set \(\mathcal{X}\), the \(|.|\) operator stands for the cardinality of the set. We use \(\preceq\) for element-wise inequality of vectors. For any constant \(c>0\), we define \(\mathcal{I}_{c}\stackrel{{\Delta}}{{=}}[-c,c]\). For a positive integer \(n\), we let \([n]\stackrel{{\Delta}}{{=}}\{1,\ldots,n\}\). The \(p\)-th norm of a vector \(\mathbf{x}\in\mathbb{R}^{n}\) is defined such that \[\left\|\mathbf{x}\right\|_{p}\stackrel{{\Delta}}{{=}}(\sum_{i=1}^{n}|x_{i}|^{p})^{\frac{1}{p}}. \tag{1}\] It is important to note that when \(0<p<1\), the expression in (1) is a quasi-norm satisfying the same axioms as a norm except the triangle inequality, making it a non-convex function. For a complex number \(x\), we use \(\bar{x}\) to denote the complex conjugate of that number. We let \(\mathbf{1}\) be a vector with all entries equal to 1, \(\mathbf{0}\) be a vector of zeros and \(\mathbb{1}_{\mathcal{X}}(.)\) be the indicator function of the set \(\mathcal{X}\), i.e., it evaluates to zero if its argument belongs to the set \(\mathcal{X}\) and is \(+\infty\) otherwise. The compact set formed by the union of the interior and boundary of the unit circle, i.e., the unit disk, centered around the origin of the complex plane is denoted by \(\mathbb{D}\). Finally, for a matrix \(\mathbf{X}\), we let \(\text{vec}(\mathbf{X})\) be the vector formed by stacking its rows.

## III System description

We consider the system shown in figure 1, where a discrete time input \(u(k)\) on a finite time horizon is applied to a linear time invariant (LTI) system \(G\). In control systems, the technology used to sense the process variable (output of the controlled process) often introduces noise, e.g., noise in electrical signals is due to interference from other electrical sources.
We let a measurement noise \(n(k)\in\mathcal{I}_{\epsilon}\) be added to the system output \(y(k)\). The noisy output \(\hat{y}(k)\) is then measured by a sensor that quantizes its input to discrete samples \(\mathbf{z}(k)\). In the next part, we describe each component in figure 1 thoroughly.

### _LTI system \(G\)_

We consider a stable finite dimensional LTI system \(G\) with poles that are contained in the compact set \(\mathbb{D}\). The transfer function of the system, in the \(z\)-domain, can then be represented as \[H(z)=r+\sum_{q\in\mathbb{D}}\frac{a_{q}}{z-q}, \tag{2}\] with \(r\in\mathbb{R}\) and \(a_{q}\in\mathbb{C}\) being the coefficient that is associated with pole \(q\). For systems with repeated poles, an approximation by systems with transfer functions as in (2) can be made with an arbitrarily small precision level.

### _Input data_

The system \(G\) models the relationship between the input \(u(k)\) and the output signal \(y(k)\). Besides boundedness, we impose no constraints on the values of the samples of \(u(k)\). The stability of the system \(G\) ensures that the output \(y(k)\) is bounded as long as \(u(k)\) is bounded as well.

Fig. 1: System model.

As mentioned before, and without loss of generality, we assume discrete time data with a sampling time of 1 unit. Moreover, we do not require continuous measurement of data. More precisely, input data is divided into multiple sets where continuous measurements are available. We refer to these sets as "chunks". The upper part of figure 2 provides an input stream example, where \(T\) different input data chunks, with size \(n_{i}\) for chunk \(i\in[T]\), are presented. For chunk \(i\), the input sample at instance \(k_{j}\) is denoted by \(u(k_{j}^{(i)})\) where \(j\in[n_{i}]\). Separations between different chunks as well as their sizes are arbitrary.

### _Quantizer and output data_

We assume a general quantizer, \(Q\), that consists of the set of intervals \(\mathcal{S}=\{\mathcal{S}_{i},i\in\mathcal{I}\}\), with the index set \(\mathcal{I}\) as ordinarily a collection of consecutive integers beginning with 1, together with a set of quantization levels \(L=\{L_{i},i\in\mathcal{I}\}\), so that the overall quantizer is defined by \(Q(x)=L_{i}\) for \(x\in\mathcal{S}_{i}\). The sets \(\mathcal{S}_{i}\) partition the real line. That is, the cells are disjoint and exhaustive [9]. Without loss of generality, we assume a symmetric quantizer, with \(L_{|\mathcal{I}|}=-L_{1}\) as the saturation level of the quantizer. Figure 3 provides an example of a uniform symmetric quantizer with \(2^{m}\) levels, \(m=3\), a saturation value of 1 and a quantization step \(\Delta=\frac{1}{2^{m-1}-0.5}=0.2857\). A cosine signal \(S(t)\) is applied to the quantizer to produce the discrete signal \(\bar{S}(t)\). In addition, we do not assume that all of the output data stream is available within a chunk, i.e., the data is subject to fragmentation. This arises in cases when intermittent measurements are collected from the sensor or a failure in communication occurs. The second part of Figure 2 provides an output example of a uniform \(2^{m}\)-level sensor, with \(m=2\) and a saturation level of 1.5, where the output for chunk \(i\in[T]\) at instance \(k_{j}\) is denoted by \(z(k_{j}^{(i)})\) with \(j\in[n_{i}]\).

Fig. 2: Input/output data example. The circle indicates that the data is missing at that instance.
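As an illustration of this quantizer model, the following is a minimal Python sketch (our own illustrative code, not part of the paper) of the symmetric uniform quantizer from the Figure 3 example, i.e. \(2^{m}\) levels with \(m=3\), saturation at 1 and step \(\Delta=1/(2^{m-1}-0.5)\):

```python
import numpy as np

def uniform_symmetric_quantizer(x, m=3, saturation=1.0):
    """Map x to the nearest of 2**m equally spaced levels on
    [-saturation, saturation]; inputs beyond the range saturate."""
    n_levels = 2 ** m
    delta = saturation / (2 ** (m - 1) - 0.5)            # 0.2857 for m = 3, saturation = 1
    levels = -saturation + delta * np.arange(n_levels)   # symmetric: last level = -first level
    idx = np.clip(np.round((np.asarray(x, float) - levels[0]) / delta), 0, n_levels - 1)
    return levels[idx.astype(int)]

# Quantizing a sampled cosine, as in the Figure 3 illustration.
t = np.linspace(0.0, 2.0 * np.pi, 50)
s_bar = uniform_symmetric_quantizer(np.cos(t))
```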
As mentioned in the previous section, the input data stream (and correspondingly the output) chunks' separations are arbitrary and hence, we assume that the data from different chunks is independent. This, along with the time invariance assumption of the system, makes it reasonable to assume that the data chunks' starting instances are the same, i.e., \(k_{1}^{(i)}=1\) for all \(i\in[T]\). For ease of notation, we drop the subscript and let \(u^{(i)}(k)\) and \(z^{(i)}(k)\) represent the input and output samples respectively of chunk \(i\in[T]\) where \(k\in[n_{i}]\).

## IV Problem statement

Given input/output data \(u^{(i)}(k)\) and \(z^{(i)}(k)\), we aim to reconstruct the least order system that is compatible with the input/output information and a priori assumptions on the system. More formally, the problem we aim to address can be stated as follows

**Problem**.: _Given_

* _Set_ \(\mathbb{D}\) _that contains the poles of the LTI system_ \(G\)_._
* _Input data chunks_ \(u^{(i)}(k)\)_,_ \(k\in[n_{i}]\)_,_ \(i\in[T]\)_, which are applied to the system_ \(G\)_._
* _A range_ \(\mathcal{I}_{\epsilon}\) _which includes the measurement noise_ \(n^{(i)}(k)\)_,_ \(k\in[n_{i}]\) _and_ \(i\in[T]\)_._
* _Measurements of the fragmented sensor output realizations_ \(z^{(i)}(k)\) _for_ \(k\in\mathcal{K}_{i}\subseteq[n_{i}]\)_._

_find the most parsimonious system that is compatible with the a priori assumptions and a posteriori data mentioned above._

Fig. 3: Sensor operation example.

**Remark**.: _The formulation above assumes only the following a priori information: 1) the system is stable and 2) the noise is bounded in \(\mathcal{I}_{\epsilon}\). However, any other a priori information on the system \(G\) that can be translated to constraints on the position of the poles (such as settling time) is compatible with the approach presented in this paper._

### _Parsimonious identification as a block sparsification problem_

From the definition of linear systems, the output at instance \(k\in[n_{i}]\) within chunk \(i\in[T]\), \(y^{(i)}(k)\), can be decomposed as, \[y^{(i)}(k)=y^{(i)}_{\text{zi}}(k)+y^{(i)}_{\text{zs}}(k), \tag{3}\] where, \(y^{(i)}_{\text{zi}}(k)\) is the zero input response at instance \(k\) of chunk \(i\), i.e., the response due to the initial conditions of the system before the input is applied, while \(y^{(i)}_{\text{zs}}(k)\) is the zero state response. From [33], the zero input response can be written as, \[y^{(i)}_{\text{zi}}(k)=\sum_{q\in\mathbb{D}}b^{(i)}_{q}q^{k-1},\quad\forall k\in[n_{i}],\quad\forall i\in[T], \tag{4}\] such that, similar to (2), \(b^{(i)}_{q}\in\mathbb{C}\) is the coefficient that is associated to pole \(q\) for chunk \(i\). The zero state response is obtained by convolving the input sequence with the system's impulse response, \[y^{(i)}_{\text{zs}}(k)=\sum_{m=0}^{k}u^{(i)}(m)h(k-m),\quad\forall k\in[n_{i}],\quad\forall i\in[T], \tag{5}\] where \(h(k)\stackrel{{\Delta}}{{=}}\mathcal{Z}_{z}^{-1}[H](k)\) is the system's impulse response and \(\mathcal{Z}_{z}^{-1}[H](k)\) is the inverse \(z\)-transform of \(H(z)\) with index \(k\).
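To make the decomposition (3)-(5) concrete, here is a minimal Python sketch (ours, with illustrative names and 0-based indexing, so sample \(k\) of the paper corresponds to array entry \(k-1\)); the impulse response \(h\) is supplied as a precomputed array:

```python
import numpy as np

def chunk_response(u, h, poles, b):
    """y = y_zi + y_zs as in (3): the zero-input term (4) plus the
    convolution (5) of the input u with the impulse response h."""
    n = len(u)
    ks = np.arange(n)                                                      # 0-based time index
    y_zi = (b[None, :] * poles[None, :] ** ks[:, None]).sum(axis=1).real   # (4)
    y_zs = np.convolve(u, h)[:n]                                           # (5), truncated to the chunk
    return y_zi + y_zs
```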
By taking the inverse \(z\)-transform of (2), the impulse response can be easily found to be \[h(k)=\delta(k)r+\sum_{q\in\mathbb{D}}a_{q}q^{k-1}\text{step}(k-1), \tag{6}\] where \(\delta(k)\) is the dirac delta functional and \(\text{step}(\cdot)\) is the step function defined as, \[\text{step}(k)=\begin{cases}1&\text{if }k\geq 0\\ 0&\text{if }k<0\end{cases}.\] Since system complexity and order are always related with the number of poles used to describe the system, we aim to reconstruct the system and the associated noise realization \(n(k)\) for each sample, given only the quantized realizations \(z(k)\), that can be depicted by the least number of poles. First, we let \(\Upsilon:\mathbb{D}\rightarrow\mathbb{C}^{T+1}\) be the mapping from every pole \(q\) to the corresponding coefficients \(a_{q}\) and \(b_{q}^{(i)}\), i.e., \(\Upsilon(q)=\begin{bmatrix}a_{q}&b_{q}^{(1)}&\ldots&b_{q}^{(T)}\end{bmatrix}^{\top}\). The problem mentioned earlier can then be formulated such that, for all \(k\in\mathcal{K}_{i}\), \(i\in[T]\) and \(q\in\mathbb{D}\), we solve; \[\min_{a_{q},b_{q}^{(i)},r,n^{(i)}(k)}\] Cardinality \[\{q\in\mathbb{D}:\Upsilon(q)\neq\mathbf{0}\},\] (7a) s.t. \[y^{(i)}(k)=y_{\mathrm{zi}}^{(i)}(k)+y_{\mathrm{zs}}^{(i)}(k), \tag{7b}\] \[y_{\mathrm{zi}}^{(i)}(k)=\sum_{q\in\mathbb{D}}b_{q}^{(i)}q^{k-1},\] (7c) \[y_{\mathrm{zs}}^{(i)}(k)=\sum_{m=0}^{k}u^{(i)}(m)h(k-m),\] (7d) \[h(k)=\delta(k)r+\sum_{q\in\mathbb{D}}a_{q}q^{k-1}\text{step}(k-1),\] (7e) \[\hat{y}^{(i)}(k)=y^{(i)}(k)+n^{(i)}(k),\] (7f) \[z^{(i)}(k)\!=\!Q(\hat{y}^{(i)}(k)),\] (7g) \[n^{(i)}(k)\in\mathcal{I}_{\epsilon},\] (7h) \[a_{q}=\bar{a}_{\bar{q}},\quad b_{q}^{(i)}=\bar{b}_{\bar{q}}^{(i )}. \tag{7i}\] Constraint (7i) implies that the coefficients that are associated with complex conjugate poles have to be complex conjugate as well. ## V Proposed solution Theoretically, we aim to solve the problem in (7). However, this is not feasible because the unit circle contains an infinite number of poles which makes the computational complexity of the problem intractable. We aim to implement an approximation of the above problem which is based on using a grid of the unit circle of size \(n\). The denser the grid, the more accurate the approximation is to the original problem. However, a trade-off could exist as it increases the problem's computational complexity. First, we define the vector \(\mathbf{q}^{\top}\!=\![q_{1},\ldots q_{n}]\), which is composed of complex conjugates and real poles resulted from the gridding effect, the vector of the associated zero state coefficients \(\mathbf{a}^{\top}\!\!=\!\![a_{q_{1}},\ldots a_{q_{n}}]\) and the matrix of zero input coefficients \(\mathbf{B}\in\mathbb{C}^{T\times n}\), where \(\mathbf{B}_{i,:=}\left[b_{q_{1}}^{(i)},\ldots b_{q_{n}}^{(i)}\right]\). We also let \(\mathbf{n}_{i}\in\mathbb{R}^{|\mathcal{K}_{i}|}\) be the vector of noise realizations \(n^{(i)}(k)\) for \(k\in\mathcal{K}_{i}\) with chunk \(i\in[T]\). Second, we aim to equalize the energy contribution of all the poles and hence, we let the scaling factor \(\boldsymbol{\alpha}\in\mathbb{R}^{n}\) be defined as, \[\alpha_{m}=\frac{1-|q_{m}|^{2}}{1-|q_{m}|^{2N+2}}\quad\forall m\in[n]. \tag{8}\] The scaling factor \(\boldsymbol{\alpha}\) aims to make the Hankel matrix formed by the system's impulse response has a nuclear norm equal to 1. For more information on \(\boldsymbol{\alpha}\) and its proper choice, the interested reader is referred to [34]. 
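A possible way of constructing the pole grid and the scaling factor in (8) is sketched below; the concentric-circle grid layout and the horizon parameter \(N\) chosen here are our own illustrative assumptions and are not prescribed by the paper (see [34] for the proper choice of \(\boldsymbol{\alpha}\)):

```python
import numpy as np

def unit_disk_grid(n_radii=4, n_angles=36):
    """An illustrative grid of the unit disk: points on concentric circles.
    Radii stay strictly below 1 so that (8) remains well defined, and the full
    sweep of angles automatically includes real poles and conjugate pairs."""
    radii = np.linspace(0.2, 0.95, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    return (radii[:, None] * np.exp(1j * angles[None, :])).ravel()

def scaling_factors(poles, N):
    """alpha_m from (8): (1 - |q_m|^2) / (1 - |q_m|^(2N+2))."""
    mod2 = np.abs(poles) ** 2
    return (1.0 - mod2) / (1.0 - mod2 ** (N + 1))

q = unit_disk_grid()               # 4 x 36 = 144 candidate poles
alpha = scaling_factors(q, N=49)   # N chosen here only for illustration
```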
A good approximation for the problem in (7) can then be defined such that, for all \(k\in\mathcal{K}_{i}\), \(i\in[T]\) and \(j\in[n]\), we aim to solve; \[\min_{\mathbf{a},\mathbf{B},r,\mathbf{n}_{i},\mathbf{d}}\left\| \mathbf{d}\right\|_{0},\] (9a) s.t. \[y^{(i)}(k)=y^{(i)}_{\text{zi}}(k)+y^{(i)}_{\text{zs}}(k), \tag{9b}\] \[y^{(i)}_{\text{zi}}(k)=\sum_{j\in[n]}\alpha_{j}b^{(i)}_{q_{j}}q^{ k-1}_{j},\] (9c) \[y^{(i)}_{\text{zs}}(k)=\sum_{m=0}^{k}u^{(i)}(m)h(k-m),\] (9d) \[h(k)=\delta(k)r+\sum_{q\in\mathbb{D}}a_{q}q^{k-1}\text{step}(k-1),\] (9e) \[\hat{y}^{(i)}(k)=y^{(i)}(k)+n^{(i)}(k),\] (9f) \[z^{(i)}(k)\!=\!Q(\hat{y}^{(i)}(k)),\] (9g) \[n^{(i)}(k)\in\mathcal{I}_{\epsilon},\] (9h) \[a_{q_{j}}=\bar{a}_{\bar{q}_{j}},\quad b_{q_{j}}=\bar{b}_{\bar{q }_{j}},\] (9i) \[|\mathbf{a}|\preceq\mathbf{d},\quad|\mathbf{B}_{i,:}|\preceq \mathbf{d}. \tag{9j}\] The auxiliary variable \(\mathbf{d}\in\mathbb{R}^{n}_{+}\) ensures block sparsity of the zero state and zero input coefficients, i.e., \(\mathbf{a}\) and \(\mathbf{B}\). A proper choice of the vector \(\boldsymbol{\alpha}\), defined in (8), and the use of (9a) and (9j) allow the identification of the system with the least number of poles, i.e., least order system. However, the \(\ell_{0}\) pseudo-norm is an NP hard problem and hence, using notions of sparsity [35], the objective function is relaxed using the \(\ell_{p}(0<p<1)\) quasi-norm, i.e., \(\left\|\mathbf{d}\right\|_{0}\) in (9a) is replaced with \(\left\|\mathbf{d}\right\|_{p}^{p}\) defined as in (1). For notation simplicity, we define the vector \(\mathbf{w}\in\mathbb{C}^{1+n(T+1)+\sum_{i\in[T]}|\mathcal{K}_{i}|}\), which is the concatenation of the variables \(r\), \(\mathbf{a},\text{vec}(\mathbf{B})\) and \(\mathbf{n}_{i}\) for \(i\in[T]\). Let the set \(\mathcal{D}\subseteq\mathbb{C}^{1+n(T+1)+\sum_{i\in[T]}|\mathcal{K}_{i}|} \times\mathbb{R}_{+}^{n}\) as the set of doubles \((\mathbf{w},\mathbf{d})\) where constraints (9b) to (9j) are satisfied. Hence, the problem in (9), after the objective function relaxation, will have the compact representation in the form; \[\min_{\mathbf{w},\mathbf{d}} \left\|\mathbf{d}\right\|_{p}^{p},\] (10a) s.t. \[\mathbf{w},\mathbf{d}\in\mathcal{D}. \tag{10b}\] As discussed, we aim to recover the lowest order system and hence, we consider the case when \(0<p<1\), which lead to a non-convex objective in (10). In our anaylysis, we consider an ADMM approach that utilizes the structure of the problem in order to divide the optimization over the variables via iteratively solving simpler sub-problems. Starting with the epi-graph form of (10) through introducing the auxiliary variable \(\mathbf{t}\in\mathbb{R}^{n}\), where, \[\min_{\mathbf{w},\mathbf{d},\mathbf{t}} \mathbf{1}^{\top}\mathbf{t},\] (11) s.t. \[t_{i}\geq|d_{i}|^{p},\quad i\in[n],\] \[\mathbf{w},\mathbf{d}\in\mathcal{D}.\] Let the non-convex set \(\mathcal{X}\subset\mathbb{R}^{2}\) be the epigraph of the scalar function \(|d|^{p}\), i.e., \(\mathcal{X}=\{(d,t)\in\mathbb{R}^{2}:t\geq|d|^{p}\}\). Then, (11) can be cast as \[\min_{\mathbf{w},\mathbf{d},\mathbf{t}} \sum_{i\in[n]}\mathbbm{1}_{\mathcal{X}}(d_{i},t_{i})+\mathbf{1}^{ \top}\mathbf{t},\] (12) s.t. 
\[\mathbf{w},\mathbf{d}\in\mathcal{D}.\] In order to write (12) in an ADMM form, we introduce the variables \(\mathbf{s}\in\mathbb{C}^{1+n(T+1)+\sum_{i\in[T]}|\mathcal{K}_{i}|}\), \(\mathbf{f}\) and \(\mathbf{z}\in\mathbb{R}^{n}\), and hence, an equivalent ADMM formulation can be then given by: \[\min_{\mathbf{w},\mathbf{d},\mathbf{t},\mathbf{s},\mathbf{f}, \mathbf{z}} \sum_{i\in[n]}\mathbbm{1}_{\mathcal{X}}(d_{i},t_{i})+g_{\mathcal{ D}}(\mathbf{s},\mathbf{f})+\mathbf{1}^{\top}\mathbf{z},\] (13) s.t. \[\mathbf{w}=\mathbf{s}: \boldsymbol{\lambda}_{1},\] \[\mathbf{d}=\mathbf{f}: \boldsymbol{\lambda}_{2},\] \[\mathbf{t}=\mathbf{z}: \boldsymbol{\theta}.\] The dual variables associated with the constraints \(\mathbf{w}=\mathbf{s}\), \(\mathbf{d}=\mathbf{f}\) and \(\mathbf{t}=\mathbf{z}\) are \(\boldsymbol{\lambda}_{1}\), \(\boldsymbol{\lambda}_{2}\) and \(\boldsymbol{\theta}\), respectively. Hence, the Lagrangian function corresponding to (13) augmented with a quadratic penalty on the violation of the equality constraints with penalty parameter \(\rho>0\), is given by: \[\mathcal{L}_{\rho}(\mathbf{d},\mathbf{t},\mathbf{s},\mathbf{f}, \mathbf{w},\mathbf{z},\boldsymbol{\lambda}_{1},\boldsymbol{\lambda}_{2}, \boldsymbol{\theta})=\sum_{i\in[n]}\mathbb{1}_{\mathcal{X}}(d_{i},t_{i})+g_{ \mathcal{D}}(\mathbf{s},\mathbf{f})+\] \[\mathbf{1}^{\top}\mathbf{z}+\boldsymbol{\lambda}_{1}^{\top}( \mathbf{w}-\mathbf{s})+\boldsymbol{\lambda}_{2}^{\top}(\mathbf{d}-\mathbf{f}) +\boldsymbol{\theta}^{\top}(\mathbf{t}-\mathbf{z})+\frac{\rho}{2}(\|\mathbf{ w}-\mathbf{s}\|_{2}^{2}\] \[+\|\mathbf{d}-\mathbf{f}\|_{2}^{2}+\|\mathbf{t}-\mathbf{z}\|_{2 }^{2}). \tag{14}\] Considering the three block variables \(\mathbf{Q}_{1}=(\mathbf{d},\mathbf{t})\), \(\mathbf{Q}_{2}=(\mathbf{s},\mathbf{f})\) and \(\mathbf{Q}_{3}=(\mathbf{w},\mathbf{z})\), ADMM [36] consists of the following iterations, where \(l\) is the iteration number: \[\mathbf{Q}_{1}^{\{l+1\}}\!\!\!=\!\!\!\!\mathop{\rm argmin}_{ \mathbf{d},\mathbf{t}}\mathcal{L}_{\rho}(\mathbf{Q}_{1},\mathbf{Q}_{2}^{\{l\}},\mathbf{Q}_{3}^{\{l\}}\!\!,\!\!\boldsymbol{\lambda}_{1}^{\{l\}}\!\!,\!\! \boldsymbol{\lambda}_{2}^{\{l\}}\!\!,\!\!\boldsymbol{\theta}^{\{l\}}), \tag{15}\] \[\mathbf{Q}_{2}^{\{l+1\}}\!\!\!=\!\!\!\!\mathop{\rm argmin}_{ \mathbf{s},\mathbf{f}}\mathcal{L}_{\rho}(\mathbf{Q}_{1}^{\{l+1\}}\!\!\!,\!\! \mathbf{Q}_{2}\!\!,\!\!\mathbf{Q}_{3}^{\{l\}}\!\!,\!\!\boldsymbol{\lambda}_{1 }^{\{l\}}\!\!,\!\!\boldsymbol{\lambda}_{2}^{\{l\}}\!\!,\!\!\boldsymbol{\theta}^ {\{l\}}),\] (16) \[\mathbf{Q}_{3}^{\{l+1\}}\!\!\!=\!\!\!\!\mathop{\rm argmin}_{ \mathbf{w},\mathbf{z}}\mathcal{L}_{\rho}(\mathbf{Q}_{1}^{\{l+1\}}\mathbf{Q}_{ 2}^{\{l+1\}}\!\!\!\mathbf{Q}_{3}\!\!,\!\!\boldsymbol{\lambda}_{1}^{\{l\}}\!\!,\!\!\boldsymbol{\lambda}_{2}^{\{l\}}\!\!,\!\!\boldsymbol{\theta}^{\{l\}}),\] (17) \[\boldsymbol{\lambda}_{1}^{\{l+1\}}\!\!\!=\!\!\!\boldsymbol{ \lambda}_{1}^{\{l\}}+\rho(\mathbf{w}^{\{l+1\}}-\mathbf{s}^{\{l+1\}}),\] (18) \[\boldsymbol{\lambda}_{2}^{\{l+1\}}\!\!\!=\!\!\!\boldsymbol{ \lambda}_{2}^{\{l\}}+\rho(\mathbf{d}^{\{l+1\}}-\mathbf{f}^{\{l+1\}}),\] (19) \[\boldsymbol{\theta}_{1}^{\{l+1\}}\!\!\!=\!\!\!\boldsymbol{ \theta}_{1}^{\{l\}}+\rho(\mathbf{t}^{\{l+1\}}-\mathbf{z}^{\{l+1\}}). 
\tag{20}\]

### (\(\mathbf{d}\), \(\mathbf{t}\)) _update_

From the expression of the augmented Lagrangian in (14) and by completing the square, the update of \(\mathbf{d}\) and \(\mathbf{t}\) in (15) can be found by solving the following optimization, \[\begin{split}\min_{\mathbf{d},\mathbf{t}}&\|\mathbf{d}-(\mathbf{f}^{\{l\}}-\frac{\boldsymbol{\lambda}_{2}^{\{l\}}}{\rho})\|_{2}^{2}+\|\mathbf{t}-(\mathbf{z}^{\{l\}}-\frac{\boldsymbol{\theta}^{\{l\}}}{\rho})\|_{2}^{2},\\ \text{s.t.}&(d_{i},t_{i})\in\mathcal{X}\quad\forall i\in[n].\end{split} \tag{21}\] It can be realized that the problem in (21) enjoys a separable structure and hence is amenable to decentralization. However, it is a non-convex problem due to the nature of the set \(\mathcal{X}\). In [37], the authors considered a similar problem and it was shown that the element-wise optimization of (21) boils down to finding the roots, \(a_{i}^{*}\), of the scalar degree-\(2v\) polynomial \[a_{i}^{2v}+\frac{u}{v}\left(a_{i}^{2u}-\tilde{t}_{i}a_{i}^{u}\right)-\tilde{x}_{i}a_{i}^{v}, \tag{22}\] where \(\tilde{x}_{i}=f_{i}^{\{l\}}-\frac{\lambda_{i,2}^{\{l\}}}{\rho}\), \(\tilde{t}_{i}=z_{i}^{\{l\}}-\frac{\theta_{i}^{\{l\}}}{\rho}\) and \(u,v\in\mathbb{Z}_{+}\) such that \(p=u/v\). They showed, in Proposition 1, that the entry-wise solution of (21) is given by \((d_{i}^{*},t_{i}^{*})=\left((a_{i}^{*})^{v},(a_{i}^{*})^{u}\right)\) for all \(i\in[n]\).

### (\(\mathbf{s}\), \(\mathbf{f}\)) _update_

By fixing all the remaining variables, the (\(\mathbf{s}\), \(\mathbf{f}\)) update in (16) can be easily shown to be the solution of the following optimization problem: \[\begin{split}\min_{\mathbf{s},\mathbf{f}}&\|\mathbf{s}-(\mathbf{w}^{\{l\}}+\frac{\boldsymbol{\lambda}_{1}^{\{l\}}}{\rho})\|_{2}^{2}+\|\mathbf{f}-(\mathbf{d}^{\{l+1\}}+\frac{\boldsymbol{\lambda}_{2}^{\{l\}}}{\rho})\|_{2}^{2},\\ \text{s.t.}&(\mathbf{s},\mathbf{f})\in\mathcal{D}.\end{split} \tag{23}\] The problem in (23) is clearly a convex optimization problem that can be solved by various methods including sub-gradient projection [38], interior point and ellipsoid methods [39, 40].

### (\(\mathbf{w}\), \(\mathbf{z}\)) _update_

From the Lagrangian expression in (14), the \(\mathbf{w}\) update can be found by solving \[\begin{split}\mathbf{w}^{\{l+1\}}&=\operatorname*{argmin}_{\mathbf{w}}\|\mathbf{w}-(\mathbf{s}^{\{l+1\}}-\frac{\boldsymbol{\lambda}_{1}^{\{l\}}}{\rho})\|_{2}^{2}\\ &=\mathbf{s}^{\{l+1\}}-\frac{\boldsymbol{\lambda}_{1}^{\{l\}}}{\rho},\end{split} \tag{24}\] while that of \(\mathbf{z}\) is given by \[\begin{split}\mathbf{z}^{\{l+1\}}&=\operatorname*{argmin}_{\mathbf{z}}\mathbf{1}^{\top}\mathbf{z}+\boldsymbol{\theta}^{\{l\}\top}(\mathbf{t}^{\{l+1\}}-\mathbf{z})+\frac{\rho}{2}\|\mathbf{t}^{\{l+1\}}-\mathbf{z}\|_{2}^{2}\\ &=\mathbf{t}^{\{l+1\}}+\frac{\boldsymbol{\theta}^{\{l\}}-\mathbf{1}}{\rho}.\end{split} \tag{25}\] The steps of the ADMM algorithm described in the previous sections can then be summarized as in algorithm 1.

## VI Numerical results

In this section, we validate the ability of algorithm 1 to solve problem (10). For comparison purposes, we use a convex relaxation of (9), using the \(\ell_{1}\) norm in the objective, as a baseline. We did not include any other solution methods discussed in the literature due to their inability to handle the stability of the system when data fragmentation takes place. Our numerical results consist mainly of two parts: 1) system identification with random data; 2) identification with real data from a flexible robot arm.
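To make the overall procedure more concrete, the following minimal Python sketch (our own illustration, not the authors' implementation of algorithm 1) puts together the iteration scheme (15)-(20), the element-wise (\(\mathbf{d}\), \(\mathbf{t}\)) step (21)-(22) specialized to \(p=1/2\) (\(u=1\), \(v=2\)), and the closed forms (24)-(25). The convex step (23) is left as a user-supplied routine `project_D`, since it depends on the constraint set \(\mathcal{D}\), and the stopping rule mirrors the one described below in Section VI:

```python
import numpy as np

def prox_dt_half(d_ref, t_ref):
    """Element-wise solution of (21) for p = 1/2, i.e. projection onto
    X = {(d, t): t >= |d|**0.5}; boundary candidates come from the roots of (22)."""
    d_out, t_out = np.array(d_ref, float), np.array(t_ref, float)
    for i, (x, t) in enumerate(zip(d_out.copy(), t_out.copy())):
        if t >= np.sqrt(abs(x)):
            continue  # already feasible: the projection leaves the point unchanged
        # For u = 1, v = 2, the polynomial (22) reads a^4 + (1/2 - |x|) a^2 - (t/2) a.
        roots = np.roots([1.0, 0.0, 0.5 - abs(x), -0.5 * t, 0.0])
        cands = [a.real for a in roots if abs(a.imag) < 1e-9 and a.real >= 0.0]
        a_best = min(cands, key=lambda a: (a ** 2 - abs(x)) ** 2 + (a - t) ** 2)
        d_out[i], t_out[i] = np.sign(x) * a_best ** 2, a_best   # d = sign(x) a^2, t = a
    return d_out, t_out

def admm_lp_half(w0, d0, t0, project_D, rho=20.0, max_iter=100, tol=1e-2):
    """Skeleton of iterations (15)-(20); project_D(s_ref, f_ref) must return the
    minimizer (s, f) of the convex subproblem (23) over the constraint set D."""
    w, d, t = w0.copy(), d0.copy(), t0.copy()
    s, f, z = w.copy(), d.copy(), t.copy()
    lam1, lam2, theta = np.zeros_like(w), np.zeros_like(d), np.zeros_like(t)
    for _ in range(max_iter):
        d, t = prox_dt_half(f - lam2 / rho, z - theta / rho)   # (15) via (21)-(22)
        s, f = project_D(w + lam1 / rho, d + lam2 / rho)       # (16) via (23)
        w = s - lam1 / rho                                     # (17), closed form (24)
        z = t + (theta - 1.0) / rho                            # (17), closed form (25)
        lam1 = lam1 + rho * (w - s)                            # (18)
        lam2 = lam2 + rho * (d - f)                            # (19)
        theta = theta + rho * (t - z)                          # (20)
        if np.linalg.norm(d - f) <= tol:                       # stopping rule of Section VI
            break
    return w, d
```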
In what follows, we assume that \(p=0.5\), i.e., the \(\ell_{0.5}\) quasi-norm. With this selection of \(p\), the algorithm converges more quickly and the polynomial root-finding problem in (22) is easier to solve. Numerical experiments were also carried out for other values of \(p\), namely \(p\in\{\frac{1}{3},\frac{1}{4}\}\); however, they were not found to outperform the \(\ell_{0.5}\) case. They are therefore not included in the numerical results and remain under investigation. ### _System identification with random data_ We consider four data chunks, \(T=4\), with 50 samples per chunk, where the samples of each chunk are drawn independently from a symmetric uniform distribution on the interval \(\mathcal{I}_{5}\). Input chunks are applied to a randomly generated stable LTI system with a known order, where the initial conditions of the zero-input response for each chunk are initialized with samples of a zero-mean Gaussian distribution with standard deviation \(\sigma=10^{-2}\). We assume a uniform gridding of the unit circle into \(n=146\) points. As mentioned in section V, the denser the grid on the unit circle, the better the system is represented, but the more complex the problem becomes. Following [41], our choice is a good approximation. Realization noise is added to the LTI system's output, where samples of the noise, \(n(k)\), are drawn independently from a uniform distribution on the interval \(\mathcal{I}_{0.25}\). We assume a symmetric uniform quantizer with \(2^{m}\) levels, \(m=3\), that maps the entire domain \(\mathcal{I}_{\infty}\) to \(2^{3}\) levels equally spaced on the interval \(\mathcal{I}_{3}\) with quantization step \(\Delta=0.8571\). Five samples per chunk (10\(\%\) of the chunk size) are missing from the quantizer output, where the missing instances are random and independent of each other. It is important to highlight that the chunk sizes and the number of missing samples per chunk could be arbitrary and different among chunks; we only assume that these quantities are equal among chunks to simplify the implementation. All the other parameters in step 1 of Algorithm 1 are initialized with samples from a Gaussian distribution of zero mean and \(10^{-1}\) standard deviation. The value of \(\rho\) is set to 20. We define a threshold \(\bar{\epsilon}\) as the value below which a vector entry is considered zero. The threshold \(\bar{\epsilon}\) is chosen such that it is less than \(0.5\%\) of the maximum value of the optimal vector \(\mathbf{d}\), which makes \(\bar{\epsilon}=10^{-3}\) a good choice. The algorithm stops if either \(\left\|\mathbf{d}-\mathbf{f}\right\|_{2}\leq 10^{-2}\) or an iteration budget of 100 iterations is consumed. This budget was determined through trial and error across several repetitions of the experiment; in some cases, \(\left\|\mathbf{d}-\mathbf{f}\right\|_{2}\) converges to a value that is only slightly greater than \(10^{-2}\), but very close to it. Figure 4(a) shows the convergence of \(\frac{\left\|\mathbf{d}-\mathbf{f}\right\|_{2}}{\left\|\mathbf{f}\right\|_{2}}\) with respect to the iteration number for a single run. It can be seen that a budget of around 80 iterations is enough for the algorithm to converge. We perform two different experiments: 1) A single system is considered and different properties of the \(\ell_{1}\) and \(\ell_{0.5}\) relaxations are compared.
2) Multiple systems with the same original order are generated and the resulting statistical properties are studied. #### VI-A1 Single system experiment In this subsection, we consider the experiment where an input is applied to a stable, randomly generated system of order 10. Noise is then added to the output, which is then applied to the quantizer. The noise values and quantizer setup are as discussed above. Given the sensor outputs, the problem is solved via the \(\ell_{1}\) and \(\ell_{0.5}\) relaxations and the detected system orders and outputs are compared. Figure 4(b) plots the original system poles vs those associated with the non-zero coefficients in the vector \(\mathbf{a}\) and matrix \(\mathbf{B}\) from the \(\ell_{1}\) and \(\ell_{0.5}\) relaxations' solutions. From the figure, it can be concluded that the \(\ell_{0.5}\) relaxation detected a system of order 5, which is less complex than the system of order 11 detected by the \(\ell_{1}\) relaxation. This highlights the advantage of the \(\ell_{0.5}\) quasi-norm over the \(\ell_{1}\) convex relaxation. In figures 4(c) and 4(d), we plot the sensor input \(\hat{y}(k)=y(k)+n(k)\) and output \(z(k)\), vs a finite time horizon \(N\), for the fourth chunk. The figures show how accurately the considered relaxations, whether \(\ell_{1}\) or \(\ell_{0.5}\), can represent the sensor inputs and outputs. We define the sensor input representation error across a time horizon of length \(N\) as \(\zeta_{x}^{\text{in}},x\in\{\ell_{1},\ell_{0.5}\}\), where \[\zeta_{x}^{\text{in}}=\sqrt{\sum_{k=0}^{N-1}(\hat{y}(k)-\hat{y}_{x}(k))^{2}}, \tag{26}\] with \(\hat{y}(k)\) the noisy output from the original system. For the \(\ell_{0.5}\) relaxation, the representation error \(\zeta_{\ell_{0.5}}^{\text{in}}\) was found to be 3.3204, which is smaller than that of the \(\ell_{1}\) convex relaxation, \(\zeta_{\ell_{1}}^{\text{in}}=4.2520\). It is important to note that we are not interested in perfectly fitting the original system's output; rather, we aim to fit the sensor's realizations. Hence, we similarly define the sensor output representation error \(\zeta_{x}^{\text{out}},x\in\{\ell_{1},\ell_{0.5}\}\), such that \[\zeta_{x}^{\text{out}}=\sqrt{\sum_{k\in\mathcal{K}_{4}}(z(k)-z_{x}(k))^{2}}, \tag{27}\] where \(z(k)\) and \(z_{x}(k)\) in (27) are the discrete outputs from the original sensor and from the considered algorithms, while \(\mathcal{K}_{4}\) is the set of time indices where the data is available for the fourth chunk. From figure 4(d), it can be seen that \(\zeta_{\ell_{0.5}}^{\text{out}}=\zeta_{\ell_{1}}^{\text{out}}=0\). In both figures 4(c) and 4(d), the sensor levels are indicated by the dotted horizontal lines, and the missing instances are marked by the ‘\(\mathbf{x}\)’ symbol. It can be seen that both algorithms do a good job of reconstructing the sensor input and output samples at those missing instances. ### _Multiple system experiment_ Since the systems that we generate to validate our solution method are random, the main idea in this part is to study the statistical properties of the obtained solutions. We perform an experiment where, for a given original order, 50 random systems are generated. For each system, the same input is applied and the identification problem in (10) is solved, using the \(\ell_{1}\) norm and \(\ell_{0.5}\) quasi-norm relaxations, given the quantized realizations from the sensor output.
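Before turning to the statistics reported in Figure 5, the following minimal sketch (ours, not from the paper) emulates the measurement model used throughout these experiments: uniform quantization, random dropping of \(10\%\) of a chunk's samples, and the representation errors of (26)-(27). The nearest-level quantization rule and all function names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantizer(y, m=3, half_range=3.0):
    """Symmetric uniform quantizer with 2**m levels equally spaced on
    [-half_range, half_range]; for m = 3 and half_range = 3 the step is
    6/7 ~ 0.8571, matching the value quoted in the random-data setup."""
    levels = np.linspace(-half_range, half_range, 2 ** m)
    return levels[np.argmin(np.abs(y[:, None] - levels), axis=1)]

def fragment(z, frac_missing=0.10):
    """Drop a random fraction of a chunk's samples; returns the kept time
    indices (the set K_i) and the surviving quantized samples."""
    n = len(z)
    keep = np.sort(rng.choice(n, size=int(round((1 - frac_missing) * n)),
                              replace=False))
    return keep, z[keep]

def representation_errors(y_hat, y_hat_x, z, z_x, keep):
    """Sensor-input and sensor-output representation errors, eqs. (26)-(27)."""
    zeta_in = np.sqrt(np.sum((y_hat - y_hat_x) ** 2))
    zeta_out = np.sqrt(np.sum((z[keep] - z_x[keep]) ** 2))
    return zeta_in, zeta_out
```

Here `y_hat` stands for the noisy system output fed to the sensor and `z` for its quantized version, in the notation used above.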
Figure 5 outlines the different statistical properties of the \(\ell_{1}\) and \(\ell_{0.5}\) relaxations. It can be seen that, for all original system orders, the \(\ell_{0.5}\) relaxation solution has smaller mean and median values than its \(\ell_{1}\) counterpart. Moreover, for each original order, the maximum value of the \(\ell_{0.5}\) relaxation is less than that of the \(\ell_{1}\). It can also be seen that, in either case, some systems have a detected order of zero, i.e., the minimum value of the whisker is zero, which means that the estimation of the constant \(r\) in (6) is enough to describe the I/O relationship. Finally, some systems are detected with a higher order than the original; this is because the \(\ell_{0.5}\) minimization is a non-convex problem and hence Algorithm 1 converges to a local minimum. It also motivates gridding the unit circle into more points than the \(n=146\) mentioned in VI-A to increase precision, at the expense of computational complexity.
Fig. 4: a) The algorithm convergence. b-d) Single system experiment results. Blue squares, red stars and green diamonds are the original, \(\ell_{1}\) and \(\ell_{0.5}\) relaxations respectively. Dotted lines in 4(c) and 4(d) are the used sensor levels. ‘\(\mathbf{x}\)’ indicates that output data is missing at that instance.
Fig. 5: Box plot for the system order statistics. Circles with dots and black squares indicate the median and mean values respectively. Bottom/top edges of the boxes are the 25th/75th quantiles. The whiskers extend from the minimum (downwards) to the maximum (upwards) value.
### _System identification using data from a flexible robot arm_ In this part, we consider the identification problem using data collected from the motion of a flexible robotic arm. As described in [31], the arm is installed on an electrical motor, where the input represents the reaction torque of the structure to the ground while the output is the acceleration of the arm. The data are composed of 1024 samples, which we divide into 20 chunks of 50 samples each; hence, we drop the last 24 samples of the data set. We assume a uniform quantizer with \(2^{m}\) levels, \(m=2\), that maps \(\mathcal{I}_{\infty}\) to \(2^{2}\) levels equally spaced on \(\mathcal{I}_{0.7}\) with a quantization step \(\Delta=7/30\). Similarly to what is described in VI-A, we drop \(10\%\) of each chunk's samples, where the locations of the missing samples are chosen at random. For the \(\ell_{0.5}\) quasi-norm algorithm, we use the same initialization as in the previous section, while setting \(\rho\) to 50 and terminating the algorithm once a budget of 100 iterations is consumed. We report the results for the first available data chunk with a threshold value \(\bar{\epsilon}=10^{-3}\). Figure 6(a) plots the detected system order vs \(\epsilon\), which defines the noise boundaries in the range \(\mathcal{I}_{\epsilon}\), i.e., \(n^{(i)}(k)\in\mathcal{I}_{\epsilon}\). It can be seen from figure 6(a) that, for both the \(\ell_{1}\) norm and the \(\ell_{0.5}\) quasi-norm, the detected order decreases as \(\epsilon\) increases. This is intuitive because, on increasing \(\epsilon\), the size of the feasibility set increases, which enables systems of lower order to be explored. Moreover, a momentary increase in the system order can happen while increasing \(\epsilon\). This is because we mainly aim to minimize the relaxed version in (10) instead of the original problem in (7), and hence more non-zero low-value entries can decrease the objective of (10). For all values of \(\epsilon\), the \(\ell_{0.5}\) quasi-norm algorithm detects a lower order than the \(\ell_{1}\) convex relaxation.
For a chunk of 50 samples, it can be seen that the \(\ell_{1}\) norm objective recovers systems of orders of approximately 35 to 40 for small values of \(\epsilon\). This indicates that the \(\ell_{1}\) relaxation tends to overfit the data for low values of \(\epsilon\), while the \(\ell_{0.5}\) one recovers a model which accurately represents it. The norm of the noiseless system output (quantizer input) error, denoted by \(\left\|y(k)-y_{x}(k)\right\|_{2},x\in\{\ell_{1},\ell_{0.5}\}\), is plotted in figure 6(b). With the same justification as for figure 6(a), more systems that might have a lower order but a higher output error are added to the feasible set when the value of \(\epsilon\) is increased. We are not concerned with exactly fitting the output of the original system, as was covered in section VI-A1; rather, our goal is to choose the system from the feasibility set that fits the realizations of the sensor while having the lowest order. ## VII Conclusion In this paper, we presented an approach that aims to find the least-order system that is compatible with fragmented quantized realizations. This approach allows for the use of a priori information on the system and fragmented measurements of the output. The algorithm is based on an ADMM approach that solves an \(\ell_{p}\) quasi-norm objective by splitting the optimization over the variables and iteratively solving simpler sub-problems. The algorithm is tested on a randomly generated synthetic data set and on a real data set collected from the motion of a robotic arm. The numerical results show that the algorithm is very effective in obtaining low-complexity explanations of the collected data. Further effort is being put into analyzing the convergence of the proposed algorithm, improving its numerical performance, and extending it to continuous-time systems.
Fig. 6: Robotic arm experiment results.
2310.07469
Constraining the Graviton Mass with the NANOGrav 15-Year Data Set
The recently detected stochastic signal by several pulsar timing array collaborations, offers an opportunity to scrutinize the fundamental properties of gravity, including the potential mass of the graviton. In this study, we analyze the NANOGrav 15-year data set to search for a stochastic gravitational wave background with modified Hellings-Downs correlations predicted by massive gravity. While the Bayesian analysis comparing the massive gravity to massless gravity within the effective searchable mass range of $m_g\in [3\times 10^{-25}, 8 \times 10^{-24}]\,\rm{eV}/c^2$ does not yield an explicit upper bound as all the Bayes factors are smaller than $3$, the combined consideration of the minimum frequency inherent in a massive gravity and the observed spectrum leads to an upper limit of $m_g<8.2\times 10^{-24}\,\rm{eV}/c^2$.
Yu-Mei Wu, Zu-Cheng Chen, Yan-Chen Bi, Qing-Guo Huang
2023-10-11T13:15:46Z
http://arxiv.org/abs/2310.07469v1
# Constraining the Graviton Mass with the NANOGrav 15-Year Data Set ###### Abstract The recently detected stochastic signal by several pulsar timing array collaborations, offers an opportunity to scrutinize the fundamental properties of gravity, including the potential mass of the graviton. In this study, we analyze the NANOGrav 15-year data set to search for a stochastic gravitational wave background with modified Hellings-Downs correlations predicted by massive gravity. While the Bayesian analysis comparing the massive gravity to massless gravity within the effective searchable mass range of \(m_{g}\in[3\times 10^{-25},8\times 10^{-24}]\,\mathrm{eV/c^{2}}\) does not yield an explicit upper bound as all the Bayes factors are smaller than 3, the combined consideration of the minimum frequency inherent in a massive gravity and the observed spectrum leads to an upper limit of \(m_{g}<8.2\times 10^{-24}\,\mathrm{eV/c^{2}}\). ## 1 Introduction After the successful observation of gravitational waves from compact binary coalescences by ground-based interferometers (Abbott et al., 2016), pulsar timing arrays, considered the most promising instruments for the first detection of a stochastic gravitational-wave background (SGWB), are expected to achieve the next major breakthrough in the field of gravitational wave detection in the coming years (Taylor et al., 2016; Burke-Spolaor et al., 2019). Pulsar timing arrays consist of highly stable millisecond pulsars, whose emitted pulses are monitored in terms of their arrival times to discern the imprints of gravitational waves (Sazhin, 1978; Detweiler, 1979; Foster and Backer, 1990). Specifically, an SGWB manifests as distinctive inter-pulsar correlated timing residuals, which represent the differences between actual and expected pulse arrival times. These correlations are referred to as Hellings-Downs correlations (Hellings and Downs, 1983). After decades of observations involving dozens of pulsars, several major international pulsar timing array collaborations, including the North American Nanohertz Observatory for Gravitational Waves (NANOGrav, McLaughlin (2013)), the European PTA (EPTA, Kramer and Champion (2013)) along with the Indian PTA (InPTA, Joshi et al. (2018)), Chinese PTA (CPTA, Lee (2016)), and the Parkes PTA (PPTA, Manchester et al. (2013)), have recently achieved groundbreaking progress. They have found evidence for a stochastic signal with the Hellings-Downs correlations in their latest data sets (Agazie et al., 2023, 2023; Xie et al., 2023; Reardon et al., 2023; Antoniadis et al., 2023, 2023; Xu et al., 2023), pointing to the gravitational-wave origin of the signal. Gravitational waves play a pivotal role as a fundamental tool for testing the theory of gravity, providing profound insights into important inquiries, such as the question of whether gravitons possess mass. This inquiry spans from the discovery of orbital period variations in binary pulsars (Hulse and Taylor, 1975) that served as indirect evidence of gravitational waves (Weisberg et al., 1981), to the direct confirmation of gravitational waves from binary black hole coalescences (Abbott et al., 2016). 
For instance, the binary pulsar PSRB1913+16 has established an upper bound as \(7.6\times 10^{-20}\,\mathrm{eV/c^{2}}\)(Finn and Sutton, 2002), the first observed gravitational-wave event GW150914 has put the bound as \(m_{g}\lesssim 1.2\times 10^{-22}\,\mathrm{eV/c^{2}}\)(Abbott et al., 2016), and the latest gravitational-wave transient catalog, GWTC-3, has further improved the bound as \(m_{g}\lesssim 1.27\times 10^{-23}\,\mathrm{eV/c^{2}}\)(Abbott et al., 2021). A massive gravity theory would also yield predictions of an SGWB different from those of General Relativity (GR), enabling the testing of massive gravity theories using pulsar timing arrays (Lee et al., 2010). If gravitons possess mass, it would lead to a modification of the dispersion relation for gravitational waves, resulting in two notable consequences for the SGWB. Firstly, there would exist a lower limit on the frequency of gravitational waves. Secondly, this alteration in the dispersion relation would influence the propagation equation, affecting pulse arrival times and consequently generating correlations among different pulsars that deviate from the Hellings-Downs curve (Liang and Trodden, 2021). Previous studies have constrained the graviton mass by fitting data to the correlation function (Bernardo and Ng, 2023; Wang and Zhao, 2023). However, in this work, we will take a different approach by directly searching for the SGWB from massive gravity using the NANOGrav 15-year data set. ## 2 SGWB from massive gravity By combining the de Broglie relations and the mass-energy equation, the component gravitational-wave signal of a massive SGWB can be described by a four-wave vector \(k^{\mu}=(\omega/c,k)\) that satisfies, \[\frac{\omega}{c}=\sqrt{\frac{m_{g}^{2}c^{2}}{\hbar^{2}}+|\mathbf{k}|^{2}}, \tag{1}\] where \(\omega\) is the circular frequency, \(c\) is the speed of light, and \(\hbar\) is the reduced Planck constant. The above relationship clearly demonstrates that there exists a minimal frequency for the gravitational-wave signal in a massive gravity, i.e., \[f_{\mathrm{min}}=\frac{m_{g}c^{2}}{2\pi\hbar}. \tag{2}\] For an SGWB formed by the superposition of gravitational-wave components of different frequencies, the induced timing residuals can be described by the cross-power spectral density, \[S_{ab}(f)=\frac{H_{0}^{2}}{16\pi^{4}f^{5}}\Gamma_{ab}(f)\Omega_{\mathrm{gw}}( f), \tag{3}\] where \(H_{0}\) is the Hubble constant, \(\Gamma_{ab}\) is the overlap reduction function (ORF) that measures the spatial correlation between the pulsar pairs \(a\) and \(b\), and \(\Omega_{\mathrm{gw}}(f)\) is the dimensionless gravitational-wave energy density parameter. While a large population of inspiraling supermassive black hole binaries are typically considered the most anticipated source for the stochastic signal, there are numerous other astrophysical and cosmological explanations that can also account for it (Afzal et al., 2023; Wu et al., 2023; Liu et al., 2023; Bi et al., 2023; Liu et al., 2023; Jin et al., 2023; Ellis et al., 2023; Yi et al., 2023; Chen et al., 2023). 
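As a quick numerical illustration of the minimum frequency in Eq. (2) (a sketch added here, not part of the original analysis), the conversion between the graviton mass and \(f_{\rm min}\) can be written as follows; the \(\approx 16\)-yr timespan used in the example is our assumption for the length of the data set.

```python
import numpy as np

H_EV_S = 4.135667696e-15   # Planck constant h = 2*pi*hbar in eV s

def f_min_hz(m_g_ev):
    """Minimum GW frequency of eq. (2): f_min = m_g c^2 / (2 pi hbar) = m_g c^2 / h,
    with the graviton mass given in eV/c^2."""
    return m_g_ev / H_EV_S

def m_g_bound_ev(timespan_yr):
    """Graviton mass whose f_min equals the lowest PTA Fourier frequency 1/T."""
    T_sec = timespan_yr * 365.25 * 86400.0
    return H_EV_S / T_sec

# With a timespan of roughly 16 yr (our assumed value), the lowest frequency
# bin 1/T ~ 2e-9 Hz corresponds to m_g ~ 8e-24 eV/c^2, consistent with the
# bound quoted later in the text.
print(f_min_hz(8.2e-24))   # ~2.0e-9 Hz
print(m_g_bound_ev(16.0))  # ~8.2e-24 eV/c^2
```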
Given some remaining uncertainty in the spectral shape, we adopt the commonly-assumed power-law form for \(\Omega_{\mathrm{gw}}(f)\), \[\Omega_{\mathrm{gw}}(f)=\frac{2\pi^{2}A_{\mathrm{gw}}^{2}}{3H_{0}^{2}}\left( \frac{f}{f_{\mathrm{yr}}}\right)^{5-\gamma_{\mathrm{gw}}}f_{\mathrm{yr}}^{2}, \tag{4}\] where \(A_{\mathrm{gw}}\) is the amplitude of the SGWB at the reference frequency \(f_{\mathrm{yr}}=1/\mathrm{year}\) and \(\gamma_{\mathrm{gw}}\) is the spectral index. When considering the dispersion relation described by Eq. (1) in the context of massive gravity, the previously known massless Hellings-Downs correlation undergoes modification, resulting in the following expression: \[\Gamma_{\mathrm{MG}}^{ab}= \frac{1}{16\eta^{5}}\bigg{[}2\eta(3+(6-5\eta^{2})\delta) \tag{5}\] \[-6\left(1+\delta+\eta^{2}(1-3\delta)\right)\ln\left(\frac{1+ \eta}{1-\eta}\right)\] \[-\frac{3\left(1+2\eta^{2}(1-2\delta)-\eta^{4}(1-\delta^{2}) \right)\ln I}{\sqrt{(1-\delta)\left(2-\eta^{2}(1+\delta)\right)}}\bigg{]}\,,\] where \[\begin{split} I=&\frac{1}{\left(\eta^{2}-1\right) ^{2}}\bigg{[}1+2\eta^{2}(1-2\delta)-\eta^{4}(1-2\delta^{2})\\ &-2\eta(1-2\eta^{2}\delta)\sqrt{(1-\delta)\left(2-\eta^{2}(1+ \delta)\right)}\bigg{]}.\end{split} \tag{6}\] In these equations, \(\delta\equiv\cos\xi\), where \(\xi\) represents the separation angle of the two pulsars, and \(\eta\equiv c|\mathbf{k}|/\omega\). When \(\eta=1\), the gravitons become massless, and the above expressions reduce to the familiar Hellings-Downs function. However, it's worth noting that this reduction may not be immediately apparent from Eq. (5), as it appears to diverge when \(\eta=1\). In such cases, one can refer to Eq. (6) in Wu et al. (2023), which provides an alternative analytical form of Eq. (5) suitable for approximations when \(\eta\approx 1\). ## 3 The data set and methodology The NANOGrav 15-year data set comprises observations from 68 pulsars, of which 67 have an observational timespan exceeding 3 years and have been used in the search for the SGWB predicted by GR (Agazie et al., 2023). In this study, we will also utilize these 67 pulsars to search for the massive SGWB in their timing data. The expected arrival times of pulses from one pulsar are described by a timing model that encompasses various astrometric and timing parameters, including the pulsar's position, proper motion and spin period. After subtracting the timing model from the actual timing data, one obtains the timing residuals. In practice, apart from the SGWB signal, several effects will also contribute to timing residuals, including the inaccuracies of the timing model, other stochastic processes which can be further categorized as red noise and white noise. Following Agazie et al. (2023), the timing residuals \(\delta\mathbf{t}\) for each pulsar can be decomposed into \[\delta\mathbf{t}=\mathbf{t}_{\mathrm{TM}}+\delta\mathbf{t}_{\mathrm{WN}}+ \delta\mathbf{t}_{\mathrm{RN}}+\delta\mathbf{t}_{\mathrm{SGWB}}. \tag{7}\] The first term \(\mathbf{t}_{\mathrm{TM}}\) accounts for the inaccuracies of timing model (Chamberlin et al., 2015). It can be modeled as \(\mathbf{t}_{\mathrm{TM}}=M\epsilon\), where \(M\) represents the timing model design matrix, and \(\epsilon\) denotes a small offset vector indicating the disparity between true and estimated parameters. Essentially, \(M\epsilon\) corresponds to the linear term in the Taylor expansion centered at the estimated timing parameter. 
The second term \(\delta\mathbf{t}_{\mathrm{WN}}\) representing the contribution from the time-independent white noise accounts for measurement uncertainties. It can be modeled by three parameters, with parameter EFAC as a scale factor on the TOA uncertainties, parameter EQUAD as an added variance (Arzoumanian et al., 2015) and parameter ECORR as a per-epoch variance for each backend/receiver system (Arzoumanian et al., 2016). The third term \(\delta\mathbf{t}_{\mathrm{RN}}\) representing the stochastic contribution from time-correlated red noise accounts for the irregularities of the pulsar's motion (Shannon and Cordes, 2010). It is modeled by a power-law spectrum with the amplitude \(A_{\mathrm{RN}}\) and the spectral index \(\gamma_{\mathrm{RN}}\). The last term \(\delta\mathbf{t}_{\mathrm{SGWB}}\) is the contribution from an SGWB, which can be described by the cross-power spectral density Eq. (3). In practice, we adopt the "Fourier-sum" method to calculate \(\delta\mathbf{t}_{\mathrm{RN}}\) and \(\delta\mathbf{t}_{\mathrm{SGWB}}\), choosing \(N_{\mathrm{mode}}\) discrete frequency modes as \(f=1/T,2/T,\ldots,N_{\mathrm{mode}}/T\) where \(T\) is the observational timespan. Following Agazie et al. (2023), we choose \(N_{\mathrm{mode}}=30\) for the red noise of the individual pulsar and \(N_{\mathrm{mode}}=14\) for the common SGWB signal among all pulsars. In the search for the SGWB from massive gravity, we adopt a Bayesian inference approach, following the methodology outlined in Agazie et al. (2023). The posterior distribution for the model parameter \(\Theta\) is given by \[P(\Theta|\delta\mathbf{t})\propto L(\delta\mathbf{t}|\Theta)\pi(\Theta), \tag{8}\] where \(L(\delta\mathbf{t}|\Theta)\) represents the likelihood evaluated by a multivariate Gaussian function (Ellis et al., 2013) and \(\pi(\Theta)\) denotes the prior distribution. The parameters and their prior distributions required for our analysis are detailed in Table Table 1. In the analysis, we first infer the noise parameters for each individual pulsar without including the common signal \(\delta\mathbf{t}_{\mathrm{SGWB}}\). Then we combine all 67 pulsars as a collective unit, maintaining the white noise parameters at their maximum-likelihood values from the single-pulsar analysis, while allowing both the single-pulsar red noise and the common SGWB signal parameters to vary simultaneously. We also assess the goodness-of-fit of two candidate hypotheses to the data by calculating the Bayes factor, defined as \[\mathcal{BF}\equiv\frac{\mathrm{Pr}(\delta\mathbf{t}|\mathcal{H}_{2})}{ \mathrm{Pr}(\delta\mathbf{t}|\mathcal{H}_{1})}, \tag{9}\] where \(\mathrm{Pr}(\delta\mathbf{t}|\mathcal{H})\) measures the evidence that the data \(\delta\mathbf{t}\) are produced under the hypothesis \(\mathcal{H}\). 
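Referring back to the spectral model of Eqs. (3)-(4), the following short sketch (our illustration, not the collaboration's analysis code) evaluates the power-law cross-power spectral density at the Fourier frequencies used for the common signal; the Hubble-constant value, the \(\approx 16\)-yr timespan, and the example amplitude, spectral index and ORF value are placeholders we chose.

```python
import numpy as np

F_YR = 1.0 / (365.25 * 86400.0)   # reference frequency f_yr = 1/year in Hz

def omega_gw(f, A_gw, gamma_gw, H0):
    """Dimensionless GW energy density of eq. (4)."""
    return (2.0 * np.pi**2 * A_gw**2 / (3.0 * H0**2)
            * (f / F_YR) ** (5.0 - gamma_gw) * F_YR**2)

def cross_power(f, orf_ab, A_gw, gamma_gw, H0=2.2e-18):
    """Cross-power spectral density S_ab(f) of eq. (3); orf_ab is the overlap
    reduction function value for the pulsar pair.  Note that H0 cancels
    between eqs. (3) and (4), so its precise value does not matter here."""
    return H0**2 / (16.0 * np.pi**4 * f**5) * orf_ab * omega_gw(f, A_gw, gamma_gw, H0)

# The 14 Fourier frequencies i/T, i = 1..14, used for the common signal,
# assuming a timespan T of roughly 16 yr (our assumption).
T = 16.0 * 365.25 * 86400.0
freqs = np.arange(1, 15) / T
spectrum = cross_power(freqs, orf_ab=0.5, A_gw=6e-15, gamma_gw=13.0 / 3.0)
```

In the massive-gravity search, the ORF entering \(S_{ab}(f)\) is the frequency-dependent \(\Gamma^{ab}_{\rm MG}\) of Eq. (5) rather than a frequency-independent value, which is the only change with respect to the massless case.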
\begin{table} \begin{tabular}{c c c c} \hline \hline **Parameter** & **Description** & **Prior** & **Comments** \\ \hline \multicolumn{4}{c}{White Noise} \\ \(E_{k}\) & EFAC per backend/receiver system & U[0.01, 10] & single pulsar analysis only \\ \(Q_{k}\)[s] & EQUAD per backend/receiver system & log-U[\(-8.5,-5\)] & single pulsar analysis only \\ \(J_{k}\)[s] & ECORR per backend/receiver system & log-U[\(-8.5,-5\)] & single pulsar analysis only \\ \hline \multicolumn{4}{c}{Red Noise} \\ \(A_{\mathrm{RN}}\) & red-noise power-law amplitude & log-U[\(-20,-11\)] & one parameter per pulsar \\ \(\gamma_{\mathrm{RN}}\) & red-noise power-law index & U[\(0,7\)] & one parameter per pulsar \\ \hline \multicolumn{4}{c}{Common-spectrum Process} \\ \(A_{\mathrm{gw}}\) & SGWB power-law amplitude & log-U[\(-18,-11\)] & one parameter per PTA \\ \(\gamma_{\mathrm{gw}}\) & SGWB power-law spectral index & U[\(0,7\)] & one parameter per PTA \\ \(m_{g}\) [eV/\(c^{2}\)] & graviton mass & delta function in \([10^{-24.5},10^{-23.1}]\) & \(m_{g}\in\{10^{-24.5},10^{-24.4},\ldots,10^{-23.1}\}\) \\ \hline \end{tabular} \end{table} Table 1: Parameters and their prior distributions used in the analyses. Following the interpretation of Kass and Raftery (1995), when \(\mathcal{B}\mathcal{F}\leq 3\), the evidence favoring \(\mathcal{H}_{2}\) over \(\mathcal{H}_{1}\) is deemed to be "not worth more than a bare mention". However, the strength of evidence increases as \(\mathcal{B}\mathcal{F}\) grows, classified as "substantial", "strong" and "decisive" when \(3\leq\mathcal{B}\mathcal{F}\leq 10\), \(10\leq\mathcal{B}\mathcal{F}\leq 100\), and \(\mathcal{B}\mathcal{F}\geq 100\), respectively. In practice, we estimate the Bayes factors using the _product-space method_ (Carlin and Chib, 1995; Godsill, 2001; Hee et al., 2016; Taylor et al., 2020). All the above analyses are based on the JPL solar system ephemeris (SSE) DE440 (Park et al., 2021). To perform these Bayesian computations, we employ the open-source software packages enterprise (Ellis et al., 2020) and enterprise_extensions (Taylor et al., 2021) for likelihood and Bayes factor calculations. The Markov chain Monte Carlo (MCMC) sampling is facilitated by the PTMCMCSampler (Ellis and van Haasteren, 2017) package. ## 4 Results and Discussion As we have demonstrated in Sec. 2, an SGWB originating from massive gravity exhibits two fundamental distinctions when compared to one from massless gravity: the presence of a minimal frequency and a deviation from the Hellings-Downs ORF. Our graviton mass constraints are derived from an analysis of these two aspects. Firstly, the NANOGrav collaboration analyzed their 15-year data set with a free spectrum, allowing for variations in the amplitudes of each individual Fourier frequency component. The presence of a non-vanishing amplitude at the lowest Fourier frequency indicates the continued existence of a gravitational-wave signal at this frequency, implying that \[f_{\rm min}<1/T, \tag{10}\] which can be translated into the basic constraint on the graviton mass, \[m_{g}<8.2\times 10^{-24}\,\mathrm{eV/c^{2}}. \tag{11}\] Secondly, in the case of a lighter graviton mass, we calculate the Bayes factor between the hypothesis \(\mathcal{H}_{2}\) of a massive-gravity SGWB with correlations given in Eq. (5), and the hypothesis \(\mathcal{H}_{1}\) of a massless-gravity SGWB with Hellings-Downs correlations. Our investigation spanned a graviton mass range of \(m_{g}\in[3\times 10^{-25},8\times 10^{-24}]\,\mathrm{eV/c^{2}}\).
Here \(3\times 10^{-25}\,\mathrm{eV/c^{2}}\) serves as an approximation for the massless scenario, as for \(m_{g}\leq 3\times 10^{-25}\,\mathrm{eV/c^{2}}\), \(\eta>0.999\) across all 14 frequency bins. For each graviton mass listed in Table 1, the corresponding Bayes factor is depicted in the bottom panel of Fig. 1. The results demonstrate that all Bayes factors are less than 3 within the explored mass range. This suggests that it is challenging for current data to distinguish whether the SGWB arises from massive gravity or massless gravity based solely on spatial correlations. We also present the 90% credible intervals for the power spectrum amplitude \(A_{\rm gw}\) corresponding to each graviton mass in the top panel of Fig. 1. Within the mass range we explore, these 90% credible intervals are remarkably consistent, with the majority of them falling within the range of \(A_{\rm gw}\in[3,10]\times 10^{-15}\). These outcomes also closely align with the results for the massless-gravity SGWB reported in (Agazie et al., 2023).
Figure 1: Top panel: the 90% credible interval of the power spectrum amplitude \(A_{\rm gw}\) as a function of the graviton mass \(m_{g}\), from the NANOGrav 15-year data set. Bottom panel: the corresponding Bayes factor, \(\mathcal{B}\mathcal{F}\), as a function of the graviton mass \(m_{g}\).
We note that Wang and Zhao (2023) also constrain the graviton mass through a distinct method. They transform the ORF into a function of the group speed \(v/c=\sqrt{1-(\frac{m_{g}c^{2}}{\hbar\omega})^{2}}\) and evaluate its deviation from the observed ORF, binned in angular separation, provided by Agazie et al. (2023). However, a challenge arises when translating the constraint on the group speed into a constraint on the graviton mass. This challenge stems from the difficulty in determining the appropriate frequency value (or the corresponding \(\omega\)) to use, as the observed ORF is obtained by assuming independence from frequency. In contrast, our search methodology takes a more straightforward approach by directly investigating a frequency-dependent ORF within the data set. Searching for the massive SGWB signal in the real data prevents us from probing graviton masses whose minimum frequency exceeds the lowest Fourier frequency \(1/T\). However, the free spectrum has demonstrated the existence of a gravitational-wave signal at this frequency. By combining this observation with the inherent existence of a minimum frequency in massive gravity, we obtain a natural upper bound for the graviton mass. This approach allows us to directly examine the frequency dependence and provides a meaningful constraint on the maximum graviton mass based on the available data.
2302.10304
ELECTRON TRANSPORT AND ELECTRON DENSITY INSIDE ONE-DIMENSIONAL DISORDERED CONDUCTORS: An Analysis of the Electronic-Levels Contribution
We consider the problem of electron transport along a one-dimensional disordered multiple-scattering conductor, and study the electron density for all the electronic levels. A model is proposed for the reduced density matrix of the system placed between two reservoirs at different chemical potentials, and the statistical-mechanical expectation value of the electron density is evaluated. An ensemble average is computed over disordered configurations. We compare its predictions with computer simulations. We find that the contribution of low-lying levels is very different from that of the high-lying ones studied in the past. Going down in energy, the wave function penetrates ever less inside the sample. For high-lying levels, this is interpreted in terms of localization from disorder. For low-lying levels, this interpretation gradually gives way to an understanding in terms of the increasing reflection produced by each scatterer, which is seen by the electron as a higher and higher -- and hence impenetrable -- potential barrier. Indeed, the local-density-of-states, LDOS, is gradually depleted in the interior of the system, since the wave function is ever smaller inside. The problem studied here is also of interest in electromagnetic, thermal, and acoustic transport in disordered systems.
Gerardo Rivas, Miztli Yepez, Pier A. Mello
2023-02-20T20:48:16Z
http://arxiv.org/abs/2302.10304v1
# Electron Transport and Electron Density Inside One-Dimensional Disordered Conductors: ###### Abstract We consider the problem of electron transport along a one-dimensional disordered multiple-scattering conductor, and study the electron density for all the electronic levels. A model is proposed for the reduced density matrix of the system placed between two reservoirs at different chemical potentials, and the statistical-mechanical expectation value of the electron density is evaluated. An ensemble average is computed over disordered configurations. We compare its predictions with computer simulations. We find that the contribution of low-lying levels is very different from that of the high-lying ones studied in the past. Going down in energy, the wave function penetrates ever less inside the sample. For high-lying levels, this is interpreted in terms of localization from disorder. For low-lying levels, this interpretation gradually gives way to an understanding in terms of the increasing reflection produced by each scatterer, which is'seen' by the electron as a higher and higher -and hence impenetrable- potential barrier. Indeed, the local-density-of-states, LDOS, is gradually depleted in the interior of the system, since the wave function is ever smaller inside. The problem studied here is also of interest in electromagnetic, thermal, and acoustic transport in disordered systems. E lectronic transport, Disordered conductors 73.23.-b, 05.60.Gg, 72.10.-d, 11.55.-m ## 1 Introduction The physics of electronic transport in disordered mesoscopic systems has been studied for many years (for a review, see Ref. [1] and references cited therein). The discovery by Landauer [2] of the electronic conductance being proportional to the transmittance was a breakthrough in the investigation of mesoscopic systems, since it has allowed studying the conductance in terms of the scattering properties of the system under investigation [1, 2, 3]. This equivalence allows many of the predictions of mesoscopic physics and localization theory to apply to the transport of quantum as well as classical waves [4, 5, 6, 7, 8, 9, 10, 11, 12]. The conductance, the transmittance and their statistical properties refer to physical quantities evaluated _outside_ the sample. Besides these properties, the problem of the statistics of transport _inside_ random systems has also been studied for many years [13, 14, 15, 16, 17, 18, 19, 20, 21]; see also Ref. [22], in which the author studied the short-range intensity correlations with both the source and the detector being inside the (infinite) medium, and Ref. [23], where the authors measured the long-range intensity correlation inside the sample, placed between two leads. In a series of recent papers, the control of different mesoscopic transport effects inside random media has been studied by changing the system's geometry, or shaping incident wavefronts [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. The problem is of interest in various branches of physics, the reason being that it is representative of a rather general wave-scattering problem: e.g., an electromagnetic wave traveling in a disordered waveguide [21] -the interest then being in the energy density inside the structure-, or an elastic wave propagating in a disordered elastic waveguide [34, 35] -the interest then being, e.g., the mean square displacement inside the system. 
One-dimensional (1D) disordered systems were studied very intensely in the 1980's and 1990's, as they can be considered as the simplest realizations of such problems. Some representative contributions are given, e.g., by Refs. [6, 36, 37, 38]. Interestingly enough, the density (for particles) or intensity (for waves) inside the sample for 1D systems are still of interest in the present time, the motivation being the following. 1. There is great interest in experiments with cold atoms (matter waves) in 1D channels. The atoms can be either bosons or fermions- the latter case should be similar to electrons. The reader is referred, e.g., to Refs. [39, 40] and Refs. [41, 42]: the last two are experiments concerning localization of cold atoms in 1D. 2. For electromagnetic waves, of course one can probe the intensity inside. Two concrete examples are the recent experiments reported in Refs. [21, 43], in which waves are launched from one end of the waveguide and the signal is detected by an antenna just above a slit along the length of the waveguide. 3. There is a renewed interest in 1D disordered problems, in connection with 'temporarily modulated media' (see, e.g., Ref. [44]). In that problem, space is homogeneous, but the dielectric function \(\epsilon(t)\) is a random function of time. One considers an initial wave packet with a well defined wave vector, and asks how this wave behaves in time. 4. The dynamic approach to quantum transport has been studied since the early investigations [45], and recent advancements have been done in this respect, particularly in the study of the transmission eigenchannels [46, 47, 48, 49, 50]. 5. Finally, in the last decade there have been advancements in the understanding of the fluctuations and correlations of scattering properties inside the sample [15, 48, 51]. Formally, the problem is equivalent to a 1D random chain, similar to 1D Schrodinger's equation (but the second derivative is in time rather than in coordinate). Similarly, in our group we have recently studied the statistical properties of the electron density inside a multiply scattering 1D disordered system [20, 21], and also its extension to a quasi-one-dimensional (q1D) [52] disordered geometry. In those studies, the system was fed with electrons of a _given energy_ from one end of the disordered conductor and the electron density was evaluated along the conductor and outside. In Ref. [20], the expectation value \(\langle{\cal W}(x)\rangle\) of the intensity \({\cal W}(x)\) at a distance \(x\) from the entrance of a 1D disordered system was calculated and compared with computer simulations. In Ref. [21], also for 1D systems, emphasis was put on the statistics of the logarithm of the intensity, \(\ln{\cal W}(x)\), which shows interesting scaling properties, in a way similar to the logarithm of the conductance in the conduction problem; theoretical predictions were compared with computer simulations and also with the results of microwave experiments. More recently [52], we studied the statistical properties of the electron density inside a q1D multiply-scattering medium, i.e., a system supporting more than one propagating mode or open channel. 
We should remark that in a real electrical conduction problem realized by inserting the system between two terminals (reservoirs) at different chemical potentials, the electron density inside the system would have to be calculated by adding the contribution of _all incident energies_ at which electrons are fed by the reservoirs, with a weight given by the Fermi function of the respective reservoir. Thus, whereas in the above-mentioned papers the analysis was restricted to one energy, the more complete calculation is the purpose of the present paper. This study may encourage the development of methods toward the experimental verification of its results. One possible experimental setup was described in Ref. [52] for the case of one energy. In the whole paper, our goal will be to study a 1D electronic conductor in a scheme of non-interacting electrons moving in a self-consistent potential. We first set up the statistical mechanical problem for a given sample, and then consider a collection of samples in order to construct an ensemble of configurations of disorder, both theoretically and through computer simulations. The specific theoretical model that our computer simulations will be compared with is designated as the DMPK model [1, 8, 53]; for 1D disordered system, the DMPK equation reduces to Melnikov's [7]. The present analysis extends and generalizes the results for 1D systems of Refs. [20, 21] which dealt with high-lying levels only, in order to take into account low-lying levels. The behavior of the latter is found to be very different from that of the high-lying ones: this is illustrated in Fig. 7 below. This constitutes the main difference with respect to our previous work; it is discussed and well understood and represents one of our main results. While DMPK gives a good description of the high-lying levels, for the low-lying ones it does not; for these latter states we have, at present, no theoretical model. The paper is organized as follows. In the next section, we start our discussion with a ballistic, non-disordered, mesoscopic conductor. We then discuss generalities of a disordered 1D system, which are subsequently applied to the study of the electron density, which is our main interest in the present paper. In Sec. 3, we find explicit expressions for the electron density in the various regions -inside the disordered sample and outside- and construct the corresponding expectation values over an ensemble of configurations of disorder. In Sec. 4, we study the electron density for a system in equilibrium at zero temperature and its expectation value over disorder. We analyze the contribution of single levels, and then the result obtained taking all levels into account. In Sec. 5, we extend the analysis to the logarithm of the electron density. In Sec. 6, we extend the above results to a non-equilibrium -but stationary- situation, in which the chemical potentials of the two reservoirs are not equal. Finally, we present our conclusions in Sec. 7. Some technical details can be found in the appendices. Various appendices are included to prove certain specific results without interrupting the main flow of the paper. ## 2 The electron density inside a 1D conductor In the whole paper our goal will be to study a one-dimensional electronic conductor placed between two reservoirs at temperature \(T\) and chemical potentials \(\mu_{1}\) and \(\mu_{2}\). When \(\mu_{1}=\mu_{2}\), the system of interest is in thermodynamic equilibrium. 
We shall assume an approximate picture: in equilibrium, the system is described by a Hamiltonian of non-interacting electrons moving in a self-consistent potential: each electron interacts with the average electron density of the rest of electrons rather than with each electron individually; we also include confining potentials, and ions and impurities with no internal degrees of freedom, thus producing only elastic scattering (see Refs. [1, 54] and references contained therein). For simplicity, however, in this section we first discuss the case of a ballistic conductor, in order to pave the way to the general analysis of an arbitrary disordered conductor. ### Ballistic 1D conductor: generalities As a preliminary study, the system to be considered in this section will be a ballistic 1D electronic conductor of length \(L_{0}\). For simplicity, we shall ignore spin-orbit coupling and deal with'spinless electrons' [1]. We define, in the interior of the conductor, a complete set of orthonormal states with periodic boundary conditions, which represent running waves: \[\phi_{n}(x) = \frac{e^{is_{n}k_{n}x}}{\sqrt{L_{0}}}\ \ \ \ \ \left\{\begin{array}{l}n=0,\pm 1, \pm 2,\cdots\\ s_{n}=\mathrm{sgn}(n)=\pm 1\end{array}\right.\, \tag{2.1a}\] \[k_{n} = \frac{2\pi|n|}{L_{0}},\] (2.1b) \[\epsilon_{n} = \frac{\hbar^{2}k_{n}^{2}}{2m}\,. \tag{2.1c}\] The wavenumber \(k_{n}\) is defined to be _positive_, so the direction of propagation is specified by \(s_{n}=\pm 1\). The total number of states \(\mathcal{N}^{+}(\epsilon_{n})\) traveling to the right, up to the energy \(\epsilon_{n}\), and the corresponding density of states \(\rho^{+}(\epsilon)\), i.e, the number of states per unit energy around the energy \(\epsilon\) (a continuous function of \(\epsilon\)), are given, respectively, by \[\mathcal{N}^{+}(\epsilon_{n}) = \frac{k_{n}L_{0}}{2\pi}, \tag{2.2a}\] \[\rho^{+}(\epsilon) = \frac{\partial\mathcal{N}^{+}(\epsilon)}{\partial\epsilon}=\frac{ L_{0}}{2\pi\hbar}\frac{1}{v_{gr}},\] (2.2b) \[v_{gr} = \frac{\hbar k}{m}=\sqrt{\frac{2\epsilon}{m}}, \tag{2.2c}\] where \(v_{gr}\) designates the group velocity. We have similar expressions for electrons traveling to the left. We follow Ref. [3] and describe our system of interest as illustrated in Fig. 1: on the left, it is in contact with reservoir 1, which has temperature \(T\) and chemical potential \(\mu_{1}\); on the right, it is in contact with reservoir 2, at the same temperature \(T\), and chemical potential \(\mu_{2}\). The reservoirs emit electrons toward the system and absorb, without reflection, the electrons incident upon them. We do not wish to include the reservoirs in our description, but only the'system proper'. We thus trace the complete density matrix over the reservoirs degrees of freedom: the result is what is called the _reduced density matrix_. When \(\mu_{1}\neq\mu_{2}\), the system is not in equilibrium, but we suppose it is in a stationary state. The density matrix \(\widehat{\rho}(\beta,\mu_{1},\mu_{2})\) describes a _non-equilibrium_, although _stationary state_. We introduce the following simple model to describe the system reduced density matrix. Let \(c_{n}^{\dagger}\), \(c_{n}\) be the electron creation and annihilation operators associated with the single-electron state \(\phi_{n}(x)\) of Eq. (2.1a) with energy \(\epsilon_{n}\) (\(\epsilon_{0}=0\)), and let \(\mu_{0}\equiv(\mu_{1}+\mu_{2})/2\). 
Then the model is defined as \[\widehat{\rho}(\beta,\mu_{1},\mu_{2}) = \frac{e^{-\beta\sum_{n>0}(\epsilon_{n}-\mu_{1})c_{n}^{\dagger}c_{ n}}}{{\cal Z}^{(+)}(\beta,\mu_{1})}\times\frac{e^{-\beta(\epsilon_{0}-\mu_{0})c_ {0}^{\dagger}c_{0}}}{1+e^{-\beta(\epsilon_{0}-\mu_{0})}}\times\frac{e^{-\beta \sum_{n<0}(\epsilon_{n}-\mu_{2})c_{n}^{\dagger}c_{n}}}{{\cal Z}^{(-)}(\beta, \mu_{2})},\] which reduces to the equilibrium grand canonical density matrix when the chemical potentials are equal, \(\mu_{1}=\mu_{0}=\mu_{2}\). For a more formal analysis of the problem using linear-response theory, the reader may consult, e.g., Refs. [1, 54] and references cited therein. It can be shown that the arguments given by Buttiker, [3] leading to the well-known expression for the current through a 1D conductor and the associated conductance for the present case, are equivalent to using the above simple model (although the reduced density matrix is not mentioned explicitly in Ref. [23]). Figure 1: Schematic representation of a ballistic 1D electronic conductor of length \(L_{0}\) placed between two reservoirs at chemical potentials \(\mu_{1}>\mu_{2}\). We also mention Ref. [55], which presents a non-equilibrium density matrix description of steady-state quantum transport. In App. A we give the correspondence between the above ansatz (2.4) and the results given in Ref. [55]. ### Disordered 1D conductor: generalities The actual system of interest in this paper, the'system proper' (disordered system), consists of independent electrons interacting with \(N_{scatt}\) scattering units, numbered \(j=1,\cdots,N_{scatt}\), sampled from some statistical distribution to be specified later, and occupying a length \(L\). The single-particle Hamiltonian thus consists of this single-particle potential plus the single-particle kinetic energy. When we speak of the full system, we mean the disordered system plus the leads. We solve this problem in two steps, as we now explain. #### 2.2.1 The single-particle scattering problem in the interval \(-\infty<x<+\infty\) We first solve the single-particle scattering problem in the interval \(-\infty<x<+\infty\) for the full system, consisting of the sample to which we have added perfect conductors on both sides, as illustrated in Fig. 2. We designate by \(\psi_{s,k}(x)\) the resulting eigenfunctions. They are shown in Table 1, where \(r(k)\), \(t(k)\), \(r^{\prime}(k)\), \(t^{\prime}(k)\) denote reflection and transmission amplitudes, and \(a(k)\), \(b(k)\), \(a^{\prime}(k)\), \(b^{\prime}(k)\) the appropriate amplitudes between two successive scattering units inside the disordered region; we are assuming that between individual scatterers there is a free potential region, with a'small' width, where the wave function can be written as it is indicated in the second column of Table 1: see Fig. 3 The wave functions \(\psi_{s,k}(x)\) form a _complete set of orthonormal states_ in the interval \(x\in(-\infty,\infty)\). Figure 3: Schematic representation of the scattering problem for \(\psi_{+,k}(x)\) of Table 1. Figure 2: Schematic representation of a 1D scattering system of length \(L\), with ‘clean’ regions extending to \(=-\infty\) on the left and to \(=+\infty\) on the right. #### 2.2.2 The single-particle scattering problem in the interval \(-L_{0}/2<x<L_{0}/2\) We restrict our full system to the interval \(x\in(-L_{0}/2,L_{0}/2)\), with \(L_{0}>L\). 
The functions \(\psi_{s,k}(x)\) of Table 1, restricted to that interval, i.e., \[\psi_{s,k}(x)\cdot\theta_{L_{0}}(x),\] (2.5a) where \[\theta_{L_{0}}(x) = \left\{\begin{array}{ll}1,&x\in(-L_{0}/2,L_{0}/2)\\ 0,&x\notin(-L_{0}/2,L_{0}/2)\end{array}\right., \tag{2.5b}\] satisfy Schrodinger's equation in that interval, but form an _over-complete_ set of states. If we now consider the _subset_ specified by the wavenumbers \(k=k_{n}=2\pi|n|/L_{0}\) of Eq. (2.1b), the functions \[\psi_{s_{n},k_{n}}^{L_{0}}(x)=\sqrt{\frac{2\pi}{L_{0}}}\psi_{s_{n},k_{n}}(x) \cdot\theta_{L_{0}}(x)=\left[\phi_{n}\left(x\right)+\psi_{s_{n},k_{n}}^{scatt} (x)\right]\cdot\theta_{L_{0}}(x)\] (2.6a) satisfy Schrodinger's equation in \[x\in(-L_{0}/2,L_{0}/2)\], i.e., \[H\psi_{s_{n},k_{n}}^{L_{0}}(x) = \epsilon_{n}\psi_{s_{n},k_{n}}^{L_{0}}(x), \tag{2.6b}\] and consist of a _complete set of orthonormal unperturbed states_\(\phi_{n}\left(x\right)\), Eq. (2.1a), plus scattered states \(\psi_{s_{n},k_{n}}^{scatt}(x)\). They will be designated as \(\psi_{n}^{L_{0}}(x)\equiv\psi_{s_{n},k_{n}}^{L_{0}}(x)\). We have analytical and numerical evidence that the \(\psi_{n}^{L_{0}}(x)\) form a complete set of _approximate_ orthonormal eigenstates of \(H\) if \(L_{0}\gg L\). The approximation is ever better, the larger is the ratio \(L_{0}/L\). The wave functions \(\psi_{n}^{L_{0}}(x)\) in the various regions have the structure shown in Table 2 (see also Fig. 4). The quantities \(a(k_{n}),b(k_{n})\), \(a^{\prime}(k_{n}),b^{\prime}(k_{n})\) at \(x\) denote the amplitudes inside the disordered region between two successive scattering units; we are assuming that between individual scatterers there is a free potential region, with a'small' width, where the wave function can be written as it is indicated in the second column of Table 2: see Fig. 5. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \(-\infty<x\leq 0\) & \(0\leq x\leq L\) & \(L\leq x<\infty\) \\ \hline \hline \(\psi_{+,k}(x)\) & \(\frac{1}{\sqrt{L_{0}}}\left(e^{ikx}+r(k)e^{-ikx}\right)\) & \(\frac{1}{\sqrt{L_{0}}}\left[a(k)e^{ikx}+b(k)e^{-ikx}\right]\) & \(\frac{1}{\sqrt{L_{0}}}\left(e^{-ikx}+r^{\prime}(k)e^{ikx}\right)\) \\ \hline \(\psi_{-,k}(x)\) & \(\frac{1}{\sqrt{2\pi}}t^{\prime}(k)e^{-ikx}\) & \(\frac{1}{\sqrt{2\pi}}\left[a^{\prime}(k)e^{ikx}+b^{\prime}(k)e^{-ikx}\right]\) & \(\frac{1}{\sqrt{2\pi}}\left[e^{-ikx}+r^{\prime}(k)e^{ikx}\right]\) \\ \hline \hline \end{tabular} \end{table} Table 1: The structure of the wave function \(\psi_{s,k}(x)\) in the three regions shown in Fig. 2, when incidence is from the left, \(s=+\) (2nd row), and when incidence is from the right, \(s=-\) (3rd row). The indicated wave function inside the sample is that occurring between two successive scattering units. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & \(-\infty<x\leq 0\) & \(0\leq x\leq L\) & \(L\leq x<\infty\) \\ \hline \hline \(\psi_{+,k}^{L_{0}}(x)\) & \(\frac{1}{\sqrt{L_{0}}}\left(e^{ik_{n}x}+r(k_{n})e^{-ik_{n}x}\right)\) & \(\frac{1}{\sqrt{L_{0}}}\left(a(k_{n})e^{ik_{n}x}+b(k_{n})e^{-ik_{n}x}\right)\) & \(\frac{1}{\sqrt{L_{0}}}\left(e^{ik_{n}x}+\frac{1}{\sqrt{L_{0}}}t(k_{n})e^{ik_{n} x}\right)\) \\ \hline \(\psi_{n<0}^{L_{0}}(x)\) & \(\frac{1}{\sqrt{L_{0}}}t^{\prime}(k_{n})e^{-ik_{n}x}\) & \(\frac{1}{\sqrt{L_{0}}}\left(a^{\prime}(k_{n})e^{ik_{n}x}+b^{\prime}(k_{n})e^{- ik_{n}x}\right)\) & \(\frac{1}{\sqrt{L_{0}}}\left(e^{-ik_{n}x}+r^{\prime}(k_{n})e^{ik_{n}x}\right)\) \\ \hline \hline \end{tabular} \end{table} Table 2: The structure of the wave function \(\psi_{n}^{L_{0}}(x)\) in the three regions shown in Fig. 4, when incidence is from the left, \(n>0\) (2nd row), and when incidence is from the right, \(n<0\) (3rd row). The wave function indicated inside the sample is that occurring between two successive scattering units. The various coefficients \([r(k_{n}),\,a(k_{n}),\) etc.] are the same as those in Table 1, evaluated at \(k=k_{n}\). We should remark that, although the unperturbed wave functions \(\phi_{n}\left(x\right)\), Eq. (2.1a), are periodic and orthonormal in \(\left(-L_{0}/2,L_{0}/2\right)\), the perturbed wave functions \(\psi_{n}^{L_{0}}(x)\) are not periodic; also, _they are not orthonormal, since they are not eigenstates with different eigenvalues of a Hermitean operator_. However, as we mentioned, we found, in various cases, that the resulting wave functions \(\psi_{n}^{L_{0}}(x)\) of Table 2 fulfill \[\left(\psi_{n}^{L_{0}},\psi_{n^{\prime}}^{L_{0}}\right)_{L_{0}} \approx \left(\phi_{n},\phi_{n^{\prime}}\right)_{L_{0}}=\delta_{nn^{ \prime}},\] (2.7a) where we have defined the scalar product \[\left(\chi^{\prime\;L_{0}},\chi^{L_{0}}\right)_{L_{0}}\equiv\int_{-L_{0}/2}^{L _{0}/2}\left[\chi^{\prime\;L_{0}}(x)\right]^{*}\chi^{L_{0}}(x)dx;\] (2.7b) the approximate orthonormality is ever better the larger is the ratio \[L_{0}/L\gg 1. \tag{2.7c}\] We have found wide analytical and numerical evidence to verify the approximation of Eq. (2.7). The result (2.7) is not surprising: from the statement of Eqs. (2.7) for _discrete_ momenta \(k_{n}\), giving orthonormality in terms of _Kronecker's delta_, one recovers, in the continuous limit, as \(L_{0}\rightarrow\infty\), orthonormality in the _Dirac \(\delta\)-function_ sense. The situation we are considering, illustrated in Fig. 4, simulates the effect of two reservoirs placed at the ends of the interval \(L_{0}\), which emit electrons, absorb but do Figure 4: The system described by the wave function of Table 2: simulated is the effect of two reservoirs, placed at the ends of the interval \(L_{0}\), which emit electrons and absorb, but do not reflect impinging electrons. We introduce creation and annihilation operators \(d_{n}^{\dagger}\), \(d_{n}\), which create or annihilate an electron in state \(\psi_{k_{n}}^{L_{0}}(x)\) of Eq. (2.6a); i.e., \[\psi_{n}^{L_{0}}(x)\Rightarrow d_{n}^{\dagger}|0\rangle\equiv|1_{n}\rangle, \hskip 14.226378pti.e.,\hskip 14.226378pt\left\{\begin{array}{l}\psi_{n>0}^{L_{0}}(x) \hskip 14.226378pt\Rightarrow\hskip 14.226378ptd_{n>0}^{\dagger}|0 \rangle=|1_{n>0}\rangle,\\ \psi_{n=0}^{L_{0}}\equiv 0,\\ \psi_{n<0}^{L_{0}}(x)\hskip 14.226378pt\Rightarrow\hskip 14.226378ptd_{n<0}^{ \dagger}|0\rangle=|1_{n<0}\rangle.\end{array}\right. \tag{2.8}\] The property of the states \(\psi_{n}^{L_{0}}(x)\), Eq. 
(2.7a), is inherited by the single-particle states written in second quantization, as \[(\psi_{n}^{L_{0}}(x),\psi_{n^{\prime}}^{L_{0}}(x))_{L_{0}}\approx\delta_{nn^{ \prime}}\hskip 14.226378pt\Rightarrow\hskip 14.226378pt\langle 0|d_{n}d_{n^{ \prime}}^{\dagger}|0\rangle=\delta_{nn^{\prime}}. \tag{2.9}\] As one further example, two-particle antisymmetric states are \[\frac{1}{\sqrt{2!}}\left|\begin{array}{l}\psi_{n}^{L_{0}}(x_{1})\hskip 14.226378pt \psi_{n}^{L_{0}}(x_{2})\\ \psi_{n^{\prime}}^{L_{0}}(x_{1})\hskip 14.226378pt\psi_{n^{\prime}}^{L_{0}}(x_{2}) \end{array}\right|\hskip 14.226378pt\Rightarrow\hskip 14.226378pt|1_{n}1_{n^{ \prime}}\rangle=d_{n}^{\dagger}d_{n^{\prime}}^{\dagger}|0\rangle, \tag{2.10}\] etc. The new reduced density matrix is obtained from Eqs. (2.4) replacing \(c_{n}\) by \(d_{n}\). The factor \(n=0\) is omitted, as it corresponds to \(\psi_{n=0}^{L_{0}}(x)\equiv 0\); i.e., \[\widehat{\rho}(\beta,\mu_{1},\mu_{2}) = \frac{e^{-\beta\sum_{n>0}(\epsilon_{n}-\mu_{1})d_{n}^{\dagger}d_ {n}}}{\mathcal{Z}^{(+)}(\beta,\mu_{1})}\times\frac{e^{-\beta\sum_{n<0}( \epsilon_{n}-\mu_{2})d_{n}^{\dagger}d_{n}}}{\mathcal{Z}^{(-)}(\beta,\mu_{2})} \tag{2.11a}\] \[\equiv \widehat{\rho}_{+}(\beta,\mu_{1})\;\widehat{\rho}_{-}(\beta,\mu_ {2}), \tag{2.11b}\] \[\mathcal{Z}^{(+)}(\beta,\mu_{1}) = \prod_{n>0}\left[1+e^{-\beta(\epsilon_{n}-\mu_{1})}\right], \tag{2.11c}\] \[\mathcal{Z}^{(-)}(\beta,\mu_{2}) = \prod_{n<0}\left[1+e^{-\beta(\epsilon_{n}-\mu_{2})}\right]. \tag{2.11d}\] The above reduced density matrix for the system, Eqs. (2.11), is suggested here as a simple model to describe a non-equilibrium, but stationary state. Just as we mentioned in relation with the density matrix of Eq. (2.4) for the ballistic case, in Ref. [55] a non-equilibrium density-matrix description of steady-state quantum transport is presented. In App. A we give the correspondence between the above ansatz, Eqs. (2.11), and the results of Ref. [55]. Figure 5: Schematic representation of the scattering problem for \(\psi_{n>0}^{L_{0}}(x)\) of Table 2. The current through a 1D disordered conductor and the associated conductance can be computed by using the above model for the reduced density matrix and one finds the well-known Landauer-Buttiker result [3]. Here we concentrate on the electron density, as it is the main purpose of this paper. As before, by 'total system' we mean the actual system (sample) contained in the interval from \(0\) to \(L\), plus the leads; thus the total system corresponds to the interval from \(-L_{0}/2\) to \(L_{0}/2\) (see Fig. 4). The electron-density operator at the point \(x\) for an \(N\)-electron system is given by \[\widehat{D}_{el}(x)\equiv\sum_{i=1}^{N}\delta(x-\hat{x}_{i}). \tag{2.12}\] Its matrix elements in the one-particle states \(\psi_{n}^{L_{0}}(x)\) of Eqs. (2.7) are \[D_{nn^{\prime}}^{(1)}(x) \equiv \left(\psi_{n}^{L_{0}}(x_{1}),\hat{D}_{1}(x)\psi_{n^{\prime}}^{L_ {0}}(x_{1})\right)=[\psi_{n}^{L_{0}}(x)]^{*}\ \psi_{n^{\prime}}^{L_{0}}(x). \tag{2.13}\] In a second quantization formalism, the electron-density operator at the point \(x\) takes the form \[\widehat{\mathbb{D}}_{el}(x) = \sum_{nn^{\prime}}D_{nn^{\prime}}^{(1)}(x)\ \widehat{d}_{n}^{\dagger}\hat{d}_{n^{\prime}}\;. \tag{2.14}\] Its expectation value in the state defined by the density matrix \(\hat{\rho}\) of Eqs. 
(2.11) is \[{\cal W}(x) \equiv {\rm Tr}[\hat{\rho}\ \widehat{\mathbb{D}}_{el}(x)] \tag{2.15a}\] \[= \sum_{nn^{\prime}}D_{nn^{\prime}}^{(1)}(x)\ {\rm Tr}(\hat{\rho}\ \widehat{d}_{n}^{\dagger}\hat{d}_{n^{\prime}}). \tag{2.15b}\] For the density matrix of Eq. (2.11) one finds \[{\rm Tr}(\hat{\rho}\ \widehat{d}_{n}^{\dagger}\hat{d}_{n^{\prime}})= \delta_{nn^{\prime}}{\rm Tr}(\hat{N}_{n}\hat{\rho}), \tag{2.16}\] \(\hat{N}_{n}=\widehat{d}_{n}^{\dagger}\hat{d}_{n}\) being the number operator for state \(n\). In _equilibrium_, we have \[{\rm Tr}(\hat{\rho}\hat{N}_{n}) = \langle\hat{N}_{n}\rangle_{\beta,\mu} \tag{2.17a}\] \[= f_{\mu,\beta}(\epsilon_{n})=\frac{1}{1+e^{\beta(\epsilon_{n}-\mu )}}, \tag{2.17b}\] \(f_{\mu,\beta}(\epsilon_{n})\) being the Fermi function. For the _non-equilibrium stationary state_ defined by the model density matrix of Eqs. (2.11), we have \[{\rm Tr}(\hat{\rho}\hat{N}_{n^{+}}) = f_{\mu_{1},\beta}(\epsilon_{n^{+}}) \tag{2.18a}\] \[{\rm Tr}(\hat{\rho}\hat{N}_{n^{-}}) = f_{\mu_{2},\beta}(\epsilon_{n^{-}}) \tag{2.18b}\] and Eqs. (2.13), (2.15), (2.16) and (2.18) give the general expression for the electron density in the interval \(-L_{0}/2<x<L_{0}/2\), i.e., in the leads and inside the system proper, as \[\mathcal{W}(x) = \sum_{n}D_{nn}^{(1)}\;\mathrm{Tr}(\hat{\rho}\;\hat{N}_{n}) \tag{2.19a}\] \[= \sum_{n>0}D_{nn}^{(1)}\;f_{\mu_{1},\beta}(\epsilon_{n})+\sum_{n<0} D_{nn}^{(1)}\;f_{\mu_{2},\beta}(\epsilon_{n})\] (2.19b) \[= \sum_{n>0}|\psi_{n}^{L_{0}}(x)|^{2}f_{\mu_{1},\beta}(\epsilon_{n} )+\sum_{n<0}|\psi_{n}^{L_{0}}(x)|^{2}f_{\mu_{2},\beta}(\epsilon_{n}) \tag{2.19c}\] In physical terms, this final result means that the density at \(x\) is the sum of the densities of left-going and right-going electrons at \(x\), both weighted by the probability of such an electron having been emitted from the appropriate reservoir. In the next section, we shall find explicit expressions for the above result in the various regions. Subsequently, these results will be averaged over an ensemble of configurations of disorder and compared with computer simulations. In the simulations, the disordered potential is a random function of position. We use the model employed in Ref. [52], also used in Ref. [56], which we now summarize. In the model we shall use, the scattering units consist of thin potential slices idealized as equidistant delta potentials -\(d\) being their separation-, labeled by the index \(j=1,\cdots,N_{scatt}\). Thus, there are a total of \(N_{scatt}\) such delta potentials in the sample of length \(L\). We shall always consider situations in which the wavelength involved is much larger than the separation between successive scatterers, i.e., \(\lambda\gg d\). The random potential, in units of \(\hbar^{2}/2m\), has the form \[u(x) = \sum_{j=1}^{N_{scatt}}u_{j}\delta(x-jd), \tag{2.20a}\] \[\mathrm{with}\hskip 14.226378ptL = N_{scatt}d. \tag{2.20b}\] The strength \(u_{j}\) of a given delta potential is taken to be statistically independent from, and identically distributed to, the strength of any other one; therefore, the potential strengths \(u_{j}\) (in units of \(k\)) are uniformly distributed in the interval \([-u_{0},u_{0}]\), \(u_{0}\) being the maximum strength. 
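For concreteness, the following minimal Python sketch (ours; the function names and parameter values are illustrative and are not taken from Refs. [52, 56]) generates one realization of the random potential of Eq. (2.20) and shows how the occupation weighting of Eq. (2.19c) would be applied to precomputed single-level intensities \(|\psi_{n}^{L_{0}}(x)|^{2}\).

```python
import numpy as np

def draw_disorder(n_scatt, d, u0, rng):
    # One realization of Eq. (2.20): delta scatterers at x_j = j*d, j = 1,...,N_scatt,
    # with strengths u_j independent and uniform in [-u0, u0].
    positions = d * np.arange(1, n_scatt + 1)
    strengths = rng.uniform(-u0, u0, n_scatt)
    return positions, strengths

def electron_density(intens_pos, intens_neg, eps, mu1, mu2, beta):
    # Eq. (2.19c): right-movers (n > 0) are weighted by f(eps; mu1, beta),
    # left-movers (n < 0) by f(eps; mu2, beta).  Finite beta is assumed here;
    # the T = 0 limit is taken analytically in the text.
    # intens_pos[i, j] = |psi_{n_i}^{L0}(x_j)|^2 for n_i > 0; intens_neg has the same shape.
    f1 = 1.0 / (1.0 + np.exp(beta * (eps - mu1)))
    f2 = 1.0 / (1.0 + np.exp(beta * (eps - mu2)))
    return f1 @ intens_pos + f2 @ intens_neg

rng = np.random.default_rng(1)
positions, strengths = draw_disorder(n_scatt=8000, d=1.0, u0=0.01, rng=rng)
```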
In the dense weak scattering limit, the average reflection coefficient for a single delta scatterer (indicated by the index 1) \[\langle R_{1}(k_{n})\rangle=\langle|r_{1}(k_{n})|^{2}\rangle=\left\langle \frac{\left(\frac{u_{1}}{2k_{n}}\right)^{2}}{1+\left(\frac{u_{1}}{2k_{n}} \right)^{2}}\right\rangle, \tag{2.21}\] plays a key role. In this limit, the potential strength of a delta scatterer is very weak and their linear density is very large, so that the resulting mean-free-path (mfp) is fixed (for each electron level, since it is level dependent, as explained below); the corresponding statistical properties of the full system depend only on the mfp and on no other property of the delta-potentials distribution [56]. Throughout the paper, the concept of mfp is defined as in Ref. [56]. In the present problem, it depends on the energy level \(n\), so it is designated as \(\ell_{n}\). At the momentum \(k_{n}\), when the average reflection coefficient for a single scatterer \(\langle R_{1}(k_{n})\rangle\ll 1\), it is defined through the relation \[\frac{1}{\ell_{n}} = \frac{\langle R_{1}(k_{n})\rangle}{d}. \tag{2.22}\] The ratio \(s_{n}\equiv L/\ell_{n}\), frequently employed in the paper, of the system length to the mfp is then given by \[s_{n}\equiv\frac{L}{\ell_{n}}=L\frac{\langle R_{1}(k_{n})\rangle}{d}. \tag{2.23}\] The ratio \(L/\ell_{n}\) decreases as we go up in the spectrum, since \(\langle R_{1}(k_{n})\rangle\) decreases; the system becomes ever more ballistic and thus delocalized. Definition (2.22) is applicable as long as \(\langle R_{1}(k_{n})\rangle\ll 1\). As the incident energy is decreased, \(\langle R_{1}(k_{n})\rangle\) increases and the mfp decreases. Definition (2.22) is not strictly applicable near the ground state, where \(\langle R_{1}(k_{n})\rangle\sim 1\); in that region, the quantity \(s_{n}\), defined as \[s_{n}\equiv N_{scatt}\langle R_{1}(k_{n})\rangle, \tag{2.24}\] will be taken as a useful parameter that measures the extent to which the wave function penetrates into the sample. The local-density-of-states (LDOS), being proportional to the intensity itself, is ever more depleted as we go down in energy. The specific theoretical model that our computer simulations will be compared with is designated as the DMPK model [1, 8, 53]. This is essentially a random-phase approximation for a fixed energy \(\epsilon_{n}\); the DMPK model is governed by a diffusion equation in the transfer-matrix space, which depends on the single parameter \(s_{n}=L/\ell_{n}\) that was defined in Eq. (2.23). For one dimensional systems, the DMPK equation reduces to Melnikov's [7]. Even though the DMPK model depends only on the microscopic details through the ratio \(L/\ell_{n}\), the DMPK predictions are valid when the parameter \(z_{n}\equiv k_{n}\ell_{n}\) satisfies the so called _weak disorder regime_[58], i.e., \[z_{n}\equiv k_{n}\ell_{n}\gg 1. \tag{2.25}\] For the potential model used in our computer simulations, Eq. (2.20), and the definition of the mfp, Eq. (2.22), the parameter \(z_{n}\) is written as \[z_{n}=\frac{k_{n}d}{\langle R_{1}\left(k_{n}\right)\rangle},\ \ \mbox{with}\ \ k_{n}d\ll 1,\ \ \forall n. \tag{2.26}\] The weak disorder condition, Eq. (2.25), is satisfied as long as \(\langle R_{1}(k_{n})\rangle\ll k_{n}d\ll 1\), i.e., for high-lying energy levels, with \(n\gg 1\). 
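Before turning to the low-energy end of the spectrum, here is a minimal numerical sketch (ours, with purely illustrative parameter values that are not meant to reproduce Table 3) of how \(\langle R_{1}(k_{n})\rangle\) and the derived quantities \(\ell_{n}\), \(s_{n}\) and \(z_{n}\) of Eqs. (2.21)-(2.26) can be estimated for the uniform strength distribution.

```python
import numpy as np

def level_parameters(k_n, u0, d, n_scatt, samples=200_000, seed=0):
    # Monte Carlo estimate of <R_1(k_n)>, Eq. (2.21), for u_1 uniform in [-u0, u0],
    # together with ell_n = d/<R_1>, s_n = L/ell_n = N_scatt*<R_1> and z_n = k_n*ell_n.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-u0, u0, samples) / (2.0 * k_n)
    R1 = np.mean(x**2 / (1.0 + x**2))
    ell_n = d / R1
    return R1, ell_n, n_scatt * R1, k_n * ell_n

# Illustrative numbers only.
R1, ell_n, s_n, z_n = level_parameters(k_n=0.05, u0=0.01, d=1.0, n_scatt=8000)
```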
As the incident energy is decreased, \(\langle R_{1}(k_{n})\rangle\approx 1\) while \(k_{n}d\ll 1\), so the weak disorder condition is not fulfilled for low-lying energy levels, with \(n\sim 1\) near the ground state. Equations (2.24) and (2.26) show that the average reflection coefficient for a single delta scatterer \(\langle R_{1}(k_{n})\rangle\), Eq. (2.21), plays a key role in characterizing our numerical simulations for both high-lying and low-lying energy levels, and also in comparing them with the DMPK predictions; therefore, the numerical results presented here will be specified by the parameters \(s_{n}\), \(z_{n}\) and \(\langle R_{1}(k_{n})\rangle\): see Table 3.

## 3 Explicit expressions for the electron density in the various regions and their expectation value

In this section we obtain more explicit expressions for the electron density in the various regions of the conductor, and compute their expectation value over an ensemble of configurations of disorder. We shall restrict the analysis to the particular case of _zero temperature_.

### The density outside the sample

From Eq. (2.19c) one finds, outside the sample, in the _left ballistic region_, \(x\in[-L_{0}/2,0]\), the general result \[\mathcal{W}(x\in[-L_{0}/2,0]) = \frac{1}{L_{0}}\sum_{n>0}^{n^{+}_{max}}\left|e^{ik_{n}x}+r(k_{n})e^{-ik_{n}x}\right|^{2}+\frac{1}{L_{0}}\sum_{n<0}^{n^{-}_{max}}\left|t^{\prime}(k_{n})e^{-ik_{n}x}\right|^{2}\] \[\approx \frac{1}{L_{0}}\sum_{n>0}^{\frac{N-\Delta N}{2}}2+\frac{1}{L_{0}}\sum_{n>0}^{\frac{N+\Delta N}{2}}\left(r(k_{n})e^{-2ik_{n}x}+cc\right)+\frac{\Delta N}{L_{0}}\left[1+R(\epsilon_{F})\right]. \tag{3.1}\] Here, \(n^{+}_{max}>0\) denotes the quantum number \(n\) associated with the highest level fed by the left reservoir with right-going electrons, and \(n^{-}_{max}<0\) is the quantum number \(n\) associated with the highest level fed by the right reservoir with left-going electrons. In the present model, at \(T=0\) the number of electrons traveling to the left and the number of electrons traveling to the right are fixed. We have defined \[n^{+}_{max}+|n^{-}_{max}|=N, \tag{3.2a}\] \[n^{+}_{max}-|n^{-}_{max}|=\Delta N, \tag{3.2b}\] \(N\) being the total number of electrons, so that \[n^{+}_{max}=\frac{N+\Delta N}{2}, \tag{3.3a}\] \[|n^{-}_{max}|=\frac{N-\Delta N}{2}. \tag{3.3b}\] We also recall that we have defined \(k_{n}\) as a positive number throughout the whole analysis. In the _right ballistic region_, \(x\in[L,L_{0}/2]\), \[\mathcal{W}(x\in[L,L_{0}/2]) = \frac{1}{L_{0}}\sum_{n<0}^{n^{-}_{max}}\left|e^{-ik_{n}x}+r^{\prime}(k_{n})e^{ik_{n}x}\right|^{2}+\frac{1}{L_{0}}\sum_{n>0}^{n^{+}_{max}}\left|t(k_{n})e^{ik_{n}x}\right|^{2} \tag{3.4a}\] \[\approx \frac{1}{L_{0}}\sum_{n>0}^{\frac{N-\Delta N}{2}}\left[2+\left(r^{\prime}(k_{n})e^{2ik_{n}x}+cc\right)\right]+\frac{\Delta N}{L_{0}}T(\epsilon_{F}). \tag{3.4b}\] Here and in the previous equations, \(R(\epsilon_{F})=|r(\epsilon_{F})|^{2}\) and \(T(\epsilon_{F})=|t(\epsilon_{F})|^{2}\) are the reflection and transmission coefficients, respectively.
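To make the step leading to the second line of Eq. (3.1) explicit, note that, from Table 2 and flux conservation \(R(\epsilon_{n})+T(\epsilon_{n})=1\), a single level contributes \[\left|e^{ik_{n}x}+r(k_{n})e^{-ik_{n}x}\right|^{2}=1+R(\epsilon_{n})+\left[r(k_{n})e^{-2ik_{n}x}+cc\right],\qquad\left|t^{\prime}(k_{n})e^{-ik_{n}x}\right|^{2}=T(\epsilon_{n}),\] so that each pair of counter-propagating states with \(|n|\leq(N-\Delta N)/2\) contributes \(2\) plus an oscillatory term, while each of the \(\Delta N\) uncompensated right-movers near the Fermi level contributes approximately \(1+R(\epsilon_{F})\) plus its oscillatory term.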
The average over an ensemble of disorder configurations gives the general results \[\left\langle\mathcal{W}(x\in[-L_{0}/2,0])\right\rangle = \mathcal{W}_{0}+\frac{\Delta N}{L_{0}}\left\langle R(\epsilon_{F}) \right\rangle+\frac{1}{L_{0}}\sum_{n>0}^{\frac{N+\Delta N}{2}}\left(\langle r( \epsilon_{n})\rangle e^{-2ik_{n}x}+cc\right)\] (3.5a) and \[\left\langle\mathcal{W}(x\in[L,L_{0}/2])\right\rangle = \mathcal{W}_{0}-\frac{\Delta N}{L_{0}}\langle R(\epsilon_{F}) \rangle+\frac{1}{L_{0}}\sum_{n>0}^{\frac{N-\Delta N}{2}}\left[\langle r^{ \prime}(k_{n})\rangle e^{2ik_{n}x}+cc\right]\,, \tag{3.5b}\] where we have written \(\mathcal{W}_{0}=N/L_{0}\) for the electron density in the absence of disorder. _In the DMPK approximation_[1, 53], the last terms in Eqs. (3.5a) and (3.5b) vanish and \[\left\langle\mathcal{W}(x\in[-L_{0}/2,0])\right\rangle_{DMPK} = \mathcal{W}_{0}+\frac{\Delta N}{L_{0}}\left\langle R(\epsilon_{F})\right\rangle \tag{3.6a}\] \[\left\langle\mathcal{W}(x\in[L,L_{0}/2])\right\rangle_{DMPK} = \mathcal{W}_{0}-\frac{\Delta N}{L_{0}}\left\langle R(\epsilon_{F} )\right\rangle. \tag{3.6b}\] ### The density inside the sample: \(x\in[0,l]\) Inside the system proper, Eq. (2.19c) gives the general result (see App. B) \[\mathcal{W}(0\leq x\leq L) = \frac{1}{L_{0}}\left\{\sum_{n>0}\Big{|}\alpha_{2}^{*}(\epsilon_{ n})\mathrm{e}^{ik_{n}x}-\beta_{2}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\Big{|}^{2}T( \epsilon_{n})f_{\mu_{1}\beta}(\epsilon_{n})\right. \tag{3.7}\] \[\left.+\sum_{n<0}\Big{|}\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n} x}+\alpha_{1}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\Big{|}^{2}T(\epsilon_{n})f_{ \mu_{2}\beta}(\epsilon_{n})\right\},\] which contains the contribution of the electrons that impinge on the system from the left with positive momentum \(k_{n}\), and from the right with negative momentum \(-k_{n}\). Here, \(T(\epsilon_{n})=|t(\epsilon_{n})|^{2}\) is the transmission coefficient. Just as above, we consider the _zero-temperature_ limit, \(T=0\), while the two chemical potentials will be taken to be different, the left one being higher, \(\mu_{1}>\mu_{2}\). The expectation value of the electron density of Eq. 
(3.7) over an ensemble of configurations of disorder is given by \[L_{0}\left\langle\mathcal{W}(x\in[0,L])\right\rangle = \sum_{n>0}^{n_{max}^{+}}\left\langle\left|\alpha_{2}^{*}(\epsilon_{n })\mathrm{e}^{ik_{n}x}-\beta_{2}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^ {2}\!T(\epsilon_{n})\right\rangle \tag{3.8a}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{n< 0}^{n_{max}^{-}}\left\langle\left|\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n}x} +\alpha_{1}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle\] \[= \sum_{n=0}^{|n_{max}^{-}|}\Bigg{[}\left\langle\left|\alpha_{2}^{* }(\epsilon_{n})\mathrm{e}^{ik_{n}x}-\beta_{2}^{*}(\epsilon_{n})\mathrm{e}^{-ik _{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+\left\langle\left|\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n}x}+\alpha_ {1}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle \Bigg{]}\] \[+\sum_{n=|n_{max}^{-}|}^{n_{max}^{+}}\left\langle\left|\alpha_{2 }^{*}(\epsilon_{n})\mathrm{e}^{ik_{n}x}-\beta_{2}^{*}(\epsilon_{n})\mathrm{e} ^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle \tag{3.8b}\] 1) E.g., in _equilibrium_, \(\mu_{1}=\mu_{2}\), \[L_{0}\left\langle\mathcal{W}(x\in[0,L])\right\rangle = \sum_{n=0}^{n_{max}^{+}=|n_{max}^{-}|=\frac{N}{2}}\Bigg{[}\left \langle\left|\alpha_{2}^{*}(\epsilon_{n})\mathrm{e}^{ik_{n}x}-\beta_{2}^{*}( \epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left\langle \left|\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n}x}+\alpha_{1}^{*}(\epsilon_{n}) \mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle\Bigg{]}\] We compute the above expectation value of the electron density over an ensemble of configurations of disorder in the _DMPK approximation_[1, 53], following the procedure of Ref. [20]. We find \[\left\langle\left|\alpha_{2}^{*}(\epsilon_{n})\mathrm{e}^{ik_{n}x }-\beta_{2}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n}) \right\rangle_{DMPK} =1-\int_{0}^{\infty}\int_{0}^{\infty}g(\lambda_{1},\lambda_{2})p_{ \,s_{n}^{(1)}}(\lambda_{1})p_{\,s_{n}^{(2)}}(\lambda_{2})d\lambda_{1}d \lambda_{2} \tag{3.10a}\] \[\left\langle\left|\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n}x}+ \alpha_{1}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n}) \right\rangle_{DMPK} =1+\int_{0}^{\infty}\int_{0}^{\infty}g(\lambda_{1},\lambda_{2})p_ {\,s_{n}^{(1)}}(\lambda_{1})p_{\,s_{n}^{(2)}}(\lambda_{2})d\lambda_{1}d \lambda_{2}\;, \tag{3.10b}\] where \[g(\lambda_{1},\lambda_{2})=\frac{\lambda_{1}-\lambda_{2}}{1+ \lambda_{1}+\lambda_{2}}\;, \tag{3.11a}\] \[s_{n}^{(1)}\equiv\frac{x}{\ell_{n}},\;\;\;\;\;\;s_{n}^{(2)}\equiv \frac{L-x}{\ell_{n}}. \tag{3.11b}\] Here, the index \(i=1,2\) refers to the fraction of the wire on the left and on the right of the observation point \(x\), respectively. The parameter \(\lambda_{i}\geq 0\) is one of the variables (in addition to two phases) defining a transfer matrix for one open channel, and \(p_{s^{(i)}}(\lambda_{i})\) is its statistical distribution given by DMPK for a specific value of \(s_{n}^{(i)}\) (for one open channel, the DMPK equation reduces to Melnikov's [7]). The transmission coefficient is given in terms of \(\lambda\) as \(T=1/(1+\lambda)\). To illustrate the meaning of the probability density \(p_{s}(\lambda)\), we give in App. 
C Melnikov's equation and the first and second moments of \(\lambda\) associated with such a distribution, where \(s=L/\ell\), \(\ell\) being the transport elastic mean free path, which is the only microscopic parameter in the DMPK formalism. In the sum (3.9), each term with \(n>0\) has its counterpart for \(n<0\); for one given \(\epsilon_{n}\) we then have, due to Eq. (3.10) \[\left\langle\left|\alpha_{2}^{*}(\epsilon_{n})\mathrm{e}^{ik_{n}x}-\beta_{2}^ {*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle _{DMPK}+\left\langle\left|\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n}x}+\alpha_{ 1}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle _{DMPK}=2 \tag{3.12}\] independent of \(x\), and so (\(N\)= total number of electrons inside \(L_{0}\)) \[\left\langle\mathcal{W}(x\in[0,L])\right\rangle_{DMPK} = \frac{1}{L_{0}}\frac{N}{2}2=\frac{N}{L_{0}}\equiv\mathcal{W}_{0}\;, \tag{3.13}\] within the DMPK model. Notice that we do not obtain the result reported in Ref. [20], because the contributions of electrons traveling in both directions compensate to give a constant value. 2) We go back to the case where _the two chemical potentials are not equal_, the left one being higher, \(\mu_{1}>\mu_{2}\). We do not have an equilibrium state, but a stationary state: from Eq. (3.8b) \[L_{0}\left\langle\mathcal{W}(x\in[0,L])\right\rangle_{DMPK}=2|n_{ max}^{-}|\] \[+\sum_{n=|n_{max}^{-}|}^{n_{max}^{+}}\left\langle\left|\alpha_{2} ^{*}(\epsilon_{n})\mathrm{e}^{ik_{n}x}-\beta_{2}^{*}(\epsilon_{n})\mathrm{e}^ {-ik_{n}x}\right|^{2}\!T(\epsilon_{n})\right\rangle\] We have used the fact that there are \(|n_{max}^{-}|\) terms in the first part of Eq. (3.8b), each giving a contribution of 2, which in turn arises from the DMPK approximation, as shown in Eq. (3.12). Recalling that \(N-\Delta N=2|n_{max}^{-}|\), we then have \[L_{0}\left\langle\mathcal{W}(x\in[0,L])\right\rangle_{DMPK} \approx (N-\Delta N)+\Delta N\left\langle\left|\alpha_{2}^{*}(\epsilon_{F })\mathrm{e}^{ik_{F}x}-\beta_{2}^{*}(\epsilon_{F})\mathrm{e}^{-ik_{F}x}\right|^ {2}T(\epsilon_{F})\right\rangle\] \[= N+\Delta N\left[\left\langle\left|\alpha_{2}^{*}(\epsilon_{F}) \mathrm{e}^{ik_{F}x}-\beta_{2}^{*}(\epsilon_{F})\mathrm{e}^{-ik_{F}x}\right|^ {2}T(\epsilon_{F})\right\rangle-1\right]\] \[= N-\Delta N\int_{0}^{\infty}\int_{0}^{\infty}g(\lambda_{1}, \lambda_{2})p_{s_{F}^{(1)}}(\lambda_{1})p_{s_{F}^{(2)}}(\lambda_{2})d\lambda_ {1}d\lambda_{2}\] \[\left\langle\mathcal{W}(x\in[0,L])\right\rangle_{DMPK} = \mathcal{W}_{0}-\frac{\Delta N}{L_{0}}\int_{0}^{\infty}\int_{0}^ {\infty}g(\lambda_{1},\lambda_{2})p_{s_{F}^{(1)}}(\lambda_{1})p_{s_{F}^{(2)}} (\lambda_{2})d\lambda_{1}d\lambda_{2}.\] Here, \(s_{F}^{(1)}=x/\ell_{F}\) and \(s_{F}^{(2)}=(L-x)/\ell_{F}\), \(\ell_{F}\) being the mean free path at the Fermi level. ## 4 The equilibrium density at zero temperature and its expectation value over disorder In the present section we consider the disordered system in _equilibrium_ at \(T=0\), i.e., with no chemical potential difference (\(\mu_{1}=\mu_{2}=\mu\)) between the two reservoirs. At \(T=0\), the total number of electrons is fixed and equal to \(N\). The Fermi levels of the electrons traveling to the left and to the right, and the corresponding number of electrons, \(n_{max}^{+}\) and \(|n_{max}^{-}|\), are equal, i.e., \[k_{F}=\frac{2\pi n_{max}^{+}}{L_{0}}=\frac{2\pi|n_{max}^{-}|}{L_{0}},\ \ \ n_{max}^{+}=|n_{max}^{-}|=\frac{N}{2}. 
\tag{4.1}\] In equilibrium, the electron density can be written as \[\frac{\mathcal{W}(x)}{\mathcal{W}_{0}}=\frac{2}{N}\sum_{n=1}^{N/2}w_{n}(x), \tag{4.2}\] where \(\mathcal{W}_{0}=N/L_{0}\) is the ballistic electronic density, and \[w_{n}(x)\equiv\frac{1}{2}\left[w_{n}^{LI}(x)+w_{n}^{RI}(x)\right], \tag{4.3}\] represents the _dimensionless density_ for the \(n\)-th level (here and in what follows, \(n\) is understood to be a _positive_ number), which is the sum of the \(n\)-th level contributions for left incidence, \[w_{n}^{LI}(x)\equiv L_{0}|\psi_{n}^{L_{0}}\left(x\right)|^{2}, \tag{4.4a}\] and right incidence, \[w_{n}^{RI}(x)\equiv L_{0}|\psi_{-n}^{L_{0}}\left(x\right)|^{2}. \tag{4.4b}\] From table 2, in the ballistic regions the dimensionless density of the \(n\)-th level, Eq. (4.3), is written as \[w_{n}(x)=1+Re\left[r\left(k_{n}\right)e^{-2ik_{n}x}\right], \qquad-\frac{L_{0}}{2}<x<0, \tag{4.5a}\] \[w_{n}(x)=1+Re\left[r^{\prime}\left(k_{n}\right)e^{2ik_{n}x} \right],\qquad L<x<\frac{L_{0}}{2}, \tag{4.5b}\] while inside the disordered system, Eq. (3.7), is given by \[w_{n}(x)=\frac{1}{2}\Biggl{[}\Bigl{|}\alpha_{2}^{*}(\epsilon_{n})\mathrm{e}^{ ik_{n}x}-\beta_{2}^{*}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\Bigr{|}^{2}T(\epsilon_{n})+ \Bigl{|}\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n}x}+\alpha_{1}^{*}(\epsilon_{ n})\mathrm{e}^{-ik_{n}x}\Bigr{|}^{2}T(\epsilon_{n})\Biggr{]},\ \ 0<x<L. \tag{4.5c}\] From Eqs. (4.2)-(4.5), the expectation value over an ensemble of disorder configurations of the equilibrium electron density, \(\langle\mathcal{W}(x)/\mathcal{W}_{0}\rangle\), can be found from the average of the density \(\langle w_{n}(x)\rangle\) for the individual \(N/2\) levels. The theoretical model that we shall compare with computer simulations is that provided by DMPK [1, 53]. ### Individual levels _Outside_ the system, \(-L_{0}/2<x<0\) and \(L<x<L_{0}/2\), the DMPK theoretical prediction for \(\langle w_{n}(x)\rangle\) is found by averaging Eqs. (4.5a) and (4.5b). Due to the random phase approximation considered in the DMPK model, this approach predicts a null value for the average of the reflection amplitudes of the disordered system, i.e., \[\left\langle r\left(k_{n}\right)\right\rangle_{DMPK}=\left\langle r^{\prime} \left(k_{n}\right)\right\rangle_{DMPK}=0,\ \ \ \forall n; \tag{4.6}\] therefore, outside the system \(\left\langle w_{n}(x)\right\rangle_{DMPK}=1\). _Inside_ the system, \(0<x<L\), the DMPK theoretical prediction for \(\langle w_{n}(x)\rangle\) is found from Eqs. (3.12) and (4.5c). The result is also \(\left\langle w_{n}(x)\right\rangle_{DMPK}=1\), i.e., insensitive to disorder: this is a property arising from the DMPK model. In summary, the DMPK prediction for the average density for any individual level is written as \[\left\langle w_{n}(x)\right\rangle_{DMPK}=\frac{1}{2}\left[\left\langle w_{n} ^{LI}(x)\right\rangle_{DMPK}+\left\langle w_{n}^{RI}(x)\right\rangle_{DMPK} \right]=1,\ \forall n\ \mathrm{and}\ \ \forall x. \tag{4.7}\] #### 4.1.1 Numerical results and the DMPK prediction In order to analyze the DMPK prediction given in Eq. (4.7), we carry out, for individual levels, numerical simulations to obtain the ensemble average of dimensionless density \(\langle w_{n}(x)\rangle\). The simulations are done from high-lying energy levels, \(n\gg 1\) to low-lying energy levels, \(n\sim 1\); the maximum energy level considered for all simulations is \(n_{max}^{+}=|n_{max}^{-}|=N/2=10000\), which corresponds to the Fermi level. 
For a given energy level \(n\), the simulation considers an ensemble of \(10^{6}\) realizations of disordered samples generated with the random potential model of Eq. (2.20); each disorder configuration consists of \(N_{scatt}=8000\) random delta scatterers. The numerical results found for \(\left\langle w_{n}(x)\right\rangle\) are characterized by the parameters \(s_{n}=L/\ell_{n}\) and \(z_{n}=k_{n}\ell_{n}\), which in turn depend on the average reflection coefficient of a single delta scatterer \(\left\langle R_{1}(k_{n})\right\rangle\): see Eqs. (2.23)-(2.21). The simulations are also analyzed by using the average reflection coefficient of the sample \(\left\langle R\left(k_{n}\right)\right\rangle\) and the average reflection amplitude of the sample \(\left\langle r\left(k_{n}\right)\right\rangle\). These quantities allow us to know whether the numerical results are in the localized regime (\(L/\ell_{n}\gg 1\) and \(\left\langle R\left(k_{n}\right)\right\rangle\simeq 1\)), or whether they satisfy the weak disorder condition (\(k_{n}\ell_{n}\gg 1\)) and the random phase approximation (\(\left\langle r\left(k_{n}\right)\right\rangle\simeq 0\)) of the DMPK model; therefore, in Table 3 we present the relevant details needed to analyze the numerical simulations for \(\left\langle w_{n}(x)\right\rangle\).

In Fig. 6, computer simulations are compared with Eq. (4.7). The comparison is done for a high-lying energy level, \(n=9000\gg 1\). In this case, the system is localized, the weak disorder condition is satisfied and the random phase assumption is a reasonable approximation: see Table 3. The main figure shows excellent agreement between simulations and the DMPK prediction inside the disordered region. The insets represent in more detail the results outside the sample, where the simulations show small oscillations around the DMPK result, Eq. (4.7); this effect is due to the small, but non-zero, value of \(\left\langle r\left(k_{n}\right)\right\rangle\), for left incidence, and \(\left\langle r^{\prime}\left(k_{n}\right)\right\rangle\), for right incidence, when the energy level \(n\gg 1\) is close to the Fermi level [57].

Figure 7 shows, for six different energy levels, the result of computer simulations for the expectation value of the density for left incidence \(\left\langle w_{n}^{LI}(x)\right\rangle\), right incidence \(\left\langle w_{n}^{RI}(x)\right\rangle\), and their sum: \(2\left\langle w_{n}(x)\right\rangle=\left\langle w_{n}^{LI}(x)\right\rangle+\left\langle w_{n}^{RI}(x)\right\rangle\). Panel 7a) shows, for the maximum energy level \(n_{max}^{+}=10000\), the numerical result for \(2\left\langle w_{n}(x)\right\rangle\). For this high-lying energy level, the system is localized, the weak disorder condition is satisfied, and the random phase assumption of the DMPK model, Eq. (4.6), is a suitable approximation: see Table 3. The comparison between the numerical result for \(2\left\langle w_{n}(x)\right\rangle\) and the DMPK prediction given in Eq. (4.7) shows excellent agreement. Panel 7b) shows, for the high-lying energy level \(n=7000\), the numerical results for \(2\left\langle w_{n}(x)\right\rangle\). The numerical results are in good agreement with the DMPK prediction given in Eq. (4.7). From Table 3, the system is localized, the weak disorder requirement is satisfied, and the random phase assumption of the DMPK model is a suitable approximation for this high-lying energy level: see Eq. (4.6).
In this panel, three observations are in order: first, the individual contributions to the electron density for left incidence, \(\left\langle w_{n}^{LI}(x)\right\rangle\), and for right incidence, \(\left\langle w_{n}^{RI}(x)\right\rangle\), vanish before reaching the other end, so the electrons penetrate less into the sample; second, the density appears to be constant, just as in panel 7a); finally, small fluctuations appear in the center of the sample, signaling that the weak disorder condition starts to break down.

Panels 7c) and 7d) show, respectively, the profiles of \(2\left\langle w_{n}(x)\right\rangle\) for the energy levels \(n=5500\) and \(n=5000\). For these '_intermediate_' energy levels, the individual contributions to the electron density, \(\left\langle w_{n}^{LI}(x)\right\rangle\) for left incidence and \(\left\langle w_{n}^{RI}(x)\right\rangle\) for right incidence, vanish near the center of the system; therefore, for both incidences, the electrons penetrate less into the sample, giving rise to a drop at the center of the system, where the fluctuations of \(2\left\langle w_{n}(x)\right\rangle\) become more relevant than in panel 7b). Due to this drop, the DMPK result of Eq. (4.7) is not satisfactory for describing the numerical results of \(2\left\langle w_{n}(x)\right\rangle\) for '_intermediate_' energy levels. This fact can be understood from Table 3, which shows that, for both '_intermediate_' energy levels, the system is localized; however, the weak disorder requirement and the random phase approximation are no longer satisfied.

The numerical results of panels 7e) and 7f) show the profiles of \(2\left\langle w_{n}(x)\right\rangle\) for two low-lying energy levels, \(n=1500\) and \(n=10\), respectively. For those low-lying energy levels, the individual contributions to the electron density, for left incidence \(\left\langle w_{n}^{LI}(x)\right\rangle\) and right incidence \(\left\langle w_{n}^{RI}(x)\right\rangle\), vanish near the borders of the disordered system. This means that, for both incidences, the electrons do not penetrate the sample, giving rise to a dramatic drop of the profiles \(2\left\langle w_{n}(x)\right\rangle\) inside the system; therefore, the DMPK result, Eq. (4.7), is not appropriate to describe \(2\left\langle w_{n}(x)\right\rangle\) for low-lying energy levels. The failure of the DMPK model can be understood from the details shown in Table 3: the weak disorder condition and the random phase approximation are not satisfied.

Figure 6: The quantity \(\left\langle w_{n}(x)\right\rangle\) from computer simulations, compared with the DMPK prediction, Eq. (4.7). The system is in equilibrium at \(T=0\), with the same chemical potential at both reservoirs. The insets represent in more detail the results outside the sample. The sample consists of \(10^{6}\) realizations. This figure repeats Fig. 7a below in more detail.

In panel 7e), the parameter \(z_{n}=k_{n}\ell_{n}=0.23\), while for panel 7f), \(z_{n}=k_{n}\ell_{n}=6\times 10^{-5}\), which is even farther from the weak disorder requirement. In the case of panel 7f), we have for one scatterer \(\left\langle R_{1}(k_{n})\right\rangle\approx 0.96\), while for the total sample, \(\left\langle R(k_{n})\right\rangle=1.000\) and \(\left\langle r(k_{n})\right\rangle=-0.995-i0.003\). This does not indicate that the system has become more localized, but rather that each \(\delta\) potential strength \(u_{j}\gg k_{n}\), Eq. (2.21), while the total sample has become more impenetrable.
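The essence of the simulations discussed in this subsection can be reproduced with a short transfer-matrix code. The sketch below is ours and purely illustrative: the function names and parameter values are not taken from the paper, and far fewer realizations are used than the \(10^{6}\) quoted above. It builds the transfer matrices of the delta scatterers of Eq. (2.20), obtains the reflection amplitude for left incidence and the transmission amplitude for right incidence of one realization, and accumulates \(w_{n}^{LI}(x)+w_{n}^{RI}(x)\) of Eqs. (4.4) at points between scatterers.

```python
import numpy as np

def delta_transfer(u, x0, k):
    # Transfer matrix of a delta potential of strength u at x = x0 (units hbar^2/2m = 1),
    # acting on the (e^{ikx}, e^{-ikx}) amplitude pair; its single-scatterer reflection
    # coefficient reproduces Eq. (2.21).
    g = u / (2j * k)
    return np.array([[1 + g, g * np.exp(-2j * k * x0)],
                     [-g * np.exp(2j * k * x0), 1 - g]])

def w_sum_profile(strengths, d, k, xs):
    # w_n^{LI}(x) + w_n^{RI}(x), Eqs. (4.4), for one realization, at points xs in (0, L)
    # chosen between scatterers (scatterer j sits at x = j*d, j = 1,...,N_scatt).
    mats = [delta_transfer(u, (j + 1) * d, k) for j, u in enumerate(strengths)]
    prefix = [np.eye(2, dtype=complex)]            # products of the scatterers to the left of x
    for m in mats:
        prefix.append(m @ prefix[-1])
    m_tot = prefix[-1]
    r = -m_tot[1, 0] / m_tot[1, 1]                 # left incidence:  (t, 0)^T = M (1, r)^T
    tp = 1.0 / m_tot[1, 1]                         # right incidence: (r', 1)^T = M (0, t')^T
    w = np.empty(len(xs))
    for i, x in enumerate(xs):
        j = int(x / d)                             # number of scatterers to the left of x
        a, b = prefix[j] @ np.array([1.0, r])      # interior amplitudes, left incidence
        ap, bp = prefix[j] @ np.array([0.0, tp])   # interior amplitudes, right incidence
        ep, em = np.exp(1j * k * x), np.exp(-1j * k * x)
        w[i] = abs(a * ep + b * em) ** 2 + abs(ap * ep + bp * em) ** 2
    return w

# Illustrative parameters only; they are not meant to reproduce Table 3.
rng = np.random.default_rng(0)
n_scatt, d, u0, n_level, ratio = 2000, 1.0, 0.01, 500, 25   # ratio = L0/L (assumed)
k_n = 2 * np.pi * n_level / (ratio * n_scatt * d)           # k_n = 2*pi*n/L0, Eq. (2.1b)
xs = (np.arange(n_scatt) + 0.5) * d
avg = np.zeros(len(xs))
n_real = 100                                                # the paper uses 10^6 realizations
for _ in range(n_real):
    avg += w_sum_profile(rng.uniform(-u0, u0, n_scatt), d, k_n, xs)
avg /= n_real                                               # estimate of 2<w_n(x)>, cf. Fig. 7
```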
Figure 7: Results of computer simulations for the electron density inside and outside a disordered 1D conductor for left incidence, right incidence and their sum (\(\equiv 2w_{n}(x)\)), for six individual energy levels identified by their \(n>0\) value. The value of the level-dependent \(s_{n}=L/\ell_{n}\) is indicated in each panel. The sum for both incidences is \(x\)-independent inside the sample for a rather wide range of \(n\)s. For smaller \(n\)s, that sum, inside the sample, tends to be more and more concentrated near the edges \(x=0,L\). The insets in panels a), b), e), and f) show in detail the behavior of the electron density in a small region outside the sample.

#### 4.1.2 Transmission Spectrum

The numerical results shown in Figs. 6 and 7 for \(\left\langle w_{n}\left(x\right)\right\rangle\), and for its left \(\left\langle w_{n}^{LI}\left(x\right)\right\rangle\) and right \(\left\langle w_{n}^{RI}\left(x\right)\right\rangle\) contributions, show that, as we go down in energy, the wave function penetrates ever less inside the disordered sample; therefore, the electron is reflected back and the transmission gradually decreases. This fact is illustrated in Fig. 8, where we present the transmission spectra for two different disorder configurations and the ensemble average of the transmission spectra.

Panels 8a) and 8b) show the transmission spectra for two different disorder configurations, which differ drastically from each other; as expected, each disorder realization has its own resonances at different energy levels. In both cases, the most important resonances are found for high-lying levels \(n\gg 1\). In contrast, no resonances are seen for low-lying levels \(n\sim 1\); these are probably exponentially narrow, so they cannot be excited at low energy levels. The insets of Figs. 8a) and 8b) show a zoom-in on small resonances for _intermediate_ energy levels. Those transmission resonances are low in absolute terms, but they are relatively high compared to the transmission of their neighbors; a Lorentzian distribution was used to fit those resonances, showing excellent agreement. The numerical evidence of panels 8a) and 8b) means that, for a given disorder configuration, the transmission coefficient \(T\left(\epsilon_{n}\right)\) is only relevant for high-lying levels, while intermediate levels show very small resonances and the resonances of low-lying levels are not excited. Panel 8c) shows the ensemble average of the transmission spectra \(\left\langle T\left(\epsilon_{n}\right)\right\rangle\); the behavior is in good agreement with the results of Figs. 6 and 7, i.e., on average, the transmission decreases as we go down the energy spectrum. Finally, panel 8d) shows \(\log_{10}\left\langle T\left(\epsilon_{n}\right)\right\rangle\), which emphasizes the drop of the average transmission as the energy decreases.

#### 4.1.3 The Local Density of States and the Dwell Time

We now offer a complementary interpretation of the results of Sec. 4.1.1. The discussion is based on the _local density of states_ (LDOS) \(\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\) and the _dwell time_ \(\tau_{{}_{D}}\left(\epsilon_{n}\right)\): see App. D. The LDOS is the sum over left and right incidences of particle densities with unitary flux; therefore, for the \(n\)-th level, the averages \(\left\langle w_{n}\left(x\right)\right\rangle\), Eqs.
(4.3)-(4.4), and \(\left\langle\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\right\rangle\), Eqs. (D1)-(D2), are related in the following way: \[2\pi\hbar v_{n}\left\langle\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\right\rangle=2\left\langle w_{n}\left(x\right)\right\rangle=\left\langle w_{n}^{LI}\left(x\right)\right\rangle+\left\langle w_{n}^{RI}\left(x\right)\right\rangle; \tag{4.8}\] here, \(v_{n}\equiv\hbar k_{n}/m\) denotes the unitary flux for the \(n\)-th energy level, which coincides with the group velocity defined in Eq. (2.2c).

Figure 8: Panels a) and b) show transmission spectra \(T\left(\epsilon_{n}\right)\) _vs_ \(n\) for two different disorder realizations; insets show Lorentzian-distribution fits for resonances of intermediate energy levels. c) Average transmission spectrum \(\left\langle T\left(\epsilon_{n}\right)\right\rangle\). d) Logarithm of the average transmission spectrum, \(\log_{10}\left\langle T\left(\epsilon_{n}\right)\right\rangle\).

Equation (4.8) relates the average LDOS \(\left\langle\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\right\rangle\) to the results found in Sec. 4.1.1 for \(\left\langle w_{n}\left(x\right)\right\rangle\); we focus the analysis on the region inside the sample, i.e., \(0\leq x\leq L\). The numerical results shown in Figs. 6, 7a) and 7b), corresponding to high-lying energy levels, satisfy the DMPK prediction of Eq. (4.7). That is, \[\left\langle w_{n}\left(x\right)\right\rangle=\left\langle\mathcal{U}\left(\epsilon_{n},x\right)\right\rangle\frac{v_{n}}{2}=1,\ \ n\gg 1, \tag{4.9}\] where we have defined the following quantity, \[\left\langle\mathcal{U}\left(\epsilon_{n},x\right)\right\rangle\equiv 2\pi\hbar\left\langle\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\right\rangle. \tag{4.10}\] The result given in Eq. (4.9) for high-lying energy levels agrees with the experimental results of a recent microwave measurement of the energy density inside lossless 1D random media studied by Huang _et al._ [43]; the experimental setup was a single-mode random waveguide of copper with a cutoff frequency of 6.56 GHz. The experimental results were presented for the electromagnetic version of Eq. (4.10), i.e., \[\left\langle\mathcal{U}^{\mathit{Elec}}\left(\omega,x\right)\right\rangle\equiv 2\pi\left\langle\boldsymbol{\rho}_{{}_{LDOS}}^{\mathit{Elec}}\left(\omega,x\right)\right\rangle, \tag{4.11}\] which satisfies Eq. (4.9): see Fig. 2 of Ref. [43]. The measurements were made in the frequency range \(10.00-10.70\) GHz, with disordered samples of length \(L=86\) cm and a ratio \(L/\ell=3.51\), \(L\) being the system length and \(\ell\) the mean free path; therefore, Huang's experiment satisfies the weak disorder condition \(k\ell\simeq 51.3\gg 1\), as do the results shown in Figs. 6, 7a) and 7b), where \(k_{n}\ell_{n}=55.04\), \(61.6\) and \(21.7\), respectively: see Table 3.

The numerical results of Fig. 7 show that, as we go down the spectrum, the average dimensionless density \(2\left\langle w_{n}\left(x\right)\right\rangle\) drops at the center of the sample. From Eq. (4.8), this means that the average LDOS \(\left\langle\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\right\rangle\) is depleted in the interior of the system as the energy level decreases. In order to understand this, we present in Fig.
9, for a given disorder realization, some relevant profiles of the dimensionless densities for left, \(w_{n}^{LI}\left(x\right)\), and right, \(w_{n}^{RI}\left(x\right)\), incidence; the profiles correspond to the transmission spectrum of panel 8b). Panel 9a) is divided into two plots: the upper panel for \(w_{n}^{LI}\left(x\right)\) and the lower panel for \(w_{n}^{RI}\left(x\right)\). These profiles correspond to a high-lying energy level, where the highest resonance of the spectrum of panel 8b) takes place. We observe that, for both incidences, the electron density profile is extended inside the sample; therefore, the wave function penetrates the sample, giving rise to a relevant transmission coefficient \(T\left(\epsilon_{n}\right)\sim 1\). Panel 9b) is also divided into two plots: the upper panel for \(w_{n}^{LI}\left(x\right)\) and the lower panel for \(w_{n}^{RI}\left(x\right)\); these profiles correspond to an intermediate energy level with a low resonance [the one shown in the inset of Fig. 8b)]. In this case, both incidences, \(w_{n}^{LI}\left(x\right)\) and \(w_{n}^{RI}\left(x\right)\), show that the wave function is not extended inside the system, in accordance with a low transmission coefficient \(T\left(\epsilon_{n}\right)\ll 1\); however, the electron penetrates a short distance into the sample, and then it appears to be localized in a very narrow region inside the sample. In panels 9c) and 9d), the behavior of \(w_{n}^{LI}\left(x\right)\) and \(w_{n}^{RI}\left(x\right)\) is quite striking. These profiles of low-lying levels show a dramatic drop near the borders of the sample, so the wave function does not penetrate the disordered region and the transmission coefficient is negligible, \(T\left(\epsilon_{n}\right)\to 0\).

Roughly speaking, the results of Fig. 9 show that, for a given disorder configuration, the electron propagates through the sample '_spending_' time according to the energy level \(n\) of the incident wave function. This qualitative interpretation can be analyzed by using the concept of '_dwell time_'. The dwell time is a measure of the time spent by an electron in the disordered region \(0<x<L\), regardless of whether it is ultimately transmitted or reflected [46]. From App. D, the averages of the dwell time, the LDOS and the dimensionless density are related in the following way \[\left\langle\tau_{{}_{D}}\left(\epsilon_{n}\right)\right\rangle\equiv 2\pi\hbar\frac{1}{v_{n}}\int_{0}^{L}\left\langle\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\right\rangle dx=\frac{1}{v_{n}}\int_{0}^{L}2\left\langle w_{n}\left(x\right)\right\rangle dx. \tag{4.12}\] Figure 10 shows the numerical results for the average dwell time \(\left\langle\tau_{{}_{D}}\left(\epsilon_{n}\right)\right\rangle/\tau_{{}_{0}}\) in units of the characteristic time \[\tau_{{}_{0}}=\frac{L}{v_{0}}; \tag{4.13}\] here \(L\) is the system length, while \[v_{0}=\frac{\hbar u_{0}}{m}, \tag{4.14}\] is a characteristic velocity defined in terms of Planck's constant \(\hbar\), the electron mass \(m\) and the maximum value of the delta potential strength \(u_{0}\): see Eq. (2.20). We notice that, on average, the dwell time goes to zero in the strong scattering regime, i.e., for low-lying energy levels \(n\sim 1\), where the strength of the scatterers is much larger than the energy; in this regime, the wave function does not penetrate the sample, the electron is reflected back, the LDOS is depleted and the dwell time is very short.
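Equation (4.12) reduces the average dwell time to a quadrature of the density profile. As a small illustration (the helpers below are ours; the array `two_w` is assumed to be an estimate of \(2\langle w_{n}(x)\rangle\) inside the sample, obtained, e.g., with a transfer-matrix simulation like the sketch shown earlier), one may evaluate it as follows.

```python
import numpy as np

def dwell_time(xs, two_w, k_n, hbar=1.0, m=1.0):
    # <tau_D(eps_n)>, Eq. (4.12): integrate 2<w_n(x)> over the sample and divide by v_n.
    # xs are points inside [0, L]; two_w[i] is an estimate of 2<w_n(xs[i])>.
    v_n = hbar * k_n / m
    integral = np.sum(0.5 * (two_w[1:] + two_w[:-1]) * np.diff(xs))  # trapezoidal rule
    return integral / v_n

def tau_0(L, u0, hbar=1.0, m=1.0):
    # Characteristic time of Eqs. (4.13)-(4.14): tau_0 = L / v_0, with v_0 = hbar*u0/m.
    return L * m / (hbar * u0)

# dwell_time(xs, avg, k_n) / tau_0(L, u0) gives the ordinate of Fig. 10, up to statistics.
```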
In the weak disorder regime, for high-lying energy levels \(n\gg 1\), we observe a smooth decay that does not reach zero; in this case, \(\left\langle\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)\right\rangle\) satisfies Eq. (4.9), the wave function penetrates the system and the electron propagates through the sample with a short dwell time. The most striking behavior shown in Fig. 10 is for intermediate levels: as the whole sample becomes more and more impenetrable, the transmission is negligible, the average dimensionless density \(\left\langle w_{n}\left(x\right)\right\rangle\) drops, and the average dwell time is long for intermediate energy levels but decreases for both low-lying and high-lying levels. The behavior shown for \(\left\langle\tau_{{}_{D}}\left(\epsilon_{n}\right)\right\rangle\) in Fig. 10 is similar, at least qualitatively, to the results for the dwell time in potential barriers [45, 46].

Figure 9: This figure shows, for a particular disorder realization, the dimensionless electron density profile inside the disordered sample for left, \(w_{n}^{LI}\left(x\right)\), and right, \(w_{n}^{RI}\left(x\right)\), incidence for four different energy levels. Panels a) and b) correspond to resonant levels, and are divided into two plots, one on top of the other, to better visualize the behavior of each incidence. Panels c) and d) correspond to non-resonant levels.

Figure 10: Average dwell time as a function of the energy level. The characteristic time \(\tau_{0}=mL/\hbar u_{0}\), where \(u_{0}\) is the maximum value of the delta potential strength.

#### 4.1.4 Electromagnetic analogue

The DMPK prediction, Eq. (4.7), and our numerical simulations for high-lying energy levels shown in Sec. 4.1.1 agree with recent experimental results of microwave measurements in a single-mode random waveguide [43]. This is because the propagation of electromagnetic waves through a waveguide is the optical analogue of electron conduction through a wire [58]. The analogy is direct if we simplify the vector nature of the electromagnetic waves to a scalar description. This is done by considering a two-dimensional waveguide of width \(W\) with a transverse electric wave; in this case, the electric field \(\overrightarrow{\mathcal{E}}\left(x,y\right)=E\left(x,y\right)e^{-i\omega t}\widehat{\mathbf{z}}\) (frequency \(\omega\) and wavenumber \(k=\omega/c\)) is polarized perpendicularly to the propagation direction \(x\). The scalar complex field \(E\left(x,y\right)\) satisfies the scalar Helmholtz equation \[\left[\nabla^{2}+k^{2}\varepsilon\left(x,y\right)\right]E\left(x,y\right)=0, \tag{4.15}\] with boundary conditions \(E=0\). The relative dielectric constant \(\varepsilon\left(x,y\right)=1+\delta\varepsilon\left(x,y\right)\) fluctuates due to disorder in the waveguide. If the waveguide supports only one propagating mode, then this electromagnetic analogue can be described as a one-dimensional problem, similar to the 1D stationary Schrödinger equation. Although it is possible to extend some predictions for electron, or quantum, waves to classical waves, it is important to notice that there are relevant differences. For instance, in a single-mode electromagnetic waveguide, as we go down in frequency, the propagating mode closes, so there is no wave propagation; in addition, the electron scattering increases as the energy decreases, whereas the scattering of light falls at low frequencies.
Due to these facts, it is difficult to compare our numerical results with microwave experiments in a single-mode waveguide.

#### 4.1.5 Summary

The results presented above allow us to conclude that, as we go down in energy, the wave function penetrates ever less inside the sample. The interpretation of \(s_{n}\) in terms of a mfp due to localization from disorder gradually gives way to an interpretation as a parameter that measures the extent to which the wave function is reflected back, because each scatterer is 'seen' by the electron as a higher and higher -and hence impenetrable- potential barrier.

The following comments are in order. The results in panel a) of Fig. 7 are of the same nature as those of Ref. [20] and Ref. [43]. As we go down the spectrum with \(N_{scatt}\) _fixed_ (\(L\) and \(d\) _fixed_), i.e., to panel b) and all the way to f), we have the behavior described above. In contrast, in Ref. [20], \(s_{n}\) was increased keeping \(\ell_{n}\) _fixed_ and increasing \(N_{scatt}\) (with \(d\) _fixed_, so that \(L\) _increases_). The result is a legitimate increase in localization. I.e., starting from a density profile of a similar nature to that of panel a) of Fig. 7 of the present paper, the result of increasing \(s_{n}\) was that the density profile for left incidence approached 2 in the left half of the sample, and 0 in the right half. The behavior of the LDOS is just the same. The LDOS is sensitive to these two different procedures, while \(s_{n}\) itself is not.

We mention that the numerical results of Fig. 7 show that, for a rather wide range of \(n\gg 1\), the sum of both incidences is \(x\)-independent, as predicted by Eq. (4.7). Moreover, for levels near the Fermi level, the numerical results for left \(\left\langle w_{n}^{LI}(x)\right\rangle\) and right \(\left\langle w_{n}^{RI}(x)\right\rangle\) incidence are in good agreement with the DMPK prediction found in Ref. [20]. As the energy level decreases, the numerical results exhibit quite large statistical fluctuations at the center of the system, which is an effect of the finite size of the statistical sample (\(10^{6}\) realizations): see panels c) and d). Fluctuations in Fig. 7 appear as we go down in energy level and, thus, as the transmission decreases. A similar behavior occurs in multichannel systems: low-transmission eigenchannels present higher fluctuations, compared to high-transmission eigenchannels [51], and are strongly correlated over a wide energy range [48]; also, as a consequence of spectral correlations, long-range correlations are expected to be observed in long samples [15].

The results of Sec. 4.1.1 show that the wave function penetrates ever less inside the sample. This is reflected in the fact that the LDOS is depleted in the interior of the system, since the wave function is ever smaller inside. This result was related to the dwell time, which allows us to interpret the depletion of the LDOS.

### All levels

We now analyze the expectation value of the _total density_, \(\left\langle\mathcal{W}\left(x\right)/\mathcal{W}_{0}\right\rangle\). From Eqs. (4.2) and (4.7), the theoretical DMPK prediction is \[\left\langle\frac{\mathcal{W}(x)}{\mathcal{W}_{0}}\right\rangle_{DMPK}=1,\ \ \ \forall x, \tag{4.16}\] which is, once again, insensitive to disorder. Fig. 11 compares the result of Eq. (4.16) with computer simulations. Inside the disordered region, DMPK gives an unsuitable description.
The discrepancy is due to the low-lying levels, which behave very differently from the high-lying ones, as explained in the text in relation to Fig. 7. Fig. 11 also shows statistical fluctuations, due to the finite size of the statistical sample (\(10^{5}\) realizations). Outside the disordered system, the theoretical prediction of the DMPK model is in good agreement with the numerical results, except in the vicinity of the left reservoir, \(x\simeq-L_{0}/2\), where the numerical result shows a minimum, while DMPK does not. The asymmetry in the numerical result with respect to the right reservoir, which does not show a minimum, is due to the off-center position of the sample, which lies in the region \(x\in[0,L]\). Should the sample be centered, i.e., in \(x\in[-L/2,L/2]\), the numerical result would be symmetric with respect to the two reservoirs.

Figure 11: Numerical results for \(\left\langle\frac{\mathcal{W}(x)}{\mathcal{W}_{0}}\right\rangle\) _vs_ \(x/L\) for a disordered system of length \(L\) in _equilibrium_ at \(T=0\), with the same chemical potential at both reservoirs. The total system, consisting of the sample plus the ballistic regions, has a length \(L_{0}/L=125\). We are considering \(N=20000\) states, i.e., \(10,000\) levels, each level consisting of two states, one traveling to the left and one to the right. \(N\) is thus the total number of electrons. The quantity \(s_{F}=L/\ell_{F}=7.3\) for the Fermi level, and increases as the level goes down toward the ground state. The simulation consists of \(10^{5}\) realizations of disordered systems, each one with \(8000\) scatterers. To reduce the computing time, the simulation generates every disordered system by combining \(100\) building blocks, each containing \(80\) individual scatterers.

## 5 The expectation value of the logarithm of the density in equilibrium at zero temperature

### Individual levels

#### 5.1.1 Outside the sample

Consider first \(x<0\). For one level \(n\), with left and right incidence, Eq. (4.5a) gives \[\langle\ln w_{n}(x<0)\rangle = \left\langle\ln\left\{1+\frac{1}{2}\left[r(k_{n})e^{-2ik_{n}x}+r^{*}(k_{n})e^{2ik_{n}x}\right]\right\}\right\rangle \tag{5.1a}\] \[= \left\langle\ln\left[1-|r(k_{n})|\cos 2(\nu\left(k_{n}\right)-k_{n}x)\right]\right\rangle \tag{5.1b}\] \[= \int_{0}^{\infty}d\lambda\int_{0}^{2\pi}d\nu\;p_{s_{n}}(\lambda,\nu)\ln\left[1-\sqrt{\frac{\lambda}{1+\lambda}}\cos 2(\nu-k_{n}x)\right], \tag{5.1c}\] where we have used the polar representation of the reflection amplitude, \(r=\sqrt{\lambda/(1+\lambda)}e^{2i\nu}\) [1], and the energy dependence has been omitted. Let \[d\lambda\;d\nu\;p_{s_{n}}(\lambda,\nu) = d\lambda\;p_{s_{n}}(\lambda)\;\frac{d\nu}{2\pi}. \tag{5.2}\] Then \[\langle\ln w_{n}(x<0)\rangle = \int_{0}^{\infty}d\lambda\;p_{s_{n}}(\lambda)I_{n}(\lambda), \tag{5.3}\] where \[I_{n}(\lambda) = \int_{0}^{2\pi}\frac{d\nu}{2\pi}\ln\left[1-\sqrt{\frac{\lambda}{1+\lambda}}\cos 2(\nu-k_{n}x)\right] \tag{5.4a}\] \[= \frac{1}{\pi}\int_{0}^{\pi}\ln\left[1-\sqrt{\frac{\lambda}{1+\lambda}}\cos\phi\right]d\phi \tag{5.4b}\] \[= \ln\frac{\sqrt{1+\lambda}+1}{2\sqrt{1+\lambda}}. \tag{5.4c}\] Consider now \(x>L\). For one level \(n\), with left and right incidence, Eq.
(4.5b) gives \[\langle\ln w_{n}(x>L)\rangle = \left\langle\ln\left\{1+\frac{1}{2}\left[r^{\prime}(k_{n})e^{2ik _{n}x}+(r^{\prime}(k_{n}))^{*}e^{-2ik_{n}x}\right]\right\}\right\rangle \tag{5.5a}\] \[= \left\langle\ln\left[1+|r^{\prime}(k_{n})|\cos 2(\mu\left(k_{n} \right)+k_{n}x)\right]\right\rangle\] (5.5b) \[= \int_{0}^{\infty}d\lambda\int_{0}^{2\pi}d\mu\;p_{s_{n}}(\lambda, \mu)\ln\left[1+\sqrt{\frac{\lambda}{1+\lambda}}\cos 2(\mu+k_{n}x)\right] \tag{5.5c}\] Let \[d\lambda\;d\mu\;p_{s_{n}}(\lambda,\mu) = d\lambda\;p_{s_{n}}(\lambda)\;\frac{d\mu}{2\pi} \tag{5.6}\] Then \[\langle\ln w_{n}(x>0)\rangle = \int_{0}^{\infty}d\lambda\;p_{s_{n}}(\lambda)J_{n}(\lambda) \tag{5.7}\] where \[J_{n}(\lambda) = \int_{0}^{2\pi}\frac{d\mu}{2\pi}\ln\left[1+\sqrt{\frac{\lambda}{1+ \lambda}}\cos 2(\mu+k_{n}x)\right] \tag{5.8a}\] \[= \frac{1}{\pi}\int_{0}^{\pi}\ln\left[1+\sqrt{\frac{\lambda}{1+ \lambda}}\cos\phi\right]d\phi\] (5.8b) \[= \ln\frac{\sqrt{1+\lambda}+1}{2\sqrt{1+\lambda}} \tag{5.8c}\] #### 5.1.2 Inside the sample: \(x\in[0,L]\) From Eq. (4.5c) \[\Big{\langle}\ln w_{n}(x\in[0,L])\Big{\rangle} = \left\langle\ln\frac{1}{2}\Big{[}F_{x}\left(M_{2}\left(\epsilon_{ n}\right)\right)+G_{x}\left(M_{1}\left(\epsilon_{n}\right)\right)\Big{]} \right\rangle+\langle\ln T\left(\epsilon_{n}\right)\rangle, \tag{5.9}\] where \(\langle\ln T\left(\epsilon_{n}\right)\rangle=-L/\ell_{n}\) (see Ref. [1]). In Eq. (5.9), we have defined the following expressions \[F_{x}\left(M_{2}\left(\epsilon_{n}\right)\right) \equiv \Big{|}\alpha_{2}^{\ast}(\epsilon_{n})\mathrm{e}^{ik_{n}x}-\beta _{2}^{\ast}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\Big{|}^{2}, \tag{5.10a}\] \[G_{x}\left(M_{1}\left(\epsilon_{n}\right)\right) \equiv \Big{|}\beta_{1}(\epsilon_{n})\mathrm{e}^{ik_{n}x}+\alpha_{1}^{ \ast}(\epsilon_{n})\mathrm{e}^{-ik_{n}x}\Big{|}^{2}, \tag{5.10b}\] where the disorder sample has been divided in two subsamples, one to the left of the point \(x\), and the other to the right; each subsample has its own transfer matrix: \(M_{1}\) for the left subsample, and \(M_{2}\) for the right subsample. See App.B. So, using the polar representation for \(M_{1}\) and \(M_{2}\), we have \[\ln\frac{1}{2}\Big{[}F_{x}\left(M_{2}\left(\epsilon_{n}\right) \right)+G_{x}\left(M_{1}\left(\epsilon_{n}\right)\right)\Big{]} = \ln\Big{[}C\Big{(}\lambda_{1}\left(k_{n}\right),\lambda_{2}\left( k_{n}\right)\Big{)} \tag{5.11}\] \[- D\Big{(}\lambda_{2}\left(k_{n}\right)\Big{)}\cos[2(k_{n}x-\nu_{ 2}\left(k_{n}\right))]\] \[+ D\Big{(}\lambda_{1}\left(k_{n}\right)\Big{)}\cos[2(k_{n}x+\mu_ {1}\left(k_{n}\right))]\Big{]}\] with \[C\Big{(}\lambda_{1}\left(k_{n}\right),\lambda_{2}\left(k_{n} \right)\Big{)} = 1+\lambda_{1}\left(k_{n}\right)+\lambda_{2}\left(k_{n}\right)\equiv C \tag{5.12a}\] \[D\Big{(}\lambda_{1}\left(k_{n}\right)\Big{)} = \sqrt{\lambda_{1}\left(k_{n}\right)\left(1+\lambda_{1}\left(k_{n} \right)\right)}\equiv D_{1}\] (5.12b) \[D\Big{(}\lambda_{2}\left(k_{n}\right)\Big{)} = \sqrt{\lambda_{2}\left(k_{n}\right)\left(1+\lambda_{2}\left(k_{n} \right)\right)}\equiv D_{2} \tag{5.12c}\] and its average \[\left\langle\ln\frac{1}{2}\Big{[}F_{x}(M_{2}\left(k_{n}\right))+G_{x }(M_{1}\left(k_{n}\right))\Big{]}\right\rangle=\int_{0}^{\infty}\int_{0}^{\infty} d\lambda_{1}d\lambda_{2}p_{s_{n}^{(1)}}(\lambda_{1})p_{s_{n}^{(2)}}(\lambda_{2})\] \[\times\int_{0}^{2\pi}\frac{d\mu_{1}}{2\pi}\int_{0}^{2\pi}\frac{d \nu_{2}}{2\pi}\ln\Big{[}C+D_{1}\cos 2(k_{n}x+\mu_{1})-D_{2}\cos 2(k_{n}x-\nu_{2}) \Big{]},\] once again, we omitted the energy dependence of the polar parameters. 
One of the angular integrations gives \[\int_{0}^{2\pi}\frac{d\mu_{1}}{2\pi}\int_{0}^{2\pi}\frac{d\nu_{2}}{2\pi}\ln\Big{[}C+D_{1}\cos 2(k_{n}x+\mu_{1})-D_{2}\cos 2(k_{n}x-\nu_{2})\Big{]}\] \[= \frac{1}{\pi}\int_{0}^{\pi}d\phi\ln\left\{\frac{1}{2}\Big{[}(C+D_{1}\cos\phi)+\sqrt{(C+D_{1}\cos\phi)^{2}-D_{2}^{2}}\Big{]}\right\}.\] This last result has to be inserted in Eq. (5.13), and the latter in Eq. (5.9). The \(x\) dependence of the result appears through \(s_{n}^{(1)}=x/\ell_{n}\) and \(s_{n}^{(2)}=(L-x)/\ell_{n}\), which denote, respectively, the scaled lengths to the left and to the right of the position \(x\). The problem of finding \(\langle\ln w_{n}(x)\rangle\) is thus reduced to quadratures. The comparison with computer simulations is given in Fig. 12. We observe that the agreement is excellent. Notice that the average of the logarithm of the electron density inside the sample for a high-lying state would be a decreasing straight line if we had only incidence from the left, just as in Ref. [21], and a symmetrical one for incidence from the right; the combination of the two gives the result of Fig. 12.

### All levels

In this case we have not succeeded in finding a theoretical prediction. Thus, we only present the result of a computer simulation in Fig. 13.

## 6 The non-equilibrium (\(\mu_{1}>\mu_{2}\)) expectation value of the density and of its logarithm at zero temperature. The contribution of all levels

We now consider our system to be at zero temperature, but with a _non-zero chemical potential difference_ between the two reservoirs. Fig. 14 shows the contribution of all levels to the averaged electron density inside and outside the system, as predicted by the _DMPK model_, Eqs. (3.6) and (3.15), as well as a computer simulation. The agreement between the two descriptions in the left and right ballistic regions is very good, while the discrepancy inside the disordered system has the same origin as in the previous figures; the DMPK prediction, Eq. (3.15d), gives a small correction over \(\langle\mathcal{W}(x)/\mathcal{W}_{0}\rangle=1\), which is positive for \(x\in(0,L/2)\) and negative for \(x\in(L/2,L)\), while the numerical simulations show a deep fall inside the disordered system. Fig. 15 shows the contribution of \(N=19000\) states (\(n_{max}^{+}=10000\), \(|n_{max}^{-}|=9000\)) to the average of the logarithm of the electron density inside and outside the system. We only present a computer simulation, as we do not have a theoretical prediction for this case.

## 7 Summary and Conclusions

In this paper we studied the electron density in a problem of electronic transport in a one-dimensional disordered multiply-scattering conductor: we analyzed the contribution of the individual electron energy levels and their total contribution. A model was proposed for the density matrix of the system placed between two reservoirs at the same temperature \(T\) but, in general, different chemical potentials \(\mu_{1},\mu_{2}\). The model is given in Eq. (2.11), and depends on the temperature \(T\) and the chemical potentials of the two reservoirs, \(\mu_{1},\mu_{2}\). With its aid, the statistical-mechanical expectation value of the electron density was evaluated. The system is not in equilibrium, but is supposed to be in a stationary state. We then computed an average over an ensemble of configurations of disorder. The theoretical analysis was performed within the DMPK model [1, 53] and the results were compared with computer simulations.
We studied the statistics of the _electron density_ and of its _logarithm_ over such an ensemble of configurations, concentrating on the zero-temperature limit, \(T=0\). We first considered the situation in which the system is in equilibrium with the two reservoirs, i.e., \(\mu_{1}=\mu_{2}\). For individual energy levels way up in the spectrum, the DMPK predictions are generally very good for the average electron density, Fig 6, and also for the average of its logarithm, Fig. 12. Further down in the spectrum, the assumptions behind DMPK are not applicable. However, the nature of the results is physically well understood. As we go down in energy, the wave function penetrates ever less inside the sample. The interpretation of \(s_{n}\) in terms of a mfp due to localization from disorder gradually gives way to an interpretation as a parameter that measures the extent to which the wave function is reflected back because each scatterer is'seen' by the electron as a higher and higher, and hence impenetrable, potential barrier. This is reflected in the fact that the LDOS [59] is depleted in the interior of the system, since the wave function is ever smaller inside. The logarithm of the electron density is a self-averaging quantity: as a result, its average value computed with a finite but large sample shows very small statistical fluctuations. As a consequence of the individual-level contributions, the DMPK prediction for the average of the total electron density, i.e., the density summed over all the energy levels up to the Fermi energy, while being very good in the ballistic regions, is not adequate inside the disordered sample: again, the physical origin of the discrepancy is well understood. The above results correspond to an equilibrium situation. Out of equilibrium, when the two chemical potentials are different, i.e., \(\mu_{1}\neq\mu_{2}\), the DMPK prediction for the average of the total density is very good in the ballistic regions, while inside the sample it has a behavior similar to that of the above discussion. For the logarithm, we only present computer simulations. We have not been able to find analytical results for other statistical quantities, like the variance of the electron density. This, and other quantities, will have to wait for further analytical developments. The equivalence between electronic conductance and the transmittance, allows to extend some predictions found in the present electronic study to classical systems; however, it is important to take into account the differences between quantum systems and classical ones. Finally, we should remark that it would be desirable to find a 'gedanken' experiment designed to measure the electronic density, and compare it with our theoretical predictions: so far, we have not succeeded in this endeavor. ## Acknowledgements The authors thank F. Leyvraz, B. Shapiro and A. Z. Genack for their comments and suggestions. The authors thank C. Lopez Nataren for technical support in the numerical simulations. G. Rivas and M. Yepez thank Dr. Pier A. Mello, who unfortunately died during the review of this article: mentor, friend and a great human being. Rest in peace. ## Funding GR is financially supported by the PhD scholarship of CONACyT, under Contract No. 777351. MY and PAM are financially supported by the Sistema Nacional de Investigadores (SNI). PAM was also supported by CONACyT, under Contract No. 282927. ## Appendix A The model for the density matrix of Ref. [55] In Ref. 
[55], the authors study a tight-binding approach of non-interacting electrons, there being \(N_{sites}\) sites for the system proper. They consider the correlation matrix \(\langle c_{m}^{\dagger}c_{l}\rangle\), with \(m,l=1,\cdots,N_{sites}\) numbering the sites, and designate by \(d_{s},\ s=1,\cdots N\), the eigenvalues of this matrix. In Eqs. (2.11)-(2.14) of Ref. [55], the system reduced density matrix in the stationary state is found to be \[\hat{\rho_{S}}=\prod_{n=1}^{N_{sites}}\frac{e^{-a_{n}c_{n}^{\prime \hskip 1.0pt\dagger}c_{n}^{\prime}}}{1+e^{-a_{n}}}\;,\] (A1a) \[a_{n}=\ln\left(\frac{1}{d_{n}}-1\right).\] (A1b) In Sec. IIIA of Ref. [55] it is found, in the _weak-coupling limit_, \[a_{n} = \ln\left(\frac{1}{e_{n}}-1\right)\;,\] (A2a) where \[e_{n} = \gamma_{L}^{n}\;f(\epsilon_{n},\mu_{L},T_{L})+\gamma_{R}^{n}\;f( \epsilon_{n},\mu_{R},T_{R}),\] (A2b) with \[\gamma_{L}^{n}+\gamma_{R}^{n} = 1, \tag{101c}\] where \[f(\epsilon_{n},\mu,T)=\frac{1}{1+e^{\beta(\epsilon_{n}-\mu)}}\;. \tag{101d}\] denotes the Fermi function. In the present main text, \(n\) designates 'running-wave states', with \(n=0,\pm 1,\pm 2,\cdots\), instead of the'site states' of Ref. [55]. The model of Eq. (2.4) in the text is obtained by setting, in the above equations, \(T_{L}=T_{R}=T\), \(\mu_{L}=\mu_{1},\mu_{R}=\mu_{2}\), and identifying \[\gamma_{L}^{n>0}=1,\quad\gamma_{R}^{n>0}=0\quad\Rightarrow\quad a _{n>0}=\ln\left[\frac{1}{f(\epsilon_{n},\mu_{L},T)}-1\right]=\beta(\epsilon_{ n}-\mu_{L}) \tag{102a}\] \[\gamma_{L}^{n=0}=\gamma_{R}^{n=0}=\frac{1}{2}\quad\Rightarrow \quad e_{n=0}=\frac{1}{2}\left[f(\epsilon_{n}=0,\mu_{L},T)+f(\epsilon_{n}=0,\mu_{R},T)\right]\] \[\qquad\qquad\qquad\qquad\equiv f(\epsilon_{n}=0,\mu_{0},T)\quad \Rightarrow\quad a_{n=0}=-\beta\mu_{0}\] (102b) \[\gamma_{L}^{n<0}=0,\quad\gamma_{R}^{n<0}=1\quad\Rightarrow\quad a _{n<0}=\ln\left[\frac{1}{f(\epsilon_{n},\mu_{R},T)}-1\right]=\beta(\epsilon_{ n}-\mu_{R}) \tag{102c}\] in the above equations. In Eq. (102b), \(\mu_{0}\in(\mu_{L},\mu_{R})\). ## Appendix B Proof of Eq. (3.7) We have denoted by \(\alpha_{i},\beta_{i}\), \(i=1,2\) the elements of the transfer matrix \(M_{i}\) for the portions 1 and 2 of the sample, on the left and right of the observation point \(x\), respectively, i.e., \[M_{i}=\left[\begin{array}{cc}\alpha_{i}&\beta_{i}\\ \beta_{i}^{*}&\alpha_{i}^{*}\end{array}\right],\;\;\;\;\;i=1,2, \tag{103}\] with the condition \(|\alpha_{i}|^{2}-|\beta_{i}|^{2}=1\), thus satisfying the requirements of time-reversal invariance and flux conservation. When no index \(i\) is employed, we shall understand the various quantities to refer to the wire as a whole. Let \(a\) and \(b\) denote the amplitudes of the right-going and left-going waves at the point \(x\), and \(0\) and \(t\) (\(t\)=transmission amplitude) the amplitudes outside the wire on the right-hand side. Then, from the definition of the transfer matrix \(M_{2}\) we have \[M_{2}\left[\begin{array}{c}a\\ b\end{array}\right]=\left[\begin{array}{c}t\\ 0\end{array}\right]\;. \tag{104}\] We invert this equation to find \(a\) and \(b\), making use of the relation \[M_{2}^{-1}=\left[\begin{array}{cc}\alpha_{2}^{*}&-\beta_{2}\\ -\beta_{2}^{*}&\alpha_{2}\end{array}\right]\;, \tag{105}\] to find \[\left[\begin{array}{c}a\\ b\end{array}\right]=\left[\begin{array}{c}t\alpha_{2}^{*}\\ -t\beta_{2}^{*}\end{array}\right]. 
\tag{101}\] We thus have \[|ae^{ik_{n}x}+be^{-ik_{n}x}|^{2} = T|\alpha_{2}^{*}e^{ik_{n}x}-\beta_{2}^{*}e^{-ik_{n}x}|^{2}\equiv TF _{n^{+}}(M_{2}), \tag{102}\] which gives the result appearing in the first line of Eq. (100). Similarly, from the definition of the transfer matrix \(M_{1}\), we have \[\left[\begin{array}{c}a^{\prime}\\ b^{\prime}\end{array}\right]=M_{1}\left[\begin{array}{c}0\\ t^{\prime}\end{array}\right]=\left[\begin{array}{cc}\alpha_{1}&\beta_{1}\\ \beta_{1}^{*}&\alpha_{1}^{*}\end{array}\right]\left[\begin{array}{c}0\\ t^{\prime}\end{array}\right]=\left[\begin{array}{c}\beta_{1}t^{\prime}\\ \alpha_{1}^{*}t^{\prime}\end{array}\right]. \tag{103}\] We thus have \[\left|a^{\prime}e^{ik_{n}x}+b^{\prime}e^{-ik_{n}x}\right|^{2} = \left|\beta_{1}e^{ik_{n}x}+\alpha_{1}^{*}e^{-ik_{n}x}\right|^{2}T ^{\prime}\equiv TF_{n^{-}}(M_{1}),\] (104a) which gives the result appearing in the second line of Eq. ( 100). ## Appendix C Melnikov's equation for \(p_{s}(\lambda)\) and first and second moments of \(\lambda\) The probability density \(p_{s}(\lambda)\) satisfies Melnikov's equation [7] -which is the particular case of the DMPK equation for one open channel- given by [see also Ref. [38]] \[\frac{\partial p_{s}(\lambda)}{\partial s}=\frac{\partial}{\partial\lambda} \left[\lambda(1+\lambda)\frac{\partial p_{s}(\lambda)}{\partial\lambda}\right]. \tag{105}\] The first and second moments of \(\lambda\) are given by \[\langle\lambda\rangle_{s} = \frac{1}{2}\left(e^{2s}-1\right), \tag{106a}\] \[\langle\lambda^{2}\rangle_{s} = \frac{1}{12}\left(2e^{6s}-6e^{2s}+4\right). \tag{106b}\] ## Appendix D Local density of states and the dwell time Consider the one-dimensional electronic system of Fig. 4. For a given energy level \(\epsilon_{n}\), the local density of states (LDOS) at a point \(x\) inside the disordered system is the sum of the particle densities given rise from the left and right incidences [60], i.e., \[\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)=\frac{L_{0}}{2\pi \hbar}\frac{1}{v_{n}}\left[\left|\psi_{n}^{L_{0}}\left(x\right)\right|^{2}+ \left|\psi_{-n}^{L_{0}}\left(x\right)\right|^{2}\right],\;\;v_{n}=\frac{\hbar k _{n}}{m}=\sqrt{\frac{2\epsilon_{n}}{m}}. \tag{107}\] here, \(v_{n}\) in the incident flux, which in 1D is the group velocity, Eq. (2.2c). The wave functions \(\psi_{n}^{L_{0}}\left(x\right)\) are taken from Table 2 and we have assumed \(n>0\). Since the dimensionless density for the \(n\)-th level \(w_{n}\left(x\right)\), Eq. (4.3), is the sum of the \(n\)-th level contributions for left incidence \(w_{n}^{LI}\left(x\right)\) and right incidence \(w_{n}^{RI}\left(x\right)\), Eq. (4.4), then the LDOS can be written as \[\boldsymbol{\rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)=\frac{L_{0}}{2\pi \hbar}\frac{1}{v_{n}}\frac{1}{L_{0}}\left[w_{n}^{LI}\left(x\right)+w_{n}^{RI} \left(x\right)\right]=\frac{1}{2\pi\hbar}\frac{\left[2w_{n}\left(x\right) \right]}{v_{n}};\] (D2) therefore, the dimensionless density is related to the LDOS as \[w_{n}\left(x\right)=\frac{v_{n}}{2}\left[2\pi\hbar\boldsymbol{\rho}_{{}_{LDOS }}\left(\epsilon_{n},x\right)\right];\] (D3) From the LDOS, Eq. 
(D1), the Density of States (DOS) inside the disordered sample is obtained in the following way: \[\boldsymbol{\rho}_{{}_{DOS}}\left(\epsilon_{n}\right)=\int_{0}^{L}\boldsymbol{ \rho}_{{}_{LDOS}}\left(\epsilon_{n},x\right)dx.\] (D4) The DOS is related to the time spent by the particle inside the disordered sample before being reflected or transmitted; this characteristic time is called dwell time \(\tau_{{}_{D}}\left(\epsilon_{n}\right)\), which is proportional to the number of particles inside the sample and inversely proportional to the incidence flux, i.e., \[\tau_{{}_{D}}\left(\epsilon_{n}\right)=\frac{L_{0}}{v_{n}}\int_{0}^{L}\left[ \left|\psi_{n}^{L_{0}}\left(x\right)\right|^{2}+\left|\psi_{-n}^{L_{0}}\left( x\right)\right|^{2}\right]dx,\] (D5) or in terms of the dimensionless electron densities \(w_{n}^{LI}\left(x\right)\) and \(w_{n}^{RI}\left(x\right)\), \[\tau_{{}_{D}}\left(\epsilon_{n}\right)=\frac{1}{v_{n}}\int_{0}^{L}\left[w_{n }^{LI}\left(x\right)+w_{n}^{RI}\left(x\right)\right]dx=\frac{1}{v_{n}}\int_{0 }^{L}2w_{n}\left(x\right)dx.\] (D6) therefore \[\tau_{{}_{D}}\left(\epsilon_{n}\right)=2\pi\hbar\boldsymbol{\rho}_{{}_{DOS }}\left(\epsilon_{n}\right)\] (D7) The general relation between the dwell time and the DOS in mesoscopic media is found in Ref. [47].
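As a small illustration of Eqs. (D5)-(D7), the sketch below evaluates the dwell time from sampled dimensionless densities by a trapezoidal rule; the disorder-free values \(w_{n}^{LI}=w_{n}^{RI}=1\) used in the check are a toy assumption, for which Eq. (D6) reduces to \(\tau_{D}=2L/v_{n}\):

```python
import numpy as np

def dwell_time(x, w_LI, w_RI, v_n):
    """Eq. (D6): tau_D = (1/v_n) * int_0^L [w^LI(x) + w^RI(x)] dx (trapezoidal rule)."""
    return np.trapz(w_LI + w_RI, x) / v_n

# Disorder-free sanity check: w^LI = w^RI = 1 everywhere gives tau_D = 2 L / v_n.
L, v_n = 1.0, 1.0
x = np.linspace(0.0, L, 1001)
print(dwell_time(x, np.ones_like(x), np.ones_like(x), v_n))   # ~ 2.0
```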
2308.12548
Relations Between Generalized JST Algorithm and Kalman Filtering Algorithm for Time Scale Generation
In this paper, we present a generalized Japan Standard Time algorithm (JST-algo) for higher-order atomic clock ensembles and mathematically clarify the relations between the (generalized) JST-algo and the conventional Kalman filtering algorithm (CKF-algo) in the averaged atomic time and the clock residuals for time scale generation. In particular, we reveal the fact that the averaged atomic time of the generalized JST-algo does not depend on the observation noise even though the measurement signal is not filtered in the algorithm. Furthermore, the prediction error of CKF-algo is rigorously derived by using the prediction error of an observable state space. It is mathematically shown that when the covariance matrices of system noises are identical for all atomic clocks, considering equal averaging weights for the clocks is a necessary and sufficient condition to ensure equivalence between the generalized JST-algo and CKF-algo in averaged atomic time. In such homogeneous systems, a necessary and sufficient condition on the observation noises is presented to determine which algorithm can generate the clock residuals with smaller variances. A couple of numerical examples comparing the generalized JST-algo and CKF-algo are provided to illustrate the efficacy of the results.
Yuyue Yan, Takahiro Kawaguchi, Yuichiro Yano, Yuko Hanado, Takayuki Ishizaki
2023-08-24T04:22:34Z
http://arxiv.org/abs/2308.12548v1
Relations Between Generalized JST Algorithm and Kalman Filtering Algorithm for Time Scale Generation ###### Abstract In this paper, we present a generalized Japan Standard Time algorithm (JST-algo) for higher-order atomic clock ensembles and mathematically clarify the relations of the (generalized) JST-algo and the conventional Kalman filtering algorithm (CKF-algo) in the averaged atomic time and the clock residuals for time scale generation. In particular, we reveal the fact that the averaged atomic time of the generalized JST-algo does not depend on the observation noise even though the measurement signal is not filtered in the algorithm. Furthermore, the prediction error of CKF-algo is rigorously shown by using the prediction error regarding an observable state space. It is mathematically shown that when the covariance matrices of system noises are identical for all atomic clocks, considering equal averaging weights for the clocks is a necessary and sufficient condition to ensure equivalence between the generalized JST-algo and CKF-algo in averaged atomic time. In such homogeneous systems, a necessary and sufficient condition for observation noises is presented to determine which algorithm can generate the clock residuals with smaller variances. A couple of numerical examples comparing the generalized JST-algo and CKF-algo are provided to illustrate the efficacy of the results. Atomic clocks, state-space model, prediction, Kalman filter, time scale, atomic time. ## 1 Introduction An atomic clock ensemble is a collection of highly accurate atomic clocks that work together to achieve a precise and stable timekeeping system. Atomic clocks are devices that measure time based on the vibrations of atoms with constant resonance frequencies, e.g., cesium and rubidium atoms. Even though individual atomic clocks can be accurate, they still have some tiny variations due to the environmental factors like temperature fluctuations, external electromagnetic fields, and quantum mechanical effects. By combining the measurements from multiple atomic clocks within an ensemble, the national metrology institutes (NMIs) all over the world can reduce these variations and create a more reliable and robust timekeeping system [2, 3, 4]. The advancements in atomic clock technology and the development of accurate time scales based on these ensembles offer numerous benefits and applications that are crucial for the future smart society, e.g., satellite navigation [5], financial networks with high-frequency trading and time-sensitive transactions [6], telecommunications [7], etc. The variations in tick rates of atomic clocks are referred to as time deviations from the ideal clock behavior, which can be modeled as stochastic processes. To improve the accuracy of time scales, the researchers obtained experimental evidence for modeling the behavior of atomic clocks and the time deviations the clocks experience as a series of linear stochastic differential equations [8]. Based on this, in the task of time generation, how to properly deal with the prediction problem for time deviations is the main issue to guarantee excellent performance of the generated time [2]. The algorithm that deals with such a prediction problem of time deviations is referred to as the algorithm of averaged atomic time. The Kalman filter is a mathematical method used for state estimation/prediction in control theory and signal processing to estimate the state of a dynamic system based on a series of noisy measurements. 
It is well known that the key advantage of the Kalman filter is its ability to provide an optimal estimate by dynamically balancing the trade-off between the model predictions and the actual measurements, effectively reducing the impact of observation noises [9, 10, 11]. In time and frequency community, the Kalman filter has been constructed as the algorithm of averaged atomic time using the difference of clock reading between two clocks as measurement signals [2, 12, 13, 14, 15]. However, it is reported that one may face the numerical instability problem of the Kalman filter in the atomic clock ensembles [2, 16, 17, 18]. This is because practical implementations often ignore the fact that the dynamic system of atomic clock ensembles is undetectable1, whereas detectability is a necessary condition ensuring asymptotic convergence of error covariances [19, 20]. In atomic clock ensembles, since the detectability condition is broken, the computational errors in error covariances of the Kalman filter grow unboundedly and hence lead to numerical instability problem [13]. Except for the CKF-algo, the JST-algo [21], is specific to Japan's timekeeping system governed by the National Institute of Information and Communications Technology (NICT), while Japan is known for its advanced technology and precision timekeeping capabilities. The detailed algorithm can be found in [21; 22; 23] (and the reference therein) and is applicable for the atomic clock ensembles with second-order clocks. But there is no literature discussing the relation between the JST-algo and the CKF-algo. It is interesting to compare the two methods and ask when they are equal and which method is superior to the other. In this paper, we mathematically clarify the relation among the algorithms of averaged atomic time by the CKF-algo and JST-algo. Different from the preliminary version [1], we generalize the results for higher-order atomic clock ensembles. Specifically, we generalize the existing JST-algo to a generalized form for the atomic clock ensembles with higher-order models, where the proposed generalized JST-algo is reduced to the existing JST-algo for second-order clocks. By theoretically analyzing the generalized JST-algo in state-space model, we reveal the fact that the averaged atomic time of JST-algo does not depend on the observation noise even though the measurement signal is not filtered in the algorithm. Furthermore, using observable Kalman canonical decomposition, the prediction error of CKF-algo is derived. We show the fact that when the covariance matrix of the system noises is identical for all atomic clocks, considering equal averaging weights for the clocks is a sufficient and necessary condition guaranteeing equivalence between the generalized JST-algo and CKF-algo in averaged atomic time. In addition, different from the preliminary version [1], we further clarify the relation between JST-algo and CKF-algo in individual clock residuals for the homogeneous clock ensemble, and present the sufficient and necessary conditions for observation noises to determine which algorithm can generate the clock residuals with smaller variances. **Notation** We write \(\mathbb{R}\) for the set of real numbers, \(\mathbb{R}_{+}\) for the set of positive real numbers, \(\mathbb{R}^{n}\) for the set of \(n\times 1\) real column vectors, and \(\mathbb{R}^{n\times m}\) for the set of \(n\times m\) real matrices. 
Moreover, \(\otimes\) denotes the Kronecker product, \((\cdot)^{\mathsf{T}}\) denotes transpose, \((\cdot)^{\mathsf{T}}\) denotes the Moore-Penrose pseudoinverse, and \(\mathsf{diag}(\cdot)\) denotes a diagonal matrix. Furthermore, \(\mathbb{E}[x]\) and \(\mathbb{C}[x]\) denotes the mean value and covariance of a random variable \(x\), Finally, \(\mathds{1}_{n}\) and \(I_{n}\) denote the all-ones column vector and the identity matrix of dimension \(n\), respectively, and \[e_{1:n-1}:=\left[\begin{array}{c}I_{n-1}\\ 0\end{array}\right],\quad K^{0}:=\left[\begin{array}{cc}0&0\\ 0&K\end{array}\right].\] ## 2 Preliminaries ### Atomic Clock Consider an atomic clock ensemble composed of \(m\) clocks. Each clock works as an independent oscillator that generates a sinusoidal signal. The number of waves is counted as its clock reading which is slightly different from the ideal clock reading where the difference is referred to as the time deviation (or, equivalently, so-called phase deviation). Depending on the material property of the atomic clocks, the time deviation of clock \(j\) from the ideal clock is known to satisfy the \(n\)th order stochastic differential equation given by \[\Delta h^{j}(t)\!=\!\sum_{i=1}^{n}\frac{\alpha_{i}^{j}t^{i-1}}{(i-1)!}\!+\! \sum_{i=1}^{n}\!\!\int_{0}^{t}\!\!\int_{0}^{t_{1}}\!\!\cdots\!\int_{0}^{t_{i-1 }}\!\!\xi_{i}^{j}(t_{i})dt_{i}\cdots dt_{2}dt_{1}, \tag{1}\] where \(\alpha_{i}^{j}\in\mathbb{R}\), \(i=1,\ldots,n\), are the parameters with respect to the initial state of clock \(j\), and \(\xi_{i}^{j}\in\mathbb{R}\), \(i=1,\ldots,n\), denote \(n\) independent one-dimensional Gaussian random noises with the variance given by \(\sigma_{i}^{j}\geq 0\), \(i=1,\ldots,n\). For example, it is often assumed that the order \(n\) is given by \(n=2\) for Cesium clocks [8]. ### Task of Time Generation Consider a discrete-time sequence \(\{t_{kT}\}_{k=0,1,2,\ldots}\) with an operating period \(T\in\mathbb{R}_{+}\). In practice, it is usually assumed that a reference signal such as UTC\((t)\) (Coordinated Universal Time) is available as an external source for the local time scale generation only at \(t=kT\), where UTC\((t)\) can be regarded as the (approximated) ideal time. Therefore, the time deviations \(\{\Delta h^{j}(t)\}\) are available at \(t=kT\). However, during the time interval \(t\in(t_{0},t_{T})\), the time deviations of the atomic clocks are never measurable because the ideal time is not available. Even though the time deviations of clocks are immeasurable, the difference \(y_{ij}=\Delta h^{i}(t)-\Delta h^{j}(t)=h^{i}(t)-h^{j}(t)\) between clocks \(i\) and \(j\) is the measurable signal in the clock ensemble, where \(h^{i}(t)\) represents the actual clock reading of clock \(i\). The key to generating a time scale is predicting the immeasurable time deviations of the atomic clocks using the measurements during the time interval \(t\in(t_{0},t_{T})\). The generated time is given as \[\hat{h}_{0}(t)=\sum_{i=1}^{m}\beta_{i}\left[h^{i}(t)-\Delta\hat{h}^{i}(t) \right]\in\mathbb{R},\quad t\in(t_{0},t_{T}), \tag{2}\] where \(\beta:=(\beta_{1},\cdots,\beta_{m})^{\mathsf{T}}\) denotes the weights chosen based on the reliability of the individual clocks with \(\beta_{1}+\ldots+\beta_{m}=1\). 
Here, since the ideal clock reading \(h_{0}(t)\) can be expressed as \(h_{0}(t)=\hat{h}_{0}(t)\) with \(\Delta\hat{h}^{i}(t)\) replaced by \(\Delta h^{i}(t)\), the accuracy of the generated time (2) is evaluated by averaged _atomic time_ (so-called the ensemble time scale in [2]) \[\mathrm{TA}(t):=\hat{h}_{0}(t)-h_{0}(t)=\sum_{i=1}^{m}\beta_{i}[\Delta h^{i}(t )-\Delta\hat{h}^{i}(t)], \tag{3}\] which is the weighted prediction error of time deviations. The structure of the clock ensemble is summarized in Fig. 1 below. ### State-Space Model of Clock Ensemble Without loss of generality, adopting clock \(m\) as the reference clock in measurements, the \(n\)th order model (1) of the \(m\)-clock ensemble in the discrete-time sequence \(\{t_{k}\}_{k=0,1,\ldots,T}\) during the operating interval \(t\in(t_{0},t_{T})\) with sampling period \(\tau_{k}=t_{k+1}-t_{k}\in\mathbb{R}_{+}\), \(k=0,1,\ldots,T\), is equivalent to \[\Sigma:\left\{\begin{array}{c}\boldsymbol{x}[k+1]=\boldsymbol{F}[k]\boldsymbol{ x}[k]+\boldsymbol{v}[k]\\ \boldsymbol{y}[k]=\boldsymbol{H}\boldsymbol{x}[k]+\boldsymbol{w}[k]\\ \boldsymbol{x}_{\mathrm{ens}}[k]=(I_{n}\otimes\beta^{\mathsf{T}})\boldsymbol{x}[ k]\end{array}\right. \tag{4}\] where \(\mathbf{x}[k]:=(\mathbf{x}_{1}^{\mathsf{T}}[k],\ldots,\mathbf{x}_{n}^{\mathsf{T}}[k])^{\mathsf{ T}}\in\mathbb{R}^{nm}\) is the ensemble state with \(\mathbf{x}_{i}[k]:=(x_{i}^{\mathsf{T}}[k],\ldots,x_{i}^{m}[k])^{\mathsf{T}}\in \mathbb{R}^{m}\), \(x_{i}^{j}[0]=\alpha_{i}^{j}\) for \(i=1,\ldots,n\), \(j=1,\ldots,m\); \(\mathbf{y}[k]=(y_{1m}[k],\ldots,y_{(m-1)m}[k])\) is the measurement of the ensemble including the observation noise \(\mathbf{w}[k]\in\mathbb{R}^{m-1}\). In particular, the system matrix and the observation matrix \[\mathbf{F}[k]:=A[k]\otimes I_{m},\quad\mathbf{H}:=C\otimes\overline{V} \tag{5}\] of (4) are defined as \[A[k]:= A(\tau_{k}):=\left[\begin{array}{cccc}1&\tau_{k}&\frac{\tau_{k}^{2}} {2}&\ldots&\frac{\tau_{k}^{n-1}}{(n-1)!}\\ 0&1&\tau_{k}&\cdots&\frac{\tau_{k}^{n}}{(n-2)!}\\ \vdots&&\ddots&\ddots&\vdots\\ \vdots&&&1&\tau_{k}\\ 0&0&\cdots&\cdots&1\end{array}\right] \tag{6}\] \[C:= \left[\begin{array}{cccc}1&0&\cdots&0\end{array}\right]\in \mathbb{R}^{1\times n}\] (7) \[\overline{V}:= \left[\begin{array}{cccc}I_{m-1}&-\mathds{1}_{m-1}\end{array} \right]\in\mathbb{R}^{(m-1)\times m}. \tag{8}\] The signal \(\mathbf{v}[k]=(\mathbf{v}_{1}^{\mathsf{T}}[k],\ldots,\mathbf{v}_{n}^{\mathsf{T}}[k])\in \mathbb{R}^{nm}\) represents the system noise with \(\mathbf{v}_{i}[k]:=(\mathbf{v}_{i}^{j}[k],\ldots,\mathbf{v}_{i}^{m}[k])^{\mathsf{T}}\in \mathbb{R}^{m}\), where \(\mathbf{v}^{j}[k]:=(\mathbf{v}_{1}^{j}[k],\ldots,\mathbf{v}_{n}^{j}[k])^{\mathsf{T}}\in \mathbb{R}^{n}\) is the Gaussian noise, that comes from the individual noise \(\xi_{1}^{j},\ldots,\xi_{n}^{j}\), defined as \[\mathbf{v}^{j}[k]:=\int_{0}^{\tau_{k}}A(t_{k+1}-t)[\xi_{1}^{j}(t),\ldots,\xi_{n}^{ j}(t)]^{\mathsf{T}}dt. \tag{9}\] In this state space model, the state \(\mathbf{x}_{1}=(\Delta h^{1},\ldots,\Delta h^{m})^{\mathsf{T}}\in\mathbb{R}^{m}\) represents the vector of the time deviation of the clocks and \(\mathbf{x}_{\mathrm{ens}}[k]\in\mathbb{R}^{n}\) represents the (weighted) ensemble state so that \(C\mathbf{x}_{\mathrm{ens}}[k]\in\mathbb{R}\) (or equivalently, \(\beta^{\mathsf{T}}\mathbf{x}_{1}[k]\in\mathbb{R}\)) denotes the ensemble time deviation. 
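For concreteness, the matrices in (5)-(8) and one step of the model (4) can be assembled as in the following minimal NumPy sketch; the noise covariances used here are placeholders rather than values taken from the paper:

```python
import numpy as np
from math import factorial

def A_matrix(tau, n):
    """Transition matrix A(tau) of Eq. (6): A[i, j] = tau^(j-i)/(j-i)! for j >= i."""
    return np.array([[tau**(j - i) / factorial(j - i) if j >= i else 0.0
                      for j in range(n)] for i in range(n)])

def ensemble_matrices(tau, n, m):
    """F = A(tau) ⊗ I_m and H = C ⊗ V̄ of Eq. (5), with C, V̄ as in Eqs. (7)-(8)."""
    F = np.kron(A_matrix(tau, n), np.eye(m))
    C = np.eye(1, n)                                     # [1 0 ... 0]
    Vbar = np.hstack([np.eye(m - 1), -np.ones((m - 1, 1))])
    return F, np.kron(C, Vbar)

# One step of Sigma in Eq. (4): x[k+1] = F x[k] + v[k],  y[k] = H x[k] + w[k].
n, m, tau = 2, 5, 1.0
F, H = ensemble_matrices(tau, n, m)
rng = np.random.default_rng(0)
Q_clock = np.diag([1e-24, 1e-30])                        # placeholder per-clock covariance
W = np.kron(Q_clock, np.eye(m))                          # homogeneous ensemble: W = Q ⊗ I_m
R = 1e-12 * np.eye(m - 1)                                # placeholder observation-noise covariance
x = np.zeros(n * m)
v = rng.multivariate_normal(np.zeros(n * m), W)
w = rng.multivariate_normal(np.zeros(m - 1), R)
x_next = F @ x + v
y = H @ x + w                                            # clock differences y_im = h^i - h^m
```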
Thus, the atomic time \(\mathrm{TA}(t)\) at \(t=t_{k}\) can be expressed by \[\mathrm{TA}[k]=C\mathbf{x}_{\mathrm{ens}}[k]-C\hat{\mathbf{x}}_{\mathrm{ens}}[k]=C\bm {\epsilon}_{\mathrm{ens}}[k] \tag{10}\] where \(\mathbf{\epsilon}_{\mathrm{ens}}[\cdot]=(I_{n}\otimes\beta^{\mathsf{T}})\mathbf{ \epsilon}[\cdot]\in\mathbb{R}^{n}\) is the ensemble prediction error from the prediction error \(\mathbf{\epsilon}[\cdot]:=\mathbf{x}[\cdot]-\hat{\mathbf{x}}[\cdot]\in\mathbb{R}^{nm}\). ### Algorithm of Averaged Atomic Time for Japan Standard Time (JST-algo) Similar to the other standard time in the world, JST is generated by integrating about 20 high-precision atomic clocks, including hydrogen-maser clocks, cesium-beam atomic clocks, and optical lattice clocks, where the atomic clocks are assumed to be in second order. The pseudocode of JST-algo is shown in Algorithm 1 above, which is only applicable for the ensemble with the second-order model (1) of the clocks, i.e, \(n=2\). The principle of JST-algo is explained in the following. ``` 1:Initialization:\(\Delta\hat{h}^{i}[0]\approx\Delta h^{i}[0]\), \(\hat{\alpha}_{2}^{i}\approx\alpha_{2}^{i}\), and \(k=1\) 2:while\(k\leq T\)do 3:for\(i=1,\ldots,m\)do\(\triangleright\) Prediction 4:\(\Delta\hat{h}^{i}[k]=\Delta\hat{h}^{i}[k-1]+\hat{\alpha}_{2}^{i}\tau_{k-1}\) 5:endfor 6:\(\Delta\hat{h}^{m}[k]=\sum_{i=1}^{m}\beta_{i}\left(\Delta\hat{h}^{i}[k]-y_{im} [k]\right)\)\(\triangleright\) Weighting 7:for\(i\neq s\)do 8:\(\Delta\hat{h}^{i}[k]=\Delta\hat{h}^{m}[k]+y_{im}[k]\)\(\triangleright\) Update 9:endfor 10:\(k\gets k+1\) 11:endwhile ``` **Algorithm 1** JST-algo [21] #### -D1 Prediction procedure Consider \(n=2\), ignoring the terms of Gaussian random noises in (1), we have \[\Delta h^{j}(t)\approx\alpha_{1}^{j}+\alpha_{2}^{j}t. \tag{11}\] It turns out that \[\Delta h^{j}(t_{k+1})\approx\Delta h^{j}(t_{k})+\alpha_{2}^{j}\tau_{k}. \tag{12}\] where the initial conditions \(\alpha_{i}^{j}\), \(i=1,2\), of the atomic clocks can be estimated using some identification methods based on the external reference UTC(\(t\)), \(t=kT\), and the operating interval \(T\). For example, we can take \[\alpha_{1}^{j} \approx\hat{\alpha}_{1}^{j}=\Delta h^{j}(t_{0}) \tag{13}\] \[\alpha_{2}^{j} \approx\hat{\alpha}_{2}^{j}=\frac{\Delta h^{j}(t_{0})-\Delta h^{j} (t_{0}-T)}{T} \tag{14}\] where \(\hat{\alpha}_{2}^{j}\) is referred to as the predicted rate in frequency, and \(\Delta h^{j}(t)=h^{j}(t)-\)UTC(\(t\)), \(t=kT\). Thus, the time deviations during the operating interval \(t\in(t_{0},t_{T})\) can be predicted by \[\Delta\hat{h}^{j}(t_{k+1})=\Delta\hat{h}^{j}(t_{k})+\hat{\alpha}_{2}^{j}\tau_{k}, \;\;\;j=1,\ldots,m. \tag{15}\] #### -D2 Weighting and updating procedure In addition to the above prediction procedure, JST-algo includes weighting and updating procedures associated with measurements to equalize the nonequal clock residuals \(\epsilon_{i}(t):=\Delta h^{i}(t)-\Delta\hat{h}^{i}(t)\) to avoid discontinuities as much as possible. This is because, without such a procedure, the discontinuity that appeared in the averaged atomic time when one of the clocks leaves the ensemble may give rise to instability [21]. 
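The prediction, weighting, and update steps of Algorithm 1 translate almost line-for-line into code. A minimal sketch (assuming the predicted rates \(\hat{\alpha}_{2}^{i}\) and the measurements \(y_{im}[k]\), with \(y_{mm}[k]=0\), are supplied externally) is:

```python
import numpy as np

def jst_step(dh_hat, alpha2_hat, y_im, beta, tau):
    """One pass of Algorithm 1 for second-order clocks.

    dh_hat     : predicted time deviations from the previous step, shape (m,)
    alpha2_hat : predicted frequency rates of Eq. (14), shape (m,)
    y_im       : measured differences y_im[k] = h^i - h^m, shape (m,), with y_mm = 0
    beta       : averaging weights with sum(beta) = 1
    """
    dh_hat = dh_hat + alpha2_hat * tau          # Prediction step (line 4 of Algorithm 1)
    dh_m = np.sum(beta * (dh_hat - y_im))       # Weighting step (line 6)
    return dh_m + y_im                          # Update step (line 8); entry m equals dh_m
```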
In JST-algo, the predicted values (15) are to be modified as \[\Delta\hat{h}^{m}(t_{k+1}) =\sum\nolimits_{i=1}^{m}\beta_{i}\left(\Delta\hat{h}^{i}(t_{k+1})-y _{im}(t_{k+1})\right) \tag{16}\] \[\Delta\hat{h}^{i}(t_{k+1}) =\Delta\hat{h}^{m}(t_{k+1})+y_{im}(t_{k+1}),\quad i\neq m \tag{17}\] where \(\Delta\hat{h}^{i}(t_{k+1})\) in right-hand side of (16) is understood the one in (15). If there is no observation noise, the modified clock residuals after the 2-step procedure (16) and (17) are equalized as the Fig. 1: Structure of time scale generation for an \(\mathbf{m}\)-clock ensemble. averaged atomic time \(\mathrm{TA}(t_{k+1})\) after the prediction procedure (15) for all the clocks. This can be verified as \[\epsilon_{m}(t_{k+1}) =\Delta h^{m}(t_{k+1})-\sum\nolimits_{i=1}^{m}\beta_{i}\Big{(} \Delta\hat{h}^{i}(t_{k+1})-y_{im}(t_{k+1})\Big{)}\] \[=\sum\nolimits_{i=1}^{m}\beta_{i}\left(\Delta h^{m}(t_{k+1})- \Delta\hat{h}^{i}(t_{k+1})+y_{im}(t_{k+1})\right)\] \[=\sum\nolimits_{i=1}^{m}\beta_{i}\left(\Delta h^{i}(t_{k+1})- \Delta\hat{h}^{i}(t_{k+1})\right),\] \[\epsilon_{i}(t_{k+1}) =\Delta h^{i}(t_{k+1})-\Delta\hat{h}^{m}(t_{k+1})+y_{im}(t_{k+1})\] \[=\Delta h^{m}(t_{k+1})-\Delta\hat{h}^{m}(t_{k+1})=\epsilon_{m}(t_ {k+1}),\quad i\neq m.\] ### Algorithm of Averaged Atomic Time by Conventional Kalman Filter Besides JST-algo, there is another famous algorithm of averaged atomic time based on the Kalman filter. Specifically, the CKF-algo for the time scale generation in the operating interval \(t\in(t_{0},t_{T})\) is given by \[\mathbf{K}_{k}=\mathbf{P}_{k}\mathbf{H}^{\mathsf{T}}\big{(}\mathbf{H}\mathbf{P}_{k} \mathbf{H}^{\mathsf{T}}+\hat{R}\big{)}^{-1} \tag{18}\] \[\mathbf{P}_{k+1}=\mathbf{F}[k](\mathbf{P}_{k}-\mathbf{K}_{k}\mathbf{H}\mathbf{P}_{k})\bm {F}[k]^{\mathsf{T}}+\hat{W}\] (19) \[\hat{\mathbf{x}}[k+1]=\mathbf{F}[k]\hat{\mathbf{x}}[k]+\mathbf{K}_{k}(\mathbf{y}[k]- \mathbf{H}\hat{\mathbf{x}}[k]) \tag{20}\] with the initial \(\mathbf{P}_{0}=pI_{nm}\) for some constant \(p\in\mathbb{R}_{+}\) and the guess of the initial state \(\hat{\mathbf{x}}[0]=(\hat{\mathbf{x}}_{1}^{\mathsf{T}}[0],\ldots,\hat{\mathbf{x}}_{n}^{ \mathsf{T}}[0])^{\mathsf{T}}\approx\mathbf{x}[0]\), where \(\mathbf{K}_{k}\) and \(\mathbf{P}_{k}\) are the Kalman gain and error covariance, respectively, \(\hat{R}\) and \(\hat{W}\) are the guesses of the covariance in observation noise \(\mathbf{w}[k]\) and system noise \(\mathbf{v}[k]\), respectively. Then, the predicted time deviations \(\hat{\mathbf{x}}_{1}\) is hence expressed by \[\hat{\mathbf{x}}_{1}[k]=\big{(}C\otimes I_{m}\big{)}\hat{\mathbf{x}}[k]. \tag{21}\] ## 3 Main Results ### Generalized JST-algo via State-Space Model In this section, we present the generalized JST-algo for the atomic clock ensemble with higher-oder clocks, i.e., \(n\geq 2\), where the pseudocode of the generalized JST-algo is shown in Algorithm 2 below. The only difference between Algorithms 1 and 2 is in that the prediction procedure (15) for the time deviation \(\mathbf{x}_{1}=(\Delta h^{1},\ldots,\Delta h^{m})^{\mathsf{T}}\) is generalized as \[\left\{\begin{array}{l}\hat{\mathbf{x}}[k+1]=\mathbf{F}[k]\hat{\mathbf{x}}[k]\\ \hat{\mathbf{x}}_{1}[k+1]=\big{(}C\otimes I_{m}\big{)}\hat{\mathbf{x}}[k+1]\end{array}\right. \tag{22}\] which is compatible with (15) for the second-oder clocks since \(\hat{\mathbf{x}}_{2}[k]=\hat{\mathbf{x}}_{2}[0]=(\hat{\alpha}_{2}^{1},\ldots,\hat{ \alpha}_{m}^{m})^{\mathsf{T}}\) stands for the set of predicted rate in frequency in (14) for any \(k=0,1,\ldots,T\). 
``` 1:Initialization:\(\hat{\mathbf{x}}[0]\approx\mathbf{x}[0]\), and \(k=1\) 2:while\(k\leq T\)do 3:\(\hat{\mathbf{x}}[k]=\mathbf{F}[k-1]\hat{\mathbf{x}}[k-1]\)\(\triangleright\) Prediction 4:\((\Delta h^{1}[k],\ldots,\Delta h^{m}[k])^{\mathsf{T}}=\big{(}C\otimes I_{m} \big{)}\hat{\mathbf{x}}[k]\) 5:\(\Delta\hat{h}^{m}[k]=\sum_{i=1}^{m}\beta_{i}\left(\Delta\hat{h}^{i}[k]-y_{im}[k ]\right)\)\(\triangleright\) Weighting 6:for\(i\neq m\)do 7:\(\Delta\hat{h}^{i}[k]=\Delta\hat{h}^{m}[k]+y_{im}[k]\)\(\triangleright\) Update 8:endfor 9:\(k\gets k+1\) 10:endwhile ``` **Algorithm 2** Generalized JST-algo In the state space form, the weighting procedure (16) along with the prediction procedure (22) can be expressed by \[\Delta\hat{h}^{m}[k+1]=\beta^{\mathsf{T}}\Big{\{}\big{(}C\otimes I_{m}\big{)} \mathbf{F}[k]\mathbf{x}[k]-e_{1:m-1}\mathbf{y}[k+1]\Big{\}} \tag{23}\] and hence the individual predicted time deviation of the other clocks in (16) is subsequently updated by \[\Delta\hat{h}^{i}[k+1]= \Delta\hat{h}^{m}[k+1]+y_{im}[k+1]\] \[= \beta^{\mathsf{T}}\Big{\{}\big{(}C\otimes I_{m}\big{)}\mathbf{F}[k]\bm {x}[k]-e_{1:m-1}\mathbf{y}[k+1]\Big{\}}\] \[+y_{im}[k+1],\quad i\neq m. \tag{24}\] As a result, the generalized JST-algo is expressed as \[\hat{\mathbf{x}}_{1}[k+1]= \mathds{1}_{m}\beta^{\mathsf{T}}\Big{\{}\Big{(}C\otimes I_{m} \Big{)}\mathbf{F}[k]\hat{\mathbf{x}}[k]-e_{1:m-1}\mathbf{y}[k+1]\Big{\}}\] \[+e_{1:m-1}\mathbf{y}[k+1], \tag{25}\] \[\hat{\mathbf{x}}_{2:n}[k+1]= \big{(}A_{2:n}[k]\otimes I_{m}\big{)}\hat{\mathbf{x}}_{2:n}[k] \tag{26}\] where \(A_{2:n}[k]\) is the dimension-reduced matrix of the matrix \(A[k]\) by removing the first row and column. ### Equivalence of The Generalized JST-algo and CKF-algo in Averaged Atomic Time In this section, we reveal the equivalence between JST-algo and CKF-algo in averaged atomic time \(\mathrm{TA}[k]\). Specifically, we begin with a fundamental theoretical analysis of JST-algo. **Theorem 1**: _Consider the system model (4) for an \(m\)-clock ensemble. For a given initial guess \(\hat{\mathbf{x}}[0]\), it follows that the averaged atomic time \(\mathrm{TA}[k]\) of the generalized JST-algo_ \[\mathrm{TA}[k]=C\mathbf{\epsilon}_{\mathrm{ens}}[k],\quad k=0,1,\ldots,T, \tag{27}\] _does not depend on the observation noise \(\mathbf{w}[\cdot]\) for any weight \(\beta\) satisfying \(\beta_{1}+\ldots+\beta_{m}=1\). 
In addition, the ensemble prediction error \(\mathbf{\epsilon}_{\mathrm{ens}}\) is given by_ \[\mathbf{\epsilon}_{\mathrm{ens}}[k+1]=A[k]\mathbf{\epsilon}_{\mathrm{ens}}[k]+\big{(}I_{n} \otimes\beta^{\mathsf{T}}\big{)}\mathbf{v}[k] \tag{28}\] _For the following analysis, we let_ \[P:=\mathds{1}_{m}\beta^{\mathsf{T}},\quad\overline{P}:=I_{m}-\mathds{1}_{m} \beta^{\mathsf{T}}\] _which are projection matrices satisfying_ \[\overline{P}=\overline{V}^{\dagger}\overline{V}-\mathds{1}_{m}(\beta-\tfrac{1}{ m}\mathds{1}_{m})^{\mathsf{T}},\quad\overline{P}e_{1:m-1}\overline{V}=\overline{P}.\] _Now letting \(\overline{V}^{\dagger}:=\overline{V}^{\dagger}-\mathds{1}_{m}(\beta-\tfrac{1}{ m}\mathds{1}_{m})^{\mathsf{T}}e_{1:m-1}\), the predicted time deviations of the JST-algo are written as_ \[\hat{\mathbf{x}}_{1}[k+1]= \big{(}CA[k]\otimes P\big{)}\hat{\mathbf{x}}[k]+\overline{P}e_{1:m -1}\mathbf{y}[k+1]\] \[= \big{(}CA[k]\otimes\overline{P}\big{)}\hat{\mathbf{x}}[k]+\overline{P}e _{1:m-1}\big{\{}\big{(}CA[k]\otimes\overline{V}\big{)}\mathbf{x}[k]\] \[\qquad+\big{(}C\otimes\overline{V}\big{)}\mathbf{v}[k]+\mathbf{w}[k+1]\big{\}}\] \[= \big{(}CA[k]\otimes P\big{)}\hat{\mathbf{x}}[k]+\big{(}CA[k]\otimes \overline{P}\big{)}\mathbf{x}[k]\] \[+\overline{P}\mathbf{v}_{1}[k]+\overline{V}^{\dagger}\mathbf{w}[k+1], \tag{29}\] _Thus the prediction error \(\mathbf{\epsilon}[\cdot]:=\mathbf{x}[\cdot]-\hat{\mathbf{x}}[\cdot]\) of JST-algo follows_ \[\mathbf{\epsilon}_{1}[k+1] where \(\mathbf{\epsilon}_{1}\) is understood as the clock residuals \(\hat{\mathbf{x}}_{1}-\hat{\mathbf{x}}_{1}\) of time deviations. Equivalently, we have \[\mathbf{\epsilon}[k+1] =\underbrace{\left[\begin{array}{c}CA[k]\otimes P\\ 0\quad A_{2:n}[k]\otimes I_{m}\end{array}\right]}_{F^{\mathsf{I}}\big{(}A[k] \otimes I_{m}\big{)}}\mathbf{\epsilon}[k]-\underbrace{\left[\begin{array}{c} \overline{V}^{\mathsf{I}}\\ 0\\ \overline{F}\end{array}\right]}_{\overline{F}}\mathbf{w}[k+1]\] \[+\underbrace{\left[\begin{array}{c}C\otimes P\\ 0\quad I_{n-1}\otimes I_{m}\end{array}\right]}_{F^{\mathsf{I}}}\mathbf{v}[k]. \tag{31}\] Here, note that \(\overline{F}\), \(F^{\mathsf{I}}\) satisfy \(\big{(}I_{n}\otimes\beta^{\mathsf{T}}\big{)}\overline{F}=0\), \(\big{(}I_{n}\otimes\beta^{\mathsf{T}}\big{)}F^{\mathsf{I}}=I_{n}\otimes\beta^{ \mathsf{T}}\) due to \(\beta^{\mathsf{T}}\overline{V}^{\mathsf{I}}=0\), \(\beta^{\mathsf{T}}P=\beta^{\mathsf{T}}\) and hence we obtain \[\mathbf{\epsilon}_{\mathrm{ens}}[k+1] =\big{(}I_{n}\otimes\beta^{\mathsf{T}}\big{)}\big{\{}F^{ \mathsf{I}}\big{(}A[k]\otimes I_{m}\big{)}\mathbf{\epsilon}[k]\] \[\quad\quad+F^{\mathsf{I}}\mathbf{v}[k]-\overline{F}\mathbf{w}[k+1]\big{\}}\] \[=\big{(}I_{n}\otimes\beta^{\mathsf{T}}\big{)}\mathbf{\epsilon}[k]+ \big{(}I_{n}\otimes\beta^{\mathsf{T}}\big{)}\mathbf{v}[k], \tag{32}\] which completes proof. **Remark 1**: _The result of Theorem 1 can contribute to theoretically explaining why NMIs all over the world often require more atomic clocks to generate more accurate time scales. Specifically, consider a homogeneous ensemble with a constant sampling interval \(\tau_{k}=\tau\) for \(k=0,1,\ldots,T\), where the covariance of state noise \(\mathbf{v}[k]\) is written as \(Q\otimes I_{m}\) for some \(Q\). 
In this case, supposing that the initial error \(\mathbf{\epsilon}[0]=\hat{\mu}_{0}\otimes\mathds{1}_{m}\) for some \(\hat{\mu}_{0}\in\mathbb{R}^{n}\) and the initial covariance of prediction error is \(\mathbf{P}_{0}\), it can be shown from (27) that the expected value of the averaged atomic time \(\mathbb{E}[\mathrm{TA}[k]]=CA^{k}(\tau)\hat{\mu}_{0}\) does not depend on the number \(m\) of the atomic clocks, but the covariance_ \[\mathbb{C}(\mathrm{TA}[k])= C\Big{\{}A^{k}(\tau)\big{(}I_{n}\otimes\beta^{\mathsf{T}} \big{)}\mathbf{P}_{0}\big{(}I_{n}\otimes\beta\big{)}A^{k}(\tau)^{\mathsf{T}}\] \[\quad\quad+\sum_{i=0}^{k-1}A^{i}(\tau)\beta^{\mathsf{T}}\beta QA^ {i}(\tau)^{\mathsf{T}}\Big{\}}C^{\mathsf{T}} \tag{33}\] _is diminished by increasing the number \(m\) of the atomic clocks when the weights of the clocks are set to all the same (since \(\beta^{\mathsf{T}}\beta=\frac{1}{m}\) under \(\beta=\frac{1}{m}\mathds{1}_{m}\)). In other words, better prediction performance of the averaged atomic time can be achieved for the atomic clock ensemble with larger number \(m\) of the clocks._ Now, we begin to make a theoretical analysis for CKF-algo. It is well known that the CKF-algo may result in numerical instability in the real implementation of time generations. This is because the computation error accumulates in the Kalman gains due to the divergence of error covariance under undetectability of the state space model. Two covariance reduction methods are developed in [16] and [24] to suppress the numerical instability in the real implementation. However, neither of them can absolutely avoid the appearance of numerical instability especially for the case when the initial covariance \(\mathbf{P}_{0}\) is large, and the theoretical analysis of CKF-algo for the ideal case without computation errors is still unclear. To reveal the theoretical expression of averaged atomic time \(\mathrm{TA}[k]\) of the CKF-algo, we note that the state profile \(\mathbf{x}\) of the ensemble can be decomposed by observable Kalman canonical decomposition as \[\mathbf{x}=\left[\begin{array}{cc}I_{n}\otimes\overline{V}^{\mathsf{I}}&I_{n} \otimes\mathds{1}_{m}\end{array}\right]\left[\begin{array}{c}\mathbf{\xi}_{0} \\ \mathbf{\xi}_{0}\end{array}\right] \tag{34}\] where \(\mathbf{\xi}_{0}:=\big{(}I_{n}\otimes\overline{V}\big{)}\mathbf{x}\in\mathbb{R}^{nm-n}\), \(\mathbf{\xi}_{0}:=\frac{1}{m}\big{(}I_{n}\otimes\mathds{1}_{m}^{\mathsf{T}}\big{)} \mathbf{x}\in\mathbb{R}^{n}\) denote the observable and unobservable state, respectively. Using this fact and letting \[\mathbf{\epsilon}_{0}[k] :=\mathbf{\xi}_{0}[k]-\hat{\mathbf{\xi}}_{0}[k]=\big{(}I_{n}\otimes \overline{V}\big{)}\mathbf{\epsilon}[k], \tag{35}\] \[\mathbf{\epsilon}_{0}[k] :=\mathbf{\xi}_{0}[k]-\hat{\mathbf{\xi}}_{0}[k]=\frac{1}{m}\big{(}I_{n} \otimes\mathds{1}_{m}^{\mathsf{T}}\big{)}\mathbf{\epsilon}[k], \tag{36}\] the ensemble prediction error \(\mathbf{\epsilon}_{\mathrm{ens}}\) is transformed as \[\mathbf{\epsilon}_{\mathrm{ens}}[k] =\big{(}I_{n}\otimes\beta^{\mathsf{T}}\big{)}\left[\begin{array}{ cc}I_{n}\otimes\overline{V}^{\mathsf{I}}&I_{n}\otimes\mathds{1}_{m}\end{array} \right]\left[\begin{array}{c}\mathbf{\epsilon}_{0}[k]\\ \mathbf{\epsilon}_{0}[k]\end{array}\right]\] \[=\big{(}I_{n}\otimes\beta^{\mathsf{T}}\overline{V}^{\mathsf{I}} \big{)}\mathbf{\epsilon}_{0}[k]+\mathbf{\epsilon}_{0}[k]. 
\tag{37}\] That is to say, once we derive the dynamics of \(\mathbf{\epsilon}_{0}[k]\) and \(\mathbf{\epsilon}_{0}[k]\), the dynamics of \(\mathbf{\epsilon}_{\mathrm{ens}}[k]\) can be accordingly derived. In terms of CKF-algo, it can be theoretically shown that the predicted observable state \(\hat{\mathbf{\xi}}_{\mathrm{o}}\) under CKF-algo follows \[\hat{\mathbf{K}}_{k}=\hat{\mathbf{P}}_{k}\mathbf{H}_{\mathrm{o}}^{\mathsf{T}} \big{(}\mathbf{H}_{\mathrm{o}}\hat{\mathbf{P}}_{k}\mathbf{H}_{\mathrm{o}}^{\mathsf{T}}+ \hat{\mathbf{R}}\big{)}^{-1} \tag{38}\] \[\hat{\mathbf{P}}_{k+1}=\mathbf{F}_{\mathrm{o}}[k](\hat{\mathbf{P}}_{k}-\hat{\bm {K}}_{k}\mathbf{H}_{\mathrm{o}}\hat{\mathbf{P}}_{k})\mathbf{F}_{\mathrm{o}}[k]^{\mathsf{T}}+W_ {\mathrm{o}}\] (39) \[\hat{\mathbf{\xi}}_{\mathrm{o}}[k+1]=\mathbf{F}_{\mathrm{o}}[k]\hat{\mathbf{ \xi}}_{\mathrm{o}}[k]+\hat{\mathbf{K}}_{k}(\mathbf{y}[k]-\mathbf{H}_{\mathrm{o}}\hat{\mathbf{ \xi}}_{\mathrm{o}}[k]) \tag{40}\] where \(\mathbf{F}_{\mathrm{o}}[k]:=A[k]\otimes I_{m-1}\), \(\mathbf{H}_{\mathrm{o}}:=C\otimes I_{m-1}\), \(W_{\mathrm{o}}:=(I_{n}\otimes\overline{V})\hat{W}(I_{n}\otimes\overline{V})^{ \mathsf{T}}\), are the system matrix, measurement matrix, and system noise covariance for the observable subspace. Note that \(\hat{\mathbf{K}}_{k}:=(I_{n}\otimes\overline{V})\mathbf{K}_{k}\) and \(\hat{\mathbf{P}}_{k}:=(I_{n}\otimes\overline{V})\mathbf{P}_{k}(I_{n}\otimes\overline{V})^ {\mathsf{T}}\) are understood as the Kalman gain and the error covariance of CKF-algo in observable state space, respectively. The detailed derivation of (40) is attached in Appendix below for reference. The next result shows that the averaged atomic time \(\mathrm{TA}[k]\) of JST-algo and CKF-algo are equivalent to each other if we adopt some specific guesses of the noise covariance and choose equal weights for the clocks. **Theorem 2**: _Consider the system model (4) for an \(m\)-clock ensemble. For a given initial guess \(\hat{\mathbf{x}}[0]\), if the guess \(\hat{W}\) of the system noise covariance is given by \(\hat{W}=Q\otimes I_{m}\) for some \(Q\geq 0\), then the averaged atomic time \(\mathrm{TA}[k]\) of the CKF-algo is given by (27) with the ensemble prediction error_ \[\mathbf{\epsilon}_{\mathrm{ens}}[k+1]= A[k]\mathbf{\epsilon}_{\mathrm{ens}}[k]+(I_{n}\otimes\beta^{ \mathsf{T}})\mathbf{v}[k]\] \[-(I_{n}\otimes\beta^{\mathsf{T}}\overline{V}^{\mathsf{I}})\hat{\mathbf{K}}_{k} (\mathbf{H}_{\mathrm{o}}\mathbf{\epsilon}_{\mathrm{o}}^{\mathrm{CKF}}[k]+\mathbf{w}[k]) \tag{41}\] _where the prediction error of observable state \(\mathbf{\xi}_{\mathrm \(\mathbf{K}_{k}\mathbf{H})\mathbf{\epsilon}[k]-\mathbf{K}_{k}\mathbf{w}[k]+\mathbf{v}[k]\), we have \[\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k+1]= A[k]\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]+\frac{1}{m}\big{(}I_{n} \otimes\mathds{1}_{m}^{\sf T}\big{)}\mathbf{v}[k]\] \[-\frac{1}{m}\big{(}I_{n}\otimes\mathds{1}_{m}^{\sf T}\big{)}\mathbf{K }_{k}(\mathbf{H}\mathbf{\epsilon}[k]+\mathbf{w}[k]) \tag{43}\] Note that the condition \(\hat{W}=Q\otimes I_{m}\) indicates \((I_{n}\!\otimes\!\mathds{1}_{m}^{\sf T})\mathbf{K}_{k}=(I_{n}\!\otimes\!\mathds{1 }_{m}^{\sf T})\mathbf{P}_{k}\mathbf{H}^{\sf T}\big{(}\mathbf{H}\mathbf{P}_{k}\mathbf{H}^{\sf T}+ \hat{R}\big{)}^{-1}=0\) for \(k=0,1,\ldots,T\), because \((I_{n}\otimes\mathds{1}_{m}^{\sf T})\mathbf{P}_{k}\mathbf{H}^{\sf T}=0\), \(k=0,1,\ldots,T\). In particular, since \(\mathbf{P}_{0}=pI_{nm}\), it follows that \((I_{n}\otimes\mathds{1}_{m}^{\sf T})\mathbf{P}_{0}\mathbf{H}^{\sf T}=0\) holds. 
Furthermore, let \((I_{n}\otimes\mathds{1}_{m}^{\sf T})\mathbf{P}_{k}\mathbf{H}^{\sf T}:=\mathbf{U}_{k}\), we have \[\mathbf{U}_{1}= (I_{n}\otimes\mathds{1}_{m}^{\sf T})\left(\mathbf{F}[0](\mathbf{P}_{0}- \mathbf{K}_{0}\mathbf{H}\mathbf{P}_{0})\mathbf{F}[0]^{\sf T}+\hat{W}\right)\mathbf{H}^{\sf T}\] \[= pA[0]A[0]^{\sf T}(I_{n}\otimes\mathds{1}_{m}^{\sf T})\mathbf{H}^{\sf T }+(I_{n}\otimes\mathds{1}_{m}^{\sf T})\hat{W}\mathbf{H}^{\sf T}=0,\] \[\mathbf{U}_{2}= (I_{n}\otimes\mathds{1}_{m}^{\sf T})\left(\mathbf{F}[1](\mathbf{P}_{0}- \mathbf{K}_{1}\mathbf{H}\mathbf{P}_{1})\mathbf{F}[1]^{\sf T}+\hat{W}\right)\mathbf{H}^{\sf T}\] \[= A[1](I_{n}\otimes\mathds{1}_{m}^{\sf T})\left(\mathbf{F}[0](\mathbf{P}_ {0}-\mathbf{K}_{0}\mathbf{H}\mathbf{P}_{0})\mathbf{F}[0]^{\sf T}+\hat{W}\right)\] \[\cdot\mathbf{F}[1]^{\sf T}\mathbf{H}^{\sf T}+(I_{n}\otimes\mathds{1}_{m}^ {\sf T})\hat{W}\mathbf{H}^{\sf T}\] \[= pA[1]A[0]A[0]^{\sf T}A[1]^{\sf T}(I_{n}\otimes\mathds{1}_{m}^{ \sf T})\mathbf{H}^{\sf T}=0,\] whereas the proof for \(\mathbf{U}_{k}=0,\,k>2\) can be similarly handled. Thus, the prediction error of unobservable state is given by \[\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k+1]= A[k]\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]+\frac{1}{m}\big{(}I_{n} \otimes\mathds{1}_{m}^{\sf T}\big{)}\mathbf{v}[k] \tag{44}\] Now, noting that \(\beta^{\sf T}\overline{V}^{\sf T}\overline{V}=\beta^{\sf T}-\frac{1}{m} \mathds{1}_{m}^{\sf T}=\beta_{\rm df}^{\sf T}\), it follows from \[\mathbf{\epsilon}_{\rm ens}[k+1]= \big{(}I_{n}\otimes\beta^{\sf T}\overline{V}^{\sf T}\big{)}\mathbf{ \epsilon}_{\rm 0}^{\rm CKF}[k+1]+\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k+1]\] \[= \big{(}I_{n}\otimes\beta^{\sf T}\overline{V}^{\sf T}\big{)}\Big{(} \mathbf{F}_{\rm o}[k]\mathbf{\epsilon}_{\rm o}^{\rm CKF}[k]+(I_{n}\otimes\overline{V}) \mathbf{v}[k]\Big{)}\] \[+A[k]\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]+\frac{1}{m}\big{(}I_{n} \otimes\mathds{1}_{m}^{\sf T}\big{)}\mathbf{v}[k]\] \[-\big{(}I_{n}\otimes\beta^{\sf T}\overline{V}^{\sf T}\big{)}\hat {\mathbf{K}}_{k}\Big{(}\mathbf{H}_{\rm o}\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]+\mathbf{w}[k]\Big{)}\] \[= A[k]\big{(}I_{n}\otimes\beta^{\sf T}\big{)}\mathbf{\epsilon}[k]+\big{(} I_{n}\otimes\beta^{\sf T}\big{)}\mathbf{v}[k]\] \[-\big{(}I_{n}\otimes\beta^{\sf T}\overline{V}^{\sf T}\big{)}\hat {\mathbf{K}}_{k}\Big{(}\mathbf{H}_{\rm o}\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]+\mathbf{w}[k]\Big{)} \tag{45}\] that (41) holds. Then it can be seen that CKF-algo and JST-algo generate the same averaged atomic time \(\rm TA[k]\) if and only if \(\big{(}I_{n}\!\otimes\!\beta^{\sf T}\overline{V}^{\sf T}\big{)}\hat{\mathbf{K}}_{k}= \big{(}I_{n}\!\otimes\!\beta^{\sf T}_{\rm df}\overline{V}^{\sf T}\big{)}\mathbf{K}_{k }=(I_{n}\!\otimes\!\beta^{\sf T}_{\rm df})\mathbf{K}_{k}=0\), i.e., \((I_{n}\otimes\beta^{\sf T}_{\rm df})\mathbf{P}_{k}\mathbf{H}^{\sf T}=0\), \(k=0,1,\ldots,T\). Recalling \(\mathbf{P}_{0}=pI_{nm}\) and \(\beta_{\rm df}\) satisfies \(\beta^{\sf T}_{\rm df}\mathds{1}_{m}=0\), it follows that \((I_{n}\otimes\beta^{\sf T}_{\rm df})\mathbf{P}_{0}\mathbf{H}^{\sf T}=0\) and \((I_{n}\otimes\beta^{\sf T}_{\rm df})\mathbf{P}_{1}\mathbf{H}^{\sf T}=0\) holds if and only if \(\beta_{\rm df}=0\), i.e., \(\beta=\frac{1}{m}\mathds{1}_{m}\), which completes the proof. **Remark 2**: _The equivalence result in Theorem 2 indicates that more precise averaged atomic time can be achieved by the clock ensemble with larger number \(m\) of the clocks for both JST-algo and CKF-algo in a homogeneous ensemble (see Remark 1). 
However, it is worth noting that the computation cost of the CKF-algo may be larger than the generalized JST-algo when the number \(m\) of the clocks is too big because CKF-algo requires more matrix computation than the generalized JST-algo. A discussion in terms of the runtime of CKF-algo and the generalize JST-algo will be given in Section IV later._ ### Relation Between The Generalized JST-algo and CKF-algo in Clock Residual Except the comparison of accuracy with the averaged atomic time \(\rm TA[k]\), it is important to compare clock residuals \(\epsilon_{i}[k]:=\Delta h^{i}[k]-\Delta\hat{h}^{i}[k]\) of the CKF-algo and the generalized JST-algo sequence the clock residuals are related to stability of the generated time. The next result reveals that the mean of the clock residual \(\mathbf{\epsilon}_{1}=(\epsilon_{1},\ldots,\epsilon_{m})^{\sf T}\) of the generalized JST-algo and CKF-algo may be eventually equivalent in the case if equal weights are considered for the clocks. **Theorem 3**: _Consider the system model (4) for an \(m\)-clock ensemble. For a given initial guess \(\hat{\mathbf{x}}[0]\), if the weights of the atomic clocks are all equal, i.e., \(\beta=\frac{1}{m}\mathds{1}_{m}\), and if the guess \(\hat{W}\) of the state noise covariance is given by \(\hat{W}=Q\otimes I_{m}\), for some \(Q\geq 0\), then the clock residuals \(\mathbf{\epsilon}_{1}\) of the generalized JST-algo and CKF-algo respectively follow_ \[\mathbf{\epsilon}_{1}^{\rm JST}[k]=-\overline{V}^{\sf T}\mathbf{w}[k]+C\mathbf{ \epsilon}_{\rm 0}^{\rm CKF}[k]\mathds{1}_{m} \tag{46}\] \[\mathbf{\epsilon}_{1}^{\rm CKF}[k]=\overline{V}^{\sf T}\mathbf{H}_{\rm o} \mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]+C\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]\mathds{1}_{m} \tag{47}\] _for \(k=1,2,\ldots,T\), where \(\mathbf{\epsilon}_{\rm o}^{\rm CKF}[k]\) and \(\mathbf{\epsilon}_{\rm 0}^{\rm CKF}[k]\) are given by (42) and (44), respectively._ _Furthermore, if the sampling interval is constant, i.e., \(\tau_{k}=\tau\) for \(k=0,\ldots,T\), then the mean of clock residual \(\mathbf{\epsilon}_{1}\) satisfy_ \[\lim_{k\to\infty}\Big{\mathbb{E}\Big{[}\mathbf{\epsilon}_{1}^{\rm JST}[k]-\mathbf{ \epsilon} to zero because of the observable pair \((\mathbf{F}_{\rm o}[k],\mathbf{H}_{\rm o})\). Thus, (48) is immediate since \[\mathbb{E}\Big{[}\mathbf{\epsilon}_{1}^{\rm JST}[k]-\mathbf{\epsilon}_{1}^{ \rm CKF}[k]\Big{]} =-\mathbb{E}\Big{[}\overline{V}^{\dagger}\mathbf{w}[k]+\overline{V}^{ \dagger}\mathbf{H}_{\rm o}\mathbf{\epsilon}_{\rm o}^{\rm CKF}[k]\Big{]}\] \[=-\overline{V}^{\dagger}\mathbf{H}_{\rm o}\mathbb{E}\Big{[}\mathbf{ \epsilon}_{\rm o}^{\rm CKF}[k]\Big{]} \tag{53}\] is converging to 0. Now, since \[\mathbb{C}\Big{[}\mathbf{\epsilon}_{1}^{\rm JST}[k]\Big{]}-\mathbb{C }\Big{[}\mathbf{\epsilon}_{1}^{\rm CKF}[k]\Big{]} =-\mathbb{C}\Big{[}\overline{V}^{\dagger}\mathbf{H}_{\rm o}\mathbf{ \epsilon}_{\rm o}^{\rm CKF}[k]\Big{]}\] \[=-\overline{V}^{\dagger}\mathbf{H}_{\rm o}\hat{\mathbf{P}}[k]\mathbf{H}_{\rm o }^{\sf T}\big{(}\overline{V}^{\dagger}\big{)}^{\sf T}\leq 0\] holds for \(\mathbf{w}[k]=0\), \(k=0,1,\ldots,T\), the proof is complete. 
**Remark 3**: _Note that the term \(C\mathbf{\epsilon}_{\rm o}^{\rm CKF}[k]\) in both (46) and (47) under the generalized JST-algo and CKF-algo is nothing but the averaged atomic time \({\rm TA}[k]\) since the prediction error \(\mathbf{\epsilon}_{\rm o}^{\rm CKF}\) of observable state in (44) reduces to the ensemble prediction error \(\mathbf{\epsilon}_{\rm ens}\) in (28) when the conditions of Theorem 3 are satisfied. Therefore, the result (46) in Theorem 3 indicates that if there is no observation noise, i.e., \(\mathbf{w}[k]=0\), \(k=0,\ldots,T\), then the clock residuals under the generalized JST-algo are equalized to the averaged atomic time \({\rm TA}[k]\) for all the clocks, which is consistent with the analysis in Section II-D.2. Meanwhile, the result also indicates that sophisticated measuring equipment (hardware) is required in the implementation of the generalized JST-algo to guarantee stability of the generated local time (otherwise the clock residuals may be significantly different to each other and hence instability may be risen when one of the clocks leaves the ensemble)._ **Remark 4**: _Theorem 3 indicates that JST-algo can guarantee lower covariance of the clock residual \(\mathbf{\epsilon}_{1}\) than CKF-algo when the observation noise is small enough. More precisely, if the actual covariance \(R\) of the observation noise \(\mathbf{w}[k]\) satisfies_ \[R-\mathbf{H}_{\rm o}\hat{\mathbf{P}}_{\rm ss}\mathbf{H}_{\rm o}^{\sf T}<0 \tag{54}\] _then the covariance of the clock residuals \(\mathbf{\epsilon}_{1}[k]\) of the generalized JST-algo is never larger than CKF-algo as \(k\to\infty\) due to_ \[\lim_{k\to\infty}\Big{\{}\mathbb{C}\Big{[}\mathbf{\epsilon}_{1}^{\rm JST }[k]\Big{]}-\mathbb{C}\Big{[}\mathbf{\epsilon}_{1}^{\rm CKF}[k]\Big{]}\Big{\}}\] \[=\overline{V}^{\dagger}\Big{(}R-\mathbf{H}_{\rm o}\hat{\mathbf{P}}_{\rm ss }\mathbf{H}_{\rm o}^{\sf T}\Big{)}\big{(}\overline{V}^{\dagger}\big{)}^{\sf T} \leq 0. \tag{55}\] _where \(\hat{\mathbf{P}}_{\rm ss}\) is the steady-state covariance of (39) satisfying the algebraic Riccati equation given by_ \[0= -\mathbf{F}_{\rm o}[k]\hat{\mathbf{P}}_{\rm ss}\mathbf{H}_{\rm o}^{\sf T} \big{(}\mathbf{H}_{\rm o}\hat{\mathbf{P}}_{\rm ss}\mathbf{H}_{\rm o}^{\sf T}+\hat{R}\big{)} ^{-1}\mathbf{H}_{\rm o}\hat{\mathbf{P}}_{\rm ss}\mathbf{F}_{\rm o}[k]^{\sf T}\] \[+\mathbf{F}_{\rm o}[k]\hat{\mathbf{P}}_{\rm ss}\mathbf{F}_{\rm o}[k]^{\sf T}- \hat{\mathbf{P}}_{\rm ss}+W_{\rm o}. \tag{56}\] _In such a case, combining the result of Theorems 2 and 3, the generalized JST-algo is hence considered as a better algorithm than CKF-algo for homogeneous ensembles. 
This is because in such a case, the clock residual \(\epsilon_{1}\) of clock 1 satisfies_ \[\lim_{k\to\infty}\Big{\{}\mathbb{C}\Big{[}\epsilon_{1}^{\rm JST }[k]\Big{]}-\mathbb{C}\Big{[}\epsilon_{1}^{\rm CKF}[k]\Big{]}\Big{\}}\] \[=\lim_{k\to\infty}e_{i}\left(\mathbb{C}\Big{[}\mathbf{\epsilon}_{1}^{ \rm JST}[k]\Big{]}-\mathbb{C}\Big{[}\mathbf{\epsilon}_{1}^{\rm CKF}[k]\Big{]} \right)\mathbf{e}_{i}^{\sf T}\] \[=e_{i}\overline{V}^{\dagger}\Big{(}R-\mathbf{H}_{\rm o}\hat{\mathbf{P}}_{ \rm ss}\mathbf{H}_{\rm o}^{\sf T}\Big{)}\big{(}\overline{V}^{\dagger}\big{)}^{\sf T }\mathbf{e}_{i}^{\sf T}\leq 0, \tag{57}\] _where \(e_{i}=[e_{i}^{j}]_{j=1,\ldots,m}\) denotes standard basis given by \(e_{i}^{j}=1\), \(e_{i}^{j}=0\), \(j\neq i\), e.g., \(e_{1}=[1\ \ 0\ \ldots\ 0]\)._ In the case if the covariance \(R\) of the observation noise \(\mathbf{w}[k]\) is too large to satisfy the condition (54), the next result can be used to further compare the variance of clock residual of a specific clock using the steady-state covariance. **Theorem 4**: _Consider the system model (4) for an \(m\)-clock ensemble. For a given initial guess \(\hat{\mathbf{x}}[0]\), if the conditions of Theorem 3 are all satisfied but with possible non-zero observation noises, i.e., \(R\geq 0\), then the variance of residual \(\epsilon_{i}\) of clock \(i\) of the generalized JST-algo and CKF-algo satisfy_ \[\lim_{k\to\infty}\Big{\{}\mathbb{C}\Big{[}\epsilon_{i}^{\rm JST}[k]\Big{]}- \mathbb{C}\Big{[}\epsilon_{i}^{\rm CKF}[k]\Big{]}\Big{\}}<0 \tag{58}\] _if and only if the covariance \(R\) of the observation noise \(\mathbf{w}[k]\) satisfies_ \[\mathcal{L}_{i}:=e_{i}\overline{V}^{\dagger}\Big{(}R-\mathbf{H}_{\rm o}\hat{\mathbf{P}}_{ \rm ss}\mathbf{H}_{\rm o}^{\sf T}\Big{)}\big{(}\overline{V}^{\dagger}\big{)}^{\sf T}e _{i}^{\sf T}<0. \tag{59}\] The result is a direct consequence of Theorem 3 with (55) and (57). ## 4 Numerical Simulations This section provides a couple of examples to demonstrate our results. In particular, we use a 5-clock ensemble (Example 1) with homogeneous second-order clocks to verify equivalence result in Theorem 2, and use a 3-clock ensemble (Example 2) with third-order clocks to verify the result of Theorems 3 and 4 in terms of clock residuals. ### Example 1: Second-order Clocks Consider a second-order homogeneous atomic clock ensemble with \(m=5\) clocks where variances of the system noises are set to \(\sigma_{1}^{j}=2.0587e-20\), \(\sigma_{2}^{j}=4.0760e-28\) for all the clocks. The sampling period is set to \(\tau=0.1\)s. The variances of observation noises are set to \(1e-12\) for all the clocks, i.e., \(R=1e-12I_{4}\). In the simulation, the initial state \(\mathbf{x}[0]\) of this 5-clock ensemble is set to a deterministic value around \(x_{i}^{j}[0]\in(1e-15,2e-15)\). The initial predicted value is set to \(\hat{\mathbf{x}}[0]=1e-15\mathds{1}_{5}\). The guess of the state noise (resp., measurement) covariance is set to the same as the actual one satisfying \(\hat{W}=Q\otimes I_{m}\) for some \(Q\geq 0\) (resp., \(\hat{R}=R\)). We let \(T=36000\) so that we can discuss the performance of the algorithms in one hour. #### 4.1.1 Equal Weights In the case of equal weights for the clocks, i.e., \(\beta=\frac{1}{5}\mathds{1}_{5}\), the averaged atomic time \({\rm TA}[k]\) of JST-algo is illustrated as the black line in Fig. 2 where the one simulated by CKF-algo with \(\mathbf{P}_{0}=1e-8I\) is shown as the grey line. 
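The setup of Example 1 can be reproduced by a direct implementation of the recursion (18)-(20). The sketch below is illustrative only: the per-clock covariance is the closed form implied by Eq. (9) for \(n=2\), and no attempt is made to reproduce Fig. 2 bit-exactly:

```python
import numpy as np

# Example 1 setup: m = 5 homogeneous second-order clocks, tau = 0.1 s.
m, tau = 5, 0.1
s1, s2 = 2.0587e-20, 4.0760e-28                       # variances of xi_1, xi_2
Q = np.array([[s1 * tau + s2 * tau**3 / 3.0, s2 * tau**2 / 2.0],
              [s2 * tau**2 / 2.0,            s2 * tau         ]])  # per-clock cov. from Eq. (9)
W_hat = np.kron(Q, np.eye(m))                         # guess Ŵ = Q ⊗ I_m
R_hat = 1e-12 * np.eye(m - 1)                         # guess R̂ of the observation noise

A = np.array([[1.0, tau],
              [0.0, 1.0]])
F = np.kron(A, np.eye(m))
Vbar = np.hstack([np.eye(m - 1), -np.ones((m - 1, 1))])
H = np.kron(np.eye(1, 2), Vbar)                       # H = C ⊗ V̄ with C = [1 0]

def ckf_step(x_hat, P, y):
    """One step of the CKF-algo, Eqs. (18)-(20)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_hat)          # (18)
    P_next = F @ (P - K @ H @ P) @ F.T + W_hat                # (19)
    x_hat_next = F @ x_hat + K @ (y - H @ x_hat)              # (20)
    return x_hat_next, P_next

# Predicted time deviations, Eq. (21): x̂_1[k] is simply the first m entries of x̂[k].
```

With equal weights, feeding the same simulated measurements to this recursion and to Algorithm 2 should give the same averaged atomic time up to round-off, which is the content of Theorem 2.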
In this case, it follows from Theorem 2 that CKF-algo and JST-algo ideally generate the same averaged atomic time \({\rm TA}[k]\), at least if there are no calculation errors. The averaged atomic time \({\rm TA}[k]\) of CKF-algo with the covariance reduction method [13] is illustrated as the yellow line in Fig. 2. It can be seen from this figure that the covariance reduction method is able to suppress the numerical instability of CKF-algo in a real implementation so that the generated averaged atomic time is close to the theoretical value (or, equivalently, the value of JST-algo). However, this method cannot completely avoid numerical instability (see the different overlapping Allan deviations of the averaged atomic time of CKF-algo [13], [24] and JST-algo represented by the yellow and black lines in Fig. 3). In terms of the short-time performance over 5 minutes, it can be seen from Fig. 3 that the overlapping Allan deviation of CKF-algo coincides with that of JST-algo (see the dashed line and the black markers). This is because the calculation errors are negligible at the beginning of the calculations and hence the averaged atomic times are almost the same under CKF-algo and JST-algo in a real implementation.

#### 4.1.2 Non-equal Weights

Now, we consider the case with non-equal weights. Let \(\beta=(0.250,0.375,0.125,0.125,0.125)^{\mathsf{T}}\). Since the necessary condition \(\beta=\frac{1}{m}\mathds{1}_{m}\) for equivalence in Theorem 2 is not satisfied, CKF-algo and JST-algo cannot generate the same averaged atomic time \(\mathrm{TA}[k]\). This fact can be verified by the averaged atomic time shown in Fig. 4, where the black and grey lines correspond to \(\mathrm{TA}[k]\) of JST-algo and CKF-algo in theory, respectively.

#### 4.1.3 Runtime

Finally, it is interesting to note that JST-algo is superior to CKF-algo in terms of runtime when the ensemble contains a large number of atomic clocks. Figure 5 shows the runtime of JST-algo and CKF-algo versus the number \(m\) of the clocks. It can be seen from the figure that the runtime of CKF-algo grows roughly exponentially as the size of the ensemble increases, whereas the runtime of JST-algo remains almost the same when the number of clocks increases from 2 to 20.

### Example 2: Third-order Clocks

Consider a third-order homogeneous atomic clock ensemble with \(m=3\) clocks where the variances of the system noises are set to \(\sigma_{1}^{j}=9e-26\), \(\sigma_{2}^{j}=7.5e-34\), and \(\sigma_{3}^{j}=1e-47\) for all the clocks. The sampling period is set to \(\tau=1\)s. In the simulation, both the initial state \(\mathbf{x}[0]\) and the guess of the initial state \(\hat{\mathbf{x}}[0]\) of this 3-clock ensemble are set to \(\hat{\mathbf{x}}[0]=\mathbf{x}[0]=1e-28\mathds{1}_{9}\). The guess of the state noise covariance is set to the same as the actual one satisfying \(\hat{W}=Q\otimes I_{m}\) for some \(Q\geq 0\). Furthermore, we let \(\beta=\frac{1}{3}\mathds{1}_{3}\).

#### 4.2.1 Confidence Interval of the Averaged Atomic Time

The confidence interval of the averaged atomic time \(\mathrm{TA}[k]\) is shown in Fig. 6 with the variance of the observation noises being \(1e-12\) for all the clocks (i.e., \(R=1e-12I_{3}\)) to compare the generalized JST-algo, CKF-algo, and CKF-algo with Brown and Greenhall's correction.
Figure 3: Overlapping Allan deviations of the time scale of JST-algo, CKF-algo, and CKF-algo [13] with equal weights.

Figure 4: The averaged atomic time \(\mathrm{TA}[k]\) under JST-algo and CKF-algo (theory) with non-equal weights.

Figure 5: The runtime of CKF-algo and JST-algo versus the number \(m\) of the clocks, where each runtime is taken as the averaged value of the runtimes of 500 simulations.

Figure 6: 98%-confidence interval of \(\mathrm{TA}[k]\) under CKF-algo, CKF-algo with Brown and Greenhall's correction, and the generalized JST-algo. The solid line represents the mean of \(\mathrm{TA}[k]\) over 50 simulations.

It can be seen from the figure that even though the confidence intervals of these algorithms coincide with each other in the early stage, they diverge from each other in the later stage. The generalized JST-algo can further narrow the confidence interval compared with CKF-algo with Brown's correction [16] and Greenhall's correction [24], meaning that numerical stability is improved.

#### 4.2.2 Clock Residual With Large Observation Noise

Note that \(R=1e-12I_{3}\) is too large to satisfy the condition (54). It can be calculated that \(\mathcal{L}_{1}=\mathcal{L}_{2}=5.56e-13\), \(\mathcal{L}_{3}=2.22e-13\). Thus, since the inequality (59) holds with the opposite sign, it follows from Theorem 4 that CKF-algo is better than the generalized JST-algo in generating smaller variances of the residuals for all the 3 clocks. This result can be verified by the residual of clock 1 illustrated in Fig. 7, where the red (resp., grey) line represents the one under CKF-algo (resp., JST-algo) with \(\boldsymbol{P}_{0}=1e-13I\). It can be seen from this figure that the residual of clock 1 under the generalized JST-algo is exactly the same as the theoretical value calculated by (46), which verifies Theorem 3.

#### 4.2.3 Clock Residual With Small Observation Noise

Let the variance of the observation noises be set to \(1e-27\) for all the clocks, i.e., \(R=1e-27I_{3}\), so that the conditions (54), (59) are satisfied with \(\mathcal{L}_{1}=\mathcal{L}_{2}=-6.0000e-26\), and \(\mathcal{L}_{3}=-6.0005e-26\). It follows from Theorem 4 that the generalized JST-algo is better than CKF-algo in generating smaller variances of the individual residuals for all the 3 clocks. Without loss of generality, the result for clock 1 can be verified by the grey and red lines representing the residual of clock 1 of CKF-algo and the generalized JST-algo in Fig. 8. In addition, we note that the red line in Fig. 9 represents the theoretical difference \(\mathcal{L}_{1}=\lim_{k\rightarrow\infty}\{\mathbb{C}[\epsilon_{1}^{\rm JST}[k]]-\mathbb{C}[\epsilon_{1}^{\rm CKF}[k]]\}\) between the generalized JST-algo and CKF-algo in the residual of clock 1, which is close to the actual value computed over many stochastic paths and hence verifies the results (55) and (57).

#### 4.2.4 Discussion on Size of Observation Noise

Now we briefly discuss the relation between the variance \(R=rI_{3}\) of the observation noises and the superiority of the generalized JST-algo compared to CKF-algo. It can be seen from Fig. 10 that when the observation noises are small enough (e.g., \(r\) is in the region \(A\) of the figure), \(\mathcal{L}_{i}<0\) holds for all the clocks and hence the generalized JST-algo is better than CKF-algo in each of the individual residuals (even though the averaged atomic times \(\mathrm{TA}[k]\) of the two methods are identical to each other as we discussed in Theorem 2).
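To make this criterion easy to evaluate numerically, the following minimal Python sketch computes \(\mathcal{L}_{i}\) from (56) and (59) for a given observation-noise variance \(r\); its sign indicates which algorithm yields the smaller residual variance for clock \(i\). The matrices \(F_{\rm o}\), \(H_{\rm o}\), \(W_{\rm o}\), and \(\overline{V}\) are assumed to be the observable-subsystem matrices defined earlier in the paper; the sketch is illustrative only and is not the code or clock parameters used in the simulations.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def criterion_L(i, r, F_o, H_o, W_o, Vbar):
    """Evaluate L_i in (59) for observation-noise covariance R = r * I."""
    R = r * np.eye(H_o.shape[0])
    # Steady-state prediction covariance P_ss from the estimation-form
    # Riccati equation (56): P = F P F' - F P H'(H P H' + R)^{-1} H P F' + W.
    # solve_discrete_are solves the dual (control-form) equation, so the
    # transposed system matrices are passed in.
    P_ss = solve_discrete_are(F_o.T, H_o.T, W_o, R)
    Vp = np.linalg.pinv(Vbar)
    M = Vp @ (R - H_o @ P_ss @ H_o.T) @ Vp.T
    return M[i, i]

# Illustrative sweep over r to locate the sign change of L_1; F_o, H_o,
# W_o, and Vbar would come from the clock-ensemble model.
# for r in np.logspace(-30, -10, 21):
#     print(r, criterion_L(0, r, F_o, H_o, W_o, Vbar))
```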
Alternatively, if the observation noises are large enough (e.g., \(r\) is in the region \(B\) of the figure), \(\mathcal{L}_{i}>0\) holds for all the clocks and hence the generalized JST-algo is worse than CKF-algo in each of the individual residuals. Therefore, recalling the discussion in Section 4.1.3 about the runtime of JST-algo and CKF-algo, when the observation noises are small enough in such a homogeneous clock ensemble, it is suggested to employ the generalized JST-algo (instead of CKF-algo) for time scale generation, especially when the number of the clocks is large, so that both the individual residuals and the runtime can be reduced.

Figure 7: The actual and theoretical residual of clock 1 under CKF-algo and the generalized JST-algo with large observation noises. The theoretical value for JST-algo is calculated by (46) in Theorem 3.

Figure 8: The actual residual of clock 1 under CKF-algo and the generalized JST-algo with tiny observation noises.

Figure 9: The actual and theoretical difference \(\mathcal{L}_{1}\) between JST-algo and CKF-algo in the residual of clock 1.

## V Conclusion

In this paper, we studied the comparison between two time scale generation algorithms using atomic clock ensembles, namely, CKF-algo and JST-algo. We presented a generalized JST-algo via the state-space model of the higher-order atomic clock ensemble, where the proposed generalized JST-algo reduces to the existing JST-algo for second-order clocks. By revealing the theoretical expressions of the averaged atomic times, we discussed the relation between the generalized JST-algo and CKF-algo. It is found that even though the measurement signal is not filtered, JST-algo can yield averaged atomic times that are independent of the observation noise. The prediction error of the Kalman filtering algorithm was rigorously characterized by using the prediction error of an observable state space. We proved that the generalized JST-algo is equivalent to CKF-algo in the sense of generating the averaged atomic time if and only if equal averaging weights are considered for the atomic clocks when the covariance matrices of the system noises are identical for all the clocks. In such a homogeneous clock ensemble, we further revealed the theoretical relation between the generalized JST-algo and CKF-algo in the individual clock residuals and presented the necessary and sufficient condition on the observation noises that determines which algorithm generates the clock residuals with smaller variances. We discussed the relation between the runtime of the algorithms and the number of clocks in one of the numerical examples. It is found that if the observation noises are small enough, owing to sufficiently sophisticated measuring equipment, one is suggested to employ the generalized JST-algo (instead of CKF-algo) for time scale generation, since the generalized JST-algo can generate smaller clock residuals and its calculation cost is lower than that of CKF-algo when the number of atomic clocks is large.
2304.12845
(Local) Differential Privacy has NO Disparate Impact on Fairness
In recent years, Local Differential Privacy (LDP), a robust privacy-preserving methodology, has gained widespread adoption in real-world applications. With LDP, users can perturb their data on their devices before sending it out for analysis. However, as the collection of multiple sensitive information becomes more prevalent across various industries, collecting a single sensitive attribute under LDP may not be sufficient. Correlated attributes in the data may still lead to inferences about the sensitive attribute. This paper empirically studies the impact of collecting multiple sensitive attributes under LDP on fairness. We propose a novel privacy budget allocation scheme that considers the varying domain size of sensitive attributes. This generally led to a better privacy-utility-fairness trade-off in our experiments than the state-of-art solution. Our results show that LDP leads to slightly improved fairness in learning problems without significantly affecting the performance of the models. We conduct extensive experiments evaluating three benchmark datasets using several group fairness metrics and seven state-of-the-art LDP protocols. Overall, this study challenges the common belief that differential privacy necessarily leads to worsened fairness in machine learning.
Héber H. Arcolezi, Karima Makhlouf, Catuscia Palamidessi
2023-04-25T14:18:12Z
http://arxiv.org/abs/2304.12845v2
# (Local) Differential Privacy has NO Disparate Impact on Fairness ###### Abstract In recent years, Local Differential Privacy (LDP), a robust privacy-preserving methodology, has gained widespread adoption in real-world applications. With LDP, users can perturb their data on their devices before sending it out for analysis. However, as the collection of multiple sensitive information becomes more prevalent across various industries, collecting a single sensitive attribute under LDP may not be sufficient. Correlated attributes in the data may still lead to inferences about the sensitive attribute. This paper empirically studies the impact of collecting multiple sensitive attributes under LDP on fairness. We propose a novel privacy budget allocation scheme that considers the varying domain size of sensitive attributes. This generally led to a better privacy-utility-fairness trade-off in our experiments than the state-of-art solution. Our results show that LDP leads to slightly improved fairness in learning problems without significantly affecting the performance of the models. We conduct extensive experiments evaluating three benchmark datasets using several group fairness metrics and seven state-of-the-art LDP protocols. Overall, this study challenges the common belief that differential privacy necessarily leads to worsened fairness in machine learning. Keywords:Fairness Local Differential Privacy Machine Learning. ## 1 Introduction The advent of the Big Data era has brought many benefits but has also raised significant concerns about privacy and algorithm bias in Machine Learning (ML). On the one hand, with massive amounts of data generated and collected by various entities, protecting individuals' personal information has become increasingly challenging. In this context, research communities have proposed different methods to preserve privacy, with \(\epsilon\)-differential privacy (\(\epsilon\)-DP) [16] standing out as a formal definition that allows quantifying the privacy-utility trade-off with the parameter \(\epsilon\) (the smaller, the more private). At the same time, there have been many efforts to develop methods and metrics to evaluate and promote fairness in ML due to unequal treatments of individuals or groups based on factors such as race, gender, or socio-economic status [5, 29, 30, 31]. This means that privacy and fairness are essential for ML to apply in practice successfully. In real-life scenarios, it is not common anymore for entities to have access to _sensitive_ (or _protected1_) attributes like race due to legal restrictions and regulations2 governing their collection. Therefore, it can be difficult for these entities to quantify/assess the fairness of the models they deploy since they cannot access the protected attributes used for the fairness assessment. One way to address this problem [32], ignoring legal feasibility, is to enable users to share their sensitive attributes using protocols satisfying Local Differential Privacy (LDP) [25], and learn a non-discriminatory predictor. Footnote 1: Throughout this paper, we use the term _sensitive_ attribute from a privacy perspective and the term _protected_ attribute from a fairness perspective. Note that we always consider _protected_ attributes as _sensitive_ attributes. Footnote 2: For example, the General Data Protection Regulation (GDPR) [3]. 
However, while collecting the sensitive attribute in a privacy-preserving manner may seem sufficient, it is worth noting that proxy variables can exist [24] and can still lead to inferences about the sensitive attribute (_e.g._, by exploiting correlations). It is also important to acknowledge that proxy variables may be considered as personal information under the GDPR, requiring the same level of privacy protection. Thus, as collecting multiple sensitive information (_i.e._, _multidimensional data_) becomes increasingly prevalent in various industries, protecting this information is a legal obligation and an ethical responsibility. Therefore, this paper contributes to an in-depth empirical analysis of how pre-processing multidimensional data with \(\epsilon\)-LDP affects the fairness and utility in ML binary classification tasks. We evaluated several group fairness metrics [5, 30], including disparate impact [9], equal opportunity [21], and overall accuracy [12], on benchmark datasets, namely, Adult [14], ACSCoverage [14], and LSAC [40]. To broaden the scope of our study, we have experimentally assessed seven state-of-the-art LDP protocols, namely, Generalized Randomized Response (GRR) [23], Binary Local Hashing (BLH) [10], Optimal Local Hashing (OLH) [39], RAPPOR [18], Optimal Unary Encoding (OUE) [39], Subset Selection (SS) [41, 38], and Thresholding with Histogram Encoding (THE) [39]. Moreover, since proxy variables can still introduce unintended biases and thus lead to unfair decisions [24], we consider the setting in which each proxy (sensitive attribute) is collected independently under LDP guarantees. In other words, applying this independent setting automatically removes the correlation between the proxy attributes. To this end, the privacy budget \(\epsilon\) should be divided among all sensitive attributes to ensure \(\epsilon\)-LDP under sequential composition [17]. Let \(d_{s}\) be the total number of sensitive attributes, the LDP literature for multidimensional data [37, 6] considers a **uniform** solution that collects each sensitive attribute under \(\frac{\epsilon}{d_{s}}\)-LDP. In this paper, we propose a new **k-based** solution that considers the varying domain size \(k\) of different sensitive attributes. More precisely, for the \(j\)-th sensitive attribute, we allocate \(\epsilon_{j}=\frac{\epsilon\cdot k_{j}}{\sum_{i=1}^{d_{s}}k_{i}}\). Overall, our study challenges the common belief that using DP necessarily leads to worsened fairness in ML [8, 20]. Our findings show that training a classifier on LDP-based multidimensional data slightly improved fairness results without significantly affecting classifier performance. We hope this work can aid practitioners in collecting multidimensional user data in a privacy-preserving manner by providing insights into which LDP protocol and privacy budget-splitting solutions are best suited to their needs. In summary, the three main contributions of this paper are: * We empirically analyze the impact of pre-processing multidimensional data with \(\epsilon\)-LDP on fairness and utility; * We compare the impact of seven state-of-the-art LDP protocols under a homogeneous encoding when training ML binary classifiers (see Fig. 1) on fairness and utility; * We propose a new privacy budget splitting solution named k-based, which generally led to a better privacy-utility-fairness trade-off in our experiments. All our codes are available in a **GitHub repository**[2]. **Outline.** The rest of this paper is organized as follows. 
Section 2 discusses related work. In Section 3, we present the notation, fairness, and LDP protocols used. Next, Section 4 states the problem addressed in this paper and the proposed k-based solution. Section 5 details the experimental setting and main results. Finally, we conclude this work and indicate future perspectives in Section 6.

## 2 Related Work

The recent survey work by Fioretto _et al._[19] discusses two views about the relationship between central DP and fairness in learning and decision tasks. The first view considers DP and fairness in an aligned space (_e.g._, [15]), which mainly corresponds to individual fairness metrics. The other view regards DP and fairness as "enemies" (_e.g._, [20, 34, 8]), which mainly corresponds to group fairness notions. For instance, Pujol _et al._[34] investigated disparities in decision tasks using \(\epsilon\)-DP data. Regarding learning tasks, Bagdasaryan, Poursaeed, & Shmatikov [8] studied the impact of training \(\epsilon\)-DP deep learning (_a.k.a. gradient perturbation_) models on unprivileged groups. By keeping the same hyperparameters as the non-private baseline model, the authors noticed that the accuracy for the unprivileged group dropped more than for the privileged one. Similarly, Ganev et al. [20] have also noticed disparities for the unprivileged group when generating \(\epsilon\)-DP synthetic data for training ML models by also keeping default hyperparameters of the differentially private generative models. In this paper, we aim to explore to what extent the claim that training an ML classifier on \(\epsilon\)-LDP multidimensional data (_a.k.a. input perturbation_), while fixing the same set of hyperparameters, negatively impacts the unprivileged group is valid. Regarding the local DP setting, the work of Mozannar, Ohannessian, & Srebro [32] was the first one to propose a fair classifier when sanitizing only the protected attribute with \(\epsilon\)-LDP in both training and testing sets. More recently, the work of Chen _et al._[13] considers a "semi-private" setting in which a small portion of users share their protected attribute with no sanitization and all other users apply an \(\epsilon\)-LDP protocol. While the two aforementioned research works [13, 32] answer interesting questions by collecting a single sensitive attribute using only the GRR [23] protocol, we consider in this work multiple sensitive attributes, which reflects real-world data collections, seven \(\epsilon\)-LDP protocols, and several fairness and utility metrics. In addition, we also propose a new privacy budget splitting solution named k-based, which generally leads to better fairness and performance in ML binary classification tasks.

## 3 Preliminaries and Background

This section briefly reviews the group fairness metrics, LDP, and LDP protocols. The notation used throughout this paper is summarized in Table 1. Note that in this work, we always consider a single protected attribute and assess fairness w.r.t. that attribute. For LDP, we consider a set of sensitive attributes instead. Moreover, the protected attribute is always considered sensitive, but the opposite is not necessarily true.

### Group Fairness Metrics

In this paper, we focus on group fairness metrics, which assess the fairness of ML models for different demographic groups that differ by the protected attribute (_e.g._, race, gender, age,...). Let \(A_{p}\) be the protected attribute and \(\hat{Y}\) be a predictor of a binary target \(Y\in\{0,1\}\). The metrics we use to evaluate fairness are: * **Disparate Impact (DI)**[9].
DI is defined as the ratio of the proportion of positive predictions (\(\hat{Y}=1\)) for the _unprivileged_ group (\(A_{p}=0\)) over the proportion of positive predictions for the _privileged_ group (\(A_{p}=1\)). The formula for DI is: \[\text{DI}=\frac{\Pr[\hat{Y}=1|A_{p}=0]}{\Pr[\hat{Y}=1|A_{p}=1]}.\] (1) Note that a perfect DI value is equal to 1.
* **Statistical Parity Difference (SPD) [4].** Instead of the ratio, SPD computes the difference in the proportion of positive predictions for _unprivileged_ and _privileged_ groups and is defined as: \[\text{SPD}=\Pr[\hat{Y}=1|A_{p}=1]-\Pr[\hat{Y}=1|A_{p}=0].\] (2) A perfect SPD value is equal to 0.
* **Equal Opportunity Difference (EOD) [21].** EOD measures the difference between the true positive rates (_i.e._, recall) of the _unprivileged_ group and the _privileged_ groups. Formally, EOD is defined as: \[\text{EOD}=\Pr[\hat{Y}=1|Y=1,A_{p}=1]-\Pr[\hat{Y}=1|Y=1,A_{p}=0].\] (3) A perfect EOD value is equal to 0.
* **Overall Accuracy Difference (OAD) [12].** OAD measures the difference between the overall accuracy rates of the _privileged_ group and the _unprivileged_ group. Formally, OAD is represented as: \[\text{OAD}=\Pr[\hat{Y}=Y|A_{p}=1]-\Pr[\hat{Y}=Y|A_{p}=0].\] (4) A perfect OAD value is equal to 0.

\begin{table} \begin{tabular}{c c} \hline Symbol & Description \\ \hline \(n\) & Number of users \\ \([n]\) & Set of integers, \(\{1,2,\dots,n\}\) \\ \(\mathbf{x}_{i}\) & \(i\)-th coordinate of vector \(\mathbf{x}\) \\ \(z=\mathcal{M}(v)\) & Protocol \(\mathcal{M}\) perturbs \(v\) into \(z\) under \(\epsilon\)-LDP \\ \(X\) & Set of “non-sensitive” attributes \\ \(A_{s}\) & Set of sensitive attributes (**privacy viewpoint**) \\ \(A_{p}\) & Protected attribute (**fairness viewpoint**), \(A_{p}\in A_{s}\) \\ \(Z_{s}\) & Set of locally differentially private sensitive attributes, \(Z_{s}=\mathcal{M}(A_{s})\) \\ \(k_{j}\) & Domain size of the \(j\)-th attribute \\ \(d_{s}\) & Number of sensitive attributes, \(d_{s}=|A_{s}|\) \\ \(Y\) & Set of target values, \(Y=\{0,1\}\) \\ \(D\) & Original dataset, \(D=(X,A_{s},Y)\) \\ \(D_{z}\) & Dataset with sanitized sensitive attributes, \(D_{z}=(X,Z_{s},Y)\) \\ \hline \end{tabular} \end{table} Table 1: Notations

### Local Differential Privacy

In this article, we use LDP [25] as the privacy model, which is formalized as: Definition 1 (\(\epsilon\)-Local Differential Privacy): A randomized algorithm \(\mathcal{M}\) satisfies \(\epsilon\)-local-differential-privacy (\(\epsilon\)-LDP), where \(\epsilon>0\), if for any pair of input values \(v_{1},v_{2}\in Domain(\mathcal{M})\) and any possible output \(z\) of \(\mathcal{M}\): \[\Pr[\mathcal{M}(v_{1})=z]\leq e^{\epsilon}\cdot\Pr[\mathcal{M}(v_{2})=z].\] Proposition 1 (Post-Processing [17]): _If \(\mathcal{M}\) is \(\epsilon\)-LDP, then \(f(\mathcal{M})\) is also \(\epsilon\)-LDP for any function \(f\)._ Proposition 2 (Sequential Composition [17]): _Let \(\mathcal{M}_{1}\) be an \(\epsilon_{1}\)-LDP protocol and \(\mathcal{M}_{2}\) be an \(\epsilon_{2}\)-LDP protocol. Then, the protocol \(\mathcal{M}_{1,2}(v)=(\mathcal{M}_{1}(v),\mathcal{M}_{2}(v))\) is \((\epsilon_{1}+\epsilon_{2})\)-LDP._

### LDP Protocols

Let \(A_{s}=\{v_{1},\ldots,v_{k}\}\) be a sensitive attribute with a discrete domain of size \(k=|A_{s}|\). In this subsection, we briefly review seven state-of-the-art LDP protocols.

#### 3.3.1 Generalized Randomized Response (GRR)

GRR [23] uses no particular encoding.
Given a value \(v\in A_{s}\), \(GRR(v)\) outputs the true value \(v\) with probability \(p\), and any other value \(v^{\prime}\in A_{s}\setminus\{v\}\), otherwise. More formally: \[\forall z\in A_{s}:\quad\Pr[GRR(v)=z]=\begin{cases}p=\frac{e^{\epsilon}}{e^{\epsilon}+k-1}\text{ if }z=v\\ q=\frac{1}{e^{\epsilon}+k-1}\text{ otherwise,}\end{cases}\] in which \(z\) is the perturbed value sent to the server.

#### 3.3.2 Binary Local Hashing (BLH)

Local Hashing (LH) protocols [39, 10] can handle a large domain size \(k\) by first using hash functions to map an input value to a smaller domain of size \(g\) (typically \(2\leq g\ll k\)), and then applying GRR to the hashed value. Let \(\mathscr{H}\) be a universal hash function family such that each hash function \(H\in\mathscr{H}\) hashes a value in \(A_{s}\) into \([g]\), _i.e._, \(H:A_{s}\rightarrow[g]\). With BLH, \([g]=\{0,1\}\), each user selects at random one hash function \(H\), calculates \(b=H(v)\), and perturbs \(b\) to \(z\) as: \[\Pr[z=1]=\begin{cases}p=\frac{e^{\epsilon}}{e^{\epsilon}+1}\text{ if }b=1\\ q=\frac{1}{e^{\epsilon}+1}\text{ if }b=0.\end{cases}\] The user sends the tuple \(\left\langle H,z\right\rangle\), _i.e._, the hash function and the perturbed value. Thus, for each user, the server can calculate \(S\left(\left\langle H,z\right\rangle\right)=\{v|H(v)=z\}\).

#### 3.3.3 Optimal LH (OLH)

To improve the utility of LH protocols, Wang _et al._[39] proposed OLH in which the output space of the hash functions in family \(\mathscr{H}\) is no longer binary as in BLH. Thus, with OLH, \(g=\lfloor e^{\epsilon}+1\rceil\), each user selects at random one hash function \(H\), calculates \(b=H(v)\), and perturbs \(b\) to \(z\) as: \[\forall i\in[g]:\quad\Pr[z=i]=\begin{cases}p=\frac{e^{\epsilon}}{e^{\epsilon}+g-1}\text{ if }b=i\\ q=\frac{1}{e^{\epsilon}+g-1}\text{ if }b\neq i.\end{cases}\] Similar to BLH, the user sends the tuple \(\left\langle H,z\right\rangle\) and, for each user, the server can calculate \(S\left(\left\langle H,z\right\rangle\right)=\{v|H(v)=z\}\).

#### 3.3.4 RAPPOR

The RAPPOR [18] protocol uses One-Hot Encoding (OHE) to interpret the user's input \(v\in A_{s}\) as a one-hot \(k\)-dimensional vector. More precisely, \(\mathbf{v}=OHE(v)\) is a binary vector with only the bit at position \(v\) set to \(1\) and the other bits set to \(0\). Then, RAPPOR randomizes the bits from \(\mathbf{v}\) independently to generate \(\mathbf{z}\) as follows: \[\forall i\in[k]:\quad\Pr[\mathbf{z}_{i}=1]=\begin{cases}p=\frac{e^{\epsilon/2}}{e^{\epsilon/2}+1}\text{ if }\mathbf{v}_{i}=1,\\ q=\frac{1}{e^{\epsilon/2}+1}\text{ if }\mathbf{v}_{i}=0,\end{cases}\] where \(p+q=1\) (_i.e._, symmetric). Afterwards, the user sends \(\mathbf{z}\) to the server.

#### 3.3.5 Optimal Unary Encoding (OUE)

To minimize the variance of RAPPOR, Wang _et al._[39] proposed OUE, which perturbs the \(0\) and \(1\) bits asymmetrically, _i.e._, \(p+q\neq 1\). Thus, OUE generates \(\mathbf{z}\) by perturbing \(\mathbf{v}\) as follows: \[\forall i\in[k]:\quad\Pr[\mathbf{z}_{i}=1]=\begin{cases}p=\frac{1}{2}&\text{if } \mathbf{v}_{i}=1,\\ q=\frac{1}{e^{\epsilon}+1}&\text{if }\mathbf{v}_{i}=0.\end{cases}\] Afterwards, the user sends \(\mathbf{z}\) to the server.

#### 3.3.6 Subset Selection (SS)

The SS [38, 41] protocol randomly selects \(1\leq\omega\leq k\) items within the input domain to report a subset of values \(\Omega\subseteq A_{s}\). The user's true value \(v\) has a higher probability of being included in the subset \(\Omega\), compared to the other values in \(A_{s}\setminus\{v\}\).
The optimal subset size that minimizes the variance is \(\omega=\lfloor\frac{k}{e^{\epsilon}+1}\rceil\). Given a value \(v\in A_{s}\), \(SS(v)\) starts by initializing an empty subset \(\Omega\). Afterwards, the true value \(v\) is added to \(\Omega\) with probability \(p=\frac{\omega e^{\epsilon}}{\omega e^{\epsilon}+k-\omega}\). Finally, it adds values to \(\Omega\) as follows: * If \(v\in\Omega\), then \(\omega-1\) values are sampled from \(A_{s}\setminus\{v\}\) uniformly at random (without replacement) and are added to \(\Omega\); * If \(v\notin\Omega\), then \(\omega\) values are sampled from \(A_{s}\setminus\{v\}\) uniformly at random (without replacement) and are added to \(\Omega\). Afterwards, the user sends the subset \(\Omega\) to the server.

#### 3.3.7 Thresholding with Histogram Encoding (THE)

Histogram Encoding (HE) [39] encodes the user value as a one-hot \(k\)-dimensional histogram, _i.e._, \(\mathbf{v}=[0.0,0.0,\ldots,1.0,0.0,\ldots,0.0]\) in which only the \(v\)-th component is \(1.0\). \(HE(\mathbf{v})\) perturbs each bit of \(\mathbf{v}\) independently using the Laplace mechanism [16]. Two different input values \(v_{1},v_{2}\in A_{s}\) will result in two vectors with L1 distance of \(\Delta=2\). Thus, HE will output \(\mathbf{z}\) such that \(\mathbf{z}_{i}=\mathbf{v}_{i}+\text{Lap}\left(\frac{2}{\epsilon}\right)\). To improve the utility of HE, Wang _et al._[39] proposed THE such that the user reports (or the server computes): \(S(\mathbf{z})=\{v\mid\mathbf{z}_{v}>\theta\}\), in which \(\theta\) is the threshold with optimal value in \((0.5,1)\). In this work, we use scipy.optimize.minimize_scalar to optimize \(\theta\) for a fixed \(\epsilon\) as: \(\min\limits_{\theta\in(0.5,1)}\quad\frac{2e^{\epsilon\theta/2}-1}{(1+e^{\epsilon(\theta-1/2)}-2e^{\epsilon\theta/2})^{2}}\).

## 4 Problem Setting and Methodology

We consider the scenario in which the server collects a set of multiple sensitive attributes \(A_{s}\) under \(\epsilon\)-LDP guarantees from \(n\) distributed users \(U=\{u_{1},\ldots,u_{n}\}\). Furthermore, in addition to the LDP-based multidimensional data, we assume that the users will also provide non-sanitized data \(X\), which we consider as "non-sensitive" attributes. The server aims to use both sanitized \(Z_{s}=\mathcal{M}(A_{s})\) and non-sanitized data \(X\) to train an ML classifier with a binary target variable \(Y=\{0,1\}\). Notice, however, that we will be training an ML classifier on \(D_{z}=(X,Z_{s},Y)\) but testing on \(D=(X,A_{s},Y)\) as the main goal is to _protect the privacy of the data used to train the ML model_ (_e.g._, to avoid membership inference attacks [22], reconstruction attacks [35], and other privacy threats [28]). In other words, instead of considering a system for on-the-fly LDP sanitization of test data, as in [32], we only sanitize the training set. With these elements in mind, our primary goal is to study the impact of training an ML classifier on \(D_{z}=(X,Z_{s},Y)\) compared to \(D=(X,A_{s},Y)\) on fairness and utility, using different LDP protocols and privacy budget splitting solutions. More precisely, we consider the setting where each sensitive attribute in \(A_{s}\) is collected independently under LDP guarantees. In this case, to satisfy \(\epsilon\)-LDP following Proposition 2, the privacy budget \(\epsilon\) must be split among the total number of sensitive attributes \(d_{s}=|A_{s}|\).
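To make this splitting step concrete, the following minimal Python sketch (illustrative helper names, not the code from the paper's repository) allocates a per-attribute budget with either the uniform rule or the k-based rule introduced above and then perturbs each sensitive attribute independently with GRR; by sequential composition (Proposition 2), the whole report satisfies \(\epsilon\)-LDP.

```python
import numpy as np

def split_budget(eps, domain_sizes, scheme="k-based"):
    """Split a total budget eps among d_s attributes (uniform or k-based)."""
    d_s = len(domain_sizes)
    if scheme == "uniform":
        return [eps / d_s] * d_s
    total_k = sum(domain_sizes)
    return [eps * k_j / total_k for k_j in domain_sizes]

def grr(value, domain, eps_j, rng):
    """GRR perturbation of one categorical value under eps_j-LDP."""
    k = len(domain)
    p = np.exp(eps_j) / (np.exp(eps_j) + k - 1)
    if rng.random() < p:
        return value
    # otherwise, report any other value of the domain uniformly at random
    others = [v for v in domain if v != value]
    return others[rng.integers(len(others))]

# Example: one user's record with d_s = 2 sensitive attributes.
rng = np.random.default_rng(0)
domains = [["F", "M"], ["black", "other"]]   # illustrative domains
record = ["F", "other"]
budgets = split_budget(1.0, [len(d) for d in domains], scheme="k-based")
sanitized = [grr(v, dom, e, rng) for v, dom, e in zip(record, domains, budgets)]
print(budgets, sanitized)
```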
To this end, the state-of-the-art [6, 37] solution, named **uniform**, proposes to split the privacy budget \(\epsilon\) uniformly among all attributes, _i.e._, allocating \(\frac{\epsilon}{d_{s}}\) for each attribute. However, as different sensitive attributes have different domain sizes \(k_{j}\), for \(j\in[d_{s}]\), we propose a new solution named **k-based** that splits the privacy budget \(\epsilon\) proportionally to the domain size of the attribute. That is, for the \(j\)-th attribute, we will allocate \(\epsilon_{j}=\frac{\epsilon\cdot k_{j}}{\sum_{i=1}^{d_{s}}k_{i}}\). In addition, each LDP protocol has a different way of encoding and perturbing users' data. We thus propose to compare all LDP protocols under the same encoding when training the ML classifier. More specifically, we will use OHE and Indicator Vector Encoding (IVE) [1] as all LDP protocols from Section 3.3 are designed for categorical data or discrete data with a known domain. For example, let \(\Omega\) be the reported subset of a user after using SS as the LDP protocol. Following IVE, we create a binary vector \(\mathbf{z}=[b_{1},\dots,b_{k}]\in\{0,1\}^{k}\) of length \(k\), where the \(v\)-th entry is set to \(1\) if \(v\in\Omega\), and \(0\), otherwise. In other words, \(\mathbf{z}\) represents the subset \(\Omega\) in a binary format. Fig. 1 illustrates the LDP encoding and perturbation at the user side and how to achieve a "homogeneous encoding" for all the seven LDP protocols at the server side. Last, all "non-sensitive" attributes \(X\) are encoded using OHE.

Figure 1: Overview of client-side encoding and perturbation steps for the seven different LDP protocols applied. On the server side, there is also a post-processing step with one-hot encoding (OHE) or indicator vector encoding (IVE), if needed.

## 5 Experimental Evaluation

In this section, we present our experiments' setting and main results. The primary research questions (RQ) we aim to answer are:
* **RQ1.** Overall, how does preprocessing multidimensional data with \(\epsilon\)-LDP affect the fairness and utility of ML binary classifiers with the same hyperparameters used before and after sanitization?
* **RQ2.** Which privacy budget-splitting solution leads to less harm to the fairness and utility of an ML binary classifier?
* **RQ3.** How do different LDP protocols affect the fairness and utility of an ML binary classifier, and which one is more suitable for the different real-world scenarios applied?

### Setup of Experiments

**General setting.** For all experiments, we consider the following setting:
* **Environment.** All algorithms are implemented in Python 3 with Numpy [36], Numba [27], and Multi-Freq-LDPy [7] libraries, and run on a local machine with 2.50GHz Intel Core i9 and 64GB RAM. The codes we develop for all experiments are available in a **GitHub repository**[2].
* **ML classifier.** We used the state-of-the-art3 LGBM [26] as predictor \(\hat{Y}\). Footnote 3: [https://www.kaggle.com/kaggle-survey-2022](https://www.kaggle.com/kaggle-survey-2022).
* **Encoding.** We only use discrete and categorical attributes, which are encoded using OHE or IVE (see Fig. 1), and the target is binary, _i.e._, \(Y\in\{0,1\}\).
* **Training and testing sets.** We randomly select 80% as the training set and the remaining 20% as the testing set. We apply LDP on the training set only. That is, the samples in the testing set are the original samples (_i.e._, no LDP).
* **Stability.** Since LDP protocols, train/test splitting, and ML algorithms are randomized, we report average results over 20 runs.

**Datasets.** Table 2 summarizes all datasets used in our experiments. For ease of reproducibility, we use real-world and open datasets.

\begin{table} \begin{tabular}{l l l l l} \hline \hline _Dataset_ & \(n\) & \(A_{p}\) & \(A_{s}\)_, domain size_\(k\) & \(Y\) \\ \hline Adult & 45849 & gender & - gender, \(k=2\) & income \\ & & & - race, \(k=5\) & \\ & & & - native country, \(k=41\) & \\ & & & - age, \(k=74\) & \\ ACSCoverage & 98739 & DIS & - DIS, \(k=2\) & PUBCOV \\ & & & - AGEP, \(k=50\) & \\ & & & - SEX, \(k=2\) & \\ & & & - SCHL, \(k=24\) & \\ LSAC & 20427 & race & - race, \(k=2\) & pass bar \\ & & & - gender, \(k=2\) & \\ & & & - family income, \(k=5\) & \\ & & & - full time, \(k=2\) & \\ \hline \hline \end{tabular} \end{table} Table 2: Description of the datasets used in the experiments.

* **Adult.** We use 26000 as the threshold to binarize the target variable "income" of the _reconstructed Adult_ dataset [14]. After cleaning, \(n=45849\) samples are kept. We excluded "capital-gain" and "capital-loss" and used the remaining 10 discrete and categorical attributes. We considered \(A_{s}=\{\text{gender, race, native-country, age}\}\) as sensitive attributes for LDP sanitization and \(A_{p}=\text{gender}\) as the protected attribute for fairness assessment.
* **ACSCoverage.** This dataset4 is retrieved with the folktables[14] Python package and the binary target "PUBCOV" designates whether an individual is covered by public health insurance or not. We select the year 2018 and the "Texas" state, with \(n=98739\) samples. We removed "DEAR", "DEYE", "DREM", and "PINCP" and used the remaining 15 discrete and categorical attributes. We considered \(A_{s}=\{\text{DIS, AGEP, SEX, SCHL}\}\) as sensitive attributes for LDP sanitization and \(A_{p}=\text{DIS}\) as the protected attribute (_i.e._, disability) for fairness assessment. Footnote 4: The full documentation for the description of all attributes is in [https://www.census.gov/programs-surveys/acs/microdata/documentation.html](https://www.census.gov/programs-surveys/acs/microdata/documentation.html).
* **LSAC.** This dataset is from the Law School Admissions Council (LSAC) National Bar Passage Study [40] and the binary target "pass_bar" indicates whether or not a candidate has passed the bar exam. After
We used the best hyperparameters found for the NonDP model and trained LGBM over \(D_{z}=(X,Z_{s},Y)\). For all datasets, we set \(d_{s}\) to 4. That is, \(d_{s}=|A_{s}|=4\). To satisfy \(\epsilon\)-LDP (_cf._ Definition 2), we split the privacy budget \(\epsilon\) following the two solutions described in Section 4 (_i.e._, the state-of-the-art uniform and our k-based solution). **Metrics.** We evaluate the performance of LGBM trained over the original data (_i.e._, NonDP baseline) and LDP-based data on privacy, utility, and fairness: * **Privacy.** We vary the privacy parameter in the range of \(\epsilon=\{0.25,0.5,1,2,4,8,10,20,50\}\). At \(\epsilon=0.25\) the ratio of probabilities is bounded by \(e^{0.25}\approx 1.3\) giving nearly indistinguishable distributions, whereas at \(\epsilon=50\) almost no privacy is guaranteed. * **Utility.** We use accuracy (acc), f1-score (f1), area under the receiver operating characteristic curve (auc), and recall as utility metrics; * **Fairness.** We use the metrics of Section 3.1 (_i.e._, DI, SPD, EOD, and OAD). ### Main Results **LDP impact on fairness.** Fig. 2 (Adult), Fig. 3 (ACSCoverage), and Fig. 4 (LSAC) illustrate the privacy-fairness trade-off for the NonDP baseline and all the seven LDP protocols, considering both uniform and our k-based privacy budget splitting solutions. From these figures, one can notice that fairness is, in general, slightly improved for all seven LDP protocols under both the uniform and the k-based solution. For instance, for the DI metric in Fig. 2, the NonDP data indicates a value of 0.44 showing discrimination against women and, by applying LDP protocols, DI tended to increase to \(\sim\)0.48 (with \(\epsilon=0.25\)) resulting in a slight improvement in fairness. Similarly, SPD decreased from 0.37 to \(\sim\)0.34 after applying LDP protocols. The same behavior is obtained for EOD. The exception was in Fig. 3 for the OAD metric in which the gap between privileged and unprivileged groups was accentuated (favoring the unprivileged group). More specifically, the NonDP baseline has OAD equal to -0.17, and after satisfying LDP for both uniform and k-based solutions and using all LDP protocols, the gap between the privileged and unprivileged groups increased to -0.3. In other words, we start with favoritism towards the unprivileged group (negative value) and this favoritism increased after LDP. Note also that when applying the uniform privacy budget splitting solution (see left-side plots), all fairness metrics were less robust to LDP than our k-based solution and, thus, returned to the NonDP baseline value in low privacy regimes. With our k-based solution (see right-side plots), all fairness metrics continued to be slightly better for all privacy regimes for the Adult dataset in Fig. 2. For the ACSCoverage dataset, not all fairness metrics returned to the NonDP baseline value and for the LSAC dataset, a similar behavior was noticed for both uniform and k-based solutions. These differences are mainly influenced by the domain size \(k\) of the sensitive attributes. For instance, while Adult has sensitive attributes with higher values of \(k\), LSAC has many binary sensitive attributes. **LDP impact on utility.** Fig. 5 (Adult), Fig. 6 (ACSCoverage), and Fig. 7 (LSAC) illustrate the privacy-utility trade-off for the NonDP baseline and all the seven LDP protocols, considering both uniform and our k-based privacy budget splitting solutions. 
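For reference, the group fairness metrics reported in these results follow directly from the definitions (1)-(4) in Section 3.1; the sketch below (a hypothetical helper, not the paper's evaluation code) computes them from binary arrays of labels, predictions, and the protected attribute.

```python
import numpy as np

def group_fairness_metrics(y_true, y_pred, a_p):
    """DI, SPD, EOD, and OAD for a binary protected attribute (1 = privileged)."""
    y_true, y_pred, a_p = map(np.asarray, (y_true, y_pred, a_p))
    priv, unpriv = (a_p == 1), (a_p == 0)
    pos_priv = y_pred[priv].mean()        # Pr[Y_hat = 1 | A_p = 1]
    pos_unpriv = y_pred[unpriv].mean()    # Pr[Y_hat = 1 | A_p = 0]
    tpr_priv = y_pred[priv & (y_true == 1)].mean()
    tpr_unpriv = y_pred[unpriv & (y_true == 1)].mean()
    acc_priv = (y_pred[priv] == y_true[priv]).mean()
    acc_unpriv = (y_pred[unpriv] == y_true[unpriv]).mean()
    return {
        "DI": pos_unpriv / pos_priv,      # ideal value: 1
        "SPD": pos_priv - pos_unpriv,     # ideal value: 0
        "EOD": tpr_priv - tpr_unpriv,     # ideal value: 0
        "OAD": acc_priv - acc_unpriv,     # ideal value: 0
    }
```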
From these figures, one can note that, in general, the impact of \(\epsilon\)-LDP on utility metrics is minor. For instance, for the Adult dataset (Fig. 5), only \(\sim 1\%\) of utility loss for all metrics is observed. Regarding privacy budget splitting, for the Adult dataset, our k-based solution is more robust to LDP as it only drops in higher privacy regimes (_i.e._, smaller \(\epsilon\) values) than the uniform solution. One main explanation for this behavior is that there is more discrepancy in the domain sizes \(k\) of the sensitive attributes \(A_{s}\) and, consequently, more privacy budget \(\epsilon\) is allocated to those attributes with high \(k\). For this reason, the uniform solution preserved more utility for the ACSCoverage dataset in Fig. 6, and both solutions had similar results for the LSAC dataset in Fig. 7 due to sensitive attributes with small domain size \(k\).

Figure 2: Fairness metrics (y-axis) by varying the privacy guarantees (x-axis), the \(\epsilon\)-LDP protocol, and the privacy budget splitting solution (_i.e._, uniform on the left-side and our k-based on the right-side), on the Adult [14] dataset.

Figure 3: Fairness metrics (y-axis) by varying the privacy guarantees (x-axis), the \(\epsilon\)-LDP protocol, and the privacy budget splitting solution (_i.e._, uniform on the left-side and our k-based on the right-side), on the ACSCoverage [14] dataset.

Figure 4: Fairness metrics (y-axis) by varying the privacy guarantees (x-axis), the \(\epsilon\)-LDP protocol, and the privacy budget splitting solution (_i.e._, uniform on the left-side and our k-based on the right-side), on the LSAC [40] dataset.

Figure 5: Utility metrics (y-axis) by varying the privacy guarantees (x-axis), the \(\epsilon\)-LDP protocol, and the privacy budget splitting solution (_i.e._, uniform on the left-side and our k-based on the right-side), on the Adult [14] dataset.

Figure 6: Utility metrics (y-axis) by varying the privacy guarantees (x-axis), the \(\epsilon\)-LDP protocol, and the privacy budget splitting solution (_i.e._, uniform on the left-side and our k-based on the right-side), on the ACSCoverage [14] dataset.

Figure 7: Utility metrics (y-axis) by varying the privacy guarantees (x-axis), the \(\epsilon\)-LDP protocol, and the privacy budget splitting solution (_i.e._, uniform on the left-side and our k-based on the right-side), on the LSAC [40] dataset.

**Summary.** We summarize our main findings for the three research questions formulated at the beginning of Section 5. **(RQ1)** Using the same hyperparameter configuration, \(\epsilon\)-LDP positively affects fairness in ML (see Figs. 2-4) while having a negligible impact on the model's utility (see Figs. 5-7). This contrasts with the findings of [8, 20] that state that, under the same hyperparameter configuration, \(\epsilon\)-DP negatively impacts fairness. Although the aforementioned research works concern _gradient perturbation_ in central DP, the recent work of de Oliveira _et al._[33] has shown that when searching for the best hyperparameters for both non-private and DP models, the \(\epsilon\)-DP impact on fairness is negligible. In our case, we focused on _input perturbation_, _i.e._, randomizing multiple sensitive attributes before training any ML algorithm, and discovered a positive impact of \(\epsilon\)-(L)DP on fairness. **(RQ2)** Our k-based solution consistently led to better fairness than
the state-of-the-art uniform solution when there exist sensitive attributes with a high domain size \(k\) (_e.g._, for both Adult and ACSCoverage datasets). Naturally, when all sensitive attributes have a binary domain, our k-based solution is equivalent to the uniform solution. For this reason, both the state-of-the-art uniform and our k-based solution led to a similar privacy-utility-fairness trade-off for the LSAC dataset (see Figs. 4 and 7). Therefore, regarding utility, k-based is better when sensitive attributes have higher domain sizes \(k\), which coincides with real-world data collections. **(RQ3)** In general, GRR and SS presented the best privacy-utility-fairness trade-off for all three datasets. This is because GRR has only one perturbed output value and because SS is equivalent to GRR when \(\omega=1\), thus not introducing inconsistencies for a user's profile. The term _inconsistency_ refers to a user belonging to multiple categories of a given attribute, _i.e._, being both a woman and a man at the same time. In fact, this is precisely what happens with UE protocols that perturb each bit independently or with LH protocols in which many values can hash to the same perturbed value. For this reason, since BLH hashes the input set \(V\to\{0,1\}\), it consistently presented the worst utility results for all three datasets, and only for ACSCoverage (see Fig. 3) did it present slightly better fairness results than all other LDP protocols.

## 6 Conclusion and Perspectives

This paper presented an in-depth empirical study of the impact of pre-processing multidimensional data with seven state-of-the-art \(\epsilon\)-LDP protocols on fairness and utility in binary classification tasks. In our experiments, GRR [23] and SS [38, 41] presented a better privacy-utility-fairness trade-off than RAPPOR [18], OUE [39], THE [39], BLH [10], and OLH [39]. In addition, we proposed a new privacy budget splitting solution named k-based, which generally led to better fairness and performance results than the state-of-the-art solution that splits \(\epsilon\) uniformly [6, 37]. Globally, while previous research [8, 20] has highlighted that DP worsens fairness in ML under the same hyperparameter configuration, our study finds that LDP slightly improves fairness and does not significantly impair utility. Indeed, there is still much to explore in the area of privacy-fairness-aware ML, and this study's empirical results can serve as a basis for future research directions. For instance, we intend to investigate the privacy-utility-fairness trade-off on binary classification tasks when varying the distribution of the protected attribute, the target, and their joint, and propose new methods accordingly. Furthermore, we plan to investigate the impact of LDP pre-processing on different ML algorithms, such as deep neural networks. Last, we also aim to investigate the impact of optimizing ML models' hyperparameters on the privacy-utility-fairness trade-off for LDP protocols.

#### 6.0.1 Acknowledgements

This work was supported by the European Research Council (ERC) project HYPATIA under the European Union's Horizon 2020 research and innovation programme. Grant agreement n. 835294.
2301.01759
Microgrid Optimal Energy Scheduling with Risk Analysis
Risk analysis is currently not quantified in microgrid resource scheduling optimization. This paper conducts a conditional value at risk (cVaR) analysis on a grid-disconnected residential microgrid with distributed energy resources (DER). We assume the infrastructure to set up an ad-hoc microgrid is already in place for a residential neighborhood with power sources such as photovoltaic (PV), diesel, and battery energy storage system (BESS). With this scenario in mind, we solve day-ahead scheduling to optimally allocate various resources to match demand in scenarios where neighborhoods, especially residential, are disconnected from the overall grid such as in flooding, hurricanes, winter storms, or operational failures. The goal is to provide an alternative framework to optimize power availability for priority customers and strengthen the overall grid against dips in power outside of normal operating considerations. The focus of this paper will be taking in renewable energy sources from PV combined with diesel and BESS while minimizing cost. Case studies demonstrate that with the proposed energy management system, microgrids can be implemented to be more resilient against new challenges.
Ali Siddique, Cunzhi Zhao, Xingpeng Li
2023-01-04T18:50:22Z
http://arxiv.org/abs/2301.01759v1
# Microgrid Optimal Energy Scheduling with Risk Analysis

###### Abstract

Risk analysis is currently not quantified in microgrid resource scheduling optimization. This paper conducts a conditional value at risk (cVaR) analysis on a grid-disconnected residential microgrid with distributed energy resources (DER). We assume the infrastructure to set up an ad-hoc microgrid is already in place for a residential neighborhood with power sources such as photovoltaic (PV), diesel, and battery energy storage system (BESS). With this scenario in mind, we solve day-ahead scheduling to optimally allocate various resources to match demand in scenarios where neighborhoods, especially residential, are disconnected from the overall grid such as in flooding, hurricanes, winter storms, or operational failures. The goal is to provide an alternative framework to optimize power availability for priority customers and strengthen the overall grid against dips in power outside of normal operating considerations. The focus of this paper will be taking in renewable energy sources from PV combined with diesel and BESS while minimizing cost. Case studies demonstrate that with the proposed energy management system, microgrids can be implemented to be more resilient against new challenges.

Keywords: Battery degradation, Conditional value at risk, Day-ahead scheduling, Energy management system, Microgrid, Risk management, Optimization.

Nomenclature

* \(D_{P_{t}}\): Priority customer demand, defined as customers whose electricity cannot be curtailed.
* \(D_{e_{t}}\): Essential customer demand, defined as residential customers whose electricity can be curtailed.
* \(D_{ec_{t}}\): Essential customer demand curtailed.
* \(D_{NetLoad_{t}}\): Demand load of the system minus any residential PV generation.
* \(D_{Load_{t}}\): Total demand for all customers in the microgrid.
* \(P_{BESS_{t}}\): Power of the battery energy storage system.
* \(P_{PV_{t}}\): Power of residential photovoltaic solar panels.
* \(P_{Dsl_{t}}\): Power output of the diesel generator.
* \(P_{Dsl_{min}}\), \(P_{Dsl_{max}}\): Minimum and maximum power output of the diesel generator.
* \(P_{Total_{t}}\): Total available power.
* \(P_{D_{t}}^{B}\), \(P_{D_{max}}^{B}\): Discharging power of the battery and its maximum.
* \(P_{C_{t}}^{B}\), \(P_{C_{max}}^{B}\): Charging power of the battery and its maximum.
* \(C_{batt,red}\): Additional cost ($) incurred when the battery state of charge is outside the green zone.
* \(C_{fuel}\): Fuel cost ($/kW) of diesel generation.
* \(C_{B_{N_{t}}}\): Degradation cost ($) of the battery.
* \(C_{B_{Total}}\): Capital cost ($) of the battery.
* \(C_{D_{e}}\): Cost ($/kW) of curtailing essential customers.

## I Introduction

There were 500 weather events in North America from 2005 to 2015, each impacting 50,000 customers [1]. Similar increases in weather-related electricity outages have been reported on other continents. These increases in the severity of natural disasters are due to the forces of climate change [2]. Also, blackouts have occurred due to operational errors, resulting in millions of customers losing power [3]. Lastly, attacks against the grid by foreign actors have become more common [4]. These trends have emphasized the need for a more distributed and decentralized electric grid which should function to some extent even if disconnected from the overall electric utility.
A microgrid is defined by the Department of Energy as "_a group of interconnected loads and distributed energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid_" [5]. Microgrid technology has become increasingly common in the past few decades due to its ability to serve areas with geographical constraints, disaster-prone regions, and rural areas. It is also an effective tool for electricity distribution and reliability. Additionally, a microgrid has the capacity to disconnect from the main grid and be self-sufficient for a period of time, but it can also remain connected and function alongside a larger grid system in normal operations. This is essential in a blackout or disaster scenario since a microgrid can isolate itself from supply issues or equipment damage occurring elsewhere. This allows the microgrid to avoid cascading failures and provide reliable power in its specific service area [6]. The focus of this paper will be on the microgrid's ability to disconnect from the larger electric grid during outages and reliably provide power to a specific section, otherwise referred to as an islanded state. However, this requires that a microgrid have its own energy management system (EMS) and far more refined control methods than a traditional EMS, since both energy demand and consumption are managed at a far more granular level [7]-[10]. These enhanced requirements are implemented in this paper with two systems. Firstly, day-ahead scheduling is used to optimize resource allocation, since an emergency usually unfolds on a day-to-day basis. This system also makes sure that demand is being met. Lastly, it also allows cost approximation to allocate the correct energy supply, ensuring effectiveness and ideal dispatching [11]. In addition to physical infrastructure, new forms of EMS that account for intermittent energy sources such as solar panels must be considered for resource allocation [11]. Microgrid functionality must be built into the system as more microgrids are being integrated or being developed alongside the main grid. This will have far-reaching consequences for energy management systems as both the generation and consumption of energy are rapidly changing. The energy management system in a regular electrical system has incredible reliability and is a marvel of the modern world. Unfortunately, this reliability and interconnectedness are only guaranteed under normal conditions. The electric grid's ability to respond to issues under abnormal conditions such as storms, flooding, or other disasters may be reduced [6]. This paper primarily focuses on such circumstances where the normal standards for reliability are not available. This high standard is only possible due to a vast and durable interconnected system which relies on large-scale generation transmitted to distributed residential systems. These infrastructure advantages are not guaranteed in a natural disaster, where, due to damage, the system can be split into multiple sections. When this happens, individual residential homes or industrial systems must have previously installed redundant energy resources such as diesel generation or BESS. Otherwise, their ability to receive electricity is entirely dependent on the speed at which the whole system can be reintegrated into a default state [12]. Therefore, advanced EMS software is necessary along with more resilient physical assets to harden the overall grid [1].
There are also new forms of distributed generation which change the dynamics of power transmission. All these factors require a rethinking of acceptable risk, which is currently not acknowledged for existing systems. This paper utilizes day-ahead scheduling with specific time segments by assigning cost objectives to the various resources, including solar power, load curtailment, BESS, and diesel generation. This allows the model to create the most effective mix of resources to supply the load while minimizing resource usage throughout the day. This paper presents one such approach to reduce unreliability by looking at day-ahead scheduling of resource allocation, which is then analyzed through a risk management method, specifically a conditional value at risk (cVaR) analysis, to determine the risk of load curtailment throughout the day. This framework points out how intermittent resources and non-critical load curtailment can increase reliability [13, 14, 15]. The goal is not only to recognize that load curtailment can be necessary in certain situations, but also to quantify this necessity to ensure that system reliability is maximized in an emergency. It also creates a starting point to discuss instances where property that is currently controlled for individual use can be used in a more communal manner. This will allow a more sophisticated conversation about non-critical load curtailment instead of the current reality of demand reduction occurring haphazardly [15]. The remainder of the paper is organized as follows. Section II presents and describes the mathematical model for microgrid optimal scheduling. Section III presents the proposed cVaR analysis framework. A case study is presented in Section IV. Finally, Section V concludes the paper. ## II Mathematical Model The objective function in this paper is designed to maximize power availability for priority customers by minimizing risk and the cost of volatile power generation sources. \[\min\sum_{t}\left\{C_{B_{R_{t}}}P_{BESS_{t}}+C_{fuel}P_{D_{t}}+C_{D_{e}}D_{e_{c_{t}}}+U_{r}C_{batt,red}\right\} \tag{1}\] The objective function represented by (1) is a variation of the cost function of traditional unit commitment models, showing resource allocation for BESS, diesel, and load curtailment while balancing demand and PV generation. \[D_{Load_{t}}-D_{e_{c_{t}}}=P_{D_{t}}+P_{BESS_{t}}+P_{PV_{t}} \tag{2}\] Constraint (2) represents a basic requirement for all electric grid operations, ensuring that supply meets demand. The usage of \(D_{e_{c_{t}}}\) to reduce demand will be explained in the Load Curtailment section. Diesel systems are a useful fuel source around the world in grid operations as a DER alongside BESS and residential PV [3]. As a base constraint, the diesel generator output is bounded by its minimum and maximum technical limits as \[P_{D_{min}}\leq P_{D_{t}}\leq P_{D_{max}} \tag{3}\] Equation (4) defines the fundamental connection between how demand is configured in the system. Constraint (5) sets the grouping of priority customers and essential customers. Priority customers are a fraction, defined by \(\varepsilon\), of the essential customers. Within the essential customer group, only the curtailed portion \(D_{e_{c_{t}}}\) is removed from the supplied load, as shown in (6).
Equation (7) limits \(D_{e_{t}}\) and \(D_{e_{c_{t}}}\), while (8)-(10) enforce the BESS status to be charging, discharging, or idle. Constraints (11)-(13) limit the charging and discharging power. Equation (14) defines the cost factor for any usage of the battery outside the green zone. \[D_{Net\ Load_{t}}=D_{p_{t}}+D_{e_{t}}-P_{PV_{t}} \tag{4}\] \[D_{p_{t}}=\varepsilon D_{e_{t}} \tag{5}\] \[D_{p_{t}}+D_{e_{t}}-D_{e_{c_{t}}}=P_{Total_{t}} \tag{6}\] \[D_{e_{t}}\geq D_{e_{c_{t}}}\geq 0 \tag{7}\] \[U_{C_{t}}\in\{1,\ \text{charging};\ 0,\ \text{not charging}\} \tag{8}\] \[U_{D_{t}}\in\{1,\ \text{discharging};\ 0,\ \text{not discharging}\} \tag{9}\] \[U_{C_{t}}+U_{D_{t}}\leq 1 \tag{10}\] \[P_{BESS_{t}}=P_{D_{t}}^{B}-P_{C_{t}}^{B} \tag{11}\] \[0\leq P_{C_{t}}^{B}\leq U_{C_{t}}P_{C_{max}}^{B} \tag{12}\] \[0\leq P_{D_{t}}^{B}\leq U_{D_{t}}P_{D_{max}}^{B} \tag{13}\] \[\begin{cases}SOC_{min}^{green}\leq SOC_{t}\leq SOC_{max}^{green},&U_{r}=0\\ SOC_{t}<SOC_{min}^{green}\ \text{or}\ SOC_{t}>SOC_{max}^{green},&U_{r}=1\end{cases} \tag{14}\] The cost of the battery system is tied to its maximum number of life cycles, which allows the overall cost of the battery to be expressed in terms of the cycle count. This makes it possible to take a specific portion of battery usage, such as one day, and connect it to the overall cost of the battery through (15). \(N_{bat_{t}}\) in (16) is the number of cycles the battery has completed, while \(\lambda_{N_{bat_{t}}}\) is the capacity factor loss at the \(N\)th cycle. Equation (17) defines the total cost of the BESS and (18) represents the difference in the degradation cost between consecutive time intervals. \[N_{bat_{t}}=\sum_{\tau=0}^{t}\frac{1}{2}\left(DoD_{\tau}+DoC_{\tau}\right) \tag{15}\] \[N_{bat,max_{t}}-N_{bat_{t}}=\frac{N_{bat,max_{t}}\left(1-\lambda_{N_{bat_{t}}}\right)}{\lambda_{N_{bat_{t}}}\,\gamma} \tag{16}\] \[C_{B_{R_{t}}}=\lambda_{N_{bat_{t}}}\,C_{B_{Total}} \tag{17}\] \[\Delta\lambda=\lambda_{N_{bat_{t}}}-\lambda_{N_{bat_{t-1}}} \tag{18}\] ## III Proposed cVaR Framework This section explains how the costs defined in the model for day-ahead scheduling are used in the cVaR framework. Fig. 1 presents the process from day-ahead scheduling to cVaR analysis. First, all the scenarios in the day-ahead scheduling must be completed. This means that for one time interval, t, there will be hundreds of scenarios operating with different demand constraints and PV generation. Then, when all N scenarios have been completed, they will create a large set of data points of cost-optimized resource allocation including any possible load curtailment. These load curtailment measurements can be tested for stability and resiliency and used to create a risk profile using cVaR analysis. ### _cVaR Formulation_ The use of risk-constrained scenarios in financial models and utilities is to maximize profit with an internal pricing mechanism [16]. cVaR is a popular risk calculation algorithm. It is built on the work of value at risk (VaR), which calculates how to reduce risk within a certain confidence level (\(\beta\)) by minimizing loss due to the uncertainty in specific variables [16], as defined in equation (19). The \(f(x,y)\) factor in (19) is defined as the calculated losses. Here, \(x\) denotes the variables available to fine-tune and reduce risk, while \(y\) represents the volatile uncertainty inherent in the system. By minimizing the worst-case scenario of \(y\), the system can create an expected risk profile. This is calculated by taking the smallest possible cost (\(\alpha\)) that is greater than or equal to \(f(x,y)\) and then evaluating the risk factor over \(\beta\).
This can be used to calculate the level of risk inherent in investing in certain markets and diversification tools (such as cash or bond hedging). \[VaR=\min\{\alpha\in\mathbb{R}:P\{f(x,y)\leq\alpha\}\geq\beta\},\quad 0\leq\beta\leq 1 \tag{19}\] Unfortunately, VaR suffers from two key issues. Mathematically, it lacks convexity and subadditivity, making it non-ideal for intensive calculation operations. Secondly, VaR only minimizes losses within a given confidence level and does not consider losses occurring in the tail beyond its boundary, i.e., with probability 1-\(\beta\). cVaR allows a better grasp of situations where a small likelihood of risk could have a huge effect [17]. cVaR as a financial constraint is seen in equation (20). \[cVaR=\mathbb{E}_{y}\left(f(x,y)\,|\,f(x,y)\geq VaR\right) \tag{20}\] In this evolution of the original VaR equation, cVaR takes the expected value of the random losses above the VaR threshold. In other words, it takes the loss factors inherent in the system and evaluates them in the 1-\(\beta\) tail beyond the standard confidence interval. This is a much more robust and flexible measure since it allows forecasting of situations where unlikely events outside of the confidence interval occur. Additionally, a higher cVaR means the system is inherently less stable because, in non-normal situations, the losses can be considerably higher. To transition from the above equations to models with samples, (20) can be converted into equation (21). \[cVaR=\min\left(\alpha+\frac{1}{N(1-\beta)}\sum_{i=1}^{N}\left[f(x,y_{i})-\alpha\right]^{+}\right) \tag{21}\] The first change here is the addition of \(N\), moving the model from a continuous formulation to one with \(N\) sampled scenarios. The second change is that only the positive part of the losses, determined by the known \(x\) and the volatile \(y\), is kept after subtracting \(\alpha\) as the hedging cost. For our formulation, we can then replace \([f(x,y_{i})-\alpha]^{+}\) with \(z_{i}\) as shown in (22). The cVaR equation can now be redefined with \(z_{i}\) as seen in (23). \[z_{i}=\left[f(x,y_{i})-\alpha\right]^{+} \tag{22}\] \[cVaR=\min\left(\alpha+\frac{1}{N(1-\beta)}\sum_{i=1}^{N}z_{i}\right) \tag{23}\] ### _cVaR Application in Microgrid_ This section explains how cVaR will be used to maximize power reliability for priority customers. cVaR gives a weighted average of the risk above the normal confidence level. This allows a calculation of the risk in the high-demand scenarios that can occur in emergency situations. Figure 1: Procedure of the proposed microgrid scheduling and risk analysis. Equation (24) takes \(f(x,y)\) from (19) and defines the combined losses as demand minus the diesel, PV, and BESS contributions [14]. \(\alpha\) in (25) is set as the smallest load curtailment that maintains stability at the confidence level \(\beta\) [16]. The \(\alpha\) value is measured in units of kilowatts. In keeping with cVaR convention, the curtailed demand load will be referred to as \(\alpha\) moving forward. All this can be represented as: \[f(x,y)=D_{Net\ Load_{t}}-P_{BESS_{t}}-P_{D_{t}} \tag{24}\] \[z_{i}=\left[D_{Net\ Load_{t}}-P_{BESS_{t}}-P_{D_{t}}-\alpha\right]^{+} \tag{25}\] ## IV Case Studies The test residential microgrid is designed with currently available commercial products. It uses a battery system made up of twenty Tesla Powerwall batteries, each with a capacity of 15 kWh and an initial state of charge of 10 kWh. \(C_{B_{Total}}\), the capital cost of the BESS system, is $10,000 [18].
The standard rooftop residential solar output is 4 kW during peak solar generation [19]. There are ten residential homes in need of power, all with installed solar panels. \(\varepsilon\) in (5) is set to 0.5; therefore, priority customers account for 33% of total demand. This means that a maximum of 66% of customers can be essential customers [11]. A simulation of all 187 possible scenarios, \(N\), is run and the battery, diesel, and PV combination is recorded for each specific segment. \(\beta\) is defined as 5% for the confidence level in this analysis. The input data for the scenarios, including load demand and PV generation, is graciously provided by Pecan Street. This is part of Pecan Street's Dataport Project [19], which includes the world's largest resource for residential energy use data and electric transportation and has been expanded to include residential water usage and regenerative agriculture [20]. Electricity demand as well as PV generation will have the expected statistical deviation from historical data. \(SOC_{min}^{green}\) is defined as 20% and \(SOC_{max}^{green}\) as 80% in the model, using values from previous research [21]. \(P_{D_{min}}\) is set as 0 and \(P_{D_{max}}\) is set as 3.75 kW for the system generator. The diesel generator is assumed to have sufficient fuel to operate during the whole course of the day. \(P_{D_{max}}^{B}\) and \(P_{C_{max}}^{B}\) are defined as 5 kW for one Tesla Powerwall [18]. \(\Delta T\) represents the length of a time segment, which is 15 minutes in this paper. There are 96 segments for a 24-hour period. The load curtailment, if any, for each fifteen-minute interval is recorded. The load curtailment is divided by the total demand supplied and recorded in a matrix. Python is used to take these values and calculate the conditional value at risk for the most demanding and highest load curtailment 5% of scenarios (nine scenarios) of the total set of 187 scenarios for all time segments. The results highlight the cVaR analysis on the microgrid system for one full day, or 96 segments, on a total of 187 scenarios. Fig. 2 shows in how many instances curtailment was necessary in the model. This showcases a high level of self-sufficient reliability that would be a boon to the existing electrical grid infrastructure. The system had zero instances of load curtailment for 90% of scenarios. It had a maximum of 13 instances of load curtailment in the most challenging 5% of cases across all segments. From a system-wide load curtailment view, we can now take a more in-depth look at the 5% of challenging scenarios in terms of balancing generation and demand. The standard deviation shown in Fig. 3 presents the spread of values in the dataset for each segment. Within each segment, there is a 20-30% standard deviation, indicating the model is robust. These results show that the model can take in very different demand constraints and respond appropriately to the needs of the specific scenario. Interestingly, the standard deviation is largely consistent throughout the day, indicating that the load curtailment deviation is not too different between sample segments. An exception to this is from late morning to the end of the afternoon: when the generation of residential PV is sufficient, there are far fewer load curtailments and therefore the standard deviation is lower. Fig. 4 presents the twenty-six scenarios, or 13.9% of the entire scenario dataset, that were responsible for all load curtailment.
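The per-segment risk calculation described above can be reproduced with a few lines of NumPy. The sketch below is a minimal, self-contained illustration in which the 187 x 96 matrix of curtailment fractions is synthetic, standing in for the scheduler's recorded output; the empirical cVaR of each segment follows the sample-based formulation of (19) and (21)-(23), with the tail corresponding to the worst 5% (roughly nine) of the 187 scenarios.

```python
import numpy as np

def var_cvar(losses, beta=0.95):
    """Empirical VaR (alpha) and cVaR of sampled losses, cf. Eqs. (19) and (21)-(23)."""
    losses = np.asarray(losses, dtype=float)
    alpha = np.quantile(losses, beta)                 # smallest alpha with P(f <= alpha) >= beta
    z = np.maximum(losses - alpha, 0.0)               # z_i = [f_i - alpha]^+
    return alpha, alpha + z.sum() / (losses.size * (1.0 - beta))

rng = np.random.default_rng(0)
# Placeholder for the recorded matrix: curtailment / demand served, 187 scenarios x 96 segments.
curt_frac = np.maximum(rng.normal(-0.05, 0.08, size=(187, 96)), 0.0)

segment_cvar = np.array([var_cvar(curt_frac[:, s])[1] for s in range(96)])
print("segments with nonzero tail risk:", int((segment_cvar > 0).sum()))
print("highest-risk segment index:", int(segment_cvar.argmax()))
```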
The concentration of curtailment in these twenty-six scenarios is expected, since the model was tested on a robust dataset which includes microgrid scenarios with larger than expected demands. Such demands are very likely in emergency situations due to weather conditions, and it is important to note how the microgrid would react in these scenarios. Figure 2: Time segments with active curtailment. Figure 3: Standard deviation of load curtailment in the cVaR analysis. Figure 4: Active curtailment during an entire day. As shown in Fig. 5, the behavior of the case study matched expectations in the following ways. The risk calculated for real-time energy management during the hours of 10 AM to 4 PM was reduced and in some time segments brought to zero. This means residential solar generation matched demand at these times and reduced the risk of load curtailment. This is one of the main benefits of residential solar: it especially helps microgrids by providing a power source for part of the day. Load curtailment was expected to be used in a small percentage of the cases. Correct and targeted load curtailments can improve the system's reliability for priority customers. This is complementary to grid hardening efforts but has the advantage of lower costs because it can be built with existing infrastructure. ## V Conclusions A cVaR analysis is conducted on a stand-alone microgrid alongside day-ahead scheduling in this paper. The proposed energy management system demonstrates the adaptability of a multitude of generation sources being utilized along with load curtailment in different demand-constraint scenarios. The objective was to conduct a risk assessment on a microgrid system to assess the likelihood of load curtailment. This allows for evaluating the risk of existing system infrastructure facing controlled load curtailment in a disaster scenario. Instead of proposing a brand new microgrid installation, existing electrical infrastructure in neighborhoods, particularly those with high residential penetration, can be retrofitted with additional diesel generation and battery storage alongside the proposed energy management system.
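For completeness, the day-ahead scheduling step that produces the scenario data analyzed above can be sketched as a small optimization problem. The Python/cvxpy code below is a minimal single-scenario illustration of the balance and limit constraints (2), (3), (7), and (11)-(13); the load and PV profiles, ratings, and cost coefficients are synthetic placeholders rather than the case-study values, and the binary charge/discharge indicators (8)-(10) and the degradation terms (14)-(18) are omitted so that the sketch remains a plain linear program.

```python
import numpy as np
import cvxpy as cp

T = 96                                                  # 15-minute segments in one day
t = np.arange(T)
load = 20 + 10 * np.sin(2 * np.pi * (t - 40) / T) ** 2  # synthetic demand profile [kW]
pv = 30 * np.exp(-((t - 48) / 12.0) ** 2)               # synthetic aggregate PV profile [kW]

P_d = cp.Variable(T, nonneg=True)                       # diesel output, cf. Eq. (3)
P_ch = cp.Variable(T, nonneg=True)                      # battery charging power, cf. Eq. (12)
P_dis = cp.Variable(T, nonneg=True)                     # battery discharging power, cf. Eq. (13)
curt = cp.Variable(T, nonneg=True)                      # curtailed essential demand, cf. Eq. (7)
soc = cp.Variable(T + 1)                                # stored battery energy [kWh]

P_d_max, P_b_max, E_max, dt = 37.5, 100.0, 300.0, 0.25  # illustrative ratings
c_fuel, c_curt = 0.40, 5.0                              # illustrative costs [$ per kWh]

constraints = [
    P_d <= P_d_max,
    P_ch <= P_b_max,
    P_dis <= P_b_max,
    curt <= load,
    soc[0] == 0.5 * E_max,
    soc >= 0.2 * E_max, soc <= 0.8 * E_max,             # rough "green zone" band on stored energy
    soc[1:] == soc[:-1] + dt * (P_ch - P_dis),
    load - curt == P_d + (P_dis - P_ch) + pv,           # power balance, cf. Eq. (2)
]
prob = cp.Problem(cp.Minimize(dt * cp.sum(c_fuel * P_d + c_curt * curt)), constraints)
prob.solve()
print(f"cost = {prob.value:.2f} $, curtailed energy = {curt.value.sum() * dt:.2f} kWh")
```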
2307.14964
Angular Momentum-Dependent Spectral Shift in Chiral Vacuum Cavities
Based on a previously proposed unitary transformation for cavity quantum electrodynamics, we investigate the spectral shift of an atom induced by quantum fluctuations in a chiral vacuum cavity. Remarkably, we find an intriguing angular momentum-dependent shift in the spectra of bound states. Our approach surpasses conventional perturbative calculations and remains valid even in the strong-coupling limit. In addition, we establish a cavity-interaction picture for calculating the chiral vacuum Rabi oscillation in the strong-coupling limit for a generic central potential, without using the rotating wave approximation. The anomalous spectral shift revealed in this study possesses both fundamental and practical significance and could be readily observed in experiments.
Qing-Dong Jiang
2023-07-27T15:56:37Z
http://arxiv.org/abs/2307.14964v1
# Angular Momentum-Dependent Spectral Shift in Chiral Vacuum Cavities ###### Abstract Based on a previously proposed unitary transformation for cavity quantum electrodynamics, we investigate the spectral shift of an atom induced by quantum fluctuations in a chiral vacuum cavity. Remarkably, we find an intriguing angular momentum-dependent shift in the spectra of bound states. Our approach surpasses conventional perturbative calculations and remains valid even in the strong-coupling limit. In addition, we establish a cavity-interaction picture for calculating the chiral vacuum Rabi oscillation in the strong-coupling limit for a generic central potential, without using the rotating wave approximation. The anomalous spectral shift revealed in this study possesses both fundamental and practical significance and could be readily observed in experiments. _Introduction._--Vacuum is not void; instead, it is full of quantum fluctuations with virtual particles constantly being created and annihilated. The vacuum quantum fluctuations give rise to a plethora of well-known phenomena, including the Casimir effect [1; 2; 3; 4], the Lamb shift [5; 6; 7], the anomalous magnetic moment [8; 9], vacuum Rabi oscillations [10; 11], and spontaneous photon emission [12]. In addition to these fundamental effects, physicists have directly probed the electromagnetic fluctuations within a vacuum cavity (often referred to as a "dark cavity") [13; 14]. The cavity offers a notable advantage as it allows for significant amplification of the quantum fluctuations by squeezing the cavity volume [15; 16; 17; 18]. This amplification can be described by the Hamiltonian of quantum electrodynamics for a single mode within a cavity of volume \(V\): \[\hat{H}_{\text{cavity}}=\frac{\epsilon_{0}V}{2}(E^{2}+c^{2}B^{2})=\frac{\epsilon_{0}V}{2}(\dot{A}^{2}+c^{2}|\mathbf{k}\times\mathbf{A}|^{2})\] This Hamiltonian can be directly mapped onto the Hamiltonian of a harmonic oscillator \[\hat{H}_{\text{HO}}=\frac{m}{2}(\dot{x}^{2}+\Omega^{2}x^{2})\] by the substitution \(A\mapsto x\), \(\epsilon_{0}V\mapsto m\), and \(ck\equiv\omega_{c}\mapsto\Omega\). The ground state of a harmonic oscillator exhibits quantum fluctuations, quantified by the root-mean-square standard deviation of position: \(\Delta x=\left(\frac{\hbar}{2m\Omega}\right)^{\frac{1}{2}}\). This shows that the vacuum fluctuations of the vector potential are given by \(\Delta A=(\hbar/2\omega_{c}\epsilon_{0}V)^{\frac{1}{2}}\). Clearly, reducing the cavity volume can amplify quantum fluctuations significantly. In recent years, researchers have achieved remarkable success in creating extremely small cavities, approaching the nanoscale [19; 20; 21]. These advancements have paved the way for exploring the realm of strong light-matter coupling across various setups. Compared to the Floquet method (i.e., engineering material properties with electromagnetic radiation), using cavity quantum fluctuations for material property engineering has obvious advantages [22; 23; 24; 25; 26; 27; 28]: i) Within the cavity, the interactions between light and matter surpass the limitations imposed by classical light-matter interaction bounded by the fine structure constant. As a result, the properties of materials can be deeply tailored in cavities. ii) Engineering materials and molecules within vacuum cavities is superior to the Floquet method, where external electromagnetic radiation heats up the system and destroys quantum effects.
iii) Cavity quantum fluctuations allow for the engineering of material properties in an equilibrium manner, in sharp contrast to the Floquet method that drives the system out of equilibrium, resulting in transient and complex physical properties. Over the past several years, researchers have presented pioneering proposals, with some already realized, to utilize cavity quantum fluctuations for engineering material conductivity [29; 30], inducing anomalous superconductivity [31; 32; 33; 34], changing band structure [35; 36; 37], and even modulating chemical reactivity [38; 39; 40; 41; 42; 43; 44]. These advancements exemplify the profound potential of cavity quantum fluctuations in tailoring material properties. Nevertheless, despite its advantages, using cavity quantum fluctuations to control quantum states of matter faces a major obstacle. Unlike real electric or magnetic fields, most quantum fluctuations inherently maintain parity symmetry (PS) and time-reversal symmetry (TRS). As a result, their ability to manipulate material and molecular properties is constrained. To induce substantial changes in material properties, it becomes essential to encode symmetry breaking into quantum fluctuations. In recent years, multiple works have shown the impact of discrete symmetry breaking on phenomena induced by quantum fluctuations. Notable examples include symmetry breaking induced anomalous Casimir forces [45; 46], chirality selection in chemical reactions [47; 48], and topological phase transition [49; 50; 51]. A recent work by Wilczek and the author [52] highlighted the combined power of symmetry breaking and quantum fluctuations. It shows that symmetry breaking can be transmitted from materials to their vicinity by vacuum quantum fluctuations. The vacuum in proximity to a symmetry-broken material was referred to as its _Quantum Atmosphere_. In this Letter, we present what, to the best of our knowledge, are the first fully quantum mechanical predictions of angular momentum(AM)-dependent spectral shifts, induced by the quantum fluctuations in a chiral cavity. Chiral cavities can be feasibly realized using magneto-optical materials, which has been extensively studied recently [53; 54; 23]. Because chiral cavities break time-reversal symmetry, what we are calculating is actually the spectral shift induced by the TRS-broken quantum atmosphere in a chiral cavity. It is worth noting that while our primary focus centers on the AM-dependent spectral shift, our analytical calculation of the cavity-Lamb (CL) shift in the strong coupling limit should be equally valuable. In the last part, we establish framework--the cavity-interaction picture--to calculate time-dependent phenomena. This enables us to compute chiral vacuum Rabi oscillation in the strong coupling limit for a generic central potential, without relying on the rotating wave approximation. _Chiral unitary transformation and cavity-induced potential shift._-- To set the stage, we examine the generic Hamiltonian \[\hat{H}=\frac{1}{2m}\left(\hat{\mathbf{p}}-q\hat{\mathbf{A}}\right)^{2}+V( \mathbf{r})+\hbar\omega_{c}\hat{a}^{\dagger}\hat{a}, \tag{1}\] which captures the interaction between a charged particle (with mass \(m\), charge \(q\)) and a photonic mode in a cavity. We assume an external single-particle potential, \(V(\mathbf{r})\), and a single photonic mode with frequency \(\omega_{c}\), where \(\hat{a}\) and \(\hat{a}^{\dagger}\) are the annihilation and creation operators of photons, respectively. 
Generalization to multimode cases is straightforward. The vector potential \(\hat{\mathbf{A}}\) can be expressed as \(\hat{\mathbf{A}}=A_{0}\left(\mathbf{\varepsilon}^{*}\hat{a}^{\dagger}+\mathbf{ \varepsilon}\hat{a}\right)\), where \(\mathbf{\varepsilon}\) represents the polarization of the cavity photonic modes. The mode amplitude is \(A_{0}=\sqrt{\frac{\hbar}{2\epsilon_{0}V\omega_{c}}}\), with \(V\) the cavity volume. In the context of cavity quantum electrodynamics, a dimensionless parameter \(g=\sqrt{\frac{(qA_{0})^{2}}{m\hbar\omega_{c}}}\) is commonly used to quantify the strength of the light-matter coupling. The regime \(10^{-1}\leqslant g\leqslant 1\) is referred to as strong coupling. For \(g\geqslant 1\), it is referred to as deep strong coupling, indicating even stronger interaction between light and matter1. In this letter, we focus on the chiral cavity case, where the photonic polarization is \(\mathbf{\varepsilon}=\frac{1}{\sqrt{2}}\left(\mathbf{e_{x}}+i\mathbf{e_{y}}\right)\), and \(\mathbf{e_{x(y)}}\) represents the unit vector in the x(y)-direction. A recent seminal advancement in cavity quantum electrodynamics is the ability to decouple matter and light degrees of freedom, either in the weak or strong coupling limits, through a special unitary transformation [55; 56]. This transformation is elegantly achieved by applying the unitary operator: Footnote 1: The terminology differs slightly from the standard context of quantum optics, where the term “strong coupling” typically refers to reversible interactions between photons in the cavity mode and the atom. Here, “strong coupling” and “weak coupling” signify the relative strength of the light-matter coupling in comparison to the cavity mode energy. \[\hat{U}=\exp\left[-i\frac{\xi}{\hbar}\hat{\mathbf{p}}\cdot\hat{\mathbf{\pi}}\right],\text{ with }\xi=\frac{g}{1+g^{2}}\sqrt{\frac{\hbar}{m\omega_{c}}} \tag{2}\] to the original Hamiltonian \(H_{C}\), where \(\hat{\mathbf{\pi}}=i\left(\mathbf{\varepsilon}^{*}\hat{a}^{\dagger}-\mathbf{\varepsilon} \hat{a}\right)\) is the photonic momentum operator. The parameter \(\xi\) is chosen to eliminate the linear light-matter coupling term \((\hat{\mathbf{p}}\cdot\hat{\mathbf{A}})\). Remarkably, this unitary transformation yields an equivalent yet formally much neater Hamiltonian: \[\hat{H}^{\prime}(\xi) = \hat{U}^{\dagger}\hat{H}\hat{U}\] \[= \frac{\hat{\mathbf{p}}^{2}}{2m_{\text{eff}}}+V\left(\mathbf{r}+ \xi\hat{\mathbf{\pi}}+\frac{\xi^{2}}{2\hbar}\hat{\mathbf{p}}\times\mathbf{e_{z}} \right)+\hbar\omega_{\text{eff}}\hat{a}^{\dagger}\hat{a}\] with the renormalized mass \(m_{\text{eff}}=m(1+g^{2})\) and the effective cavity frequency \(\omega_{\text{eff}}=\omega_{c}(1+g^{2})\). It is important to note that the light-matter coupling is fully encapsulated in the shifted single-particle potential, offering a key advantage of the transformed Hamiltonian (Eq.(3)). Several remarks are in order to better understand the above transformation: * The key advantage of Eq.(3) is that the light-matter coupling is fully encoded in the shifted single-particle potential. * The coupling parameter \(\xi=\frac{g}{1+g^{2}}\sqrt{\frac{\hbar}{m\omega_{c}}}\) approaches zero not only in the weak-coupling limit (\(g\to 0\)) but also in the strong-coupling limit (\(g\rightarrow\infty\)). * Cavity light-matter interactions lead to an increase in both the effective mass of the particle (\(m_{\text{eff}}>m\)) and the effective mode frequency (\(\omega_{\text{eff}}>\omega_{c}\)). 
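A short numerical sketch (with an assumed bare electron mass and cavity frequency) makes the second and third remarks concrete: the displacement parameter \(\xi\) peaks at \(g=1\) and vanishes in both the weak- and strong-coupling limits, while the effective mass and mode frequency grow monotonically with \(g\).

```python
import numpy as np

hbar = 1.054571817e-34
m, omega_c = 9.1093837015e-31, 1e16              # assumed bare mass [kg] and cavity frequency [rad/s]
x0 = np.sqrt(hbar / (m * omega_c))               # characteristic length sqrt(hbar / (m omega_c))

for g in (0.01, 0.1, 1.0, 10.0, 100.0):
    xi = g / (1.0 + g**2) * x0                   # displacement parameter, Eq. (2)
    print(f"g = {g:6.2f}   xi = {xi:.3e} m   m_eff/m = {1 + g**2:9.2f}   w_eff/w_c = {1 + g**2:9.2f}")
```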
These features allow the application of perturbation theory (in terms of \(\xi\)) to investigate strong light-matter coupling in cavities. In the following sections, we explore several prominent effects induced by chiral cavities, including angular momentum-dependent spectral shift, the cavity Lamb shift, and chiral vacuum Rabi oscillations. _Cavity QED renormalized spectra._-- We now examine the influence of quantum fluctuations in a cavity on the spectral shift of a bound state governed by the Hamiltonian: \[\hat{H}=\frac{\left(\hat{\mathbf{p}}-q\hat{\mathbf{A}}\right)^{2}}{2m}+V(r)+\hbar \omega_{c}\hat{a}^{\dagger}\hat{a}. \tag{4}\] where \(V(r)\) is a central potential and \(m\) is the bare mass of the particle. By applying a cavity unitary transforma tion to the Hamiltonian, we obtain: \[\hat{H}^{\prime}=\frac{\hat{\mathbf{p}}^{2}}{2m_{\mathrm{eff}}}+\hat{V}\left(r \right)+\hbar\omega_{\mathrm{eff}}\,\hat{a}^{\dagger}\hat{a}+\Delta\hat{V}. \tag{5}\] In this expression, \(\Delta\hat{V}\) is the perturbative potential for small \(\xi\), which can be further expanded to second order as: \[\Delta\hat{V} = \hat{V}\left(\mathbf{r}+\mathbf{\hat{\tau}_{c}}\right)-\hat{V}\left(r\right) \tag{6}\] \[\approx \mathbf{\hat{\tau}_{c}}\cdot\mathbf{\nabla}V(r)+\frac{1}{2}\left(\mathbf{ \hat{\tau}_{c}}\cdot\mathbf{\nabla}\right)^{2}V(r).\] Here, \(\mathbf{\hat{\tau}_{c}}=\xi\mathbf{\hat{\pi}}+\frac{\xi^{2}}{2\hbar}\hat{\mathbf{p}} \times\mathbf{e_{z}}\). This formulation applies to both weak and strong light-matter interactions, as emphasized. In the intermediate light-matter coupling strength, where perturbative expansion is not valid, Eq.(5) may still offer advantages in certain situations [23]. With the above preparation, we can employ perturbation theory using the unperturbed states, which are the product states of the n-th bound state \(|\psi_{n}\rangle\) and the cavity vacuum state (zero photon) \(|0\rangle_{\mathrm{cav}}\), i.e., \(|\Psi_{n}\rangle=|\psi_{n}\rangle\otimes|0\rangle_{\mathrm{cav}}\), where \(|\psi_{n}\rangle\) represents the bound state of a particle with an effective mass \(m_{\mathrm{eff}}\) in a central potential \(V(r)\). While the effective mass approaches the bare mass \(m\) in the weak-coupling limit, it significantly deviates from the bare mass in the strong light-matter coupling regime. The first-order perturbation calculation yields the energy shift: \[\Delta E_{n}=\langle\Psi_{n}|\Delta\hat{V}|\Psi_{n}\rangle=\Delta E_{n}^{ \mathrm{AM}}+\Delta E_{n}^{\mathrm{CL}}, \tag{7}\] where \(\Delta E_{n}^{\mathrm{AM}}\) and \(\Delta E_{n}^{\mathrm{CL}}\) are the angular momentum (AM)-dependent shift and the cavity-Lamb (CL) shift, respectively. They are given by \[\Delta E_{n}^{\mathrm{AM}} = \frac{\xi^{2}}{2\hbar}\langle\psi_{n}|\frac{1}{r}\frac{dV(r)}{dr} \hat{L}_{z}|\psi_{n}\rangle \tag{8}\] \[\Delta E_{n}^{\mathrm{CL}} = \frac{\xi^{2}}{4}\langle\psi_{n}|\nabla^{2}V(r)|\psi_{n}\rangle. \tag{9}\] The expressions, Eq. (8) and Eq. (9), are the key findings of this letter and remain applicable in both the weak and strong coupling regimes. Eq.(8) indicates that the cavity quantum fluctuations indeed encode the breaking of time-reversal symmetry. This is because, in the presence of time-reversal symmetry, states with opposite angular momentum, \(l_{z}=\pm 1\), would have the same energy. Notably, reversing the cavity's chirality induces a sign change in the AM-dependent spectral shift, showing the essential importance of the cavity's chirality. 
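As a rough order-of-magnitude check of Eq. (8), ahead of the worked examples below, the following sketch evaluates the AM-dependent shift for a Coulomb potential \(V(r)=-k/r\) and a 2p-like state, using the textbook expectation value \(\langle r^{-3}\rangle_{n=2,l=1}=1/(24a^{3})\); the coupling strength and mode frequency are assumed values chosen to match the single-mode estimate quoted in the next paragraph.

```python
import numpy as np

# Constants (SI)
hbar, me, e, eps0 = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19, 8.8541878128e-12

g, omega_c = 0.01, 1e16                                   # assumed coupling and cavity frequency [rad/s]
xi = g / (1 + g**2) * np.sqrt(hbar / (me * omega_c))      # displacement parameter, Eq. (2)

m_eff = me * (1 + g**2)
a_eff = 4 * np.pi * eps0 * hbar**2 / (m_eff * e**2)       # effective Bohr radius
k = e**2 / (4 * np.pi * eps0)                             # Coulomb constant

# Eq. (8) with V = -k/r and a 2p state (n = 2, l = 1, l_z = +1): <r^-3> = 1 / (24 a_eff^3)
dE = 0.5 * xi**2 * k / (24 * a_eff**3)
print(f"AM-dependent shift ~ {dE / e * 1e3:.2f} meV")     # roughly 0.2-0.3 meV for g = 0.01
```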
In what follows, we will provide examples to illustrate these two key formulas and demonstrate that they predict directly measurable effects. _Anomalous spectral shift in two examples._-- We evaluate the AM-dependent shift and the CL shift in two examples: the Hydrogen atom and the two-dimensional harmonic oscillator. Let us first focus on the spectral shift of the Hydrogen atom model to gain physical understanding. For the Hydrogen atom, with the potential \(V(r)=-k/r\) (where \(k\equiv e^{2}/4\pi\epsilon_{0}\)), we determine the spectral shifts for each energy level. By substituting the eigenfunctions of the Hydrogen atom into the formulas, we obtain the spectral shifts of the bound state \(|\psi_{n,l,l_{z}}\rangle\), where \(n\), \(l\), and \(l_{z}\) are the principal, azimuthal, and magnetic quantum numbers, respectively: \[\Delta E_{n,l,l_{z}}^{\mathrm{AM}} = \frac{l_{z}\,\xi^{2}\,k}{2a_{\mathrm{eff}}^{3}n^{3}l(l+\frac{1}{2})(l+1)}; \tag{10}\] \[\Delta E_{n,l,l_{z}}^{\mathrm{CL}} = \frac{\pi\xi^{2}\,k}{n^{3}a_{\mathrm{eff}}^{3}}\,\delta_{l,0}\,\delta_{l_{z},0}. \tag{11}\] where \(a_{\mathrm{eff}}=4\pi\epsilon_{0}\hbar^{2}/m_{\mathrm{eff}}e^{2}\) is the effective Bohr radius. In these calculations, we have used the relations \(\langle 1/r^{3}\rangle=1/\left[a_{0}^{3}n^{3}l(l+1/2)(l+1)\right]\) and \(\nabla^{2}V=4\pi k\delta(r)\). The spectral shifts can be easily estimated. For instance, the AM-dependent spectral shift of the first excited state with angular momentum \(l=1\) and \(l_{z}=\pm 1\) is given by \(\Delta E_{2,1,\pm 1}=\pm\left(\frac{\xi}{a_{\mathrm{eff}}}\right)^{2}\frac{\mathrm{Ry}}{24}\frac{m_{\mathrm{eff}}}{m}\approx 0.3\,\mathrm{meV}\), where \(g=0.01\), \(\omega_{c}=10^{16}\,\mathrm{s}^{-1}\), and \(\mathrm{Ry}\) is the Rydberg energy. This estimation assumes a single-mode scenario, but it can be extended to include multiple modes. Furthermore, we can recover the Lamb shift by considering a large cavity (i.e., the weak light-matter coupling limit) and integrating over all possible mode frequencies. It yields \[\Delta E^{\mathrm{Lamb}} = \sum_{n}\frac{\hbar}{m\omega_{c,n}}\frac{g^{2}}{2}\langle\Psi_{n}|\nabla^{2}V(r)|\Psi_{n}\rangle = \frac{1}{8\epsilon_{0}\pi^{2}}\int d\omega_{c}\frac{\hbar}{\omega_{c}}\frac{q^{2}}{m^{2}}\langle\Psi_{n}|\nabla^{2}V(r)|\Psi_{n}\rangle = \frac{\hbar q^{2}}{8\epsilon_{0}\pi^{2}m_{\mathrm{eff}}^{2}}\ln\frac{1}{\pi\alpha}\langle\Psi_{n}|\nabla^{2}V(r)|\Psi_{n}\rangle. \tag{12}\] Here, in line with Hans Bethe's approach [6], we have regularized the non-relativistic theory by selecting \(\hbar\omega_{\mathrm{min}}=\hbar c\pi/a_{0}\) (where \(a_{0}\) represents the Bohr radius) as the smallest energy scale and \(\hbar\omega_{\mathrm{max}}=mc^{2}\) as the largest energy scale. We remark that the derivation of the Lamb shift closely resembles Theodore A. Welton's approach in the weak-coupling limit [57]. Next, we consider the spectral shift of a two-dimensional (2D) harmonic oscillator governed by the Hamiltonian \(\hat{H}=\frac{p_{x}^{2}+p_{y}^{2}}{2m}+\frac{m}{2}\omega^{2}\left(x^{2}+y^{2}\right)\). This Hamiltonian exhibits rotational symmetry and commutes with the angular momentum operator along the z-axis, \(\hat{L}_{z}\).
By introducing the annihilation operator \(\hat{a}_{R(L)}=\left[\sqrt{\frac{m_{\mathrm{eff}}\omega}{\hbar}}(x\pm iy)+i\frac{p_{x}\pm ip_{y}}{\sqrt{m_{\mathrm{eff}}\hbar\omega}}\right]/2\), one can rewrite the Hamiltonian and angular momentum operator in terms of the number operators \(\hat{n}_{R(L)}=\hat{a}_{R(L)}^{\dagger}\hat{a}_{R(L)}\): \[\hat{H}_{\rm HO}=\left(\hat{n}_{R}+\hat{n}_{L}+1\right)\hbar\omega;\ \hat{L}_{z}=\hbar\left(\hat{n}_{R}-\hat{n}_{L}\right). \tag{13}\] Here \(\hat{H}_{\rm HO}\) and \(\hat{L}_{z}\) share the common set of eigenstates \[|\phi_{n_{R},n_{L}}\rangle=\frac{1}{\sqrt{n_{R}!n_{L}!}}(a_{R}^{\dagger})^{n_{R}}(a_{L}^{\dagger})^{n_{L}}|\phi_{0,0}\rangle, \tag{14}\] where \(n_{R}\) and \(n_{L}\) are integers that characterize an eigenstate. According to Eq. (8), the AM-dependent spectral shift of the state \(|\phi_{n_{R},n_{L}}\rangle\) is given by \[\Delta E^{\rm AM}=\frac{\xi^{2}}{2}m\omega^{2}\left(n_{R}-n_{L}\right). \tag{15}\] For the ground state of the 2D quantum harmonic oscillator, \(\langle\hat{L}_{z}\rangle_{n}=0\), and the AM-dependent spectral shift vanishes. However, a spectral gap of size \(m\omega^{2}\xi^{2}/2\) emerges for the originally degenerate first excited states (i.e., \(|\phi_{1,0}\rangle\) and \(|\phi_{0,1}\rangle\)) with different angular momentum. The CL shift remains a constant due to \(\nabla^{2}V(r)=m\omega^{2}\) in this special case. In addition to the two prototypical examples, our approach is applicable to a wide range of real experimental systems [58; 59]. For instance, one could measure the spectral shift of Rydberg atoms, superconducting circuits, quantum dots or excitons in transition-metal dichalcogenides, which can be effectively described by a hydrogen atom model [60; 61; 62; 63]. _Cavity interaction picture and polaritonic vacuum oscillation.--_ Spectral shifts and spontaneous emission are interconnected consequences of quantum fluctuations. In a vacuum cavity, an excited atom can spontaneously emit and reabsorb a cavity photon, a phenomenon known as vacuum Rabi oscillation [64; 65; 66]. In this section, we investigate vacuum Rabi oscillation in a chiral cavity, examining both the weak and strong light-matter coupling regimes. To proceed, we introduce the _cavity interaction picture_: In the cavity-interaction picture, quantum states and operators are defined as follows: \(|\Psi(t)\rangle_{I}=e^{i\hat{H}_{0}t/\hbar}|\Psi(t)\rangle\) and \(\Delta\hat{V}_{\rm I}(t)=e^{i\hat{H}_{0}t/\hbar}\Delta\hat{V}(t)e^{-i\hat{H}_{0}t/\hbar}\), where \(|\Psi(t)\rangle\) and \(\Delta\hat{V}(t)\) represent the quantum state and operator in the Schrödinger picture. In sharp contrast to the traditional interaction picture, the Hamiltonian \(\hat{H}_{0}\) in the cavity-interaction picture includes the potential \(V(r)\). The interaction picture is highly useful for studying time-dependent phenomena. The wave function in the cavity interaction picture evolves according to \(|\Psi(t)\rangle_{I}=\hat{U}_{I}(t,0)|\Psi(0)\rangle_{I}\), where the unitary evolution operator is given by \[\hat{U}_{I}(t,0)=\mathcal{T}\left\{\exp\left[-\frac{i}{\hbar}\int_{0}^{t}d\tau\Delta\hat{V}_{I}(\tau)\right]\right\} \tag{16}\] where \(\hat{U}_{I}(t,0)\) represents the time evolution operator and \(\mathcal{T}\) stands for the time ordering operator. Based on the cavity interaction picture, let us consider a two-level system consisting of an excited state \(|e\rangle\) and a ground state \(|g\rangle\) within a vacuum cavity.
Specifically, we focus on the lowest two levels of the combined system, which are represented by the product states \(|\Psi_{1}\rangle=|e\rangle|0\rangle_{\rm cav}\) and \(|\Psi_{2}\rangle=|g\rangle|1\rangle_{\rm cav}\), where \(|0\rangle_{\rm cav}\) and \(|1\rangle_{\rm cav}\) correspond to the cavity photon states with zero and one photon, respectively. The scattering matrix is then given by \[\Delta\hat{V}_{\rm I}(t)=\left(\begin{array}{cc}\gamma_{11}&\gamma_{12}e^{- i\tilde{\omega}t}\\ \gamma_{21}e^{i\tilde{\omega}t}&\gamma_{22}\end{array}\right) \tag{17}\] where \(\gamma_{ij}=\langle\Psi_{i}|\Delta\hat{V}|\Psi_{j}\rangle\) and \(\tilde{\omega}=\omega_{2}-\omega_{1}\) represent the spectral gap between the two states. In the first-order approximation of the scattering matrices, the unitary evolution operator is given by \[\hat{U}_{I}(t,0)=1-\frac{i}{\hbar}\left(\begin{array}{cc}\gamma_{11}t&\gamma _{12}\frac{\sin(\tilde{\omega}t)}{\tilde{\omega}}e^{-i\tilde{\omega}t}\\ \gamma_{21}\frac{\sin(\tilde{\omega}t)}{\tilde{\omega}}e^{i\tilde{\omega}t}& \gamma_{22}t\end{array}\right)+\ldots\] If the system is initially prepared in the first excited state, i.e., \(|\Psi(0)\rangle_{I}=(0,\ 1)^{T}\), then the wave function at later times is given by \(|\Psi(t)\rangle_{I}=\hat{U}_{I}(t,0)|\Psi(0)\rangle_{I}\). Therefore, the probability of finding the system in the ground state after time \(t\) is \[P_{1\to 2}\left(t,\tilde{\omega}\right)\equiv|\langle\Psi_{2}|\Psi_{I}(t) \rangle|^{2}=\gamma_{12}^{2}\frac{\sin^{2}\left(\tilde{\omega}t\right)}{\hbar^ {2}\tilde{\omega}^{2}} \tag{18}\] where the scattering matrix is represented by \[\gamma_{12}^{\rm AM} = -i\frac{\xi}{\sqrt{2}}\langle e|\left(\partial_{x}+i\partial_{y} \right)V(r)|g\rangle \tag{19}\] \[= -i\frac{\xi}{\sqrt{2}}\langle e|\frac{dV(r)}{dr}e^{i\theta}|g\rangle.\] Without relying on the rotating wave approximation, which is commonly used in the Jaynes-Cummings model, we derived a general result that is applicable to both the weak and strong light-matter coupling regimes. Eq. (19) shows that the scattering matrix connects quantum states with magnetic quantum numbers that differ by exactly one \(\hbar\). This finding indicates that a chiral photon is emitted and re-absorbed in a chiral vacuum cavity. _Concluding remarks.--_ In our analysis, we focused on examining the spectrum of a single atom. Nevertheless, it is worth noting that our approach works for many-body systems, wherein collective enhancement can be anticipated. For example, for a group of electrons subjected to a confining potential \(V(r)\), the spectral shift scales with the total angular momentum of all electrons, i.e., \(\sum_{i}\langle\hat{L}_{z}^{(i)}\frac{1}{r}\frac{dV}{dr}\rangle\), given that all electrons coherently couple to a single cavity mode. Additionally, we should address the valid range of our perturbation theory. To apply our theory to cases involving strong light-matter coupling, it is necessary for the shift parameter \(\xi\) to be considerably smaller than the typical length scale within the sys tem. For example, when applying our theory to atomic spectra, we require \(\xi/a_{\rm eff}\ll 1\), where \(a_{\rm eff}\) is the aforementioned effective Bohr radius of an electron with an effective mass, \(m_{\rm eff}\). While for an electron in a vacuum, meeting this requirement becomes challenging for strong cavity light-matter coupling due to \(\xi/a_{\rm eff}=g\alpha\sqrt{\frac{mc^{2}}{\hbar a_{c}}}\), electrons in small-band semiconductors (e.g. 
InSb) can have a very small mass (\(\sim 0.01m_{e}\)) and should easily satisfy this condition for \(g\sim 0.1\). In conclusion, we have successfully developed a perturbation theory applicable to both weak and strong light-matter coupling regimes, uncovering an AM-dependent chiral spectral shift in chiral cavities. We determined the AM-dependent spectral shift and the CL shift for two specific examples, demonstrating that the effect is robust and detectable in experimental settings. Furthermore, we have established the foundation for cavity time-dependent perturbation theory, enabling us to calculate chiral vacuum Rabi oscillations for arbitrary central potentials in the regime of strong light-matter coupling. _Acknowledgement.--_ We gratefully acknowledge previous collaborations with F. Wilczek on this subject. We also appreciate the insightful discussions and helpful comments from Hans Hansson, Jianhui Zhou and Yi-Zhuang You. Q.-D. Jiang was sponsored by the Pujiang Talent Program 21PJ1405400 and a TDLI start-up grant.
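As a quick numerical check of the validity condition discussed in the concluding remarks, the sketch below evaluates \(\xi/a_{\rm eff}\) directly from the definitions of \(\xi\) and \(a_{\rm eff}\) for a band mass of \(0.01\,m_{e}\) at \(g=0.1\); the cavity frequency is an assumed value.

```python
import numpy as np

hbar, me, e, eps0 = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19, 8.8541878128e-12

g, omega_c = 0.1, 1e16                          # assumed coupling and cavity frequency [rad/s]
m = 0.01 * me                                   # light band mass, as in narrow-gap semiconductors

xi = g / (1 + g**2) * np.sqrt(hbar / (m * omega_c))
m_eff = m * (1 + g**2)
a_eff = 4 * np.pi * eps0 * hbar**2 / (m_eff * e**2)
print(f"xi / a_eff = {xi / a_eff:.3f}")         # ~0.02, comfortably within the perturbative regime
```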
2310.18549
Deep Intrinsic Decomposition with Adversarial Learning for Hyperspectral Image Classification
Convolutional neural networks (CNNs) have demonstrated their powerful ability to extract discriminative features for hyperspectral image classification. However, general deep learning methods for CNNs ignore the influence of complex environmental factors, which enlarge the intra-class variance and decrease the inter-class variance. This multiplies the difficulty of extracting discriminative features. To overcome this problem, this work develops a novel deep intrinsic decomposition with adversarial learning, namely AdverDecom, for hyperspectral image classification to mitigate the negative impact of environmental factors on classification performance. First, we develop a generative network for hyperspectral images (HyperNet) to extract the environmental-related feature and category-related feature from the image. Then, a discriminative network is constructed to distinguish different environmental categories. Finally, an environmental and category joint learning loss is developed for adversarial learning to make the deep model learn discriminative features. Experiments are conducted over three commonly used real-world datasets and the comparison results show the superiority of the proposed method. The implementation of the proposed method and other compared methods can be accessed at https://github.com/shendu-sw/Adversarial_Learning_Intrinsic_Decomposition for the sake of reproducibility.
Zhiqiang Gong, Xian Zhou, Wen Yao
2023-10-28T00:41:25Z
http://arxiv.org/abs/2310.18549v1
# Deep Intrinsic Decomposition with Adversarial Learning for Hyperspectral Image Classification ###### Abstract Convolutional neural networks (CNNs) have been demonstrated their powerful ability to extract discriminative features for hyperspectral image classification. However, general deep learning methods for CNNs ignore the influence of complex environmental factor which enlarges the intra-class variance and decreases the inter-class variance. This multiplies the difficulty to extract discriminative features. To overcome this problem, this work develops a novel deep intrinsic decomposition with adversarial learning, namely AdverDecom, for hyperspectral image classification to mitigate the negative impact of environmental factors on classification performance. First, we develop a generative network for hyperspectral image (HyperNet) to extract the environmental-related feature and category-related feature from the image. Then, a discriminative network is constructed to distinguish different environmental categories. Finally, a environmental and category joint learning loss is developed for adversarial learning to make the deep model learn discriminative features. Experiments are conducted over three commonly used real-world datasets and the comparison results show the superiority of the proposed method. The implementation of the proposed method and other compared methods could be accessed at [https://github.com/shendu-sw/Adversarial_Learning_Intrinsic_Decomposition](https://github.com/shendu-sw/Adversarial_Learning_Intrinsic_Decomposition) for the sake of reproducibility. Adversarial Learning, Deep Intrinsic Decomposition, Environmental-related Feature, Category-related Feature, Hyperspectral Image Classification. ## I Introduction Hyperspectral images, which contain a multitude of spectral bands including the visible and non-visible parts of the electromagnetic spectrum [1], can provide an extensive and detailed view of the Earth's surface and play a crucial role in various fields, including agriculture, geology, ecology, and disaster management [2]. The plentiful spectral and spatial information of hyperspectral data allows for precise discrimination and characterization of materials, terrain, and environmental features, facilitating applications such as land cover mapping [3], mineral identification [4], vegetation health assessment [5], and pollution monitoring [6]. However, great spectral similarity occurs between different objects which makes difficulty to discriminate different objects. Another challenge arises from the complexity of handling a vast amount of spectral information across numerous narrow bands. The high dimensionality of the data poses difficulties in effective feature selection, model training, and computational demands. Additionally, atmospheric effects, mixed pixels, and the need for extensive, accurately labeled training data make hyperspectral classification a formidable task. Therefore, there exists huge demand to explore effective methods to extract discriminative features from the hyperspectral image. 1. A robust and effective feature extraction backbone network to understand and represent the complex spectral-spatial correlation of hyperspectral image is required. Generally, a well-designed network would have great potential to capture relevant patterns and characteristics from the training samples. 2. A proper learning strategy is imperative to truly harness the discriminative information of the hyperspectral image. 
Especially, by considering the unique characteristics of the hyperspectral image, the model's representational ability could be enhanced to extract valuable latent information from the complex data. Following these two fundamental considerations, there have been increasing efforts to explore impressive methods for hyperspectral image classification. Faced with the first problem, conventional methods design hand-crafted spectral features to represent the hyperspectral image. These well-established techniques usually includes spectral feature extraction (e.g. principal component analysis (PCA) [7], linear discriminant analysis (LDA) [8]), statistical classifiers [9], and dimensionality reduction (e.g. non-negative matrix factorization (NMF) [10], t-distributed stochastic neighbor embedding (t-SNE) [11], which cannot be adaptive for the complex latent correlation within the image. To pursue more representative methods, much efforts have been paid on machine learning algorithms, including support vector machines (SVM) [12], decision trees [13], k-Nearest Neighbors (k-NN) [14], and random forests [15], to optimize feature extraction and classification processes. These methods are generally "shallow" methods with only one or two layers, which limit their ability to capture the intricate patterns and spectral information embedded in hyperspectral data. Recently, deep learning with multiple hidden layers have gained prominence in hyperspectral image classification [16]. They can automatically extract hierarchical features from the data and capture complex relationships, which can further enhance classification accuracy. Generally, based on different architectural paradigms, these deep learning methods can be broadly several classes, such as recurrent neural networks (RNNs), graph convolutional networks (GCNs), CNNs, Trans formers, and others. RNNs are good at capturing temporal and spectral dependencies within the hyperspectral data. As a representative, Hang et al. designed a cascaded RNN for HS image classification by taking advantage of RNNs that can model the sequentiality to represent the relations of neighboring spectral bands effectively [17]. GCNs can effectively capture and propagate information across this spectral graph [18], allowing for the modeling of complex relationships and contextual dependencies within hyperspectral data. MiniGCN [19], which provides a feasible solution for addressing the issue of large graphs in GCNs, is a representative of this class of methods. Transformers excel at capturing long-range dependencies in the data, which is especially useful when hyperspectral information is distributed across a wide spectral range. ViT [20], Transformer in Transformer (TNT) [21], SpectralFormer [22], are typical transformers which can be applied for hyperspectral image classification. CNNs, as the most used deep architectures for hyperspectral image classification, can capture local spatial relationships while efficiently process the spectral information. The representative CNNs, such as Noise CNN [23], HybridSN [24], PResNet [25], and 3-D CNN [26], can make full use of both the spatial and spectral information, and present comparable or even better performance than other paradigms. While these architectures exhibit promising potential for hyperspectral images, they tend to overlook the intrinsic properties of hyperspectral images, thereby limiting their classification performance. 
Through analysis of intrinsic structure of hyperspectral image, this work will mainly propose a deep intrinsic decomposition framework for hyperspectral image classification. The framework constrains the generative network (HyperNet) and discriminative network to extract the environmental-related features and category-related features, which can mitigate the influence of environmental factors and better discriminate different objects. In order to deal with the second one, this work develops a novel adversarial learning method for deep intrinsic decomposition utilize the intrinsic physical property. Prior works mainly focus on design the specific training losses to learn a better model. The common training loss quantifies the disparity between the predicted outputs of the model and the actual ground truth labels during the training process, such as the generally used softmax loss [27]. Some other works also construct training loss functions for hyperspectral remote sensing images by incorporating inter-sample relationships [28, 29]. This approach harnesses the spectral similarities and differences between samples in the dataset to improve the performance of deep models. Furthermore, more advanced avenue of research explores the incorporation of the physical properties inherent to categories within hyperspectral data for the construction of training loss functions, such as Statistical loss [30], DMEM loss [31]. By considering the unique spectral characteristics and physical attributes of materials or objects of the same category, it becomes possible to design loss functions that promote a deeper understanding and better exploitation of these intrinsic properties. While these training loss functions for developing hyperspectral remote sensing image classification models have become capable of harnessing intra-class structural information, they still disregard the influence of environmental factors on hyperspectral imaging. When dealing with hyperspectral images, it is essential to acknowledge the significant impact that environmental factors have on classification performance. The intricate interplay of these factors can introduce variations in spectral signatures, potentially leading to misclassification or reduced accuracy in the analysis of hyperspectral data. Researchers try to isolate the unique spectral characteristics or intrinsic properties of the materials or objects within the image through hyperspectral intrinsic decomposition [32]. However, prior works on hyperspectral intrinsic decomposition predominantly relied on general spectral analysis techniques [33, 34, 35]. The classification performance is limited due to limitations in their model's ability to express complex spectral information effectively. Motivated by [36], this work endeavors to implement deep intrinsic decomposition by leveraging a dedicated adversarial learning method. The intention is to harness the power of deep neural networks to capture the intricate interplay between spectral and spatial information in hyperspectral data. By incorporating adversarial learning, which involves the training of a generator and discriminator network, the model can learn to disentangle intrinsic components more effectively. Considering the merits of both the hyperspectral deep intrinsic decomposition and adversarial learning, this work develops a new deep intrinsic decomposition with adversarial learning for hyperspectral image classification. 
First, we design a adversarial network which contains the hypernet and discriminative network to extract the environmental-related feature and category-related feature. Then, a environmental and category joint learning loss is developed for adversarial learning of the model. Finally, we have successfully implemented deep intrinsic decomposition through our specific adversarial learning framework. To be concluded, this paper makes the following contributions. * We revisit the intrinsic property of hyperspectral image and propose a new adversarial network comprising a hypernet and a discriminative network that jointly extract environmental-related and category-related features from hyperspectral data. This innovation enables a more comprehensive understanding of complex scenes. * We develop a new adversarial learning based on the environmental and category joint learning loss to make the model learn discriminative environmental-related features and category-related features. This loss function encourages the effective disentanglement of intrinsic components, thereby improving the model's performance in hyperspectral decomposition tasks. * We qualitatively and quantitatively evaluate the classification performance of the proposed AdverDecom on three representative hyperspectral image datasets, i.e., Pavia University data, Indian Pines data, and Houston2013 data. Comparisons with other state-of-the-art methods show that the proposed method can have a significant superiority (with an increase of at least 3% OA). The remainder of this paper is organized as follows. Section II details the proposed AdverDecom, including hyperspectral intrinsic decomposition, adversarial network and adversarial learning for deep intrinsic decomposition, and implementation details, for hyperspectral image classification. Extensive experiments are conducted over three real-world datasets for quantitative and qualitative evaluation of the proposed method in Section III. Section IV concludes the work with a brief outlook on future directions. ## II Proposed Method Given a specific hyperspectral image, the goal of classification task is to assign a unique land-cover label to each pixel of the image. Denote \(X=\{\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{N}\}\) as the set of training samples of a given hyperspectral image, where \(N\) is the number of training samples, and \(y_{i}\) is the corresponding label of \(\mathbf{x}_{i}\). \(y_{i}\in\Gamma=\{1,2,\cdots,\Lambda\}\) where \(\Lambda\) represents the class number of the image. ### _Hyperspectral Intrinsic Decomposition_ The intrinsic information coupling model is designed to model the mutual coupling process of intensity and color of light during the imaging process. This model aims to elucidate the intricate interplay between light intensity and color, providing valuable insights into the underlying dynamics of image formation. As for a natural red-green-blue (RGB) image, the intrinsic image decomposition can be described as [37, 38] \[I=R\circ S \tag{1}\] where \(I\) denotes the original image, \(R\) and \(S\) represents the reflectance component and the shading component, respectively. \(\circ\) stands for the elementwise multiplication operator. In contrast to RGB images, hyperspectral images are typically acquired using passive imaging sensors that primarily capture energy reflected from solar radiation. 
Due to variations in sensitivity to scene radiance changes across different spectral bands, the pixel values in different bands undergo non-proportional changes with scene radiance variations. Therefore, the shading component of hyperspectral images affects each wavelength differently. Considering these varying effects, the hyperspectral intrinsic decomposition model can be formulated as [39] \[I(\lambda)=R(\lambda)\circ S(\lambda), \tag{2}\] where \(\lambda\) denotes the wavelength, and \(R(\lambda)\) and \(S(\lambda)\) represent the reflectance component and the shading component. \(R(\lambda)\) determines the spectral reflectance signature, which is the unique spectral response of each pixel in the image. \(S(\lambda)\) describes the influence of environmental factors on the hyperspectral image. Based on this property, we define \(R(\lambda)\) and \(S(\lambda)\) as the category-related feature and the environmental-related feature, respectively. For the task at hand, the objective of hyperspectral intrinsic image decomposition is to decrease the influence of complex environmental factors and to extract and represent the intrinsic spectral and spatial information of hyperspectral images accurately, so as to improve the performance of hyperspectral image classification. In the following, we introduce the proposed deep intrinsic decomposition method based on this assumed model.

### _Adversarial Network for Deep Intrinsic Decomposition_

As shown in Fig. 1, this work constructs a novel adversarial network to realize the deep intrinsic decomposition for hyperspectral images. The adversarial network consists of the HyperNet and the discriminative network.

#### II-B1 HyperNet

The aim of the HyperNet is to decompose the learned feature into the environmental-related and category-related parts. Under the assumption in subsection II-A, the original image can be divided into the category-related feature and the environmental-related feature. Given a sample \(\mathbf{x}_{i}\) in the image, define \(f_{1}(\cdot)\) as the function extracting the category-related feature and \(f_{2}(\cdot)\) as the function extracting the environmental-related feature. Then, based on Eq. 2, the problem can be formulated as \[f(\mathbf{x}_{i},\lambda)=f_{1}(\mathbf{x}_{i},\lambda)\circ f_{2}(\mathbf{x}_{i},\lambda) \tag{3}\] where \(f(\cdot)\) denotes the overall feature learned from \(\mathbf{x}_{i}\). \(f_{1}(\cdot)\) and \(f_{2}(\cdot)\) are fundamentally about learning mapping relationships and extracting features from the image. Deep neural networks are widely recognized for their exceptional nonlinear fitting capabilities, making them a prime choice for implementing the functions \(f_{1}(\cdot)\) and \(f_{2}(\cdot)\) in this study. Deep learning enables the parallel processing of different spectral bands in hyperspectral remote sensing imagery, allowing deep models to model all bands simultaneously. Moreover, leveraging deep neural networks allows us to harness the complex, hierarchical representations within hyperspectral data, enabling us to capture intricate patterns and relationships. Fig. 1 introduces the framework of the developed HyperNet. \(Net_{1}\) and \(Net_{2}\) are used for the learning of \(f_{1}\) and \(f_{2}\), respectively. The first halves of \(Net_{1}\) and \(Net_{2}\) consist of a common CNN backbone network with shared parameters, allowing them to collectively extract and learn essential hierarchical features from the image.
This shared architecture ensures that both networks benefit from a shared understanding of the low-level spectral-spatial features present in the data. In the latter halves of these networks, distinct MLP models are employed, which specialize in different objectives. This design enables the networks to leverage the same foundational feature representations while tailoring their respective output layers to extract the environmental-related and the category-related features.

#### II-B2 Discriminative Network

The discriminative network takes the environmental-related features as input and learns to predict the environmental pseudo class out of \(K\) pre-defined environmental categories. This work uses a specific multi-layer perceptron as the discriminative network. Denote \(g(\cdot)\) as the mapping function of the discriminative network. Then, the extracted features can be formulated as \(g(f_{2}(\mathbf{x}_{i}))\), where \(g(\cdot):\mathbb{R}^{N_{1}}\rightarrow\mathbb{R}^{N_{2}}\) is the representation function, \(N_{1}\) represents the dimension of the environmental-related and category-related features from the image, and \(N_{2}\) stands for the dimension of the features extracted by the discriminative network.

### _Adversarial Learning for Deep Intrinsic Decomposition_

In this subsection, we present the methodology and details of learning the representation function \(f(\cdot)\) of the image. In particular, we first introduce the goal of deep intrinsic decomposition, then motivate the proposed AdverDecom by general adversarial learning, and finally discuss key algorithmic details.

#### II-C1 Deep Intrinsic Decomposition Goal

Given a sample \(\mathbf{x}_{i}\) from the hyperspectral image, the goal of deep intrinsic decomposition is to learn a representation \(f(\cdot)\) such that, for any environmental factors, there exists a latent mapping function \(f_{2}\) which makes \(f_{1}(\mathbf{x}_{i})\circ f_{2}(\mathbf{x}_{i})\) sufficiently distinctive to distinguish different land-cover classes. Formally, an optimal representation \(f_{1}\) solves the following optimization problem: \[\min_{f_{1},f_{2}}\sum_{i=1}^{N}C_{1}(h(f_{1}(\mathbf{x}_{i})\circ f_{2}(\mathbf{x}_{i})),y_{i}) \tag{4}\] where \(f_{1},f_{2}:\mathbb{R}^{s\times s\times d}\rightarrow\mathbb{R}^{N_{1}}\) are the representation functions, \(h(\cdot):\mathbb{R}^{N_{1}}\rightarrow\mathbb{R}^{\Lambda}\) is the mapping from the features to classification probabilities, and \(C_{1}\) denotes a classification loss function. \(s\) represents the spatial neighborhood size used for better classification performance and \(d\) stands for the number of channels of the hyperspectral image. As shown in the prior subsection, we use specific deep neural networks (DNNs) to represent \(f_{1}\) and \(f_{2}\), respectively.

#### II-C2 Adversarial Learning

One challenge of the optimization in Eq. 4 is the intraspecies spectral variability caused by environmental factors. These fluctuations can lead to great differences in the spectral signatures of objects or materials belonging to the same category, and multiply the difficulty of learning \(f_{1}\) and \(f_{2}\). Due to its strong representational ability, the DNN may memorize the distributions of samples from a specific class under different environmental factors. The optimization in Eq. 4 may therefore lead to over-fitting and may not properly find an environmentally invariant representation.
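As a concrete illustration of the decomposition in Eq. 3, a minimal PyTorch-style sketch of the HyperNet (a shared CNN backbone followed by two MLP heads whose outputs are multiplied elementwise), the MLP discriminator \(g\), and the classifier \(h\) is given below. The layer widths, kernel sizes, and patch handling are illustrative assumptions rather than the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Shared CNN backbone with two MLP heads: f1 (category-related) and
    f2 (environmental-related). Feature fusion follows Eq. (3): f = f1 * f2."""
    def __init__(self, in_bands, feat_dim=128):
        super().__init__()
        # Shared first half: a small 2-D CNN over the s x s x d patch (illustrative).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Distinct second halves (MLPs) for the two intrinsic components.
        self.head_category = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU(),
                                           nn.Linear(feat_dim, feat_dim))
        self.head_environment = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU(),
                                              nn.Linear(feat_dim, feat_dim))

    def forward(self, x):                      # x: (B, d, s, s)
        z = self.backbone(x)
        f1 = self.head_category(z)             # category-related feature
        f2 = self.head_environment(z)          # environmental-related feature
        return f1, f2, f1 * f2                 # fused feature, Eq. (3)

class Discriminator(nn.Module):
    """g(.): MLP predicting one of K environmental pseudo classes from f2,
    following the 128-64-64-K layout reported in the experimental setups."""
    def __init__(self, feat_dim=128, k_env=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, k_env))

    def forward(self, f2):
        return self.net(f2)

class Classifier(nn.Module):
    """h(.): maps the fused feature to the Lambda land-cover class logits."""
    def __init__(self, feat_dim=128, n_classes=9):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, fused):
        return self.fc(fused)
```

Any of the backbone CNNs evaluated later (3-D CNN, PResNet, HybridSN) could take the place of the small 2-D CNN used here for brevity.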
To address the aforementioned issue, this work employs an adversarial learning framework to separately acquire environmental-related and category-related features. First, based on the training samples, we construct environmental pseudo classes in an unsupervised manner. Then, based on the environmental pseudo classes, we construct the adversarial optimization problem.

Fig. 1: Flowchart of the Deep Intrinsic Decomposition with Adversarial Learning (AdverDecom) method for hyperspectral image classification. (a) The adversarial network for hyperspectral image classification. Our network decomposes the samples into an environmental-related representation and a category-related representation to decrease the influence of complex environmental factors and emphasize the most distinctive and informative spectral signatures in the data for better classification performance. (b) Illustration of our adversarial learning for deep intrinsic decomposition. We construct the environmental class labels by clustering and apply Algorithm 1 to train the proposed adversarial network.

**Construction of Environmental Pseudo Classes** Given the training samples of the hyperspectral image, we group them into a number of environmental pseudo classes in an unsupervised manner. In hyperspectral imaging, distinct objects or materials can exhibit similar spectral signatures in the hyperspectral data. This occurrence generally arises due to the effects of environmental factors. Therefore, directly conducting clustering analysis on hyperspectral pixels can yield valuable insights into the influence of environmental variables on classification. This approach involves grouping pixels with similar spectral properties, potentially revealing patterns of spectral variability driven by environmental factors. Denote \(K\) as the number of predefined environmental pseudo classes and \(P_{k}(k=1,2,\cdots,K)\) as the centers of the different environmental factors. We iteratively calculate the centers of the \(K\) groups by minimizing the following error: \[\min_{P_{1},P_{2},\cdots,P_{K}}\sum_{i=1}^{N}\sum_{k=1}^{K}I(k=\arg\min_{1,2,\cdots,K}\|\mathbf{x}_{i}-P_{k}\|^{2})\|\mathbf{x}_{i}-P_{k}\|^{2} \tag{5}\] where \(I(condition)\) denotes the indicator function, with \(I(\cdot)=1\) if the condition is true and \(I(\cdot)=0\) otherwise. Given a training sample \(\mathbf{x}_{i}\) in the hyperspectral image, denote \(z_{i}\) as the corresponding environmental pseudo class of \(\mathbf{x}_{i}\); then \(z_{i}\) can be obtained by calculating \[z_{i}=\arg\min_{1,2,\cdots,K}\|\mathbf{x}_{i}-P_{k}\|^{2}. \tag{6}\] For convenience, in the following we use \(\mathbf{x}_{i}^{(z_{i})}\) to represent the sample \(\mathbf{x}_{i}\) with environment index \(z_{i}\).

**Adversarial Optimization** To solve the intraspecies spectral variability problem, we propose the following adversarial optimization framework: \[\max_{g}\min_{f_{1}}\sum_{i=1}^{N}(C_{1}(h(f_{1}(\mathbf{x}_{i})\circ f_{2}(\mathbf{x}_{i})),y_{i})-\alpha\cdot C_{2}(g(f_{2}(\mathbf{x}_{i}^{(z_{i})})),z_{i})) \tag{7}\] where \(g\) represents the discriminator DNN from subsection II-B2, which predicts the environment index out of the \(K\) environmental pseudo classes, \(C_{1}(\cdot)\) and \(C_{2}(\cdot)\) represent classification loss functions (e.g., the cross-entropy loss), and \(\alpha\geq 0\) is a hyperparameter that controls the degree of regularization.
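For illustration, the short sketch below shows how the environmental pseudo classes of Eqs. 5-6 can be obtained with k-means and how one alternating update of the max-min problem in Eq. 7 could be implemented (the two updates summarized later in Algorithm 1). The use of scikit-learn's KMeans, the optimizer objects, and all hyperparameter values are assumptions for the sketch, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def environmental_pseudo_classes(spectra, k_env):
    """Cluster the raw training spectra into K environmental pseudo classes,
    an unsupervised realization of Eqs. 5-6. spectra: (N, d) array."""
    km = KMeans(n_clusters=k_env, n_init=10).fit(spectra)
    return km.labels_  # pseudo class z_i for every training sample

def adversarial_step(hypernet, classifier, discriminator,
                     opt_model, opt_disc, x, y, z, alpha=1.0):
    """One alternating update of the max-min problem in Eq. 7.
    opt_model optimizes the HyperNet and classifier parameters; opt_disc
    optimizes the discriminator. x: (B, d, s, s) patches, y: land-cover
    labels, z: environmental pseudo classes."""
    # Update f1, f2 and h: classification loss C1 minus alpha times the
    # environmental discrimination loss C2.
    f1, f2, fused = hypernet(x)
    loss_model = (F.cross_entropy(classifier(fused), y)
                  - alpha * F.cross_entropy(discriminator(f2), z))
    opt_model.zero_grad()
    loss_model.backward()
    opt_model.step()

    # Update the discriminator g on the (detached) environmental-related
    # features so that it remains a strong adversary.
    _, f2, _ = hypernet(x)
    loss_disc = F.cross_entropy(discriminator(f2.detach()), z)
    opt_disc.zero_grad()
    loss_disc.backward()
    opt_disc.step()
    return loss_model.item(), loss_disc.item()
```

In a full training loop, these two updates would be repeated over mini-batches until convergence, with SGD as the optimizer as in the experimental setups.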
Intuitively, \(g\) and \(f_{1}\) play a zero-sum max-min game: the goal of \(g\) is to predict the environmental index \(z_{i}\) directly from \(f_{2}(\mathbf{x}_{i})\) (achieved by the outer \(\max\)); the goal of \(f_{1}\) is to approximate the label \(y_{i}\) while making the job of \(g\) harder (achieved by the inner \(\min\)). In other words, \(g\) acts as a learned regularizer that removes the environmental information contained in \(f_{1}\). In our experiments, the output of \(h\) is a \(\Lambda\)-dimensional vector for the class probabilities of the \(\Lambda\) land-cover classes, and we use the cross-entropy loss for \(C_{1}(\cdot)\), which is given as \[C_{1}(h(f_{1}(\mathbf{x}_{i})\circ f_{2}(\mathbf{x}_{i})),y_{i})=-\sum_{j=1}^{\Lambda}\delta_{jy_{i}}\log(h(f_{1}(\mathbf{x}_{i})\circ f_{2}(\mathbf{x}_{i}))^{T}e_{j}) \tag{8}\] where \(\delta_{jy_{i}}=1\) if \(j=y_{i}\) and \(\delta_{jy_{i}}=0\) otherwise, and \(e_{j}\in\mathbb{R}^{\Lambda}\) stands for the standard basis vector. Similarly, the output of \(g\) is a \(K\)-dimensional vector for the class probabilities of the \(K\) environmental pseudo classes, and we also use the cross-entropy loss for \(C_{2}(\cdot)\), which is given as \[C_{2}(g(f_{2}(\mathbf{x}_{i}^{(z_{i})})),z_{i})=-\sum_{j=1}^{K}\delta_{jz_{i}}\log(g(f_{2}(\mathbf{x}_{i}^{(z_{i})}))^{T}e_{j}) \tag{9}\] where \(\delta_{jz_{i}}=1\) if \(j=z_{i}\) and \(\delta_{jz_{i}}=0\) otherwise, and \(e_{j}\in\mathbb{R}^{K}\) stands for the standard basis vector.

### _Implementation of Deep Intrinsic Decomposition with Adversarial Learning_

Finally, we solve the optimization problem in Eq. 7 with the proposed AdverDecom (described in Algorithm 1 and Fig. 1). As shown in the algorithm, AdverDecom contains three steps: (1) construct the environmental pseudo classes of the samples (Line 4); (2) update the category-related representation based on the training batch \(B\) (Line 5); (3) update the discriminator \(g\) on the training batch \(B\) (Line 6). By iterating steps (2) and (3), we obtain the final environmentally invariant features.

```
0: \((\mathbf{x}_{i},\,y_{i})(i=1,2,\cdots,N)\), \(\alpha\), \(K\)
0: Deep neural networks \(f_{1}\), \(f_{2}\), \(g\), \(h\).
1: repeat
2:   Randomly sample training batch \(B\).
3:   Initialize the deep neural networks \(f_{1}\), \(f_{2}\), \(g\), \(h\).
4:   Compute the environmental pseudo classes \(z_{i}\) of different samples \(\mathbf{x}_{i}\) using Eqs. 5 and 6.
5:   Train \(f_{1}\) using stochastic gradient descent (SGD) with training loss \(L_{1}\)
     \[L_{1}=\sum_{\mathbf{x}_{i}^{(z_{i})}\in B}(C_{1}(h(f_{1}(\mathbf{x}_{i})\circ f_{2}(\mathbf{x}_{i})),y_{i})-\alpha\cdot C_{2}(g(f_{2}(\mathbf{x}_{i}^{(z_{i})})),z_{i}))\]   (10)
6:   Train \(g\) using stochastic gradient descent (SGD) with training loss \(L_{2}\)
     \[L_{2}=\sum_{\mathbf{x}_{i}^{(z_{i})}\in B}C_{2}(g(f_{2}(\mathbf{x}_{i}^{(z_{i})})),z_{i})\]   (11)
7: until Convergence
```
**Algorithm 1** Deep Intrinsic Decomposition with Adversarial Learning (AdverDecom)

## III Experimental Results

### _Experimental Datasets_

The classification performance of the proposed AdverDecom is evaluated on three datasets, i.e., the Pavia University dataset [40], the Indian Pines dataset [40], and the Houston2013 dataset [41].

**Pavia University (PU) data** was obtained by the Reflective Optics System Imaging Spectrometer (ROSIS-3) over the city of Pavia, Italy, with a spatial resolution of \(1.3m\times 1.3m\).
It consists of \(610\times 340\) pixels and each pixel possesses 115 bands with a spectral coverage ranging from 0.43 to 0.86 \(\mu\)m. 12 spectral bands are discarded due to water absorption and noise, and the remaining 103 channels are used. A total of 43923 labeled samples divided into nine classes have been chosen for the experiments (see Table I for details). The number of training and testing samples per class is also listed in the table.

**Indian Pines (IP) data** was gathered by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Indian Pines test site in Northwestern Indiana at a ground sampling distance (GSD) of 20 m. It consists of \(145\times 145\) pixels with spectral bands ranging from 0.4 to 2.5 \(\mu\)m. 24 bands covering the region of water absorption are removed and the remaining 200 spectral bands are used. 16 land cover classes with a total of 10366 labeled samples are selected for the experiments. Table II shows the detailed training and testing samples used in the experiments.

**Houston 2013 (HS) data** was collected by the National Center for Airborne Laser Mapping (NCALM) over the University of Houston campus and the neighboring urban area through the ITRES CASI 1500 sensor at a spatial resolution of 2.5 m. The cube consists of \(349\times 1905\) pixels with 144 spectral bands ranging from 380 nm to 1050 nm. 15 land cover classes with a total of 15029 labeled samples are selected for the experiments. Table III presents the details of the training and testing samples of the dataset.

### _Experimental Setups_

All the experiments in this paper are implemented under Pytorch 1.9.1 and Cuda 11.2. The learning rate, number of training epochs, and batch size are set to 0.01, 500, and 64, respectively. The dimension of the extracted features is set to 128. The structure of the discriminative network in the experiments is set as 128-64-64-\(K\), where \(K\) denotes the number of pseudo classes. Unless otherwise specified, \(5\times 5\) neighbors are used to incorporate spatial information. We adopt stochastic gradient descent (SGD) as the optimizer of the deep model. The code will be made publicly available for easy replication at [https://github.com/shendu-sw/Adversarial_Learning_Intrinsic_Decomposition](https://github.com/shendu-sw/Adversarial_Learning_Intrinsic_Decomposition).

#### III-B1 Evaluation Metrics

We use the overall accuracy (OA), average accuracy (AA), and Kappa coefficient (\(\kappa\)) as metrics to evaluate the performance. Furthermore, the classification accuracy per class is also used to provide a thorough comparison. Besides, visualizations of the classification maps are provided to enable a qualitative comparison.

#### III-B2 Baseline Methods

Several representative baselines and backbone networks are selected for comparison. These methods represent state-of-the-art CNNs (e.g., 3-D CNN [26], PResNet [25], HybridSN [24]), RNNs (e.g., RNN [17]), GCNs (e.g., miniGCN [19]), and Transformers (e.g., ViT [20], SpectralFormer [22], SSFTTNet [42]) for hyperspectral image classification.

* The support vector machine (SVM) is implemented through the sklearn package and is performed with a radial basis function (RBF) kernel. SVM is chosen as the representative of traditional, non-deep-learning methods.
* The 3-D CNN [26] consists of four consecutive convolutional blocks, each accompanied by a ReLU activation function. A softmax layer and cross-entropy classifier are finally added on top of the 3-D CNN to classify the samples.
* PResNet [25] is composed of several blocks of stacked convolutional layers with a bottleneck architecture (pyramidal bottleneck residual units) in which the output layer is larger than the input layer.
* HybridSN was developed in [24]; in our implementation, the architecture comprises three 3-D convolution layers, one 2-D convolution layer, and two fully connected layers. Each convolutional layer is accompanied by batch normalization and a ReLU layer.
* The RNN [17] consists of two recurrent layers with gated recurrent units (GRU), where each layer has 128 neural units.
* The miniGCN follows the implementation in [19], which successively contains a BN layer, a graph convolutional layer with 128 neuron units, and a ReLU layer.
* The implementation of ViT [20] follows that in [22]. Only transformer encoders are used for the classification task, and five successive encoder blocks are used in the model's architecture.
* Like ViT, SpectralFormer [22] consists of five encoder blocks. Each encoder block consists of a four-head SA layer, an MLP with 8 hidden dimensions, and a GELU nonlinear activation layer. Specifically, SpectralFormer contains the group-wise spectral embedding (GSE) and cross-layer adaptive fusion (CAF).
* SSFTTNet [42] adopts the architecture from the code released at [https://github.com/zgr6010/HSI_SSFTT](https://github.com/zgr6010/HSI_SSFTT).

### _Evaluation of the Computational Performance_

First, we test the computational performance of the proposed method compared with other methods. In this set of experiments, HybridSN is chosen as the backbone CNN to extract features. In order to demonstrate the general usability of the proposed method, a common machine with an Intel Xeon(R) Gold 6226R CPU, 128 GB RAM, and a Quadro RTX 6000 24 GB GPU is used to evaluate the computational performance. The training and testing costs of 3-D CNN, PResNet, HybridSN, and SpectralFormer are selected for comparison. Table IV shows the computational performance over the three datasets. From the table, we can find that the training of the proposed AdverDecom took about 618.9 s, 141.5 s, and 382.6 s over the Pavia University, Indian Pines, and Houston2013 data, respectively. The proposed method showed computational efficiency comparable to 3-D CNN and HybridSN while presenting better computational efficiency than PResNet and SpectralFormer. Furthermore, the testing of the proposed AdverDecom cost about 1.87 s, 0.58 s, and 0.61 s, respectively, which satisfies the computational efficiency requirements of most applications.

### _Evaluation of the Models Trained with Different Backbone CNNs_

The backbone CNN influences the quality of the extracted environmental-related and category-related features, and thus has a significant effect on the classification performance for the hyperspectral image. In this set of experiments, we test the performance of the proposed method with different backbone CNNs, i.e., 3-D CNN, PResNet, and HybridSN. The structures of these backbone CNNs are set as in the setups of subsection III-B. Tables V, VI, and VII show the comparison results of the proposed method and the vanilla CNNs over the three datasets. Inspecting the comparison results in these tables, the following can be noted. First, the performance based on PResNet and HybridSN is better than that based on 3-D CNN.
For the Pavia University data, the proposed method obtains 93.94% and 94.13% OA with PResNet and HybridSN as the backbone, respectively, which is better than the 88.64% obtained with 3-D CNN. For the Indian Pines data, the proposed method obtains 88.03%, 88.50%, and 91.07% with 3-D CNN, PResNet, and HybridSN as backbones, respectively. As for Houston2013, the proposed method obtains 86.30%, 88.60%, and 90.03% with 3-D CNN, PResNet, and HybridSN as backbones, respectively. Second, the proposed deep intrinsic decomposition with adversarial learning can remarkably improve the performance of the vanilla CNN. For Pavia University, the proposed method improves the performance by 1.12%, 3.83%, and 3.86% with 3-D CNN, PResNet, and HybridSN as the backbone model, respectively. For Indian Pines, the proposed method obtains improvements of 10.81%, 5.53%, and 12.35% with the three different backbones. For the Houston2013 data, the proposed method improves the performance by 1.59%, 3.01%, and 3.14%, respectively.

### _Evaluation of the Models Trained with Different Numbers of Pseudo Classes \(K\)_

The construction of pseudo classes is an important factor in the learning of the discriminative network, and therefore it can also influence the classification performance. The number of pseudo classes defines the number of classes of environmental factors. When \(K\) is set to 1, all the samples are assumed to share the same environmental factor. In the experiments, \(K\) is set to \(\{1,2,3,4,5,6,7,8,9,10,20,30\}\). Table VIII presents the results of the proposed method with different numbers of pseudo classes over the three datasets. From the table, we can conclude that a proper \(K\) can guarantee a good performance of the proposed method. For the Pavia University data, the performance is best when \(K\) is set to 5. For the Indian Pines data, the performance achieves its best accuracy of 91.07% OA when \(K\) is set to 2. For the Houston2013 data, the performance is best (90.03%) when \(K\) is set to 4. Even though the proposed method performs differently with different \(K\), one has a large range from which to select \(K\), since several values of \(K\) perform similarly. That is, within a certain range, the classification performance is not sensitive to \(K\). For example, for the Pavia University data, the proposed method achieves similar performance when \(K\) is set to 3 (93.58%), 4 (93.91%), 5 (94.13%), 6 (93.37%), and 7 (93%). If there is a specific requirement for high accuracy, cross-validation can be used to select a proper \(K\).

### _Evaluation of the Models Trained with Different \(\alpha\)_

As mentioned in Section II, \(\alpha\) denotes the tradeoff between the adversarial error and the classification error. It can significantly affect the learning process of the environmental-related and category-related features, and thus influence the classification performance. Generally, a larger \(\alpha\) value leads to better performance. However, excessively large \(\alpha\) values decrease the classification performance and can even cause the deep model not to converge. The reason is that a larger \(\alpha\) means a higher weight for adversarial learning and reduces the excessive intra-class variation caused by environmental factors. As a result, the learned features can discriminate different classes more easily, which increases the classification performance.
However, an excessively large \(\alpha\) focuses too much attention on the environmental-related features and ignores the category-related features, which in turn decreases the classification performance. Fig. 2 shows the tendencies of the performance with different \(\alpha\) over the three datasets. Here, we choose the value of \(\alpha\) from \(\{0,0.001,0.01,0.1,1,2,5\}\). It should be noted that when the experiments were conducted with \(\alpha\) set to 10, the deep model did not converge on any of the three datasets. As the figure shows, a larger \(\alpha\) provides better performance, while excessively large \(\alpha\) values decrease the performance. Since the value of \(\alpha\) can significantly affect the performance of the model, a proper \(\alpha\) is essential for the current task. Cross-validation can be used to choose a proper \(\alpha\) for different tasks. Besides, we can conclude from Fig. 2 that over the Pavia University and Houston2013 data the proposed method achieves its best performance when \(\alpha\) is set to 1, while over the Indian Pines data the proposed method achieves 90.90%, which is the best, when \(\alpha\) is set to 0.1.

### _Evaluation of the Models Trained with Different Sizes of Spatial Neighbors_

It is obvious that the size of the spatial neighborhood can significantly affect the classification performance on hyperspectral images. Therefore, in this subsection, we further investigate the effects of the neighbor size on the classification performance. The neighbor sizes are chosen from \(\{3\times 3\), \(5\times 5\), \(7\times 7\), \(9\times 9\), \(11\times 11\}\). Fig. 3 shows the tendencies of the classification accuracies under different neighbor sizes. As shown in Fig. 3, the proposed method provides an improvement in performance under the different sizes of neighbors. For the Pavia University data, the proposed method achieves its best accuracy (94.46%) with a \(7\times 7\) neighborhood, a 2.54% improvement over the vanilla CNN. For Houston2013, the proposed method also achieves its best performance (90.68%) with \(7\times 7\) neighbors. As for the Indian Pines data, the performance increases with the neighbor size and the accuracy reaches 95.36% with an \(11\times 11\) neighborhood. Generally, samples with larger neighbor sizes contain more spatial information and can thus provide better classification performance, as for the Indian Pines data. However, larger neighbor sizes imply a more complex model, which increases the difficulty of model training. Therefore, for the Pavia University and Houston2013 data, samples with \(7\times 7\) neighbors provide better classification accuracy than samples with \(11\times 11\) neighbors.

### _Evaluation of the Models Trained with Different Numbers of Samples_

The prior subsections conducted the experiments with the training and testing samples divided as listed in Tables I, II, and III. This subsection further evaluates the performance of the developed method under different numbers of training samples. As shown in Tables I-III, 3921 training and 40002 testing samples for the Pavia University data, 695 training and 9671 testing samples for the Indian Pines data, and 2832 training and 12197 testing samples for the Houston2013 data are used for the experiments, respectively. In this set of experiments, 6.25%, 12.5%, 25%, 50%, and 100% of the original training samples over these datasets are selected to evaluate the performance with different numbers of training samples.
That is, over the Pavia University data, 245, 490, 980, 1960, and 3921 training samples are selected. Over the Indian Pines data, 43, 86, 173, 347, and 695 samples are selected, and 177, 354, 708, 1416, and 2832 samples are chosen for the Houston2013 data. Fig. 4 shows the tendencies of the classification performance with different numbers of training samples over the three datasets. We find that the accuracies of the proposed method are remarkably improved compared with the vanilla CNN. For the Pavia University data, the accuracy is increased by about 3%-4%. For the Houston2013 data, the accuracy is increased by about 1.5%-3%. Notably, for the Indian Pines data, the accuracy is increased by more than 10%. This is because the proposed method decomposes the environmental-related features and the category-related features, improving the discrimination of the category-related features and reducing the impact of environmental factors on hyperspectral image classification.

Fig. 2: Classification performance with different \(\alpha\) over (a) Pavia University; (b) Indian Pines; (c) Houston2013.

Fig. 3: Classification performance with different sizes of spatial neighbors over (a) Pavia University; (b) Indian Pines; (c) Houston2013.

Besides, the classification performance of the learned model is significantly improved with the increase in training samples. More training samples provide additional information for the deep model to learn from, allowing it to better extract discriminative features for hyperspectral image classification. Furthermore, we show the classification maps over the different datasets in Figs. 5-7 under the setups in Tables I-III, respectively. Comparing Fig. 5(f) with 5(l), 6(f) with 6(l), and 7(f) with 7(l), we find that the classification error is significantly decreased by the proposed AdverDecom method. This also indicates that the proposed method, with deep intrinsic decomposition through adversarial learning, provides more discriminative features by decomposing the environmental-related and category-related features.

### _Comparison with State-of-the-art Methods_

To further validate the effectiveness of the proposed method for hyperspectral image classification, we compare the classification results of the proposed method with state-of-the-art methods. Tables IX, X, and XI present the comparisons over the three datasets, respectively. All the experimental results in these tables come from the same experimental setups. From Table IX, we can see that the proposed method obtains 94.13% OA over the Pavia University data, which outperforms the CNNs (e.g., 3-D CNN (87.52%), PResNet (90.11%), HybridSN (90.27%)), the RNN (80.61%), miniGCN (83.23%), and the Transformers (e.g., ViT (86.27%), SpectralFormer (90.04%), SSFTTNet (82.56%)). As listed in Table X, for the Indian Pines data, the proposed method provides an accuracy of 91.07%, which outperforms that of the CNNs (e.g., 3-D CNN (77.22%), PResNet (82.97%), HybridSN (78.72%)), the RNN (81.11%), miniGCN (74.71%), and the Transformers (e.g., ViT (65.16%), SpectralFormer (83.38%), SSFTTNet (80.29%)). Furthermore, for the Houston2013 data, the proposed method also provides better classification performance when compared with the other state-of-the-art methods (see Table XI for details). These comparison results show the effectiveness of the proposed method for the current task.
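For completeness, the overall accuracy, average accuracy, and Kappa coefficient reported in these tables can be computed from the predicted and reference labels as in the short sketch below; the use of scikit-learn and the variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def oa_aa_kappa(y_true, y_pred):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()              # overall accuracy
    per_class = np.diag(cm) / cm.sum(axis=1)  # recall of each class
    aa = per_class.mean()                     # average accuracy
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa, aa, kappa
```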
Besides, from the classification maps in Figs. 5-7, we can also see that the classification error is decreased by the proposed AdverDecom, and thus the accuracy is clearly improved. In particular, the results of our proposed method have fewer noisy points compared to the other state-of-the-art methods. To sum up, the proposed method significantly improves the representational ability of the deep model and the classification accuracy when compared not only with handcrafted methods and CNN-based deep models, but also with other state-of-the-art deep methods.

## IV Conclusions

In this work, based on the intrinsic properties of hyperspectral images, we develop a deep intrinsic decomposition with adversarial learning for hyperspectral image classification. We develop an adversarial network to decompose the learned feature into category-related and environmental-related features. Then, based on the proposed adversarial learning method, the network can be adversarially trained to provide discriminative features of the hyperspectral image. Experimental results over different CNN backbones show that the proposed method can remarkably improve the classification performance. Besides, comparisons with other state-of-the-art methods also show the superiority of the proposed method. In future work, it would be interesting to investigate the effectiveness of the proposed AdverDecom on other tasks, such as anomaly detection and target identification. Besides, exploring the performance of AdverDecom when integrating other training strategies, such as metric learning, is another interesting future topic.
2310.16564
Application of entropy analysis in the prediction of flow distribution in parallel channels
Multiphase flow in parallel channels is often an efficient approach to manage heat and energy distribution in engineering systems. However, two-phase flow with heating in parallel channels is prone to maldistribution, resulting in sub-optimal performance and in some cases, permanent damage. This challenge requires accurate flow modeling in parallel channels to mitigate or design against the adverse effect of two-phase flow maldistribution. The nonlinear nature of multiphase flow results in a multiplicity of predicted solutions for the same condition, thereby creating significant challenges in modeling flow distribution. Therefore, this study focuses on solving this challenge by applying entropy generation analysis and the conservation of mass, momentum balance, and energy balance to predict two-phase flow distribution in a two-parallel-channel assembly with a numerical model. Both model predictions and experimental data show that equally distributed flow becomes severely maldistributed with a decrease in flow rate, resulting in significant change (>30%) in the entropy generation rate. We show that the entropy analysis can be applied in distinguishing between stable and unstable flow distribution, like the linear stability analysis used in previous studies. We also surpass the limit of applying linear stability analysis by using entropy analysis to identify the most feasible end state in a maldistribution process.
Toochukwu Aka, Shankar Narayan
2023-10-25T11:36:11Z
http://arxiv.org/abs/2310.16564v1
## Application of entropy analysis in the prediction of flow distribution in parallel channels

## Abstract

Multiphase flow in parallel channels is often an efficient approach to manage heat and energy distribution in engineering systems. However, two-phase flow with heating in parallel channels is prone to maldistribution, resulting in sub-optimal performance and, in some cases, permanent damage. This challenge requires accurate flow modeling in parallel channels to mitigate or design against the adverse effect of two-phase flow maldistribution. The nonlinear nature of multiphase flow results in a multiplicity of predicted solutions for the same condition, thereby creating significant challenges in modeling flow distribution. Therefore, this study focuses on solving this challenge by applying entropy generation analysis and the conservation of mass, momentum balance, and energy balance to predict two-phase flow distribution in a two-parallel-channel assembly with a numerical model. Both model predictions and experimental data show that equally distributed flow becomes severely maldistributed with a decrease in flow rate, resulting in a significant change (>30%) in the entropy generation rate. We show that the entropy analysis can be applied in distinguishing between stable and unstable flow distribution, like the linear stability analysis used in previous studies. We also surpass the limit of applying linear stability analysis by using entropy analysis to identify the most feasible end state in a maldistribution process.

**Keywords**: Entropy generation, flow distribution, parallel channels, stability, two-phase flow.

## 1 Introduction

Flow distribution is critical to multi-channel engineering systems, ranging from heat exchangers and cooling systems to microfluidics and fuel cells. In multi-channel heat exchangers, flow distribution influences the contribution of each channel to heat transfer and the overall heat transfer efficiency [1][2]. In microfluidics, precise flow distribution is vital for sample manipulation, precise dosing, and efficient reactions [3]. In fuel cells, the distribution of the reactants among parallel flow channels affects electrochemical efficiency and cell lifetime [4]. However, accurately predicting and understanding two-phase flow distribution in parallel channels presents significant challenges. Several prior studies have been dedicated to analyzing and controlling flow distribution in parallel channels. In our previous computational study [5], we showed that the thermophysical properties of the channel walls can significantly influence flow maldistribution in two parallel channels. Zhang et al. [6] presented a linear stability analysis to distinguish between stable and unstable flow distributions in a multi-channel evaporator. In Zhang's study, a feedback control strategy was developed to maintain near-equal fluid distribution in a three-parallel-channel assembly. Taitel et al. [7] introduced finite disturbances to demonstrate the stability of transient flow distribution solutions. Minzer et al. [8] also performed a linear stability analysis on static flow distribution solutions and showed that flow distribution in a parallel channel assembly depends on the history of the inlet flow rate. Linear stability analysis was commonly applied in previous studies to determine the stability of flow distributions [6], [8]. However, it provides no physical insight into why a stable flow distribution is preferred over other "mathematically feasible" distributions.
Also, linear stability analysis cannot be applied when multiple stable distributions correspond to a given operating condition. We address these limitations by adopting a thermodynamic approach to analyzing flow distribution in a parallel channel system. Entropy analysis provides valuable insight into the direction and inefficiencies of physical processes in a system. Based on the second law of thermodynamics, entropy generation quantifies the rate at which entropy is produced during a physical process. Hence, previous studies have applied entropy analysis in design optimization [9, 10, 11, 12], flow regime identification [13] and alternative approaches to known phenomena [14]. In this study, we conduct an entropy analysis for a two-parallel-channel assembly to show the relationship between flow distribution and the entropy production rate. We use entropy generation to explain the preference for stable over unstable flow distributions. Based on the characteristics of entropy generation in individual channels and common headers of the assembly, we show that this approach can be used in predicting the most feasible stable final states in processes prone to maldistribution.

## 2 Analysis

### Physical system

This study focuses on a two-parallel-channel assembly with a common inlet and exit, as shown in Figure 1. Each channel branch consists of a valve and a long steel tube (0.3048 m) with steady and uniform heating. Each valve has a flow coefficient \(K_{\nu}\) of \(10^{-8}\), with an orifice opening \(A_{\nu}\) ranging from 0 to 100%. Subcooled water (working fluid) enters through the common inlet at \(T_{i}=19\) \({}^{\circ}\)C and exits the parallel channel assembly as either liquid, liquid-vapor mixture, or superheated vapor at \(P_{e}=20\) kPa, while heat is transferred from the heaters to the working fluid.

Figure 1: Two thermally isolated parallel channels sharing a common inlet and exit.

### Governing Equations

The evolution of multiphase flow within a heated channel (Figure 1) can be described by the lumped form of the unsteady mass (Eq. (1)), momentum (Eq. (2)), and energy balance (Eq. (3) and Eq. (4)) equations. \[A_{cs}\left(L\frac{d\rho}{dt}\right)_{ph}=\left(\dot{m}_{i}-\dot{m}_{e}\right)_{ph} \tag{1}\] \[\frac{L}{A_{cs}}\frac{d\dot{m}}{dt}=P_{i}-P_{e}-\Delta P \tag{2}\] \[A_{cs}\frac{d(\rho h-P)_{ph}}{dt}=pH_{ph}(T_{w}-T)_{ph}\;+\;\left(\frac{\dot{m}_{i}h_{i}-\dot{m}_{e}h_{e}}{l}\right)_{ph} \tag{3}\] \[\rho_{w}c_{p,w}\left(V_{w}\frac{dT_{w}}{dt}\right)_{ph}=\left(\dot{Q}_{h}-Hpl(T_{w}-T)-\dot{Q}_{loss}\right)_{ph} \tag{4}\] \(\dot{m}_{ph}\) is the average mass flow rate across each fluid phase region (subcooled liquid, liquid-vapor mixture, superheated vapor) in the channel, with subscript \(i\) denoting the inlet and \(e\) denoting the exit of the region. \(\rho_{ph}\), \(h_{ph}\), \(P_{ph}\), \(T_{ph}\), \(H_{ph}\), and \(l_{ph}\) are the average fluid density, enthalpy, pressure, temperature, convective heat transfer coefficient, and phase length for each phase, respectively. \(T_{w,ph}\) and \(V_{w,ph}\) describe the average temperature and volume of the wall corresponding to each phase in the channel. The flow properties related to channel geometry, specifically \(L\), \(p\) and \(A_{cs}\), are the channel length, wetted perimeter, and flow cross-sectional area, respectively. The thermophysical properties of the channel wall, \(\rho_{w}\) and \(c_{p,w}\), are the density and specific heat capacity, respectively.
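As an illustration of how the lumped balances above can be assembled numerically, the sketch below transcribes the right-hand sides of Eqs. (1)-(4) for a single phase region, with all closure quantities (pressure drop, heat transfer coefficient, heat loss) treated as precomputed inputs. The data structures and variable names are assumptions for the sketch; the closure models themselves are developed in the remainder of this section.

```python
def phase_region_rates(state, inputs, geom):
    """Right-hand sides of the lumped balances, Eqs. (1)-(4), for one phase
    region of a heated channel. Closure terms (pressure drop, heat transfer
    coefficient, heat loss) are supplied through `inputs`."""
    T, T_w = state["T"], state["T_w"]
    A, L, p, l = geom["A_cs"], geom["L"], geom["perimeter"], geom["l_ph"]

    # Eq. (1): mass balance for the region
    drho_dt = (inputs["mdot_in"] - inputs["mdot_out"]) / (A * L)
    # Eq. (2): momentum balance for the channel
    dmdot_dt = (A / L) * (inputs["P_in"] - inputs["P_out"] - inputs["dP"])
    # Eq. (3): fluid energy balance, written for the quantity (rho*h - P)
    drhoh_minus_P_dt = (inputs["H"] * p * (T_w - T)
                        + (inputs["mdot_in"] * inputs["h_in"]
                           - inputs["mdot_out"] * inputs["h_out"]) / l) / A
    # Eq. (4): energy balance of the channel wall
    dTw_dt = (inputs["Q_heater"] - inputs["H"] * p * l * (T_w - T)
              - inputs["Q_loss"]) / (inputs["rho_w"] * inputs["c_p_w"] * inputs["V_w"])
    return drho_dt, dmdot_dt, drhoh_minus_P_dt, dTw_dt
```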
The pressure drop \(\Delta P\) across a channel branch consists of the valve (\(\Delta P_{v}\)), flow acceleration (\(\Delta P_{a}\)) and frictional (\(\Delta P_{f}\)) components. \[\Delta P=\Delta P_{v}+\Delta P_{a}+\Delta P_{f,liq}+\Delta P_{f,tp} \tag{5}\] \[\Delta P_{v}=\frac{1}{\rho_{i}}\left(\frac{\dot{m}}{K_{\nu}A_{\nu}}\right)^{2} \tag{6}\] \[\Delta P_{a}=\dot{m}^{2}\left(\frac{1}{\rho_{e}}-\frac{1}{\rho_{i}}\right) \tag{7}\] where \(A_{v}\) is the valve opening, and \(\rho_{i}\) and \(\rho_{e}\) are the average fluid density at the inlet and exit, respectively. The pressure drop for the liquid phase region \(\Delta P_{f,liq}\) is given by the Darcy-Weisbach equation [5] with the friction factor obtained from a correlation in a previous study [15], while the pressure drop in the two-phase region \(\Delta P_{f,tp}\) is computed using the Lockhart and Martinelli correlation [8]. The average heat transfer coefficient for the liquid phase region is given by \[H_{liq}=\frac{k_{liq}\,Nu_{liq}}{D} \tag{8}\] where \(k_{liq}\), \(Nu_{liq}\) and \(D\) are the average fluid thermal conductivity, Nusselt number, and channel internal diameter, respectively. \(Nu_{liq}\) is given based on the assumption of a uniform heat flux [16]. The heat transfer coefficient in the two-phase flow regions and the critical heat flux (CHF) are computed using correlations from prior publications [17], [18]. Apart from the added simplification of a uniform channel heat flux, we adopt \(P_{e}\) as the reference pressure for computing saturated fluid properties. Accordingly, the phase length of the liquid region in a channel, \(l_{liq}\), is given by \[l_{liq}=\left(\frac{h_{liq,sat}(P_{e})-h_{i}}{h_{e}-h_{i}}\right)L \tag{9}\] The heat loss to the ambient, \(\dot{Q}_{loss}\), is obtained from experimental data as a function of \(T_{w}-T_{\infty}\) and the outer surface area (\(A\)), as shown in Figure 2. For model validation, heat loss to the surroundings takes place at an ambient temperature of \(T_{\infty}=24\) \({}^{\circ}\)C.

Figure 2: Heat loss characterization.

#### 2.2.1 Static Model

The unsteady terms in Eqs. (1) to (4) can be eliminated at steady operating conditions, resulting in the steady forms of the mass conservation, momentum balance, and energy conservation equations. \[\dot{m}_{i}=\dot{m}_{e}=\dot{m} \tag{10}\] \[P_{i}-P_{e}=\Delta P \tag{11}\] \[p\big{[}l_{liq}H_{liq}(T_{w}-T)_{liq}+l_{tp}H_{tp}(T_{w}-T)_{tp}\big{]}=\dot{m}(h_{e}-h_{i}) \tag{12}\] \[\dot{m}(h_{e}-h_{i})=\dot{Q}_{h}-\dot{Q}_{loss} \tag{13}\] The steady rate of heat transfer \(\dot{Q}_{i}\) into the fluid may be expressed as \[\dot{Q}_{i}=\dot{m}(h_{e}-h_{i}) \tag{14}\] The average wall temperature \(T_{w}\) and fluid temperature \(T\) are given by the following equations. \[T_{w}=\left(T_{liq}+\frac{\dot{Q}_{i}}{H_{liq}pL}\right)\frac{l_{liq}}{L}+\left(T_{tp}+\frac{\dot{Q}_{i}}{H_{tp}pL}\right)\frac{l_{tp}}{L} \tag{15}\] \[T=\frac{T_{liq}l_{liq}+T_{tp}l_{tp}}{L} \tag{16}\] Eqs. (10) to (13) are solved by posing them as a constrained multivariable function \(Y(X)\) and solving it iteratively to find the set of variables \(X^{*}\) that minimizes \(Y\). This minimization problem is generally expressed as follows.
\[X^{*}=\arg\min_{F_{min}\leq F(X)\leq F_{max}}Y(X) \tag{17}\] where \(X\) is a vector of variables that is updated in each iteration to minimize \(Y\), \(F(X)\) is a vector of functions describing the range in which \(X^{*}\) can be found, and \(F_{min}\) and \(F_{max}\) are constraints describing the lower bound and upper bound of \(F(X)\), respectively. For a given \(\dot{m}\), \(\dot{Q}_{h}\) and \(A_{v}\), the steady flow characteristics in a heated channel are obtained by solving the minimization problem with the following parameters and constraints. \[X=\left[P_{i},\dot{Q}_{i}\right] \tag{18}\] \[Y=\left(\left|\frac{P_{i}-P_{e}-\Delta P}{\Delta P}\right|+\left|\frac{\dot{Q}_{h}-\dot{Q}_{i}-\dot{Q}_{loss}}{\dot{Q}_{loss}}\right|\right) \tag{19}\] \[P_{i}\geq P_{e},\ \dot{Q}_{i}\leq\dot{Q}_{h},\ Y\leq 10^{-4} \tag{20}\] The iterative procedure shown in Figure 3 begins with a first guess of the input variables \(P_{i}\) and \(\dot{Q}_{i}\). These values are used to compute \(h_{e}\), \(l_{ph}\), \(T_{ph}\), \(H_{ph}\) and \(\Delta P\), which are then used to compute \(T_{w}\), \(\dot{Q}_{loss}\), and consequently \(Y\). This procedure is repeated, with the input variables updated at each iteration, until an iteration step is smaller than the step size tolerance of the optimization tool and \(Y\leq 10^{-4}\).

Figure 3: Iteration process for the steady solution in a single channel.

In the case of a two-parallel-channel system with given heat loads \(\dot{Q}_{h,1}\) and \(\dot{Q}_{h,2}\), valve openings \(A_{\nu_{1}}\) and \(A_{\nu_{2}}\), and flow rate \(\dot{m}\), the steady flow characteristics are obtained by solving the following minimization problem. \[X=\left[P_{i},\dot{Q}_{i,1},\dot{Q}_{i,2},\dot{m}_{1}^{*}\right] \tag{21}\] \[\dot{m}_{2}^{*}=1-\dot{m}_{1}^{*} \tag{22}\] \[Y=\frac{1}{min(\Delta P_{1},\Delta P_{2})}\sum_{j=1}^{2}\left|P_{i}-P_{e}-\Delta P_{j}\right|+\frac{1}{min(\dot{Q}_{i1},\dot{Q}_{i2})}\sum_{j=1}^{2}\left|\dot{Q}_{h,j}-\dot{Q}_{i,j}-\dot{Q}_{loss,j}\right| \tag{23}\] \[P_{i}\geq P_{e},\ \dot{Q}_{i,1}\leq\dot{Q}_{h,1},\ \dot{Q}_{i,2}\leq\dot{Q}_{h,2},\ \dot{m}_{1}^{*}\leq 1,\ \lambda\leq\lambda_{max},\ Y\leq 10^{-4} \tag{24}\] Here \(\dot{m}_{1}^{*}\) and \(\dot{m}_{2}^{*}\) are the flow fractions in channels 1 and 2, respectively. \(\lambda\) is a linear stability criterion obtained from the maximum real part of the eigenvalues of the Jacobian matrix of \(\frac{d}{dt}\begin{bmatrix}\dot{m}_{1}^{*}\\ \dot{m}_{2}^{*}\end{bmatrix}\)[19], and \(\lambda_{max}\) is the upper bound on \(\lambda\). If \(\lambda<0\), the static solution is stable, and if \(\lambda>0\) the solution is unstable. Therefore, to obtain only stable solutions, \(\lambda_{max}=0\). The iterative procedure for a two-parallel-channel system is similar to that for the single channel. However, there is an additional stability constraint (\(\lambda\)), as shown in Figure 4.

Figure 4: Iteration process for the steady solution in two parallel channels.

#### 2.2.2 Transient Model

The unsteady momentum balance equation (Eq. (2)) applied to a two-parallel-channel system allows for predicting the evolution of the flow fractions \(\dot{m}_{1}^{*}\) and \(\dot{m}_{2}^{*}\). \[\frac{d}{dt}\begin{bmatrix}\dot{m}_{1}^{*}\\ \dot{m}_{2}^{*}\end{bmatrix}=\frac{L}{\dot{m}}\begin{bmatrix}\frac{1}{A_{cs,1}}\left(P_{i}-P_{e}-\Delta P_{1}\right)\\ \frac{1}{A_{cs,2}}\left(P_{i}-P_{e}-\Delta P_{2}\right)\end{bmatrix} \tag{25}\] Here \(A_{cs,1}\) and \(A_{cs,2}\) are the cross-sectional areas of channels 1 and 2, respectively.
\(P_{i}\) is obtained from the unsteady momentum balance equation for the whole assembly. \[\frac{\dot{m}_{t+\Delta t}-\dot{m}_{t}}{\Delta t}=\sum_{j=1}^{2}\frac{L}{A_{cs,j}}\left(P_{i}-P_{e}-\Delta P_{j}\right) \tag{26}\] Here \(\Delta t\) is the time step applied for estimating \(d\dot{m}/dt\) numerically. The unsteady momentum balance equation has the most significant influence on the transient evolution of flow distribution in a two-parallel-channel assembly relative to the unsteady mass and energy conservation equations. Hence, in this study, the transient model consists of solving Eqs. (10), (12), (13) and (25).

### Flow Distribution and Entropy Generation

The static solution for different flow rates in a heated channel produces characteristic 'N' curves (red and black lines), as shown in Figure 5. For a given \(\dot{m}\) in two parallel channels, the steady flow distribution solutions \(\dot{m}_{1}\) and \(\dot{m}_{2}\) each lie on the characteristic curve corresponding to channel 1 and channel 2, respectively. From Figure 5, a fixed \(\dot{m}\) may yield multiple flow distributions (1, 2 and 3). Linear stability analysis of these solutions [6], [8] would indicate that 1 and 3 are stable and feasible, while 2 is unstable, leaving us with two stable maldistributed flow solutions. We conduct an entropy analysis of each solution to identify the most feasible solution from these "stable" flow distributions. Entropy analysis is an effective tool for determining the direction of physical processes. For a process to be feasible, the rate of entropy generated (\(\dot{S}_{gen}\)) during that process must be greater than 0. In a system of \(N\) parallel channels, the rate of entropy generation \(\dot{S}_{gen}\) at steady state is given by the following equation. \[\dot{S}_{gen}=\dot{m}\big{(}s_{e}(P_{e},h_{e})-s_{i}(P_{i},T_{i})\big{)}-\sum_{j=1}^{N}\frac{\dot{Q}_{i,j}}{T_{w,j}} \tag{27}\] The specific entropies at the inlet (\(s_{i}\)) and outlet (\(s_{e}\)) of the channel assembly are functions of the inlet pressure \(P_{i}\) and temperature \(T_{i}\), and the exit pressure \(P_{e}\) and specific enthalpy \(h_{e}\), respectively. Entropy generated within a heated parallel channel assembly consists of entropy generated within each channel flow stream, entropy generated from splitting the flow at the common inlet, and entropy generated from mixing the flow at the common exit of the network. \(\dot{S}_{gen}\) can be expressed using the following equation. \[\dot{S}_{gen}=\dot{S}_{gen,mix}+\sum_{j=1}^{N}\dot{S}_{gen,j} \tag{28}\]

Figure 5: Steady flow distribution solutions in a two-parallel-channel assembly. Lines represent single-channel characteristic pressure curves for channels 1 (diameter \(1.4\times 10^{-3}\) m and heat load 60 W) and 2 (diameter \(1.5\times 10^{-3}\) m and heat load 60 W), respectively. The markers represent different mathematical solutions for flow distribution with a total flow rate of \(5.5\times 10^{-4}\) kg/s.

Here \(\dot{S}_{gen,j}\) is the entropy generation rate within each channel of the parallel network. \(\dot{S}_{gen,mix}\) is the rate of entropy generated by heat transfer and expansion corresponding to fluid emerging from each channel and mixing at the common exit. For an adiabatic mixing process at the common header, \(\dot{S}_{gen,mix}\) is a function of the flow rate distribution \(\dot{m}_{j}^{*}=\frac{\dot{m}_{j}}{\dot{m}}\) and the heat load distribution \(\dot{Q}_{h,j}^{*}=\frac{\dot{Q}_{h,j}}{\dot{Q}_{h}}\).
\[\dot{S}_{gen,mix}=\dot{m}\Bigg{(}s_{mix}-\sum_{j=1}^{N}\dot{m}_{j}^{*}s_{e,j}\big{(}P_{e},h_{e,j}\big{)}\Bigg{)} \tag{29}\] \[h_{e,j}=h_{i}+\frac{\dot{Q}_{h,j}^{*}\dot{Q}_{h}}{\dot{m}_{j}^{*}\dot{m}} \tag{30}\]

## 3 Experimental Testbed

An experimental testbed consisting of a heated tank, gear pump, electronic valve, and evaporator assembly was constructed (Figure 6) to validate the static and dynamic models. The evaporator assembly consists of two capillary steel tubes with an internal diameter of \(1.4\times 10^{-3}\) m, outer diameter of \(3.175\times 10^{-3}\) m and length of \(3.048\times 10^{-1}\) m, wrapped with 125 W rope heaters and an outer layer of fiberglass insulation. Coupled to the ends of each steel tube are stop valves and flow meters (Omega FLR-1008ST, \(\pm 3.33\times 10^{-5}\) kg/s). Four thermocouples (Omega T-type, \(\pm 1^{\circ}\)C) are attached to the wall of each tube at equidistant locations to monitor the wall temperature. Pressure sensors (Omega PX309-030A5V, \(\pm 0.52\) kPa) and additional thermocouples are positioned at the inlet and exit of the assembly to monitor flow properties at these locations.

Figure 6: The experimental testbed to study static and dynamic characteristics of flow maldistribution.

## Results

### Model Validation

In Figure 7a, a heat load of 70 W is supplied to each channel while fluid is supplied through the inlet header to the parallel channel assembly. Initially, at a high flow rate, the flow is uniformly distributed between the channels (Figure 7a-i), while \(\dot{S}_{gen}\) decreases with \(\dot{m}\) (Figure 7a-ii). However, as \(\dot{m}\) is gradually decreased, the flow becomes severely maldistributed and \(\dot{S}_{gen}\) suddenly increases. After flow maldistribution, \(\dot{S}_{gen}\) decreases with a subsequent decrease in \(\dot{m}\). For the transient experiment, channel 1 and channel 2 are supplied with steady heat loads of 40 W and 60 W, respectively, while the assembly is initially supplied with a flow rate \(\dot{m}\) of about \(10^{-3}\) kg/s, as shown in Figure 7b-i. Like the static experiment, the flow is initially equally distributed between the two channels. However, when \(\dot{m}\) is abruptly decreased to \(4.6\times 10^{-4}\) kg/s, the flow becomes severely maldistributed, with channel 1, in this case, receiving the bulk of the flow and channel 2 experiencing almost no flow, as shown in Figure 7b-ii. Our static and transient model predictions generally agree with experimental data [20]. It should be noted that \(\dot{Q}_{loss}\) is neglected in further applications of this model.
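To make the entropy balance concrete, the sketch below evaluates Eqs. (27), (29) and (30) for a candidate operating point of the two-channel assembly. It assumes the CoolProp package as the water-property backend and takes the per-channel heat inputs, wall temperatures, and flow fractions from the solved static model; the function and variable names are illustrative.

```python
from CoolProp.CoolProp import PropsSI

def assembly_entropy_generation(mdot, Q_in, T_wall, P_in, T_in, P_out):
    """Eq. (27): total entropy generation rate of the assembly.
    Q_in and T_wall are per-channel lists; SI units (Pa, K, W, kg/s)."""
    s_in = PropsSI("S", "P", P_in, "T", T_in, "Water")
    h_in = PropsSI("H", "P", P_in, "T", T_in, "Water")
    h_out = h_in + sum(Q_in) / mdot            # mixed-exit enthalpy (Q_loss neglected)
    s_out = PropsSI("S", "P", P_out, "H", h_out, "Water")
    heat_transfer_term = sum(Q / T_w for Q, T_w in zip(Q_in, T_wall))
    return mdot * (s_out - s_in) - heat_transfer_term

def mixing_entropy_generation(mdot, mdot_frac, Q_in, P_out, h_in):
    """Eqs. (29)-(30): entropy generated by adiabatic mixing at the exit header."""
    h_exit = [h_in + Q / (f * mdot) for Q, f in zip(Q_in, mdot_frac)]     # Eq. (30)
    s_exit = [PropsSI("S", "P", P_out, "H", h, "Water") for h in h_exit]
    h_mix = sum(f * h for f, h in zip(mdot_frac, h_exit))
    s_mix = PropsSI("S", "P", P_out, "H", h_mix, "Water")
    return mdot * (s_mix - sum(f * s for f, s in zip(mdot_frac, s_exit)))  # Eq. (29)
```

Evaluating these expressions for each candidate flow split allows the stable solutions discussed in the following subsections to be ranked by their entropy generation rates.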
Figure 7: Model validation for both static and dynamic conditions.

### Entropy generation in a single channel

Flow boiling in a single channel is characterized by thermal and hydraulic resistances. These resistances are sources of irreversibility and contribute significantly to entropy production within the channel. Hydraulic resistance, primarily due to frictional forces and fluid expansion, causes a pressure drop \(\Delta P=P_{i}-P_{e}\) as flow takes place through the channel. Likewise, thermal resistance requires a temperature difference \(\Delta T_{w}=T_{w}-T\) for heat transfer between the wall and the fluid. Hence, \(\Delta P\) and \(\Delta T_{w}\) denote a departure from ideal flow and heat transfer with no entropy generation, and are measures of the irreversibility of the flow and heat transfer processes, respectively.

Figure 8 shows the variation of \(\Delta P\), \(\Delta T_{w}\), and \(\dot{S}_{gen}\) with \(\dot{m}\) for different combinations of heat load \(\dot{Q}_{h}\) and channel internal diameter \(D\) for a single channel. A large \(\dot{m}\) generally corresponds to single-phase liquid at the channel exit, indicated by a positive slope of the curves on the far right of each plot. As \(\dot{m}\) decreases, the fluid at the channel exit becomes a two-phase vapor-liquid mixture, resulting in a change in slope from positive to negative in the \(\Delta P\) versus \(\dot{m}\) curves. The slope of the curves for the other parameters remains positive but becomes steeper. A very small \(\dot{m}\) on the far left of each plot corresponds to superheated vapor flow at the channel exit. In this region, the slope of the \(\Delta P\) versus \(\dot{m}\) curves becomes positive again, while the slope of the other curves changes from positive to a very steep negative value due to the occurrence of critical heat flux (CHF).

Figure 8: Variation of single-channel steady-state flow properties with (a) varying \(\dot{m}\) and \(\dot{Q}_{h}\) at \(D=1.4\times 10^{-3}\) m.

The variation in \(\dot{Q}_{h}\) has no significant impact on \(\Delta P\) for single-phase liquid at the channel exit (Figure 8a-ii). However, \(\Delta P\) increases with an increase in \(\dot{Q}_{h}\) when two-phase mixtures and superheated vapor exit the channels. The variation in \(D\) results in significant variation in \(\Delta P\), with \(\Delta P\) increasing for smaller \(D\) (Figure 8b-ii). An increase in \(\dot{Q}_{h}\) increases \(\Delta T_{w}\) and \(\dot{S}_{gen}\) (Figure 8a-i and Figure 8a-iii), while variations in \(D\) have no significant impact on \(\Delta T_{w}\) and \(\dot{S}_{gen}\) (Figure 8b-i and Figure 8b-iii). Generally, the variations in \(\Delta T_{w}\) and \(\dot{S}_{gen}\) versus \(\dot{m}\) are similar and differ from the trends in \(\Delta P\) versus \(\dot{m}\), indicating that the dominant contribution towards irreversibility is associated more with heat transfer than with flow characteristics.

### Entropy generation due to adiabatic mixing of fluid

The fractions of flow and heat load in each channel influence \(\dot{S}_{gen,mix}\). Figure 9 describes the variation of \(\dot{S}_{gen,mix}\) with the fraction \(\dot{m}_{j}^{*}=\frac{\dot{m}_{j}}{\dot{m}}\) of the total flow rate \(\dot{m}\) in channel \(j\), and the fraction \(\dot{Q}_{h,j}^{*}=\frac{\dot{Q}_{h,j}}{\dot{Q}_{h}}\) of the total heating \(\dot{Q}_{h}\) on channel \(j\), for a two-parallel-channel assembly. Therefore, for uniformly distributed flow and heating power, \(\dot{m}_{j}^{*}=\dot{Q}_{h,j}^{*}=0.5\). The \(\dot{S}_{gen,mix}\) corresponding to a given \(\dot{Q}_{h,j}^{*}\) is typically a parabola with the minimum \(\dot{S}_{gen,mix}\) at \(\dot{m}_{j}^{*}=\dot{Q}_{h,j}^{*}\). At this point, the thermal energy content of both fluid streams is the same, and hence there is no entropy generation due to irreversible fluid-to-fluid heat transfer during the mixing process. At points where \(\dot{m}_{j}^{*}<\dot{Q}_{h,j}^{*}\), \(\dot{S}_{gen,mix}\) increases with an increase in \(\dot{Q}_{h,j}^{*}\), and at points where \(\dot{m}_{j}^{*}>\dot{Q}_{h,j}^{*}\), \(\dot{S}_{gen,mix}\) increases with a decrease in \(\dot{Q}_{h,j}^{*}\).
This variation shows that the maximum \(\dot{S}_{gen,mix}\) occurs when the flow is highly maldistributed (far right or far left regions of the plot), corresponding to a high heat load with a low flow rate or a low heat load with a high flow rate.

### Entropy generation in a two-parallel channel assembly

Figure 10 describes the variation of \(\Delta P\), \(\Delta T_{w,1}\), \(\Delta T_{w,2}\), \(\dot{m}_{1}^{*}\), \(\dot{m}_{2}^{*}\), and \(\dot{S}_{gen}\) with \(\dot{m}\) in the two-channel assembly. Initially, \(\dot{m}\) is almost uniformly distributed among the channels, \(\Delta T_{w,1}\) and \(\Delta T_{w,2}\) are constant, while \(\Delta P\) and \(\dot{S}_{gen}\) decrease as \(\dot{m}\) decreases. With a further decrease in \(\dot{m}\), phase change in the working fluid triggers severe flow maldistribution, characterized by increased flow in channel 1 and decreased flow in channel 2. This results in a jump in \(\Delta T_{w,2}\) and \(\Delta P\) across the assembly, with no change in \(\Delta T_{w,1}\). The onset of CHF in channel 2 and the increased hydraulic resistance in channel 1 cause \(\dot{S}_{gen}\) to suddenly increase with flow maldistribution. After the onset of flow maldistribution, the influence of increasing \(\Delta T_{w,2}\) causes \(\dot{S}_{gen}\) to increase until the influence of decreasing \(\dot{m}\) becomes dominant again, causing \(\dot{S}_{gen}\) to decrease.

Figure 10: Effect of flow maldistribution on flow properties of a two parallel channel assembly with channel 1 characteristics corresponding to \(D_{1}=1.4\times 10^{-3}\) m, \(\dot{Q}_{h,1}=60\) W, and \(A_{v_{1}}=100\) % and channel 2 characteristics corresponding to \(D_{2}=1.4\times 10^{-3}\) m, \(\dot{Q}_{h,2}=60\) W, and \(A_{v_{2}}=50\) %.

### Entropy generation and stability of flow distribution

Typically, multiphase flow distribution solutions in parallel channels are not unique (Figure 5). Modeling could indicate multiple solutions with both stable and unstable flow distributions for the same operating condition. As described earlier, a stability criterion \(\lambda\), derived from linear perturbation theory, is used to distinguish between stable (\(\lambda<0\)) and unstable (\(\lambda>0\)) flow distributions, as shown in Figure 11a-i and Figure 11b-i. To provide a thermodynamic perspective on the stability of flow distributions, Figure 11 compares the \(\dot{S}_{gen}\) for a stable flow distribution (black) with an unstable flow distribution profile (green).

In Figure 11a, the channel properties are identical except for the heat loads, with \(\dot{Q}_{h,1}=50\,W\) and \(\dot{Q}_{h,2}=70\,W\). For large \(\dot{m}\), flow solutions are uniformly distributed and stable, while for small \(\dot{m}\) the predicted flow distributions can be either severely maldistributed and stable or moderately non-uniform and unstable, as shown in Figure 11a-i and Figure 11a-ii. In Figure 11b, the channel properties are identical except for the valve openings, which are set at \(A_{\nu_{1}}=100\%\) and \(A_{\nu_{2}}=50\%\). For large \(\dot{m}\), the flow solutions are stable and slightly unequally distributed, while for small \(\dot{m}\) the flow distribution is either severely maldistributed and stable or uniform and unstable, as shown in Figure 11b-i and Figure 11b-ii. In both Figure 11a-iii and Figure 11b-iii, the \(\dot{S}_{gen}\) associated with the stable maldistributed flow solutions is greater than the \(\dot{S}_{gen}\) associated with the moderately non-uniform or the uniformly distributed unstable flow solutions.
Therefore, a severely maldistributed flow is thermodynamically preferred over other flow distributions satisfying the system constraints. Such severe maldistributions have been observed to occur, as shown in Figure 7.

Figure 11: Comparison between stable and unstable flow distribution profiles for: (a) \(D_{1}=D_{2}=1.4\times 10^{-3}\,m\), \(Av_{1}=Av_{2}=100\%\), \(\dot{Q}_{h,1}=50\,W\) and \(\dot{Q}_{h,2}=70\,W\); (b) \(D_{1}=D_{2}=1.4\times 10^{-3}\,m\), \(\dot{Q}_{h,1}=\dot{Q}_{h,2}=60\,W\), \(Av_{1}=100\%\) and \(Av_{2}=50\%\).

### Entropy generation and maldistributed flow solutions

Based on the linear stability characteristics described in Figure 11, maldistributed flow solutions are inherently stable but could be non-unique, as shown in Figure 5. This limits the application of the linear stability analysis, since it only distinguishes between stable and unstable flow distribution solutions. For a given condition in a two-parallel channel system with perfectly identical individual channels, maldistributed flow solutions are mirror images of each other, with the flow magnitudes reversed compared with each other. These solutions are also indistinguishable when considering extensive thermodynamic properties (like \(\dot{S}_{gen}\)) associated with each solution. With the introduction of non-uniformities to the properties of the individual channels, as expected in practical applications, entropy analysis can determine the feasibility of each solution as the final state of a simple flow maldistribution process.

Let us consider a flow maldistribution process in a perfectly insulated \(\left(\dot{Q}_{i}=\dot{Q}_{h}\right)\) two-parallel-channel system, with the following steady properties: flow rate \(\left(\dot{m}\right)\), identical heat loads \(\left(\dot{Q}_{h,1}=\dot{Q}_{h,2}=\dot{Q}_{h}/2\right)\), and identical internal diameters \(\left(D_{1}=D_{2}\right)\). If the entropy generation rate of the final maldistributed state is \(\dot{S}_{gen,0}\), since the individual channels are identical, flow maldistributions \(\dot{m}_{1}<\dot{m}_{2}\) and \(\dot{m}_{1}>\dot{m}_{2}\) are equivalent with respect to \(\dot{S}_{gen,0}\). From Eq. 28,

\[\dot{S}_{gen,0}=\dot{S}_{gen,1}+\dot{S}_{gen,2}+\dot{S}_{gen,mix} \tag{31}\]

where \(\dot{S}_{gen,1}\) and \(\dot{S}_{gen,2}\) are the rates of entropy generation in channels 1 and 2, respectively. Introducing non-uniformities in channel heating rates and dimensions while maintaining a constant flow rate and total heat load will result in a change in the entropy generation rate by \(d\dot{S}_{gen,0}\). The final rate of entropy generation \(\dot{S}_{gen}\) is given by

\[\dot{S}_{gen}=\dot{S}_{gen,0}+d\dot{S}_{gen,0} \tag{32}\]

\(d\dot{S}_{gen,0}\) can be expressed using the chain rule:

\[d\dot{S}_{gen,0}=\sum_{j=1}^{2}\left(\frac{\partial\dot{S}_{gen,j}}{\partial \dot{m}_{j}}d\dot{m}_{j}+\frac{\partial\dot{S}_{gen,j}}{\partial\dot{Q}_{h,j}}d\dot{Q}_{h,j}+\frac{\partial \dot{S}_{gen,j}}{\partial D_{j}}dD_{j}+\frac{\partial \dot{S}_{gen,mix}}{\partial\dot{Q}_{h,j}^{*}}d\dot{Q}_{h,j}^{*}+\frac{\partial \dot{S}_{gen,mix}}{\partial\dot{m}_{j}^{*}}d\dot{m}_{j}^{*}\right) \tag{33}\]

The conservation of mass and energy equations become

\[d\dot{m}_{1}=-d\dot{m}_{2} \tag{34}\]

\[d\dot{Q}_{h,1}=-d\dot{Q}_{h,2} \tag{35}\]

From the momentum balance equation, \(\dot{m}_{j}\) is a function of \(\dot{Q}_{h,j}\), \(D_{j}\) and the common pressure drop \(\Delta P\), as shown in Figure 8a-i and Figure 8b-i. \(d\dot{m}_{j}\) can be expressed using the chain rule.
\[d\dot{m}_{j}=\frac{\partial\dot{m}_{j}}{\partial\Delta P}\,d\Delta P+\frac{\partial\dot{m}_{j}}{\partial\dot{Q}_{h,j}}\,d\dot{Q}_{h,j}+\frac{\partial\dot{m}_{j}}{\partial D_{j}}\,dD_{j} \tag{36}\]

Based on the trends in Figure 8 and Figure 9, the characteristics of the partial derivative terms in Eq. 33 and Eq. 36 for a maldistributed flow in a two-parallel-channel assembly are summarized in Table 1.

#### 4.6.1 Variation in diameter

Let non-uniformity be introduced by slightly changing the diameter of one channel. Eq. 33 then reduces to

\[d\dot{S}_{gen,0}=\sum_{j=1}^{2}\left(\frac{\partial\dot{S}_{gen,j}}{\partial\dot{m}_{j}}\,d\dot{m}_{j}+\frac{\partial\dot{S}_{gen,j}}{\partial D_{j}}\,dD_{j}+\frac{\partial\dot{S}_{gen,mix}}{\partial\dot{m}_{j}^{*}}\,d\dot{m}_{j}^{*}\right) \tag{37}\]

The conservation of mass in Eq. 34, using Eq. 36, becomes

\[\frac{\partial\dot{m}_{1}}{\partial\Delta P}\,d\Delta P+\frac{\partial\dot{m}_{1}}{\partial D_{1}}\,dD_{1}=-\left(\frac{\partial\dot{m}_{2}}{\partial\Delta P}\,d\Delta P+\frac{\partial\dot{m}_{2}}{\partial D_{2}}\,dD_{2}\right) \tag{38}\]

If \(D_{1}\) is increased by \(dD\) and the maldistributed flow is such that \(\dot{m}_{1}\approx\dot{m}\) and \(\dot{m}_{2}\approx 0\), then \(dD_{1}>0\) and \(dD_{2}=0\). From Table 1, Eq. 38 is only satisfied if \(d\Delta P<0\) in this case, which implies \(d\dot{m}_{1}>0\) and \(d\dot{m}_{1}^{*}>0\) while \(d\dot{m}_{2}<0\) and \(d\dot{m}_{2}^{*}<0\). Based on the magnitudes of the partial derivative terms in Table 1 with respect to channel 1 and channel 2, we can deduce that all the \(\frac{\partial(\cdot)}{\partial(\cdot)}\,d(\cdot)\) terms in Eq. 37 are \(\geq 0\) for this case, implying \(d\dot{S}_{gen,0}>0\).

If \(D_{2}\) is decreased by \(dD\) and the maldistributed flow is such that \(\dot{m}_{1}\approx 0\) and \(\dot{m}_{2}\approx\dot{m}\), then \(dD_{1}=0\) and \(dD_{2}<0\). From Table 1, Eq. 38 is only satisfied if \(d\Delta P>0\) in this case, which implies \(d\dot{m}_{1}>0\) and \(d\dot{m}_{1}^{*}>0\) while \(d\dot{m}_{2}<0\) and \(d\dot{m}_{2}^{*}<0\). Based on the magnitudes of the partial derivative terms in Table 1 with respect to channel 1 and channel 2, we can deduce that all the \(\frac{\partial(\cdot)}{\partial(\cdot)}\,d(\cdot)\) terms in Eq. 37 are \(\leq 0\) for this case, implying \(d\dot{S}_{gen,0}<0\).

Therefore, for a maldistributed flow in a two parallel channel system where \(\dot{Q}_{h,1}=\dot{Q}_{h,2}\) and \(D_{1}>D_{2}\), the \(\dot{S}_{gen}\) corresponding to \(\dot{m}_{1}>\dot{m}_{2}\) is greater than the \(\dot{S}_{gen}\) corresponding to \(\dot{m}_{1}<\dot{m}_{2}\). Based on the magnitude of \(\dot{S}_{gen}\) for the given system conditions, \(\dot{m}_{1}>\dot{m}_{2}\) is thermodynamically more favorable and is likely the final maldistributed state in the process of flow maldistribution. This implies that when flow maldistribution occurs in a two parallel channel assembly with varying internal diameters and identical heat loads, flow is concentrated in the channel with the larger diameter while the channel with the smaller diameter is starved of fluid. This outcome is corroborated in previous studies [5], [21].

#### 4.6.2 Variations in heat load

Let non-uniformity be introduced by adding \(d\dot{Q}_{h}\) to channel 1 and removing \(d\dot{Q}_{h}\) from channel 2, so that \(d\dot{Q}_{h,1}>0\) and \(d\dot{Q}_{h,1}^{*}>0\) while \(d\dot{Q}_{h,2}<0\) and \(d\dot{Q}_{h,2}^{*}<0\). Eq.
33 reduces to

\[d\dot{S}_{gen,0}=\sum_{j=1}^{2}\left(\frac{\partial\dot{S}_{gen,j}}{\partial\dot{m}_{j}}\,d\dot{m}_{j}+\frac{\partial\dot{S}_{gen,j}}{\partial\dot{Q}_{h,j}}\,d\dot{Q}_{h,j}+\frac{\partial\dot{S}_{gen,mix}}{\partial\dot{Q}_{h,j}^{*}}\,d\dot{Q}_{h,j}^{*}+\frac{\partial\dot{S}_{gen,mix}}{\partial\dot{m}_{j}^{*}}\,d\dot{m}_{j}^{*}\right) \tag{39}\]

The conservation of mass in Eq. 34, using Eq. 36, becomes

\[\frac{\partial\dot{m}_{1}}{\partial\Delta P}\,d\Delta P+\frac{\partial\dot{m}_{1}}{\partial\dot{Q}_{h,1}}\,d\dot{Q}_{h,1}=-\left(\frac{\partial\dot{m}_{2}}{\partial\Delta P}\,d\Delta P+\frac{\partial\dot{m}_{2}}{\partial\dot{Q}_{h,2}}\,d\dot{Q}_{h,2}\right) \tag{40}\]

If the maldistributed flow is such that \(\dot{m}_{1}\approx 0\) and \(\dot{m}_{2}\approx\dot{m}\), then from Table 1, Eq. 40 is only satisfied if \(d\Delta P>0\), which implies \(d\dot{m}_{1}<0\) and \(d\dot{m}_{1}^{*}<0\) while \(d\dot{m}_{2}>0\) and \(d\dot{m}_{2}^{*}>0\). Based on the magnitudes of the partial derivative terms in Table 1 with respect to channel 1 and channel 2, we can deduce that all the \(\frac{\partial(\cdot)}{\partial(\cdot)}\,d(\cdot)\) terms in Eq. 39 are \(>0\) except \(\frac{\partial\dot{S}_{gen,2}}{\partial\dot{Q}_{h,2}}\,d\dot{Q}_{h,2}\). However, from Table 1, \(\left|\frac{\partial\dot{S}_{gen,1}}{\partial\dot{m}_{1}}\right|\gg\left|\frac{\partial\dot{S}_{gen,2}}{\partial\dot{Q}_{h,2}}\right|\), implying \(d\dot{S}_{gen,0}>0\).

If the maldistributed flow is such that \(\dot{m}_{1}\approx\dot{m}\) and \(\dot{m}_{2}\approx 0\), then from Table 1, Eq. 40 is only satisfied if \(d\Delta P<0\), which implies \(d\dot{m}_{1}>0\) and \(d\dot{m}_{1}^{*}>0\) while \(d\dot{m}_{2}<0\) and \(d\dot{m}_{2}^{*}<0\). Based on the magnitudes of the partial derivative terms in Table 1 with respect to channel 1 and channel 2, we can deduce that all the \(\frac{\partial(\cdot)}{\partial(\cdot)}\,d(\cdot)\) terms in Eq. 39 are \(<0\) except \(\frac{\partial\dot{S}_{gen,1}}{\partial\dot{Q}_{h,1}}\,d\dot{Q}_{h,1}\). However, from Table 1, \(\left|\frac{\partial\dot{S}_{gen,2}}{\partial\dot{m}_{2}}\right|\gg\left|\frac{\partial\dot{S}_{gen,1}}{\partial\dot{Q}_{h,1}}\right|\), implying \(d\dot{S}_{gen,0}<0\).

Therefore, for a maldistributed flow in a two parallel channel system where \(\dot{Q}_{h,1}>\dot{Q}_{h,2}\) and \(D_{1}=D_{2}\), the \(\dot{S}_{gen}\) corresponding to \(\dot{m}_{1}<\dot{m}_{2}\) is greater than the \(\dot{S}_{gen}\) corresponding to \(\dot{m}_{1}>\dot{m}_{2}\). Based on the magnitude of \(\dot{S}_{gen}\) for the given system conditions, \(\dot{m}_{1}<\dot{m}_{2}\) is thermodynamically more favorable and is likely the final maldistributed state in the process of flow maldistribution. This implies that when flow maldistribution occurs in a two parallel channel assembly with varying heat loads and identical geometry, flow is concentrated in the channel with the smaller heat load while the channel with the larger heat load is starved of fluid. This outcome is corroborated in previous studies [5, 21].

Based on the conclusions from both scenarios, Figure 12 compares \(\dot{S}_{gen}\) for expected maldistributed flow states (black) with \(\dot{S}_{gen}\) for unlikely maldistributed flow states (green). All conditions are identical for both channels except the channel internal diameters in Figure 12a or the channel heat loads in Figure 12b.
Before the occurrence of flow maldistribution, the flow solutions (Figure 12a-i and Figure 12b-i) and their corresponding \(\dot{S}_{gen}\) (Figure 12a-ii and Figure 12b-ii) are identical. However, after the occurrence of flow maldistribution, the expected flow distribution is characterized by a small flow fraction in channel 1 and a large flow fraction in channel 2, while the unlikely flow distribution is characterized by a large flow fraction in channel 1 and a small flow fraction in channel 2. A comparison of the \(\dot{S}_{gen}\) associated with both flow solutions shows that the \(\dot{S}_{gen}\) from the expected stable flow distribution is greater than the \(\dot{S}_{gen}\) from the unlikely but stable flow distribution.

Figure 12: Comparison between expected and unlikely maldistributed flow states in a two parallel channel assembly for: (a) \(D_{1}=1.3\times 10^{-3}\) m, \(D_{2}=1.3\times 10^{-3}\) m and \(\dot{Q}_{h,1}=\dot{Q}_{h,2}=60\) W; (b) \(\dot{Q}_{h,1}=60\) W, \(\dot{Q}_{h,2}=50\) W and \(D_{1}=D_{2}=1.4\times 10^{-3}\) m.

Figure 13 shows a parametric study comparing \(\dot{S}_{gen}\) for maldistributed flow solutions with \(\dot{m}_{1}>\dot{m}_{2}\) (blue) and \(\dot{m}_{1}<\dot{m}_{2}\) (red) as a function of \(D_{1}^{*}=\frac{D_{1}}{D_{1}+D_{2}}\) (left) and \(\dot{Q}_{h,1}^{*}=\frac{\dot{Q}_{h,1}}{\dot{Q}_{h,1}+\dot{Q}_{h,2}}\) (right). In Figure 13 (left), for solutions with \(\dot{m}_{1}>\dot{m}_{2}\), \(\dot{S}_{gen}\) increases with increasing \(D_{1}^{*}\). For solutions with \(\dot{m}_{1}<\dot{m}_{2}\), \(\dot{S}_{gen}\) increases with decreasing \(D_{1}^{*}\). This behavior is due to the increase in the disparity between the flow rates, \(|\dot{m}_{1}-\dot{m}_{2}|\), and consequently in CHF, with an increase in \(D_{1}^{*}\) when \(\dot{m}_{1}>\dot{m}_{2}\) or a decrease in \(D_{1}^{*}\) when \(\dot{m}_{1}<\dot{m}_{2}\). At \(D_{1}^{*}<0.5\), solutions with \(\dot{m}_{1}<\dot{m}_{2}\) have a larger \(\dot{S}_{gen}\) compared to solutions with \(\dot{m}_{1}>\dot{m}_{2}\), while at \(D_{1}^{*}>0.5\), solutions with \(\dot{m}_{1}>\dot{m}_{2}\) have a larger \(\dot{S}_{gen}\) compared to solutions with \(\dot{m}_{1}<\dot{m}_{2}\).

In Figure 13 (right), the \(\dot{S}_{gen}\) corresponding to both solutions with \(\dot{m}_{1}>\dot{m}_{2}\) and \(\dot{m}_{1}<\dot{m}_{2}\) increases with an increase in \(\dot{Q}_{h,1}^{*}\). This is caused by the increase in irreversibility associated with an increase in heat load. Also, as \(\dot{Q}_{h,1}^{*}\) increases, the influence of \(\dot{S}_{gen,mix}\) (Figure 9) becomes more dominant in solutions with \(\dot{m}_{1}<\dot{m}_{2}\) than in solutions with \(\dot{m}_{1}>\dot{m}_{2}\). As a result, at \(\dot{Q}_{h,1}^{*}<0.5\), solutions with \(\dot{m}_{1}>\dot{m}_{2}\) have a larger \(\dot{S}_{gen}\) compared to solutions with \(\dot{m}_{1}<\dot{m}_{2}\), while at \(\dot{Q}_{h,1}^{*}>0.5\), solutions with \(\dot{m}_{1}<\dot{m}_{2}\) have a larger \(\dot{S}_{gen}\) compared to solutions with \(\dot{m}_{1}>\dot{m}_{2}\). Based on the solution corresponding to the maximum \(\dot{S}_{gen}\) in Figure 13 (left), \(\dot{m}_{1}<\dot{m}_{2}\) if \(D_{1}^{*}<0.5\) and \(\dot{Q}_{h,1}^{*}=0.5\), while \(\dot{m}_{1}>\dot{m}_{2}\) if \(D_{1}^{*}>0.5\) and \(\dot{Q}_{h,1}^{*}=0.5\). In Figure 13 (right), \(\dot{m}_{1}>\dot{m}_{2}\) if \(\dot{Q}_{h,1}^{*}<0.5\) and \(D_{1}^{*}=0.5\), while \(\dot{m}_{1}<\dot{m}_{2}\) if \(\dot{Q}_{h,1}^{*}>0.5\) and \(D_{1}^{*}=0.5\). A minimal sketch encoding this selection rule is given below.
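The following sketch simply encodes the maximum-entropy-generation selection rule read off Figure 13; it is not from the paper. Note that the rule is only stated for one parameter varied at a time (the other held at 0.5); extending it to combined non-uniformities would require the full channel model.

```python
# Illustrative sketch (not from the paper): the selection rule implied by Figure 13,
# i.e., the maldistributed branch with the larger entropy generation rate is favored.
# D1_star and Qh1_star are channel 1's diameter and heat-load fractions.
def favored_branch(D1_star, Qh1_star, tol=1e-9):
    """Return which maldistributed branch is thermodynamically favored."""
    if abs(Qh1_star - 0.5) < tol:                  # only diameters differ
        return "m1 > m2" if D1_star > 0.5 else "m1 < m2"
    if abs(D1_star - 0.5) < tol:                   # only heat loads differ
        return "m1 < m2" if Qh1_star > 0.5 else "m1 > m2"
    raise ValueError("combined non-uniformities are outside the stated rule")

print(favored_branch(0.52, 0.5))   # larger D1 -> channel 1 takes the flow
print(favored_branch(0.5, 0.55))   # larger Q_h1 -> channel 1 is starved
```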
## 5 Conclusion

This study analyzes the relationship between two-phase flow distribution and entropy generation rate in a parallel channel assembly to address the challenge of a multiplicity of flow distribution solutions associated with the same conditions. The nonlinearity of the characteristic curves associated with two-phase flow in single channels indicates that stable theoretical solutions to flow distribution in a multi-channel network are often non-unique. To address this challenge, previous studies applied linear stability analysis to determine the feasibility of a solution. However, this approach provides no underlying reason why one flow distribution is preferred over others, and its applicability is limited to distinguishing between stable and unstable flow distributions. Therefore, we explore the use of an entropy analysis to predict the flow distribution in a two-parallel-channel network.

In this study, entropy generation in parallel channel networks is divided into entropy generation within individual channels and entropy generation during the mixing of fluids at the common headers of the parallel channel network. The entropy analysis in a single channel with a constant heat load shows that hydraulic sources of irreversibility mainly drive entropy generation before the occurrence of CHF, while thermal sources of irreversibility become dominant after the occurrence of CHF. Also, entropy generation from mixing fluid at the common exit is a function of the disparity in the thermal content of each channel's fluid stream. We show that the entropy generation of a maldistributed flow is greater than that of any unstable flow distribution under the same conditions. Therefore, during phase change and within given system constraints, maldistributed flow is thermodynamically favored over other forms of flow distribution. This does not imply, however, that the system constraints cannot be varied to ensure a more evenly distributed flow.

Although maldistributed flow solutions are stable, these solutions are also non-unique. To distinguish between non-unique stable maldistributed flow solutions, we apply the trends observed from the flow analyses in a single channel and in the common header of the parallel channel network to differential equations describing the change in the entropy generation rate. Through this, we show that for a process of flow maldistribution under certain conditions, the final stable distribution with the highest rate of entropy generation is thermodynamically favored and will spontaneously occur. This is fundamental to understanding flow distribution in parallel channels and is applicable to optimizing the design of thermal systems that are robust against flow maldistribution.
2308.14139
Reinforcement Learning-based Optimal Control and Software Rejuvenation for Safe and Efficient UAV Navigation
Unmanned autonomous vehicles (UAVs) rely on effective path planning and tracking control to accomplish complex tasks in various domains. Reinforcement Learning (RL) methods are becoming increasingly popular in control applications, as they can learn from data and deal with unmodelled dynamics. Cyber-physical systems (CPSs), such as UAVs, integrate sensing, network communication, control, and computation to solve challenging problems. In this context, Software Rejuvenation (SR) is a protection mechanism that refreshes the control software to mitigate cyber-attacks, but it can affect the tracking controller's performance due to discrepancies between the control software and the physical system state. Traditional approaches to mitigate this effect are conservative, hindering the overall system performance. In this paper, we propose a novel approach that incorporates Deep Reinforcement Learning (Deep RL) into SR to design a safe and high-performing tracking controller. Our approach optimizes safety and performance, and we demonstrate its effectiveness during UAV simulations. We compare our approach with traditional methods and show that it improves the system's performance while maintaining safety constraints.
Angela Chen, Konstantinos Mitsopoulos, Raffaele Romagnoli
2023-08-27T15:38:15Z
http://arxiv.org/abs/2308.14139v1
Reinforcement Learning-based Optimal Control and Software Rejuvenation for Safe and Efficient UAV Navigation ###### Abstract Unmanned autonomous vehicles (UAVs) rely on effective path planning and tracking control to accomplish complex tasks in various domains. Reinforcement Learning (RL) methods are becoming increasingly popular in control applications [1], as they can learn from data and deal with unmodelled dynamics. Cyber-physical systems (CPSs), such as UAVs, integrate sensing, network communication, control, and computation to solve challenging problems. In this context, Software Rejuvenation (SR) is a protection mechanism that refreshes the control software to mitigate cyber-attacks, but it can affect the tracking controller's performance due to discrepancies between the control software and the physical system state. Traditional approaches to mitigate this effect are conservative, hindering the overall system performance. In this paper, we propose a novel approach that incorporates Deep Reinforcement Learning (Deep RL) into SR to design a safe and high-performing tracking controller. Our approach optimizes safety and performance, and we demonstrate its effectiveness during UAV simulations. We compare our approach with traditional methods and show that it improves the system's performance while maintaining safety constraints. ## I Introduction Path planning and tracking control are key elements for unmanned autonomous vehicles (UAVs). Reinforcement Learning [2]is gaining more attention in control applications [3] since it is able to deal with unmodelled dynamics by learning them from data. UAVs are applications of cyber-physical systems (CPSs) that integrate sensing, network communication, control, and computational methods to solve complex applications in applications such as transportation, healthcare, power supply, etc [4]. In general, the controller design considers only the physical dynamics of a CPS because it is assumed that the inertia of the physical system is slower than any operation performed in the cyber part. Due to the complexity of a CPS, that assumption is getting more and more unrealistic, particularly when solutions to protect the CPS from cyber-attacks are implemented [5, 6, 7]. This is the case of software rejuvenation (SR) [8, 9] which is a mechanism of protection that refreshes the run-time control software in order to mitigate the possible negative effects of a cyber-attack on it. This mechanism of protection imposes constraints to be satisfied, for example in [10], the trajectory setpoints must be updated only under specific conditions that involve time and system dynamics. Despite its effectiveness, in terms of safety and mission liveness [11], the overall control performance can be very poor in terms of trajectory tracking. One of the main issues is that there is a discrepancy between the state of the control software and the actual state of the system. In fact, at each software refresh a previous uncorrupted image of the control software is loaded. This discrepancy becomes more evident in the case of the controller making use of the state estimation. Modeling this aspect into the physical system dynamics in order to develop a tracking controller that mitigates this effect can be very challenging. In the SR framework, the trajectory tracking controller generates a sequence of setpoints that takes into account safety accordingly to Lyapunov's theory [12]. 
In real applications, the effects of software rejuvenation on the state estimation error make it difficult to model them and to find the optimal trajectory tracking algorithm that can improve the overall system performance. In fact, the proposed solutions are quite conservative, which is good from the safety viewpoint but can be very limiting from the application viewpoint. In this paper, we incorporate Deep RL into the SR problem for the design of a safe tracking controller that also considers the system's performance during the mission. Our objective is to show the applicability and effectiveness of Deep RL in this context, tracing a path for future research directions that combine control theory and Deep RL for safety-critical applications. Deep RL algorithms learn optimal control policies by iteratively optimizing a reward function that measures the success of the control policy. While this approach has been successfully applied to many control problems, in this work we do not intend to replace traditional control methods. Rather, the objective is to integrate it into existing control frameworks to improve their performance and safety. In this paper, we apply it to the SR problem in control theory and demonstrate its potential to enhance the safety and performance of UAVs. Specifically, we show that our approach mitigates the effect of SR on the tracking controller's performance compared to traditional approaches. Our work contributes to the growing body of research that combines control theory and Deep RL to address critical safety issues in cyber-physical systems.

## II Preliminaries

Let us consider a positive definite matrix \(P>0\) with \(P\in\mathbb{R}^{n\times n}\), and a vector \(v\in\mathbb{R}^{n}\); the norm of \(v\) with respect to \(P\) (the \(P\)-norm) is

\[\|v\|_{P}=\sqrt{v^{T}Pv}. \tag{1}\]

The ellipsoid of size \(\rho\) centered at \(c\in\mathbb{R}^{n}\) is

\[\mathcal{E}(\rho,c)=\left\{v\in\mathbb{R}^{n}\mid\|v-c\|_{P}^{2}\leq\rho\right\}.\]

A linear time-invariant (LTI) continuous-time system is described by

\[\dot{x}=Ax+Bu \tag{2}\]

where \(x\in\mathbb{R}^{n}\) represents the state of the system, \(u\in\mathbb{R}^{p}\) is the input vector, the matrix \(A\in\mathbb{R}^{n\times n}\), and the matrix \(B\in\mathbb{R}^{n\times p}\). The output vector is \(y=Cx\) with \(y\in\mathbb{R}^{q}\) and \(C\in\mathbb{R}^{q\times n}\). Let us consider a state feedback controller \(u=-Kx\) that defines the closed-loop system

\[\dot{x}=(A-BK)x \tag{3}\]

where the matrix \(A-BK\) is Hurwitz. Since the controlled system is asymptotically stable, there exists a positive definite matrix \(P>0\) with \(P\in\mathbb{R}^{n\times n}\) that satisfies the Lyapunov equation

\[(A-BK)^{T}P+P(A-BK)=-Q \tag{4}\]

where \(Q\in\mathbb{R}^{n\times n}\) and \(Q>0\). The ellipsoid centered at the origin, \(\mathcal{E}(\rho,0)\), is a Lyapunov level set which is positively invariant. Moving the system to another equilibrium point (or setpoint) \(x_{sp}\), the new control law is \(u=-K(x-x_{sp})\), and the closed-loop system can be rewritten as

\[\dot{x}=(A-BK)(x-x_{sp}). \tag{5}\]

The Lyapunov analysis remains the same except that the origin of the system is translated to \(x_{sp}\), so the invariant level set becomes the ellipsoid \(\mathcal{E}(\rho,x_{sp})\). In case the state of the system is not measurable, and only the measurements \(y\) are available, a state estimation \(\hat{x}\in\mathbb{R}^{n}\) can be used to close the loop (a minimal numerical sketch of the \(P\)-norm machinery in Eqs. (1)-(5) is given below).
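The sketch below computes \(P\) from the Lyapunov equation (4) for a toy stabilized system and evaluates the \(P\)-norm and ellipsoid-membership checks used throughout the paper. The double-integrator matrices, gain, and \(Q\) are placeholders chosen for illustration, not the UAV model used by the authors; \(\rho_{s}=0.0012\) is the value reported later in the experiments.

```python
# Illustrative sketch (not the authors' code): solve (A-BK)^T P + P (A-BK) = -Q
# (Eq. 4) and evaluate the P-norm / ellipsoid checks of Eqs. (1) and (9).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # placeholder double integrator
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])                 # any stabilizing gain
Acl = A - B @ K                            # closed-loop matrix, Eq. (3)
Q = np.eye(2)

P = solve_continuous_lyapunov(Acl.T, -Q)   # Lyapunov solution of Eq. (4)

def p_norm_sq(v, P):
    """Squared P-norm, ||v||_P^2 = v^T P v (Eq. 1)."""
    return float(v @ P @ v)

def recovered(x, x_sp, rho_s=0.0012):
    """Recovery check of Eq. (9): is x inside the ellipsoid E(rho_s, x_sp)?"""
    return p_norm_sq(x - x_sp, P) <= rho_s

x, x_sp = np.array([0.01, 0.0]), np.zeros(2)
print(p_norm_sq(x - x_sp, P), recovered(x, x_sp))
```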
If the system is observable, thanks to the separation principle it is possible to design a deterministic observer

\[\dot{\hat{x}}=A\hat{x}+Bu+L\left(C\hat{x}-y\right). \tag{6}\]

By defining the estimation error \(e\triangleq x-\hat{x}\), and substituting \(y\) with \(Cx\), the dynamics of the estimation error is

\[\dot{e}=(A-LC)e. \tag{7}\]

Thanks to the observability property of the system, it is possible to design \(L\) so that the matrix \((A-LC)\) is Hurwitz. The new control input is now

\[u=-K(\hat{x}(t)-x_{sp}) \tag{8}\]

## III Software Rejuvenation

Fig. 1 describes the SR approach over time. At the beginning of the mission the drone is in secure control (\(SC\)) mode, which means that the control software is not vulnerable to attacks (e.g. not connected to the communication network). Before switching to mission control (\(MC\)) mode, an image of the run-time software can be saved in a protected memory location at a checkpoint (\(CP\)), since the system is assumed to be clean. During \(MC\) the drone can communicate through the communication network and is therefore vulnerable to cyber-attacks. To avoid possible catastrophic consequences of a worst-case attack, a protected timer triggers the software refresh before it is too late to prevent any irreversible damage to the system. The amount of time the system is in \(MC\) mode is indicated by \(T_{MC}\). During software refresh, the saved clean image of the run-time software is rolled back (\(RB\)), and the time needed for this operation is indicated with \(T_{RB}\). During \(RB\), the control input is kept constant and equal to the last provided. The total time the system is under unknown control is \(T_{UC}=T_{MC}+T_{RB}\).

Fig. 1: Software Rejuvenation timeline.

Fig. 2: SR mode-transition graph. Unlabeled transitions occur immediately after the operations for the preceding mode are completed, or when the time indicated for the mode has elapsed.

Fig. 2 shows the mode-switching graph and offers more details, in particular for the recovery and setpoint update. For the moment we consider that \(x(t)\) is available and is used to compute the control law; hence the time spent by the system in SC mode depends on the following condition

\[\|x(t)-x_{sp}\|_{P}^{2}\leq\rho_{s} \tag{9}\]

which determines that the system has been fully recovered, with \(0<\rho_{s}<1\). Since after \(RB\) the software is clean and the system is in \(SC\) mode, and \(x_{sp}\) is the same saved during the previous \(CP\), all the information used in (9) is not corrupted. If there is no attack, the time spent in \(SC\) mode can last only the period needed to check (9).

### _Safety and Setpoint Update_

For a given setpoint \(x_{sp}\), the safety set is provided by the ellipsoid \(\mathcal{E}(1,x_{sp})\), which is an invariant set for the controlled system (5). Considering a \(\rho_{m}\) such that \(0<\rho_{s}<\rho_{m}<1\), we compute \(T_{MC}\) as the time for which, \(\forall x(t)\in\mathcal{E}(\rho_{m},x_{sp})\), \(x(t)\) is always recoverable into \(\mathcal{E}(\rho_{s},x_{sp})\) [12]. For the trajectory tracking, we assume that the setpoints \(x_{sp}\) are generated along the line that joins two waypoints \(w_{i}\) and \(w_{i+1}\). The safety condition for the setpoint transition is

\[\|x(t)-x_{sp}\|_{P}^{2}\leq\rho_{m}\Rightarrow\|x(t)-x_{sp}^{\prime}\|_{P}^{2}\leq\rho_{m} \tag{10}\]

where \(x_{sp}^{\prime}\) is the new setpoint [11].
Fig. 3 shows the safe setpoint transition scheme. Assuming that the state is available, the above condition is verified if \(x_{sp}\) is updated as

\[x_{sp}^{\prime}=x_{sp}+(\sqrt{\rho_{m}}-\sqrt{\rho_{s}})\mathbf{v} \tag{11}\]

where \(\mathbf{v}\) is the unitary vector along the trajectory.

### _State Estimation_

In a real application, only the state estimation \(\hat{x}\) is available. This information is stored in the run-time software, and during \(RB\) the state estimation is computed starting from the initial conditions saved during \(CP\). Since \(\hat{x}\) is used to compute \(u\) (8), after each \(RB\) there is the effect of the estimation error \(e\), which may increase after each SR cycle, making the system unstable. Moreover, \(\hat{x}\) is now used for evaluating (9), and a large estimation error can make the SR scheme switch modes when the system should not be exposed to possible attacks. In this situation, the safety conditions for \(T_{MC}\) and setpoint generation can remain the same by simply replacing \(x(t)\) with \(\hat{x}(t)\), provided \(T_{est}\) is introduced [13]. \(T_{est}\) is the minimum time for the system to be in \(SC\) mode after software refresh in order to reduce the estimation error and keep the system stable and safe against attacks.

### _Problem Statement_

In this paper, we consider a UAV whose nonlinear dynamics can be reduced into the form of (2) and stabilized around a setpoint \(x_{sp}\) by a linear controller (3). We also assume that the state \(x\) is not directly accessible and a state observer (6) is needed to compute \(u\) as in (8) and to evaluate the SR condition (9). We assume that \(\mathcal{E}(\rho_{s},x_{sp})\), \(\mathcal{E}(\rho_{m},x_{sp})\), \(T_{MC}\), and \(T_{est}\) are given and that they ensure safety against cyber-attacks [10]. In this paper, we are interested in improving the performance of the system protected via SR when the system is not under attack. Specifically, we redesign (11) as

\[x_{sp}^{\prime}=x_{sp}+\alpha\mathbf{v} \tag{12}\]

with

\[\alpha=f(\hat{x},x_{sp};\theta) \tag{13}\]

where \(f\) is a neural network with parameters \(\theta\) optimized with RL. This formulation aligns with reference governor (RG) [14, 15] and explicit reference governor (ERG) [16] frameworks. The safe-trajectory controller for SR can be regarded as an ERG, albeit with distinct operating conditions. With our method we aim to show that the effects of the SR scheme can be captured by a learning technique, whereas they would be difficult to model with traditional control tools, as shown in [10]. The goal is to demonstrate that the RL approach improves performance in terms of reducing the mission time, while the safety conditions are satisfied. Finally, we also consider the presence of noise in the measurements.

## IV Learning Setpoint Generation

We consider a task where a UAV is required to navigate from a starting location A to a goal location B within a bounded 3D space free of obstacles. The UAV is controlled by (8), and we assume that an RL agent must effectively learn to modulate the displacement of a setpoint at discrete timesteps, based on input information related to the UAV's state. To accomplish this, the agent must select an appropriate value for the parameter \(\alpha\), which determines the magnitude of the displacement modulation. Between two consecutive decision-making points in simulation, disturbances from SR affect the UAV, including state estimation errors that depend on the agent's choice of \(\alpha\) (a minimal sketch of the setpoint update in Eqs. (11)-(12) is given below).
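As a concrete illustration of the setpoint update of Eqs. (11)-(12), the sketch below advances the setpoint toward the next waypoint by a step \(\alpha\) clipped to the safe interval \([0,\sqrt{\rho_{m}}-\sqrt{\rho_{s}}]\). It is not the authors' code: the setpoint is treated as a 3-D position for simplicity, while \(\rho_{m}=0.01\) and \(\rho_{s}=0.0012\) and the start/goal points (1,1,1) and (5,5,5) are the values used in the paper's experiments.

```python
# Illustrative sketch (not the authors' code): safe setpoint update, Eqs. (11)-(12).
# The baseline uses the fixed step sqrt(rho_m) - sqrt(rho_s); a learned policy would
# supply alpha = f(x_hat - x_sp; theta), clipped to the same safe interval.
import numpy as np

RHO_M, RHO_S = 0.01, 0.0012
ALPHA_MAX = np.sqrt(RHO_M) - np.sqrt(RHO_S)

def update_setpoint(x_sp, waypoint, alpha):
    """x_sp' = x_sp + alpha * v, with v the unit vector toward the next waypoint."""
    v = waypoint - x_sp
    v = v / np.linalg.norm(v)
    alpha = float(np.clip(alpha, 0.0, ALPHA_MAX))   # keep condition (10) satisfied
    return x_sp + alpha * v

x_sp = np.array([1.0, 1.0, 1.0])
goal = np.array([5.0, 5.0, 5.0])
print(update_setpoint(x_sp, goal, alpha=0.05))      # one conservative step
```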
Generally, a higher value of \(\alpha\) leads to a greater degree of disturbance experienced by the UAV. The main objective of the learning agent is to identify an optimal value of \(\alpha\) that can modulate the UAV's displacement in a way that respects the safety constraints (discussed in Section III-A) while simultaneously improving the speed of the UAV. In contrast to (11), which employs a more conservative approach that prioritizes safety but overlooks performance, our proposed approach seeks to optimize both safety and performance. Specifically, by using a learning agent that can dynamically adjust the value of \(\alpha\) in response to the UAV's state, we can achieve a better balance between safety and performance, leading to improved task outcomes.

Fig. 3: Safe setpoint transition scheme.

### _Reinforcement Learning_

We formulate this task as a Markov Decision Process [17] (MDP), which is defined as a tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\gamma\rangle\) where:

* \(\mathcal{S}\) denotes the continuous state space; in our case the state at decision timestep \(k\) is \(s_{k}=\hat{x}-x_{sp}\),
* \(\mathcal{A}\) denotes the continuous action space; in our case the action at timestep \(k\) is \(a_{k}=\alpha\in[0,1]\),
* \(\mathcal{T}\) is the transition probability for arriving in state \(s_{k+1}\) when executing action \(a_{k}\) from state \(s_{k}\),
* \(\mathcal{R}\) is the reward function that defines the reward received by the agent for transitioning from state \(s_{k}\) to state \(s_{k+1}\) when taking action \(a_{k}\),
* and \(\gamma\) is the discount factor that determines the importance of future rewards relative to immediate rewards. In this case, it can be used to model the trade-off between short-term and long-term objectives.

The objective of the agent is to maximize the expected return \(G_{k}=\sum_{l=0}^{\infty}\gamma^{l}r_{k+l+1}\) from each state \(s_{k}\), where \(r_{k}\) denotes a specific instance of the reward function, obtained by evaluating it at a specific state-action pair. The reward function in our case is defined in Section V-A. A solution to an MDP is obtained by finding an optimal policy \(\pi\left(\cdot\mid s_{k}\right)\) that maps a state \(s_{k}\) to a distribution over possible actions that lead the agent to higher sums of rewards. The probability of performing action \(a_{k}\) in state \(s_{k}\) is denoted by \(a_{k}\sim\pi\left(a\mid s_{k}\right)\). One way to obtain an optimal policy is to use value-based RL methods. The action value \(Q^{\pi}(s,a)=\mathbb{E}\left[G_{k}\mid s_{k}=s,a_{k}=a\right]\) is the expected return for selecting action \(a\) in state \(s\) and following policy \(\pi\). The optimal value function \(Q^{*}(s,a)=\max_{\pi}Q^{\pi}(s,a)\) gives the maximum action value for state \(s\) and action \(a\) achievable by any policy. Similarly, the value of state \(s\) under policy \(\pi\) is defined as \(V^{\pi}(s)=\mathbb{E}\left[G_{k}\mid s_{k}=s\right]\) and is simply the expected return for following policy \(\pi\) from state \(s\). The optimal state value function is given by \(V^{*}(s)=\max_{a\in\mathcal{A}}Q^{\pi^{*}}(s,a)\). Value functions can be used to define a policy (e.g. \(\epsilon\)-greedy). RL methods that estimate value functions are usually called _critic methods_. In many real-world scenarios, the state and action spaces of an MDP are so large that it is impractical to enumerate all possible combinations. For an agent to learn a successful policy it is necessary to be able to estimate value functions of unseen states.
The action value function could be represented using a function approximator, such as a neural network. Let \(Q(s,a;\theta)\) be an approximate action-value function with parameters \(\theta\). The updates to \(\theta\) can be derived from a variety of reinforcement learning algorithms which aim to directly approximate the optimal action value function: \(Q^{*}(s,a)\approx Q(s,a;\theta)\). In contrast to value-based methods, policy-based model-free methods directly parameterize the policy \(\pi(a\mid s;\theta)\) and update the parameters \(\theta\) by performing, typically approximate, gradient ascent on \(\mathbb{E}\left[G_{k}\right]\). One example of such a method is the REINFORCE family of algorithms [18], which updates the policy parameters \(\theta\) in the direction \(\nabla_{\theta}\log\pi\left(a_{k}\mid s_{k};\theta\right)G_{k}\). Such types of methods are called _actor methods_. As discussed, we can introduce an estimation of the return in the form of a critic, which results in Actor-Critic methods.

In this work we employ the Soft Actor Critic [19] (SAC) algorithm to demonstrate the importance of learning methods in improving performance and ensuring safety in control applications. SAC is an entropy-regularized RL method that changes the RL problem (i.e., obtaining an optimal policy \(\pi^{*}\)) to:

\[\pi^{*}=\arg\max_{\pi}\mathbb{E}\left[\sum_{k=0}^{K}\gamma^{k}\left(R\left(s_{k},a_{k}\right)+\beta H\left(\pi\left(\cdot\mid s_{k}\right)\right)\right)\right] \tag{14}\]

where the temperature parameter \(\beta\) controls the stochasticity of the optimal policy, as it determines the relative importance of the entropy \(H\) of the policy term against the reward. SAC incorporates modified action and state value functions that offer the agent a bonus proportionate to the policy's entropy. This approach renders policies optimized for maximum entropy ([20, 21]) more robust, allowing for a greater ability to respond successfully to unexpected perturbations during testing. Additionally, optimizing for maximum entropy during training can improve both the algorithm's robustness to hyperparameters and its sample efficiency, making SAC a useful tool for control problems [22].

## V Experimental setup and Simulation Results

### _Simulation Environment_

To simulate the interaction between the RL agent and the UAV system, we developed a customized OpenAI gym environment. The environment models the nonlinear dynamics of the UAV system [10], along with the effects of the software rejuvenation and recovery periods. The state estimation \(\hat{x}(t)\), computed as (6), is evaluated after each cycle of the SR scheme of Fig. 2, with \(T_{MC}=200\) ms, \(T_{RB}=10\) ms, and \(T_{est}=1.7\) s. Those numbers have been computed according to [10]. The total time needed for one cycle of SR is at least \(1.910\) s. At approximately every 2 s interval, the RL agent receives the current state \(s_{k}\) of the system and selects an action \(a_{k}=\alpha\), indicating its displacement from the current location as depicted in Fig. 3. Based on (11), the \(\alpha\) value is bounded as \(0\leq\alpha\leq\sqrt{\rho_{m}}-\sqrt{\rho_{s}}\) for safety considerations, and we set the size of the outer ellipsoid \(\rho_{m}=0.01\) and the inner ellipsoid \(\rho_{s}=0.0012\). In this experiment, the drone starts from (1,1,1) and stops when it reaches (5,5,5).
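To make the training setup described above concrete, the sketch below wires a SAC agent (two 256-unit hidden layers, 20,000 training steps, observation \(\hat{x}-x_{sp}\), action \(\alpha\in[0,1]\)) to a gym-style environment. This is only an illustrative sketch, not the authors' environment: the use of `gymnasium` and `stable_baselines3` is my assumption, the state dimension of 12 is assumed, and the environment dynamics are a crude placeholder for the nonlinear UAV model with SR effects; the placeholder reward merely stands in for the reward of Eq. (15) defined below.

```python
# Illustrative sketch (not the authors' code): SAC training loop for the setpoint
# modulation task. The environment dynamics and reward below are placeholders.
import numpy as np
import gymnasium as gym
from stable_baselines3 import SAC

STATE_DIM = 12  # assumed UAV state dimension (not stated in the paper)

class SRSetpointEnv(gym.Env):
    observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(STATE_DIM,), dtype=np.float32)
    action_space = gym.spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.s = np.zeros(STATE_DIM, dtype=np.float32)      # x_hat - x_sp
        return self.s, {}

    def step(self, action):
        self.t += 1
        alpha = float(action[0])
        # Placeholder for one full SR cycle (~2 s) of simulated UAV dynamics:
        # larger alpha yields a larger disturbance on the tracking error.
        noise = self.np_random.normal(0.0, 0.01 * (1.0 + alpha), STATE_DIM)
        self.s = (self.s + noise).astype(np.float32)
        reward = -float(np.sum(self.s ** 2))                 # stand-in for Eq. (15)
        truncated = self.t >= 100
        return self.s, reward, False, truncated, {}

model = SAC("MlpPolicy", SRSetpointEnv(),
            policy_kwargs=dict(net_arch=[256, 256]), verbose=0)
model.learn(total_timesteps=20_000)
```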
**Reward Function:** The reward function considers the effect of the action generated by the agent on the SR period and how far the UAV is from the goal:

\[R(s_{k},a_{k})=-r_{\texttt{mpn}}-\|x_{k}-x_{\text{goal}}\|_{P}^{2} \tag{15}\]

where \(r_{\texttt{mpn}}\) is the maximum \(p\)-norm of all \(\|x(t)-x_{sp}\|_{P}^{2}\) evaluated during the entire SR cycle. The ground-truth state when the SR cycle has been completed is indicated by \(x_{k}\). At any point in time, if the system was becoming unstable (\(\|x(t)-x_{sp}\|_{P}^{2}>10\)), we terminated the simulation with \(r_{\texttt{mpn}}=10\).

**Baseline Method:** This method refers to the setpoint update (11). To make a fair comparison with the RL method, we used the following computation

\[\alpha(\hat{x},x_{sp})=\sqrt{\rho_{m}}-\|\hat{x}(t)-x_{sp}\|_{P} \tag{16}\]

where \(\hat{x}(t)\) is the current state estimation, \(x_{sp}\) denotes the current setpoint, and \(\rho_{m}\) is the size of the outer ellipsoid set. The new formulation (16) provides less conservative values than (11), because by (9) \(\|\hat{x}(t)-x_{sp}\|_{P}\leq\sqrt{\rho_{s}}\).

**Reinforcement Learning Method:** The RL method adopts the Soft Actor-Critic algorithm, with both Actor and Critic having two fully connected hidden layers of 256 hidden units each. The model takes the difference between the state estimation and the current setpoint, \(\hat{x}(t)-x_{sp}\), as input and learns the optimal \(\alpha\) value to generate the next setpoint under the safety constraints. We train the model for 20,000 steps on an NVIDIA GeForce RTX 3090, and the training takes approximately an hour to converge. During training the policy is stochastic, whereas during evaluation it is deterministic.

### _Simulation Results_

Under the baseline method, the drone completes the task in around 116 s, and the drone's 3D trajectory is shown in Fig. 4. The baseline method produces \(\alpha\) based on equation (16) and generates setpoints shown as red dots in the figure. Our objective is to minimize the time required to reach the goal while ensuring that the system satisfies the safety conditions. Our RL method reduces the total time required to complete the task, achieving the goal within 106 s, as demonstrated in the last three plots of Fig. 5.

Fig. 4: Drone trajectory with the baseline method. The drone starts from (1,1,1) and ends at (5,5,5). The red dots are the waypoints produced by the baseline method. The blue line shows the actual trajectory of the drone.

In order to explain the benefit of the RL approach, we consider the behavior of \(\|x(t)-x_{sp}\|_{P}^{2}\) over time. This function shows how far the system is from the boundaries of the ellipsoids \(\mathcal{E}(\rho_{m},x_{sp})\) and \(\mathcal{E}(\rho_{s},x_{sp})\) that guarantee the safety of the system under SR. Fig. 6 shows the first 5 s of the baseline method along with the safety bounds. The sequence (A, B, C, D, A) forms a complete SR cycle, as shown in Fig. 2. At the local minima A, the setpoint is updated, and the system enters the \(MC\) mode from B to C. To ensure safety, B points should be less than \(\rho_{m}\), so that \(x\in\mathcal{E}(\rho_{m},x_{sp})\). At point C, the software refreshes to the same value as the previous cycle's point B. From C to A, the system is in \(SC\) mode, and \(x\) can be outside \(\mathcal{E}(\rho_{m},x_{sp})\), as in D, which should be kept small to avoid stability issues. In Fig. 7, we compare the \(\|x(t)-x_{sp}\|_{P}^{2}\) values between the baseline method and the RL method.
Fig. 5: Drone trajectories of the baseline method (top three plots) and the RL method (bottom three plots). The baseline method takes 116 seconds, and the RL method takes 106 seconds to complete the same task.

Fig. 6: Example of \(p\)-norm analysis of the actual state w.r.t. the current \(x_{sp}\), selected from the first 5 s of our simulation with the baseline method. The red lines indicate the safety bounds \(\rho_{m}=0.01\) and \(\rho_{s}=0.0012\). (A, B, C, D, A) is a full checkpoint update cycle in Fig. 2.

The baseline method has B points at around 0.0075 during \(MC\), while the RL method pushes B points up to 0.009, reducing the gap with the upper safety bound.

## VI Conclusions

This work demonstrated the effectiveness of incorporating Reinforcement Learning and optimal control methods in the design of safe and efficient UAV navigation systems. Our approach optimizes a reward function that balances safety and performance and incorporates Software Rejuvenation (SR) protection mechanisms to mitigate cyber-attacks. Results from simulations of UAVs show that our approach improves the system's performance while respecting the safety bounds compared to traditional methods. This work contributes to the growing body of research that combines control theory and Reinforcement Learning to address critical safety issues in cyber-physical systems. Future work can explore the application of our approach in other domains (e.g., bipedal walking) and investigate its impact on system performance and safety.

## VII Acknowledgements

The authors would like to express their gratitude to Tao Jin and Prof. Anthony Rowe from the Wireless Sensing and Embedded Systems Lab at Carnegie Mellon University for their invaluable support in providing computational resources for this research.
2306.15953
Angle Sensitive Pixels for Lensless Imaging on Spherical Sensors
We propose OrbCam, a lensless architecture for imaging with spherical sensors. Prior work in lensless imaging techniques has focused largely on using planar sensors; for such designs, it is important to use a modulation element, e.g. amplitude or phase masks, to construct an invertible imaging system. In contrast, we show that the diversity of pixel orientations on a curved surface is sufficient to improve the conditioning of the mapping between the scene and the sensor. Hence, when imaging on a spherical sensor, all pixels can have the same angular response function, such that the lensless imager comprises pixels that are identical to each other and differ only in their orientations. We provide the computational tools for the design of the angular response of the pixels in a spherical sensor that leads to well-conditioned and noise-robust measurements. We validate our design in both simulation and a lab prototype. The implication of our design is that lensless imaging can be enabled easily for curved and flexible surfaces, thereby opening up a new set of application domains.
Yi Hua, Yongyi Zhao, Aswin C. Sankaranarayanan
2023-06-28T06:28:53Z
http://arxiv.org/abs/2306.15953v1
# Angle Sensitive Pixels for Lensless Imaging on Spherical Sensors

###### Abstract

We propose OrbCam, a lensless architecture for imaging with spherical sensors. Prior work in lensless imaging techniques has focused largely on using planar sensors; for such designs, it is important to use a modulation element, e.g. amplitude or phase masks, to construct an invertible imaging system. In contrast, we show that the diversity of pixel orientations on a curved surface is sufficient to improve the conditioning of the mapping between the scene and the sensor. Hence, when imaging on a spherical sensor, all pixels can have the same angular response function, such that the lensless imager comprises pixels that are identical to each other and differ only in their orientations. We provide the computational tools for the design of the angular response of the pixels in a spherical sensor that leads to well-conditioned and noise-robust measurements. We validate our design in both simulation and a lab prototype. The implication of our design is that lensless imaging can be enabled easily for curved and flexible surfaces, thereby opening up a new set of application domains.

## 1 Introduction

Can we build an imager that is thin and conforming to a curved surface? Such an imager would be invaluable for many applications. For example, it can be wrapped on a ball to produce a panorama. It can enable flexible robots to see their environment and can be pressed against human skin to accurately sense blood flows. While recent advancements in flexible electronics [16] allow us to measure light intensity on a curved or flexible surface, it is very challenging to design lenses that are of a thin form factor and yet focus images on curved surfaces. On the other hand, lensless imaging has delivered imaging solutions that are lightweight, compact and of thin form factor [5]. So far, lensless imaging techniques have been developed only for planar sensors, and we aim to incorporate flexible and curved sensors into lensless imagers. In this paper, we take a step towards thin, surface-conforming imager design by proposing and analyzing the performance of a thin, lensless imager on a well-studied curved surface -- the sphere.

The missing piece of designing thin imagers that sense on a curved surface lies in producing the modulation element. In planar imagers, a modulation element, traditionally a lens, is used to provide a diverse set of measurements so that the inverse problem of image recovery is well conditioned. Consider the situation where a point light source is placed far from the sensor. In the absence of a modulation element, all pixels on a small planar sensor would record nearly identical measurements, as they receive nearly identical amounts of light. Thin modulation elements such as amplitude masks [2], phase masks [1], and refractive [22] and diffractive [6] elements introduce diversity in the measurements so that the effective pixel response is rich and diverse, enabling a well-conditioned inverse problem. However, such elements are difficult to manufacture precisely for use on a curved or potentially flexible surface. While designing a modulation element to induce diversity in the measurements of a curved sensor is challenging, curved surfaces present us with another source of measurement diversity that is absent in planar imagers, namely, _the orientation of the pixels_.
On a curved sensor, the inherent diversity of pixel orientations is often sufficient to resolve the scene at a high angular resolution.

Figure 1: We present a thin form-factor lensless imager on a spherical surface. Imaging on a non-planar surface has many advantages, for example, imaging a large angle of view without radial distortion and vignetting. This image from inside of a cube built with foam mat captures a \(180^{\circ}\times 133^{\circ}\) angle of view with 6030 pixels.

For example, if we had a spherical sensor comprised of photodiodes with a very narrow cone of view, we could image a scene without any additional modulation. But clearly, such a solution will have low light efficiency since we restrict the amount of light that enters each photodiode; this is especially true when we seek to resolve the scene at high resolution. In this paper, we propose a design for lensless imaging using spherical sensors, where by tailoring the angular response of the pixels, we resolve the scene at high resolution without a commensurate loss in light throughput.

We envision a lensless imaging system with a bare spherical sensor, where each photodiode has an identical but carefully designed angular response. Unlike previous lensless cameras with planar sensors, we are able to have the same angular response at all pixels since spherical sensors have a diversity of pixel orientations. As with previous lensless designs, the image of the scene is computationally recovered from the measurements, with or without image priors. The proposed design, where all pixels are engineered with the same angular response, has many advantages over previous lensless imagers. The angular-response engineering can be done during the CMOS process of the pixels. The calibration procedure is simple, as we only need to measure the angular response of one pixel. Finally, regardless of the orientation of the pixels, the measurements always sample from the convolution between the scene and the angular response; this means the imager can be designed to be flexible, but the conditioning of the forward process is stable.

Contributions. This paper presents a new design for lensless imaging on a spherical sensor and makes the following contributions:

* _Introduce pixel orientations as a source of measurement diversity._ As mentioned above, we exploit the insight that pixel orientation is in essence a form of modulation and provides diversity in measurements. This facilitates a simple design where all pixels on the spherical sensor can have the same angular response.
* _Design of optimal azimuth-symmetric angular response function._ Under the assumption of identical angular responses, we show that the sensor measurements are samples from an isotropic spherical convolution between the scene and the angular response, provided (a) the scene is sufficiently far away from the sensor, and (b) the angular response is also azimuthally-symmetric (with the azimuth angle calculated with respect to the pixel orientation). There are many benefits to be derived from the choice of azimuthal symmetry. First, the image formation can be computed inexpensively by scaling the spherical harmonic coefficients of the scene. Second, modeling the imaging as a spherical convolution provides analytical expressions for the reconstruction error for a known angular response. As a consequence, the search for the optimal azimuthally-symmetric masks is tractable.
* _Validation via prototype experiments._ We verify the design of the lensless camera, and in particular the angular response function, using a lab prototype. The prototype consists of a photodiode that is rotated around to simulate a spherical surface; the angular sensitivity of the photodiode is manipulated using an amplitude mask. Fig. 1 shows a reconstruction from our prototype. In addition to reconstructions from real data, we also provide numerous simulation results that highlight the properties of the proposed lensless camera.

## 2 Related Work

We discuss prior art in lensless imaging, as well as imaging on curved and spherical surfaces.

### Lensless Imaging on Planar Sensors

Lensless imaging techniques have been proven effective in miniaturizing cameras. Lensless imagers use a spatial light modulator as an alternative to a traditional camera lens, and reconstruct captured scenes computationally. Most prior work models the sensor measurements, \(\mathbf{z}\), as a linear combination of scene intensities, \(\mathbf{x}\),

\[\mathbf{z}=\Phi\mathbf{x} \tag{1}\]

where \(\Phi\) is the (linear) measurement operator. The scene \(\mathbf{x}\) is recovered by inverting (1). A common approach for implementing a well-conditioned measurement matrix is to cover the sensor with an amplitude or phase mask. Asif _et al_. [2] built an imager only 500 \(\upmu\)m thicker than the sensor by using an amplitude mask that is separable, so that the forward model can be simplified into two 1D convolutions. Boominathan _et al_. [4] and Antipa _et al_. [1] employed thin transparent phase masks, and recovered 3D voxels in the scene by modeling the measurement matrix as a sum of 2D cropped convolutions. Stork and Gill [6] created ultra-miniature imagers (\(\sim\)100 \(\upmu\)m) with phase anti-symmetric spiral gratings integrated on planar sensors.

Figure 2: The proposed imager consists of a set of pixels with engineered angular response on a sphere. This figure shows a design using an amplitude mask for modifying the angular response of a pixel.
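To make the recovery step of Eq. (1) concrete, the sketch below inverts a small linear measurement model with Tikhonov-regularized least squares. This is an illustrative stand-in, not the method of any of the cited works: the random \(\Phi\), problem sizes, noise level, and regularization weight are all assumptions, whereas practical lensless pipelines use calibrated, structured operators and iterative solvers with image priors.

```python
# Illustrative sketch (not from the paper): recover x from z = Phi @ x (Eq. 1)
# via Tikhonov-regularized least squares on a random stand-in operator.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_scene = 500, 400
Phi = rng.standard_normal((n_pixels, n_scene)) / np.sqrt(n_pixels)
x_true = np.clip(rng.standard_normal(n_scene), 0, None)      # nonnegative scene
z = Phi @ x_true + 0.01 * rng.standard_normal(n_pixels)      # noisy measurements

lam = 1e-2                                                    # regularization weight
# Solve (Phi^T Phi + lam I) x = Phi^T z.
x_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_scene), Phi.T @ z)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```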
Krishnan and Nayar [13] propose and analyze the design of a spherical imager consisting of a spherical sensor wrapped around a ball lens. Ball lenses have been used extensively with spherical sensors; however, such lenses add significantly to the bulk and weight of the imaging system and, further, cannot be used when the sensor is on the exterior of the sphere, i.e., the pixels face outwards. Microlens arrays reduce the length of the optical axis and lie closer to the curved sensor surface, and have been demonstrated to be useful for producing wide field-of-view images [21]. Sims _et al_. [19][18] designed elastic microlens arrays that can adapt to the curvature of the sensor. Alternatively, gradient refractive index lens designs were proposed for focusing a scene with a wide angle of view onto a curved sensor [10]. They construct a prototype which can identify the angular position of a light source; however, the prototype was not extended to high-resolution imaging. Compared to lens-based designs, lensless cameras place modulators even closer to the sensor surface, resulting in a more compact form factor. Koppelhuber and Bimber [12] provide a flexible imager design using a flexible Soller collimator and luminescent concentrators to guide light onto line-scan sensors. However, this imager assumes that a focused image is formed on the luminescent surface, which is generally hard to obtain.

### Modeling Functions on the Sphere

Understanding convolution on the sphere helps to clarify our choice of angular response function and placement of pixel orientations. The underlying modeling and analysis is greatly simplified via the use of spherical harmonics. In addition to imaging applications, functions on the sphere have also been used to model light transport in rendering applications [17, 20]. We give a brief description of the properties most relevant to our results while referring the reader to [7] for a detailed description.

**Spherical harmonics** are an orthonormal basis for square-integrable functions \(\mathcal{L}^{2}(\mathbb{S}^{2})\) on the sphere. The spherical harmonic of degree \(l\) and order \(m\) is a complex-valued function on the sphere, with \(l=0,1,\ldots,\infty\) and \(m=-l,\ldots,l\), \[Y_{l,m}(\theta,\phi)=\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}P_{l}^{m}(\cos\theta)e^{im\phi}, \tag{2}\] where \(P_{l}^{m}(x)\) are associated Legendre functions, and \(\theta\in[0,\pi]\), \(\phi\in[0,2\pi)\) define a point on the sphere by its altitude and azimuth. Any square-integrable function on the sphere \(f\in\mathcal{L}^{2}(\mathbb{S}^{2})\) can be decomposed into a sum of spherical harmonic basis functions weighted by its spherical harmonic coefficients \(f_{l,m}\), \[f(\mathbf{r})=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}f_{l,m}Y_{l,m}(\mathbf{r}). \tag{3}\]

**Spherical harmonic coefficients** \(f_{l,m}\) of a function \(f(\mathbf{r})\) can be obtained by the spherical harmonics transform, \[f_{l,m}=\int_{\mathbf{r}\in\mathbb{S}^{2}}f(\mathbf{r})Y_{l,m}(\mathbf{r})d\mathbf{r}. \tag{4}\]

**Bandlimit on the sphere.** We say the function \(f\) has bandlimit \(L\) if its spherical harmonic coefficients \(f_{l,m}=0\) for \(l\geq L\). A function with bandlimit \(L\) satisfies \[f(\mathbf{r})=\sum_{l=0}^{L-1}\sum_{m=-l}^{l}f_{l,m}Y_{l,m}(\mathbf{r}). \tag{5}\]

**Azimuthally-symmetric functions.** When a function \(g\in\mathcal{L}^{2}(\mathbb{S}^{2})\) is azimuthally-symmetric, i.e. \(g(\theta,\phi)=g(\theta)\), it only contains spherical harmonics of order \(m=0\).
An azimuthally-symmetric function \(g\) satisfies \[g(\mathbf{r})=\sum_{l=0}^{\infty}g_{l,0}Y_{l,0}(\mathbf{r}). \tag{6}\]

**Isotropic convolutions on the sphere.** Convolving any signal with a bandlimited azimuthally-symmetric signal can be conveniently computed by scaling the signal's spherical harmonic coefficients. The associated deconvolution also simplifies to scaling the spherical harmonic coefficients. Formally, let \(f(\mathbf{r})\) be a square-integrable signal on the unit sphere, and \(g(\mathbf{r})\) be a square-integrable azimuthally-symmetric signal on the unit sphere with bandlimit \(L\), where \(\mathbf{r}=(\theta,\phi)\) are points on the sphere with altitude \(\theta\in[0,\pi]\) and azimuth \(\phi\in[0,2\pi)\). Their spherical convolution is \[(f*g)(\mathbf{r})=\int_{\mathbf{s}\in\mathbb{S}^{2}}g^{*}(\mathbf{r}\cdot\mathbf{s})f(\mathbf{s})d\mathbf{s}, \tag{7}\] where \(g^{*}(\mathbf{r})\) denotes the conjugate of \(g(\mathbf{r})\), and \(\mathbf{r}\cdot\mathbf{s}\) is the inner product between the vectors \(\mathbf{r}\) and \(\mathbf{s}\). The spherical harmonic coefficients \((f*g)_{l,m}\) of \(f*g\) satisfy \[(f*g)_{l,m}=\left\{\begin{array}{cc}\sqrt{\frac{4\pi}{2l+1}}f_{l,m}g_{l,0}^{*}&l=0,...,L-1,\;m=-l,...,l\\ 0&\text{otherwise}\end{array}\right. \tag{8}\] We utilize this observation to design angular responses that are azimuthally-symmetric. We choose our pixel orientations to measure \(f*g\) according to the sampling scheme with fast exact spherical harmonics transform [15]; this allows us to recover \(f_{l,m}\) up to bandlimit \(L\).

## 3 A Lensless Spherical Imager

In this section, we discuss a simple instance of imaging on a curved surface, namely imaging on a sphere. We refer to the resulting imager design and its implementation as _OrbCam_.

### Forward model

We now derive the measurement model for our imaging system. The imaging system is modeled as a collection of \(N\) photodiodes; all of them have the same real-valued angular response function on the sphere \(g(\mathbf{r})\), where \(\mathbf{r}=(\theta,\phi)\) is expressed in Euler angles. We limit the discussion to azimuthally-invariant \(g(\cdot)\), _i.e._ \(g(\theta,\phi)=g(\theta)\), for tractable analysis. We assume the scene is a real-valued function on a sphere, with \(x(\mathbf{r})\) representing the radiance of light from direction \(\mathbf{r}\). This assumption is valid for scenes very far away compared to the radius of the spherical sensor. We also assume the orientations of the pixels \(\{(\theta[i],\phi[i]),i=1,\dots,N\}\) are known. Under these assumptions, the measurement made by the \(i\)-th pixel is the convolution of the scene and the angular response function, \(g(\mathbf{r})\), evaluated at \((\theta[i],\phi[i])\), _i.e._, \[y[i]=(x*g)(\theta[i],\phi[i])+n[i], \tag{9}\] where \(n[i]\) denotes the measurement noise.

### Design of the Pixel Angular Response

For open apertures, there is a trade-off between light throughput and the conditioning of the measurement matrix. Large apertures have high light throughput and produce measurements that are robust to measurement noise, but create ill-conditioned systems that cannot be inverted. Small apertures, _i.e._, pinholes, create well-conditioned systems but have very low light throughput and suffer from measurement noise. We next analyze the performance of imaging with a spherical sensor for a given pixel angular response function \(g(\cdot)\) and, subsequently, optimize it for well-conditioned recovery.
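To make this forward model concrete, the following minimal Python sketch (not from the paper; the function names, the bandlimit `L`, and the example aperture shape are illustrative assumptions) evaluates the spherical convolution of Eq. (9) in the spherical-harmonic domain: the order-0 coefficients \(g_{l,0}\) of an azimuthally-symmetric response are obtained by a simple quadrature of Eq. (4), and the measurement coefficients then follow by per-degree scaling as in Eq. (8).

```python
import numpy as np
from scipy.special import eval_legendre

L = 64                                           # recovery bandlimit (illustrative choice)
theta = np.linspace(0.0, np.pi, 4096)            # quadrature grid for the polar angle
d_theta = theta[1] - theta[0]
ls = np.arange(L)

def coeffs_l0(g):
    """Order-0 spherical-harmonic coefficients g_{l,0} of an azimuthally-symmetric
    response sampled on the theta grid above (Eq. (4) restricted to m = 0)."""
    Y_l0 = np.sqrt((2 * ls[:, None] + 1) / (4 * np.pi)) \
           * eval_legendre(ls[:, None], np.cos(theta)[None, :])
    return 2 * np.pi * (Y_l0 * g * np.sin(theta)).sum(axis=1) * d_theta

# Scaling factors g_hat_l of Eq. (10) for a 10-degree open aperture on a cosine-shaped
# bare pixel; a measured response g(theta) would be substituted here.
g_open = np.cos(theta) * (theta <= np.deg2rad(10.0))
g_hat = np.sqrt(4 * np.pi / (2 * ls + 1)) * coeffs_l0(g_open)

def measure(f_lm):
    """Noise-free measurement coefficients: y_{l,m} = g_hat_l * f_{l,m}.
    f_lm is an (L, 2L-1) array of scene coefficients with rows indexed by degree l
    (entries with |m| > l are zero for a real scene)."""
    return f_lm * g_hat[:, None]
```

In this representation, changing the mask only changes the vector of per-degree scale factors, which is what makes the analysis of recovery error in the next subsection tractable.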
The spherical harmonic coefficients of \(y[i]\) and \(n[i]\), denoted \(y_{l,m}\) and \(n_{l,m}\), can be computed by the spherical harmonics transform given by the sampling scheme. With the azimuthally-symmetric angular response constraint and the bandlimit assumption, the forward imaging process in (9) can be rewritten in the spherical harmonic domain, using (4), (5), (6), and (8), as \[y_{l,m}=\widehat{g}_{l}f_{l,m}+n_{l,m}, \tag{10}\] where \(\widehat{g}_{l}=\sqrt{\frac{4\pi}{2l+1}}\,g_{l,0}^{*}\) for \(l=0,\dots,L-1\), and \(m=-l,\dots,l\). Estimating \(f_{l,m}\) by \[\widehat{f}_{l,m}=y_{l,m}/\widehat{g}_{l}, \tag{11}\] results in the overall error \[\text{error}=\sum_{l,m}|\widehat{f}_{l,m}-f_{l,m}|^{2}=\sum_{l,m}|\widehat{g}_{l}^{-1}|^{2}|n_{l,m}|^{2} \tag{12}\]

**Metric for optimizing pixel angular response.** Assuming additive white Gaussian sensor noise, minimizing the expected value of the error is equivalent to maximizing \[\text{mask robustness}(g)=\left(\sum_{l=0}^{L-1}|\widehat{g}_{l}^{-1}|^{2}\right)^{-1} \tag{13}\] for recovering the scene within a bandlimit of \(L\) levels. Depending on the method used to modify the angular response function, we can search the feasible pixel angular responses under fabrication constraints, and select the mask that maximizes the mask robustness function.

**Searching for the optimal binary amplitude masks.** Placing an amplitude mask that blocks light on top of each pixel is an easy way of modifying the angular response function. Binary masks can be laser printed on thin films with a feature size of around 10 µm and placed very close to the sensor. While amplitude masks can only attenuate the cosine-shaped angular response of most pixel wells (right column in Fig. 8), we found that they improve the conditioning of the measurement matrix while increasing light throughput, compared to a narrow open aperture. We search for the optimal mask by calculating, using Eq. (13), the robustness of the angular response resulting from binary masks of a given length and a given angle of view. For the results in this paper, we optimized masks for two angles of view: for a 10 degree half-aperture opening, we exhaustively searched all 10-bit binary codes; for the 30 degree half-aperture opening, we used stochastic gradient descent with random initialization to search for 30-bit binary codes. The resulting angular response functions are plotted in Fig. 3.

**Light efficiency of optimal binary amplitude masks.** The total light throughput can be calculated by integrating the angular response function \(g(\cdot)\) over the sphere. \[\begin{split}\text{light\ throughput}(g)&=\int_{\boldsymbol{r}\in\mathbb{S}^{2}}g(\boldsymbol{r})d\boldsymbol{r}\\ &=\sum_{l,m}g_{l,m}\int_{\boldsymbol{r}\in\mathbb{S}^{2}}Y_{l,m}(\boldsymbol{r})d\boldsymbol{r}\\ &=\sqrt{4\pi}g_{0,0}\end{split} \tag{14}\] From our measurements, the unmodified angular response function of the sensor used in our experiments results in a light throughput of 0.59 steradians. The 10\({}^{\circ}\) 10-bit mask shown in Fig. 3 has a light throughput of 0.09 steradians, i.e., 15.2% of the unmodified light throughput. The 30\({}^{\circ}\) 30-bit mask has a light throughput of 0.20 steradians, i.e., 33.9% of the unmodified light throughput.

### Reconstruction

Most deconvolution algorithms can be extended to recover the original scene convolved with the designed angular response function. In the absence of noise, the scene \(f\) can be recovered from its spherical harmonic coefficients \(\widehat{f}_{l,m}\) estimated by Eq. (11).
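The mask search described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation: it assumes the 10-bit code opens equal-width annuli within the half-aperture, that the bare pixel response is cosine-shaped, and that the recovery bandlimit `L` and quadrature grid are arbitrary choices.

```python
import numpy as np
from itertools import product
from scipy.special import eval_legendre

theta = np.linspace(0.0, np.pi, 4096)
d_theta = theta[1] - theta[0]
L = 32                                            # recovery bandlimit (illustrative)

def robustness(bits, half_aperture_deg=10.0):
    """Mask robustness of Eq. (13) for a binary ring code: bit k opens the k-th
    equal-width annulus in [0, half_aperture]; the bare pixel is taken as cosine-shaped."""
    edges = np.deg2rad(np.linspace(0.0, half_aperture_deg, len(bits) + 1))
    ring = np.clip(np.searchsorted(edges, theta, side="right") - 1, 0, len(bits) - 1)
    g = np.cos(theta) * np.asarray(bits)[ring] * (theta <= edges[-1])
    # g_{l,0} by quadrature (Eq. (4) restricted to m = 0), then g_hat_l of Eq. (10)
    ls = np.arange(L)
    Y_l0 = np.sqrt((2 * ls[:, None] + 1) / (4 * np.pi)) \
           * eval_legendre(ls[:, None], np.cos(theta)[None, :])
    g_l0 = 2 * np.pi * (Y_l0 * g * np.sin(theta)).sum(axis=1) * d_theta
    g_hat = np.sqrt(4 * np.pi / (2 * ls + 1)) * g_l0
    # a vanishing coefficient makes that degree unrecoverable, so score it as 0
    return 0.0 if np.any(np.abs(g_hat) < 1e-9) else 1.0 / np.sum(np.abs(g_hat) ** -2.0)

# Exhaustive search over all 2^10 ring codes, as done for the 10-degree mask.
best_code = max(product([0, 1], repeat=10), key=robustness)
```

For the 30-bit case an exhaustive search is no longer practical, which is why the paper switches to a stochastic search with random initialization.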
When noise is present and we have measurements at all sampling orientations, we can use similarly fast filtering approaches such as Wiener filtering. When we undersample the measurements, as in the case when we only image on parts of a sphere, we can recover the scene robustly with image priors by \[\min_{\mathbf{f}\in\mathcal{R}^{+}}\text{prior}(\mathbf{f})\text{ such that }\|\mathbf{\Phi}\mathbf{f}-\mathbf{y}\|^{2}<\epsilon, \tag{15}\] where \(\mathbf{\Phi}\) is the measurement matrix, and \(\epsilon\) is an upper bound on the sensor noise. Iterative approaches are commonly used for solving this optimization problem, and our formulation of the forward process as an isotropic convolution on the sphere makes it possible to adopt such iterative approaches. The reconstruction results in this paper are obtained using isotropic total variation on the sphere [14] as the prior, and we implemented MFISTA as described in [3] to optimize the objective, \[\operatorname*{arg\,min}_{\mathbf{f}\in\mathcal{R}^{+}}\|\mathbf{\Phi}\mathbf{f}-\mathbf{y}\|^{2}+\lambda\text{TV}_{\text{isotropic}}(\mathbf{f}). \tag{16}\]

Figure 3: Simulations for comparing the effectiveness of different angular responses. Top left: ground truth scene. Top right: norm of the scaling coefficients \(\hat{g}_{l}\) of different angular responses. Minimizing the expected reconstruction error corresponds to maximizing the area under the curve. Second row: plots of the different angular responses; the binary amplitude mask used to produce each angular response is shown in the inset. Third row: measurements obtained with the corresponding angular response. Photon noise and readout noise are simulated for a sensor of resolution 180 \(\times\) 359 with 32761 \(e^{-}\) saturation capacity and a dynamic range of 73.07 dB, with a scene brightness of 40%, where 100% brightness saturates the full well capacity of the sensor with a 40\({}^{\circ}\) open aperture. Intensity is scaled to show noise details. Last row: images recovered with an isotropic total variation prior. Each color channel is individually reconstructed.

### Performance of different angular response functions

We demonstrate the robustness of our 10-bit and 30-bit binary masks by simulating reconstructions from noisy measurements obtained with these masks, compared against a 40\({}^{\circ}\) large aperture and a small 1\({}^{\circ}\) aperture, using the open angular response measured from a photodiode. Sensor specifications are used to generate the noisy measurements: photon noise is computed from the full well capacity, and the readout noise is computed from the dynamic range. Some simulated measurements, along with their reconstructions and the ground truth scene, are shown in Fig. 3. Quantitatively, the reconstruction signal-to-noise ratio compared to the original scene is plotted for scenes of different brightness in Fig. 4. The reconstructed signal-to-noise ratio is evaluated using \[\text{SNR}_{I}=10\log\left(\frac{\mathbf{f}^{T}Q\mathbf{f}}{\left(\widehat{\mathbf{f}}-\mathbf{f}\right)^{T}Q\left(\widehat{\mathbf{f}}-\mathbf{f}\right)}\right), \tag{17}\] where \(Q\) contains the quadrature weights given by the sampling scheme [14] to account for the sampling density on different parts of the sphere. We observe that the optimized mask outperforms small apertures in reconstruction SNR.

Figure 4: Average reconstruction \(SNR_{I}\) for scenes of different brightness over three runs. Measurements are simulated with the photon noise and readout noise of a sensor with a full well capacity of 32761 \(e^{-}\) and a dynamic range of 73.07 dB. A scene at 100% brightness saturates the pixel's full well capacity at the brightest pixel with a \(40^{\circ}\) open aperture.

Figure 5: Simulations for capturing images at different resolutions. Top row: measurements obtained with the 10\({}^{\circ}\) 10-bit mask-shaped angular response on a sensor with resolution L\(\times\)(2L-1), and the same noise statistics and scene brightness as given in Fig. 3. Bottom row: reconstructed images.

Figure 6: Simulations for undersampled images. Top row: measurements obtained with the 10\({}^{\circ}\) 10-bit mask-shaped angular response on a sensor with resolution 180\(\times\)359, and the same noise statistics and scene brightness as given in Fig. 3. Bottom row: reconstructed images.
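For the fully sampled case, the per-coefficient recovery of Eq. (11) and the SNR metric of Eq. (17) are simple enough to sketch directly. The snippet below is illustrative only: the damped inverse is a Tikhonov (Wiener-like) variant of Eq. (11) in the spirit of the fast filtering mentioned above, not the TV-regularized MFISTA solver used to produce the paper's results, and the function names are our own.

```python
import numpy as np

def recover_sh(y_lm, g_hat, lam=1e-3):
    """Per-coefficient recovery in the spherical-harmonic domain. With lam = 0 this is
    exactly Eq. (11); a small lam adds Tikhonov damping so that degrees with a small
    |g_hat_l| do not amplify the noise.
    y_lm: (L, 2L-1) measurement coefficients; g_hat: (L,) scaling factors of Eq. (10)."""
    w = np.conj(g_hat) / (np.abs(g_hat) ** 2 + lam)
    return y_lm * w[:, None]

def snr_db(f_true, f_rec, quad_weights):
    """Reconstruction SNR of Eq. (17); quad_weights are the quadrature weights of the
    sampling scheme, compensating for the denser sampling near the poles."""
    err = f_rec - f_true
    return 10.0 * np.log10(np.sum(quad_weights * np.abs(f_true) ** 2)
                           / np.sum(quad_weights * np.abs(err) ** 2))
```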
## 4 Experiments on Real Data

We conducted experiments to demonstrate that the proposed imager can be used to image planar scenes, complex scenes, and scenes with a large angle of view.

**Prototype.** To emulate the working of a spherical sensor, we built a two-degree-of-freedom stage using two rotation motors and mounted a planar sensor (Sony IMX 174) on top. The rotation stages allowed the sensor to be oriented, repeatably, over the full 360\({}^{\circ}\) angle of view at a precision of \(0.5^{\circ}\). Amplitude masks are laser printed and affixed 5 mm away from the sensor. The minimum ring width on the film mask is \(\sim\) 90 \(\upmu\)m. The image sensor was used as a photodiode by grouping together the central \(4\times 4\) pixels -- this provides an effective sensing area of \(23.5\times 23.5\)\(\upmu\)m\({}^{2}\). Given that we had a color sensor, half of these pixels contributed to the green channel and a quarter each toward the red and blue channels. The setup is shown in Fig. 7.

Figure 7: Setup of our prototype experiments. A planar sensor (Sony IMX 174) is mounted on a side rotation motor, which is mounted on a rotation stage to produce measurements from a spherical sensor. An amplitude mask is laser printed on thin film and fixed on top of the sensor. The center region of the sensor is used to produce the measurement of one pixel at a given orientation.

**Calibration.** The angular response of an open pixel was measured by rotating the sensor while observing a small LED light source and projecting onto the closest azimuthally-symmetric function for each color channel. Fig. 8 shows the calibrated angular responses for a \(10\)-bit mask pattern, as well as a 40\({}^{\circ}\) aperture. Note that the \(10\)-bit mask used in the real experiments differs from the optimized \(10\)-bit mask shown in the rest of the paper. We observe diffraction effects and other secondary effects not modeled in our search for the optimal mask. Yet once we calibrated the masks, the scaling coefficients of the measured angular responses indicate that the angular responses we managed to produce are still robust. Our architecture allows angular response shaping by reflection, diffraction, or refraction.

Figure 8: Measured angular response, before and after being modified by a binary amplitude mask. Insets show the captured pixel angular response over 80\({}^{\circ}\times\) 80\({}^{\circ}\). The angular response functions are estimated from measurements obtained by rotating the sensor facing a small LED light in a black box. Though unmodeled effects such as diffraction are present, the measured angular responses are still robust to noise.

**Reconstructions.** We scan a range of scenes with different angles of view and sampling resolutions. Due to moving components on the experiment prototype, we were able to reconstruct partial scenes on the sphere, with the largest being one-third of the whole sphere (see Fig. 1). In Fig. 9, we present scenes with different angles of view, depth, and structure. For one of the scenes, we also reconstruct from measurements spanning a random subset of orientations; given that the measurements from neighboring orientations are highly redundant, this does not cause a significant drop in performance.

Figure 9: Prototype experiment results. Left: scene setup; the inset shows the mask pattern used to modify the angular response of the pixel. Middle: captured measurements. Right: reconstruction of the scene.
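The calibration step above, which projects each measured 2-D response onto the closest azimuthally-symmetric function, reduces in the least-squares sense to averaging over the azimuth at every polar angle. A minimal sketch follows; the array shapes and names are our assumptions, not the authors' code.

```python
import numpy as np

def closest_azimuthal(response):
    """Least-squares projection of a measured angular response I(theta, phi) onto an
    azimuthally-symmetric function g(theta): the mean over the azimuth phi.
    response: (n_theta, n_phi) array of calibrated readings on a uniform phi grid."""
    return response.mean(axis=1)

# For a color sensor the projection is done per channel, e.g. for a hypothetical
# (3, n_theta, n_phi) stack of red/green/blue responses:
# g_rgb = rgb_stack.mean(axis=2)
```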
## 5 Discussions

In this paper, we present the design of a lensless imager that consists of a spherical sensor wherein each pixel has an identical but optimized angular response function. This design enables lensless imaging with a new capability -- namely, imaging on a spherical surface. We propose a metric to evaluate the optimality of pixel angular response functions for robust recovery of images in the presence of noise, and demonstrated the validity of our design with simulations and real experiments.

### Limitations

Our current implementation has two key limitations. First, we placed an amplitude mask in front of the sensor with a standoff distance of about 5 mm. Clearly, this design does not scale when we have high-resolution spherical sensor arrays. For such arrays, the mask needs to be embedded on top of each pixel, and this requires the design of a diffraction element that can provide the desired angular response. A second limitation, also stemming from the use of an amplitude mask, is the loss of light due to the mask itself. This could potentially be addressed via phase masks that redistribute light instead of blocking it.

### Extension to curved and flexible surfaces

The proposed design can still image when it takes the shape of other curved surfaces, as long as the diversity in pixel orientations is maintained. We simulate the situation where the designed spherical sensor is deformed to other curved surfaces in Fig. 10.
2304.03948
Equilibrium distribution and diffusion of mixed hydrogen-methane gas in gravity field
Repurposing existing natural gas pipelines is a promising solution for large-scale transportation of mixed hydrogen-methane gas. However, it remains debatable whether gravitational stratification can notably affect hydrogen partial pressure in the gas mixture. To address this issue, we combined molecular dynamics simulation with thermodynamic and diffusion theories. Our study systematically examined the equilibrium distribution of hydrogen-methane mixtures in gravity fields. We demonstrated that partial pressures of both gases decrease with altitude, with hydrogen showing slower decrease due to its smaller molar mass. As a result, the volume fraction of hydrogen is maximized at the top end of pipes. The stratification is more favorable at low temperature and large altitude drops, with notable gas stratification only occurring at extremely large drops in altitude, being generally negligible even at a drop of 1500 m. Furthermore, we showed that the diffusion time required to achieve the equilibrium distribution is proportional to gas pressure and the square of pipeline height. This requires approximately 300 years for a 1500 m pipeline at 1 bar. Therefore, temporary interruptions in pipeline gas transportation will not cause visible stratification. Our work clarifies the effect of gravity on hydrogen-methane gas mixtures and provides quantitative insights into assessing the stratification of gas mixtures in pipelines.
Shiyao Peng, Qiao He, Ducheng Peng, Xin Ouyang, Xiaorui Zhang, Chong Chai, Lianlai Zhang, Xu Sun, Huiqiu Deng, Wangyu Hu, Jie Hou
2023-04-08T07:47:15Z
http://arxiv.org/abs/2304.03948v1
# Equilibrium distribution and diffusion of mixed hydrogen-methane gas in gravity field ###### Abstract Repurposing existing natural gas pipelines is a promising solution for large-scale transportation of mixed hydrogen-methane gas. However, it remains debatable whether gravitational stratification can notably affect hydrogen partial pressure in the gas mixture. To address this issue, we combined molecular dynamics simulation with thermodynamic and diffusion theories. Our study systematically examined the equilibrium distribution of hydrogen-methane mixtures in gravity fields. We demonstrated that partial pressures of both gases decrease with altitude, with hydrogen showing slower decrease due to its smaller molar mass. As a result, the volume fraction of hydrogen is maximized at the top end of pipes. The stratification is more favorable at low temperature and large altitude drops, with notable gas stratification only occurring at extremely large drops in altitude, being generally negligible even at a drop of 1500 m. Furthermore, we showed that the diffusion time required to achieve the equilibrium distribution is proportional to gas pressure and the square of pipeline height. This requires approximately 300 years for a 1500 m pipeline at 1 bar. Therefore, temporary interruptions in pipeline gas transportation will not cause visible stratification. Our work clarifies the effect of gravity on hydrogen-methane gas mixtures and provides quantitative insights into assessing the stratification of gas mixtures in pipelines. Keywords: hydrogen-methane mixture; gravitational stratification; molecular dynamics, Boltzmann distribution; diffusion theory; + Footnote †: journal: Journal of Chemical Physics ## 1 Introduction The European energy crisis has refocused global attention on the issue of energy. Continued economic growth has led to a new peak in carbon emissions [1], making the transition to cleaner energy has become a matter of great urgency. As part of the European Green Deal, the EU has proposed that hydrogen as the optimal choice for achieving a carbon-neutral economy by 2050 [2]. Hydrogen has advantages such as being carbon-free and renewable [3], and can easily achieve large-scale production and application in comparison to solar, wind, or tidal power [4], making hydrogen energy a desirable clean energy source. Apart from being a new type of clean energy, hydrogen energy is also a form of conversion for other energy sources [5], which can alleviate long-distance transmission loads and large-scale power storage difficulties for wind energy and solar energy [6]. With the continuous development of hydrogen production technology, many countries and organizations (US [7; 8], EU[9] and China[10], etc.) have begun to carry out large-scale production and transportation of hydrogen. Despite these advantages of hydrogen energy, building new hydrogen pipelines for large-scale transportation can be expensive and time-consuming. A promising solution is to using existing natural gas pipelines for transporting blended hydrogen-methane gas [11]. This approach is more cost-effective than building new hydrogen pipelines, as existing natural gas pipelines can be repurposed for transporting hydrogen with relatively minor modifications. However, this involves long-term exposure of pipeline steels to pressurized hydrogen gases, which can alter the mechanical properties of the pipelines and cause crack propagation, often leads to the reduction in fracture toughness, namely, "hydrogen embrittlement (HE) [12]. 
Numerous results from the adaptability evaluation of pipeline materials and connections indicate that hydrogen has a detrimental effect on them[13]. With an increase in the hydrogen mixing ratio, the hydrogen embrittlement sensitivity of pipeline materials and connections also increases, and the deterioration of the performance of pipeline connections becomes more pronounced. Zhang et al [14]. pointed out that the degree of damage caused by hydrogen to high strength pipeline steel is sensitive to the partial pressure. The fracture toughness and fatigue life of the high strength steel decrease with the increasing of hydrogen pressure [15-17]. Note methane is much denser than hydrogen gas, which raise the concern that whether gravity will cause notable fluctuations in the partial pressure of hydrogen in hydrogen-methane mixture. Though many previous works have studied gas mixtures in gravity, the stratification behavior of hydrogen-methane mixture remains much debated. Azatyan et al. used the Boltzmann distribution to demonstrate that [18] the stratification of the mixed hydrogen-propane due to gravity can be ignored, even in the absence of convection. Badino et al. [19] point out that stratification may exist in non-mixing atmospheres with a drop of several kilometers, while at scales of several meters, almost no changes can be detected. Pitts et al. [20] conducted experiments on releasing hydrogen in a garage, and reported that measurements taken along a vertical array indicated a nearly uniform hydrogen volume fraction from top to bottom. These studies showed that gas generally does not stratify in gravity field. Nonetheless, a systematical investigation of hydrogen-methane mixture stratification behavior is yet to be made, and quantitative evaluations of pipeline height, temperature, mixing ratio, and diffusion time on the gas distribution remains missing. Apart from these researches that claimed no stratification, there are also studies reported contradictory results with evident gas stratification. Liu et al. [21] measured the degree of hydrogen embrittlement to characterize hydrogen concentration in hydrogen-methane mixture, and modelled the evolution of the mixture in gravity with computational fluid dynamics (CFD) simulation, both approaches led to the conclusion of evident stratification. Nonetheless, it remains questionable that whether hydrogen concentration can be accurately backtracked from the degree of hydrogen embrittlement, and the CFD model used does not include Brownian movements of gas molecules which is the main contributor that blends the mixture. Shebeko et al. [22] conducted hydrogen leakage and diffusion experiments inside a vessel filled with quiescent air, showed that after stopping the injection for 250 min, the volume fraction of the hydrogen at the top and the bottom differed by 10%. However, as the diffusion of gases is a relatively slow process [23; 24], the it's possible that the stratification is simply a result of the initial non-uniform mixing state. In general, the stratification of hydrogen-methane mixture remains to be a controversial topic, which necessitates further systematical and quantitative investigations on the stratification behavior. In this work, we addressed this issue by combining molecular dynamic simulations, Boltzmann distribution theory, and Mason-Weaver equations to analyze the equilibrium distribution and diffusion of hydrogen-methane mixtures in a gravity field. 
We systematically investigated the effects of height, temperature, and mixing ratio on the gas distribution. Our results demonstrate that significant gas stratification occurs only at extremely large drops in altitude, and the stratification at a drop of 1500 m is negligible. Additionally, we discovered that gas diffusion is generally slow, and it may take up to 300 years to achieve the equilibrium distribution in a 1500 m pipeline at 1 bar. Thus, temporary interruptions in gas transportation through the pipeline will not cause visible stratification. Our study provides quantitative insights into the impact of gravity on hydrogen-methane gas mixtures, offering useful references for material selection and safety assessment of pipelines. ## 2 Method In this work, we combined numerical simulations with analytical theory to investigate the behavior of the hydrogen-methane mixture. We first carried out molecular dynamics (MD) simulations to directly model the movement of gas atoms/molecules in a gravity field, then compared these simulations with statistical mechanics theory based on the Boltzmann distribution. We also calculated the time evolution of the gas distribution based on diffusion theories. Details of the numerical/analytical methods used are given below. ### Molecular dynamics simulations All MD simulations in this work were carried out using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) package [25], with the results visualized using the OVITO software [26]. Note that according to previous studies of the equations of state of hydrogen and methane [27; 28], the behavior of H\({}_{2}\) and CH\({}_{4}\) is almost identical to that of an ideal gas in the P\(<\)10 MPa region, indicating that the inter-atomic/molecular interaction is not important in our simulations. Therefore, we simply used He molecules as a modelling element for simplicity, and manually adjusted the molar mass to 2 and 16 to mimic H\({}_{2}\) and CH\({}_{4}\) molecules. The embedded atom method (EAM) potential developed by Chen et al. [29] was adopted to describe He-He interactions. Two orthogonal simulation cells with dimensions \(x\times y\times z=20\times 20\times 400\ nm^{3}\) and \(20\times 20\times 4000\ nm^{3}\) were adopted to model the effect of different heights (referred to as the "400 nm box" and "4000 nm box" below). At the beginning of the simulation, 2000 \(\mathrm{H_{2}}+2000\) CH\({}_{4}\) molecules were randomly inserted into the 400 nm box (20000 \(\mathrm{H_{2}}+20000\) CH\({}_{4}\) for the 4000 nm box), which roughly corresponds to the number density of an ideal gas at 300 K and 1 bar. After the insertion, the system was relaxed for 10 ps with the isobaric-isothermal ensemble (i.e., NPT with P=1 bar and T=300 K) and with periodic boundary conditions applied in all three directions. The Nosé-Hoover thermostat was adopted and a timestep of 1 fs was used for all simulations. After reaching equilibrium, we switched to the canonical ensemble (i.e., NVT with T=300 K), then turned on a gravity field along the z direction, and applied a reflective boundary condition at the bottom and top of the z direction to mimic a pipe with finite height. Finally, the system was relaxed under the gravity field for 2 ns, which is long enough to establish a new equilibrium in the gravity field. The equilibrium concentration profile of \(\mathrm{H_{2}}\) and CH\({}_{4}\) is calculated by analyzing the last 10 snapshots of the MD trajectory with an interval of 20 ps.
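As a quick sanity check of the initialization above, the ideal-gas law gives the number of molecules that a 20 × 20 × 400 nm³ box should contain at 300 K and 1 bar. A short sketch (not from the paper):

```python
import scipy.constants as const

# N = P V / (k T) for the 400 nm box at 1 bar and 300 K
V = 20e-9 * 20e-9 * 400e-9          # box volume in m^3
N = 1e5 * V / (const.k * 300.0)     # 1 bar = 1e5 Pa
print(round(N))                     # ~3.9e3, consistent with the 2000 + 2000 molecules used
```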
**2.2 Boltzmann distribution** Considering a rigid vertical pipe filled with ideal gases, the normalized probability density of finding a gas molecule of type \(i\) (\(i=\mathrm{H_{2}}\) or CH\({}_{4}\) in this case) at height \(h\) is given by the Boltzmann distribution: \[\rho_{i}(h)=\frac{\exp\left(-\frac{m_{i}gh}{kT}\right)}{\int_{0}^{H}\exp\left(-\frac{m_{i}gh}{kT}\right)dh}, \tag{1}\] where \(m_{i}\) is the mass of the gas molecule, \(g\) is the gravitational acceleration, \(k\) is the Boltzmann constant, \(T\) is the absolute temperature, and \(H\) is the maximum height of the pipe. With the probability density given, and assuming the ideal gas equation of state, we can calculate the distribution of the partial pressure of the gas by: \[P_{i}(h)=P^{tot}F_{i}^{tot}\rho_{i}(h)H, \tag{2}\] where \(P^{tot}\) is the average pressure of the whole system and \(F_{i}^{tot}\) is the volume fraction of gas \(i\) in the whole system. Below we find it more convenient to consider the normalized partial pressure: \[p_{i}(h)=P_{i}(h)/P^{tot}=F_{i}^{tot}\rho_{i}(h)H. \tag{3}\] Besides the normalized partial pressure, it is often useful to calculate the distribution of the volume fraction of gas \(i\): \[F_{i}(h)=\frac{F_{i}^{tot}\rho_{i}(h)H}{\sum_{j}F_{j}^{tot}\rho_{j}(h)H}=\frac{p_{i}(h)}{\sum_{j}p_{j}(h)}. \tag{4}\]

**2.3 Mason-Weaver equation** Note that the standard Fick's law of diffusion does not include the effect of an external force field. To take the gravity field into account, we can write the chemical potential of gas \(i\) as: \[\mu_{i}(h)=m_{i}gh+kT\ln p_{i}(h)+\mu_{ref}, \tag{5}\] where \(\mu_{ref}\) is the chemical potential of an arbitrary reference state, which will be canceled out in the following calculation. Based on Eq. 5, we can calculate the diffusion flux by: \[J_{i}(h)=-\frac{Dp_{i}(h)}{kT}\frac{\partial\mu_{i}(h)}{\partial h}=-D\;\left[\frac{m_{i}g}{kT}p_{i}(h)+\frac{\partial p_{i}(h)}{\partial h}\right], \tag{6}\] where \(D\) is the diffusion coefficient. Finally, we have the evolution of the normalized partial pressure against time: \[\frac{\partial p_{i}(h)}{\partial t}=-\frac{\partial J_{i}(h)}{\partial h}=D\;\left[\frac{m_{i}g}{kT}\frac{\partial p_{i}(h)}{\partial h}+\frac{\partial^{2}p_{i}(h)}{\partial h^{2}}\right]. \tag{7}\] Eq. 7 is also known as the Mason-Weaver equation [30], and can be solved analytically using the tool provided in Ref. [31]. In addition, it is easy to see that in the long-time limit, where \(\frac{\partial p_{i}(h)}{\partial t}=0\), the Mason-Weaver equation converges to the Boltzmann distribution given by Eq. (1), demonstrating the consistency of the two theories.
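Eqs. (1)-(4) can be evaluated directly with a few lines of code. The following minimal sketch (function and variable names are our own) computes the equilibrium profiles for a 1:1 H\({}_{2}\)-CH\({}_{4}\) mixture in a 1500 m pipe at 300 K and reproduces the ≈49.7%/50.3% split of the hydrogen partial pressure reported in Section 3.2.

```python
import numpy as np
import scipy.constants as const

def equilibrium_profiles(H=1500.0, T=300.0, x_h2=0.5, n=2001):
    """Normalized partial pressures p_i(h) and H2 volume fraction F_H2(h) from
    Eqs. (1)-(4) for an H2/CH4 mixture in a vertical pipe of height H (meters)."""
    h = np.linspace(0.0, H, n)
    dh = h[1] - h[0]
    m = {"H2": 2.016e-3 / const.N_A, "CH4": 16.043e-3 / const.N_A}   # kg per molecule
    frac = {"H2": x_h2, "CH4": 1.0 - x_h2}
    p = {}
    for gas in m:
        rho = np.exp(-m[gas] * const.g * h / (const.k * T))
        rho /= rho.sum() * dh                     # normalized density, Eq. (1)
        p[gas] = frac[gas] * rho * H              # normalized partial pressure, Eq. (3)
    F_h2 = p["H2"] / (p["H2"] + p["CH4"])         # volume fraction of H2, Eq. (4)
    return h, p, F_h2

h, p, F_h2 = equilibrium_profiles()
print(p["H2"][0], p["H2"][-1])   # ~0.503 at the bottom and ~0.497 at the top of the pipe
```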
## 3 Results and discussion

### Equilibrium distribution of hydrogen-methane mixture in gravity field

We started with direct MD simulations of 1:1 mixed H\({}_{2}\)-CH\({}_{4}\) gases in different gravity fields, and modeled their stratification behavior at the atomic level. Fig. 1a shows the simulation results without any gravity, where we can see a uniform distribution of both H\({}_{2}\) and CH\({}_{4}\) molecules with no stratification observed, which is well expected due to the lack of gravitational separation. Next, to examine the effect of gravity on molecular distributions at nanometer scales, we applied a very large gravity field of \(g=10^{11}\;G\) (where \(G=9.8\,m/s^{2}\)), which leads to visible stratification as demonstrated in Fig. 1b, with CH\({}_{4}\) molecules showing apparent enrichment at the bottom of the simulation box. Note that the distribution of H\({}_{2}\) molecules remains relatively uniform at \(10^{11}\) G, indicating that gravity has a much weaker effect on the H\({}_{2}\) distribution due to its small molar mass. As we continue to increase the gravity to \(10^{12}\) G, the stratification phenomenon in the pipeline becomes more evident (see Fig. 1c), with almost all CH\({}_{4}\) molecules segregated in the z\(<\)100 nm region and a visible enrichment of H\({}_{2}\) molecules at the bottom region.

Figure 1: Snapshots of H\({}_{2}\)-CH\({}_{4}\) mixtures at the end of MD simulations using the 400 nm box. (a-c) are results with 0, \(10^{11}\), and \(10^{12}\) G gravity fields applied along the z axis. Blue dots are CH\({}_{4}\) molecules and red dots represent H\({}_{2}\) molecules.

To obtain more quantitative insights into the above results, we calculated the normalized partial pressure and volume fraction profiles for H\({}_{2}\) and CH\({}_{4}\) molecules in the MD simulations in the \(10^{11}\) G gravity field. The results are plotted in Fig. 2a and 2d with a comparison to the Boltzmann distribution predictions given by Eqs. 1-4, where our MD simulations match perfectly with the Boltzmann distribution, demonstrating the validity of both approaches. We can see in Fig. 2a that in the 10\({}^{11}\) G gravity field, \(p_{CH_{4}}\) changes from \(\sim\)137% (at z=0) to \(\sim\)11% (at z=400 nm), while \(p_{H_{2}}\) only changes from \(\sim\)58% (at z=0) to \(\sim\)43% (at z=400 nm). Both gases are enriched at the bottom and depleted at the top due to the gravitational effect. However, the volume fraction of H\({}_{2}\), \(F_{H_{2}}\), shows a monotonic increase with height (see Fig. 2d). The increase of the H\({}_{2}\) volume fraction with height is often interpreted as a risk for hydrogen embrittlement at the top region of a pipe containing mixed H\({}_{2}\)-CH\({}_{4}\) gases. But here we should highlight that the increase of \(F_{H_{2}}\) does not come from an increase of \(p_{H_{2}}\), but from the more evident decrease of \(p_{CH_{4}}\) (see Fig. 2a). In terms of normalized partial pressure, both \(p_{CH_{4}}\) and \(p_{H_{2}}\) decrease with height. Since the hydrogen embrittlement risk is actually associated with the partial pressure of H\({}_{2}\), the embrittlement risk at the top region is actually lower than at the bottom region due to gravity effects. Next, we did similar calculations for the gas distribution in the 10\({}^{12}\) G gravity field. As demonstrated in Fig. 2b and 2e, both H\({}_{2}\) and CH\({}_{4}\) gases show evidently non-uniform distributions in the 10\({}^{12}\) G gravity field, with the MD simulation results again agreeing well with the Boltzmann distribution. Here we find that \(p_{CH_{4}}\) changes from \(\sim\)1264% (at z=0) to \(\sim\)0% (at z=400 nm), and \(p_{H_{2}}\) changes from \(\sim\)165% (at z=0) to \(\sim\)7% (at z=400 nm). Compared with the 10\({}^{11}\) G case, the segregation of the gases is more evident in the 10\({}^{12}\) G gravity field. Note that according to Eq. 1, the gas distribution profile is determined by the product of gravitational acceleration and height, namely, \(g\times h\). In MD simulations, we can only handle very small heights (nm-\(\mu\)m) due to computational limitations, which is why a very strong gravity field (10\({}^{11}\)-10\({}^{12}\) G) is necessary to observe any visible gravity effect.

Figure 2: Normalized partial pressures of H\({}_{2}\) and CH\({}_{4}\) in (a) the 10\({}^{11}\) G gravity field with the 400 nm box, (b) the 10\({}^{12}\) G gravity field with the 400 nm box, and (c) the 10\({}^{11}\) G gravity field with the 4000 nm box. (d-f) are the corresponding volume fractions. Hollow symbols are MD simulation results; lines are Boltzmann distribution predictions.
Based on Eq. 1, we expect the distribution profile to remain unchanged if we increase \(h\) and decrease \(g\) by the same factor (i.e., keep \(g\times h\) constant). To examine this equivalency, we performed additional MD simulations using a 4000 nm simulation box and a 10\({}^{11}\) G gravity field. The results are shown in Fig. 2c and 2f, where we find the normalized partial pressure and volume fraction are practically identical to those in Fig. 2b and 2e (except with different height scales) for both gases, suggesting that our results can reflect the distribution at larger \(h\) and lower \(g\). In this way, the results shown in Fig. 2a and 2b should be equivalent to the distributions at 40 and 400 km of drop at 1 G, which means it takes a 100 km level of drop in altitude to observe evident stratification of H\({}_{2}\) and CH\({}_{4}\).

**3.2 Effect of height, temperature, and mixing ratio on the stratification**

The above results demonstrated that the equilibrium gravitational stratification of the H\({}_{2}\)-CH\({}_{4}\) gas mixture can be accurately predicted by Eqs. 1-4. Below we further investigate different factors that could affect the stratification behavior. To begin with, we consider the realistic gravity field of 1 G and a pipe height of 1500 m (approximately the maximum drop in the Shan-Jing pipeline) filled with 1:1 mixed H\({}_{2}\)-CH\({}_{4}\) gases. Fig. 3 shows the equilibrium distribution calculated using Eqs. 1-4. Here we find that within the drop of 1500 m, the normalized partial pressures vary almost linearly with height. Similar to the above, both \(p_{CH_{4}}\) and \(p_{H_{2}}\) decrease with height, with \(p_{CH_{4}}\) decreasing faster than \(p_{H_{2}}\) (see Fig. 3a), which in turn increases the volume fraction of H\({}_{2}\), \(F_{H_{2}}\), in the top region (see Fig. 3b). In general, large altitude drops promote the gravitational stratification of gases. But even with a large drop of 1500 m, \(p_{H_{2}}\) changes only slightly (between 49.7% and 50.3%), which is apparently too small to cause any notable impact on hydrogen embrittlement behavior.

Figure 3: Equilibrium distribution profile of 1:1 mixed H\({}_{2}\)-CH\({}_{4}\) gases within a 1500 m high pipe at 1 G and 300 K. (a) shows the normalized partial pressures of the two gases and (b) shows the volume fraction of H\({}_{2}\).

Next, we examine the effect of temperature on the stratification behavior. Considering that the partial pressures vary almost linearly with height over km ranges, here we only focus on the partial pressures at the top and bottom of the pipe. Fig. 4 shows the calculated normalized partial pressure and volume fraction at different temperatures, where we find the difference between the top and bottom partial pressures diminishes with increasing temperature, suggesting that gas stratification is suppressed at elevated temperatures due to the intensified Brownian motion of molecules. Yet even at a very low temperature of 150 K, where gravitational stratification is more favorable, the difference between the top and bottom \(p_{H_{2}}\) is still insignificant (\(\sim\)1.2%).

Figure 4: Normalized partial pressure and volume fraction of H\({}_{2}\) and CH\({}_{4}\) at the top and bottom of a 1500 m high pipe at 1 G, plotted as a function of temperature, where H\({}_{2}\) and CH\({}_{4}\) are 1:1 mixed. (a) shows the normalized partial pressures of the two gases and (b) shows the volume fraction of H\({}_{2}\).
Fig. 5 demonstrates the effect of mixing ratios (denoted by the overall H\({}_{2}\) volume fraction) on the gas stratification behavior. We can see in Fig. 5a that the normalized partial pressures of the two gases are linearly correlated with their respective average concentrations. That is, the difference between the top and bottom \(p_{H_{2}}\) is maximized in pure H\({}_{2}\) conditions. The difference between the top and bottom \(F_{H_{2}}\) behaves differently (see Fig. 5b), as it first increases with the average \(F_{H_{2}}\), peaks at 50%, then starts to decrease with the average \(F_{H_{2}}\). For the case of the 1500 m high pipe at 300 K, the maximum difference in \(F_{H_{2}}\) is \(\sim\)2.7%.

### Diffusion of hydrogen-methane mixture in gravity field

The above results reveal the behavior of mixed H\({}_{2}\)-CH\({}_{4}\) gases at thermodynamic equilibrium. However, it remains unclear how long it takes to reach such equilibrium. To understand the diffusion behavior of the gas mixture, we first calculated its diffusion coefficient according to [23, 24]: \[D(P,T)=3.13\times 10^{-5}T^{1.765}\frac{0.1\;\text{MPa}}{P}\text{cm}^{2}\text{s}^{-1}. \tag{8}\] At \(P=0.1\;\text{MPa}\) and \(T=300\;\text{K}\), we have \(D=0.74\;\text{cm}^{2}\text{s}^{-1}\). Now consider a pipe with height \(H=1500\;\text{m}\); a rough estimate of the characteristic diffusion time would be \(\text{t}\approx H^{2}/(4D)\approx 242\;\text{years}\). This means that gas diffusion is a very slow process at the scale of kilometers. To justify this estimation, we calculated the time evolution of the \(F_{H_{2}}\) profile by solving the Mason-Weaver equation (Eq. 7). The results are shown in Fig. 6, where we can see that it takes about 300 years for fully blended gas mixtures to approach the \(t\rightarrow\infty\) limit in the 1500 m high pipe (i.e., the equilibrium state in Fig. 3b). This time scale generally agrees with the characteristic time estimated above. Moreover, we noticed that within 1 month, the volume fraction profile remains virtually unchanged. That means even if the stratification is somehow thermodynamically feasible, it is still unlikely to observe any visible stratification during a temporary interruption in pipeline gas transportation.

Figure 5: Normalized partial pressure and volume fraction of H\({}_{2}\) and CH\({}_{4}\) at the top and bottom of a 1500 m high pipe at 1 G and 300 K, plotted as a function of the overall H\({}_{2}\) volume fraction (H\({}_{2}\) mixing ratio). (a) shows the normalized partial pressures of the two gases and (b) shows the volume fraction of H\({}_{2}\).

Fig. 7 presents more details on the diffusion behavior by visualizing \(\Delta F_{H_{2}}\) at the top of the pipes against time. In general, \(\Delta F_{H_{2}}\) first increases linearly with \(t^{1/2}\) (note the logarithmic scale), then gradually plateaus after reaching thermodynamic equilibrium. As demonstrated in Fig. 7a, increasing the pressure leads to a shift in the curve along the time axis, which means the time required to reach the steady state increases linearly with increasing pressure, being approximately \(10^{5}\), \(10^{6}\), and \(10^{7}\) days under 0.1, 1, and 10 MPa. According to Eq. 8, the diffusion coefficient is inversely proportional to gas pressure. As a result, the diffusion time will be proportional to the pressure, which explains the results shown in Fig. 7a.
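The time evolution shown in Figs. 6-7 can also be reproduced with a straightforward explicit finite-difference integration of Eq. (7). The sketch below is only an illustrative alternative to the analytical solver of Ref. [31] used in the paper; the grid size, time step, and function names are our own choices.

```python
import numpy as np
import scipy.constants as const

def mason_weaver(p0, H, T, m, D, t_end, n_steps):
    """Explicit finite-difference integration of the Mason-Weaver equation, Eq. (7),
    written in flux form so that the zero-flux (reflecting) ends conserve the total gas.
    p0: initial normalized partial-pressure profile on a uniform grid over [0, H]."""
    p = p0.copy()
    dh = H / (len(p) - 1)
    dt = t_end / n_steps
    s = m * const.g / (const.k * T)            # sedimentation factor m*g/(k*T), in 1/m
    for _ in range(n_steps):
        # flux at cell interfaces, J = -D * (s * p + dp/dh); zero at both walls
        p_mid = 0.5 * (p[1:] + p[:-1])
        J = -D * (s * p_mid + (p[1:] - p[:-1]) / dh)
        div = np.zeros_like(p)
        div[1:-1] = (J[1:] - J[:-1]) / dh
        div[0], div[-1] = J[0] / dh, -J[-1] / dh
        p -= dt * div                          # dp/dt = -dJ/dh
    return p

# 1 bar, 300 K, 1500 m pipe; D from Eq. (8) is ~0.74 cm^2/s = 0.74e-4 m^2/s.
H, T, D = 1500.0, 300.0, 0.74e-4
m_ch4 = 16.043e-3 / const.N_A
p_ch4 = mason_weaver(np.full(151, 0.5), H, T, m_ch4, D,
                     t_end=300 * 365.25 * 86400, n_steps=20000)
```

With these parameters the explicit scheme is stable (D·dt/dh² ≈ 0.35), and the profile after 300 simulated years approaches the Boltzmann limit of Eq. (1), consistent with Fig. 6.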
We also evaluated the effect of pipe height on the diffusion behavior, with the relevant data plotted in Fig. 7b. Here we find that increasing the height does not shift the curve, but postpones the time needed to establish equilibrium. Note that this time is proportional to the square of the height; namely, the time required for reaching equilibrium in the 1500 m pipe is 100 times longer than that in the 150 m pipe. That means the thermodynamic and kinetic feasibility of stratification usually cannot be satisfied simultaneously, as evident stratification requires a large pipe height, which in turn prevents the establishment of equilibrium in a limited time.

Figure 6: Evolution of the volume fraction profile of H\({}_{2}\) in a 1500 m high pipe at 1 G and 300 K, where H\({}_{2}\) and CH\({}_{4}\) are 1:1 mixed and fully blended at t=0.

Figure 7: Change of the volume fraction of H\({}_{2}\) at the top of the pipes at 1 G and 300 K, where H\({}_{2}\) and CH\({}_{4}\) are 1:1 mixed and fully blended at t=0. (a) 1500 m high pipe with different pressures, (b) different pipe heights with 0.1 MPa pressure.

Note that the above calculations are based on initially fully blended gas mixtures, and the diffusion of the gases will lead to gravitational stratification. Here we consider another situation in which the gas mixture is initially fully stratified, with pure H\({}_{2}\) filling the top half and pure CH\({}_{4}\) filling the bottom half of a short pipe with 1.5 m of height. In this case, the equilibrium stratification is negligible inside the 1.5 m pipe, which means the initial stratification will diminish at \(t\rightarrow\infty\). However, from the calculation results shown in Fig. 8, we find the stratification remains quite evident at 1 hour, and is still visible even after 10 days of gas diffusion. These results suggest that if gases were not well blended at the beginning, it could lead to a misleading conclusion that gas can stratify due to gravity effects, which should be carefully checked in related experiments.

Figure 8: Evolution of the volume fraction profile of H\({}_{2}\) in a 1.5 m high pipe at 1 G and 300 K, where H\({}_{2}\) and CH\({}_{4}\) are fully stratified at t=0, with pure H\({}_{2}\) filling the top half and pure CH\({}_{4}\) filling the bottom half.

## 4 Conclusion

In this study, we employed molecular dynamics simulations and analytical theories to investigate the equilibrium and diffusion behavior of mixed hydrogen-methane gas in a gravity field. Our findings led to the following conclusions:

1. The molecular dynamics simulations of H\({}_{2}\) and CH\({}_{4}\) gases in a gravity field show that gravitational stratification of these gases requires either an extremely strong gravity field or very large drops in altitude (on the order of 100 km), in agreement with Boltzmann distribution theory predictions.
2. Both CH\({}_{4}\) and H\({}_{2}\) lose partial pressure at high altitudes. This effect is more evident for CH\({}_{4}\) due to its larger molar mass, which, in turn, maximizes the volume fraction of H\({}_{2}\) at the top end of pipes.
3. Large pipe height and low temperature promote the stratification. However, even with a low temperature of 150 K and a large height of 1500 m, the stratification is insignificant and will not significantly affect the risk of hydrogen embrittlement.
4. The diffusion time required to reach thermodynamic equilibrium increases linearly with pressure and with the square of the pipe height. It takes approximately 300 years to reach equilibrium in a 1500 m high pipe, and temporary interruptions in pipeline gas transportation will not cause significant stratification.
However, if gases are not well blended initially, it can lead to the misleading conclusion that gas can stratify in a gravity field. This present study provides quantitative insights into evaluating the stratification of gas mixtures in pipelines and sheds light on the equilibrium and diffusion behavior of mixed hydrogen-methane gas in a gravity field. ## Data availability The data generated and/or analysed within the current study will be made available upon reasonable request to the authors. ## Acknowledgement This work was financially supported by Research on key technologies of hydrogen-mixed natural gas transportation by pipeline in service (PipeChina, project No: DTXNY202203), Pilot scale study on green hydrogen production by alkaline water electrolysis of hundred Nm\({}^{3}\)/h capacity (CNPC, project No.: 2022DJ5006(GF)), and by Research and development of key technologies for medium and low-pressure pure hydrogen and hydrogen-mixed gas transporting and utilizing in pipelines (Inner Mongolia Science and Technology Department, project No.: 2021ZD0038). ## Competing interests The authors declare no competing interests.
2302.09655
PAPRAS: Plug-And-Play Robotic Arm System
This paper presents a novel robotic arm system, named PAPRAS (Plug-And-Play Robotic Arm System). PAPRAS consists of a portable robotic arm(s), docking mount(s), and software architecture including a control system. By analyzing the target task spaces at home, the dimensions and configuration of PAPRAS are determined. PAPRAS's arm is light (less than 6kg) with an optimized 3D-printed structure, and it has a high payload (3kg) as a human-arm-sized manipulator. A locking mechanism is embedded in the structure for better portability and the 3D-printed docking mount can be installed easily. PAPRAS's software architecture is developed on an open-source framework and optimized for low-latency multiagent-based distributed manipulator control. A process to create new demonstrations is presented to show PAPRAS's ease of use and efficiency. In the paper, simulations and hardware experiments are presented in various demonstrations, including sink-to-dishwasher manipulation, coffee making, mobile manipulation on a quadruped, and suit-up demo to validate the hardware and software design.
Joohyung Kim, Dhruv C Mathur, Kazuki Shin, Sean Taylor
2023-02-19T19:02:41Z
http://arxiv.org/abs/2302.09655v1
# PAPRAS: Plug-And-Play Robotic Arm System ###### Abstract This paper presents a novel robotic arm system, named PAPRAS (Plug-And-Play Robotic Arm System). PAPRAS consists of a portable robotic arm(s), docking mount(s), and software architecture including a control system. By analyzing the target task spaces at home, the dimensions and configuration of PAPRAS are determined. PAPRAS's arm is light (less than 6kg) with an optimized 3D-printed structure, and it has a high payload (3kg) as a human-arm-sized manipulator. A locking mechanism is embedded in the structure for better portability and the 3D-printed docking mount can be installed easily. PAPRAS's software architecture is developed on an open-source framework and optimized for low-latency multiagent-based distributed manipulator control. A process to create new demonstrations is presented to show PAPRAS's case of use and efficiency. In the paper, simulations and hardware experiments are presented in various demonstrations, including sink-to-dishwasher manipulation, coffee making, mobile manipulation on a quadruped, and suit-up demo to validate the hardware and software design. ## I Introduction Robotic manipulation has been explored since the start of robotics research. Industrial robotic arms have been developed and used for decades for fast and precise manufacturing and heavy-loaded tasks. From car assembly to semiconductor fabrication, robotic arms and manipulation have made a significant contribution to the manufacturing industry. Recently, robotic manipulation has broadened its boundaries for various manipulation tasks, including cooperative manipulation tasks between humans and robots. Many robotic arms for pHRI (physical Human-Robot Interaction) were commercialized, such as the LBR iwa from Kuka [1], the robotic arms from Kinova [2], and Franka robot arms from Franka Emika [3]. These robotic arms have force and/or torque sensors to consider safe interaction when there are contacts or collisions with humans in the workspace. In addition to the force/torque sensors, mechanically compliant joints and lightweight mechanisms with high back-drivability have been investigated to ensure safe interaction. One of the most popular approaches for the compliant joint design is having an elastic component in the mechanism [4]. The lightweight mechanism can be achieved by locating the heavy actuators near the robot base and utilizing a transmission mechanism (e.g. cable-driven mechanism) for joint actuation [5, 6]. Despite continued achievements and efforts, robotic manipulators are not ready for home use yet. Most existing robotic arms are not affordable (higher than $20,000 USD) and not lightweight (more than 10kg). Additionally, they must be bolted down to a fixed structure or need mechanical clamps to hold the base of the robot, and it is hard to move once installed. Although the home environment is well-segmented depending on the purposes, such as a kitchen for cooking and a utility room for laundry, it is cumbersome and time-consuming if the arm needs assembly and disassembly every time when moving from one place to another. Some researchers are trying to utilize mobile manipulators [7, 8], but these are ongoing efforts to make them compact, dexterous, and safe. In this paper, we introduce a new robotic manipulator, PAPRAS (Plug-And-Play Robotic Arm System), to tackle this challenge by designing a docking system with a modular, plug-and-play robotic arm and optimizing its design. 
Our primary objective was to create a pluggable, portable, and lightweight manipulator capable of performing a variety of tasks as shown in Fig. 1. There are several research works related to pluggable robotic limbs. Topping [9] introduced an interesting concept of Flexibot which has the pluggable feature to a wheelchair or a wall mount, but it has not been implemented. In modular robot research, researchers have been working on modular, reconfigurable limbed robots [10, 11, 12, 13]. Also, a company named GITAI recently released large-scale pluggable robotic arms for space [14]. The proposed PAPRAS is designed for home and human collaboration. Within this paper, we present the design of PAPRAS and provide empirical evidence for its efficacy by showing its applications. The paper is organized as follows. In Sections II and III, we introduce the hardware design of the proposed robotic arm. Section IV explains PAPRAS's software architecture. Validations and applications of PAPRAS are presented in Section V. Lastly, the conclusion and future work are discussed in Section VI. Fig. 1: PAPRAS mounted in various environments using the plug-and-play feature and executing manipulation tasks. ## II Mechanical Hardware Design ### _Design Considerations_ From the task-level point of view, the location of the robotic arm's base in the target task space is essential to determine the robot's workspace. In a manipulation task with one robotic arm, controlling the robot means moving the relative position and orientation of its end-effector frame \(\{e\}\), based on the robot's base frame \(\{b\}\) in a world frame \(\{w\}\). When there are \(n\) objects in the world, we can represent a set of \(n\) frames as \(\textit{O}=\{\{o_{1}\},\{o_{2}\},...,\{o_{n}\}\}\), where \(\{o_{k}\}\) is the local frame attached the _nth_ object \((1\leq k\leq n)\). In most cases, the relation between \(\{w\}\) and \(\{b\}\), _Twb_, is fixed and given, and the information related to \(\{o_{k}\}\), _Tw\({}_{\textit{wo}}\)_, can be estimated through the perception. From this information, we can get the transformation \(\textit{T}_{\textit{bo}_{k}}=\textit{T}_{\textit{wb}}{}^{-1}\textit{T}_{ \textit{wo}_{k}}\) and use it for control and planning, considering environments and human collaborators. If the set of target objects \(O\) is bounded within a specific area, the optimal location of the robot's base \(\{b\}\) to handle all the objects can be found for the target task. For example, while manually washing dishes in a kitchen, the dishes would stay in the kitchen sink. Or, when using a dishwasher, the positions of the kitchen items would be bounded within a certain area including a sink and a dishwasher. Given a robotic manipulator and kitchen information, one can find the proper position and orientation of the robot's base \(\{b\}\) for this task. The dimensions of PAPRAS were determined considering the target task spaces, which are the tabletop and dishwashing areas. As shown in Fig. 2, the length of the arm's moving part needs to be around 800-900 mm long for these tasks, and its DoF (degree of freedom) needs to be at least 6. To meet these requirements, PAPRAS was developed based on a 6-DoF open-source robotic arm, OpenMANIPULATOR-P [15]. We modified its design and developed two variants of PAPRAS in Fig. 3, one version being 100 mm longer than the other. We define the link between Joint \(n\) and Joint \((n+1)\) as Link \(n\), and the arm can be broken down into five links and six joints. 
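The frame bookkeeping described in the Design Considerations above amounts to composing homogeneous transforms; a minimal sketch follows (the poses used are made-up examples, not measured values from the system).

```python
import numpy as np

def transform(R, p):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a 3-vector p."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

# T_wb: arm base (docking mount) pose in the world frame {w};
# T_wo: a perceived object pose in {w}.
# The planning target is the object expressed in the base frame: T_bo = T_wb^-1 T_wo.
T_wb = transform(np.eye(3), np.array([0.5, 0.0, 0.9]))    # hypothetical mount pose
T_wo = transform(np.eye(3), np.array([0.8, 0.2, 0.95]))   # hypothetical object pose
T_bo = np.linalg.inv(T_wb) @ T_wo
```

Because the docking mount fixes T_wb for each installed location, only T_wo needs to be re-estimated by perception when the arm is plugged into a different mount.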
The placement and orientation of each joint are shown in Fig. 3. Joints 2, 3, and 5 are pitch joints and their rotation axes are parallel. However, they are not in the same plane because the position and orientation of Joint 3 have offsets. The offsets make PAPRAS fully foldable around Joint 3. Since the shorter arm has the same configuration of the OpenMANIPULATOR-P, detailed information can be found in [15]. For all the joints, Dynamixel-P series motors are used. Table I shows the specs of all the joints including the range of motion, max speed, and torque information. An arm has a gripper with a camera at the output of Joint 6. The gripper and camera are commercialized products, RH-P12-RN from ROBOTIS and RealSense D435 from Intel, respectively. ### _Linkage Design_ For the linkage design, we aimed to reduce the overall weight of the links. This was done to reduce the moments of inertia about the joints and thus decrease the torques required to perform motions. As shown in Fig. 3(a), Links 1 and 4 are commercialized parts, and Links 2, 3, and 5 are 3D-printed parts. All 3D-printed components in PAPRAS were made using two FDM printers (Flashforge Creator 3, Raise 3D E2), and were printed using standard PLA filament. The use of 3D-printed PLA not only reduced the weight of the components but also allowed other features to be incorporated into the design. In order to reduce the weight of the new 3D-printed links, we used the topology optimization tool in Solidworks. This feature allowed us to selectively remove sections from each link while simultaneously maximizing the stiffness-to-weight \begin{table} \begin{tabular}{|c|c|} \hline Items & Specifications \\ \hline \hline Mass & Short Version: 4.771 kg \\ & Long Version: 4.894 kg \\ \hline Payload & Short Version: 3 kg \\ & Long Version: 2.5 kg \\ \hline Range of Motion & Joint 1: \(-180^{\circ}\sim 180^{\circ}\) \\ (): Long version & Joint 2: \(-115^{\circ}\sim 115^{\circ}\) \\ & Joint 3: \(-135^{\circ}\) (\(-45^{\circ}\)) \(\sim 135^{\circ}\) \\ & Joint 4: \(-158^{\circ}\sim 158^{\circ}\) \\ & Joint 5: \(-90^{\circ}\sim 90^{\circ}\) \\ & Joint 6: \(-180^{\circ}\sim 180^{\circ}\) \\ \hline Max speed & Joint 1, Joint 2: \(174^{\circ}\)/s \\ (continuous) & Joint 3, Joint 4: \(175.2^{\circ}\)/s \\ & Joint 5, Joint 6: \(175.2^{\circ}\)/s \\ \hline Max Torque & Joint 1, Joint 2: 44.7 Nm \\ (continuous) & Joint 3, Joint 4: 25.3 Nm \\ & Joint 5, Joint 6: 5.1 Nm \\ \hline \end{tabular} \end{table} TABLE I: Specifications Fig. 3: PAPRAS long version (top) and short version (middle) in front view, and joint configuration (bottom). Fig. 2: Visualization of workspaces. Most rectangular dinner tables are 914.4mm (36”) wide. The standard dishwasher and sink dimensions (LxWxH) are typically 609.6x609.6x889 mm and 762x558.8x254 mm, respectively. ratio. To use this feature, the implementation of some constraints was needed to get the desired output. These constraints ensured that the parts had the desired symmetry and did not remove desired geometric features such as faces/holes which connected with the motors. Fig. 3(b) shows the process to get the optimized design for Link 5. We used the same process for Link 2 and Link 3 to reduce the weight while keeping the target stiffness we set. After each design, loads were simulated using the FEA tool in Solidworks to test the design. In the long version of PAPRAS, Links 2 and 3 are both 50 mm longer than those in a short version. In Fig. 
3, the long version (top) has a more complicated structure than the short version (middle). Link 3 of the long version has a reinforced structure near Joint 3 to meet the stiffness constraints. This makes Link 3 slightly heavier, and Joint 3 has a smaller range of motion than Joint 3 in a long version. ### _Docking mount_ The docking mount of PAPRAS is composed of male and female sections. The male section in Fig. 4(c) is composed of Joint 1 (a motor), along with two sets of 10-pin Molex connectors and three 3D-printed components. The female section in Fig. 4(d) is composed of four 3D-printed components which allow for a locking mechanism to keep the motor in place when in use. The mechanism works by having a threaded component on the female section; when this section is screwed in it presses itself onto the motor, securing it in place as shown in Fig. 4(a). When it is unlocked as shown in Fig. 4(b), the arm can be unplugged easily by pulling it out. Four different types of these female sections were used in this paper: one for the table demo that is made to be orientated vertically, one for the dual-arm and cage demos that is made to orientated horizontally, with a modified horizontal version used for the kitchen demo, and one for the quadruped demo that is made to be orientated at an angle (45\({}^{\circ}\)). The horizontally orientated one is shown in Fig. 5, and all the mounts have the exact locking mechanism. \begin{table} \begin{tabular}{|c c c c|} \hline Component & MANIPULATOR-P & Short Version & Long Version \\ \hline \hline Link 2 & 644.0 g & 231.9 g & 277.4 g \\ \hline Link 3 & 496.0 g & 211.6 g & 289.1 g \\ \hline Link 5 & 114.0 g & 35.7 g & 35.7 g \\ \hline \end{tabular} \end{table} TABLE II: Link weight comparison Fig. 4: PAPRAS parts break down and linkage weight reduction process. Fig. 5: Docking mount. Fig. 6: Portability features of PAPRAS. ### _Features for portability_ One design goal of PAPRAS is portability. To achieve this, the weight reduction in the previous section is important. At the same time, an easy-to-carry design should be considered. There are two key features for this in PAPRAS. #### Ii-D1 Locking mechanism PAPRAS is designed with a locking mechanism, which holds the arm in a closed position when it is not in use. The mechanism is based on a rotating cam in Link 2 which hooks onto a protrusion from Link 3. When the mechanism is in the unlocked position it is held in place with magnets. The arm can be seen in both locked and unlocked positions in Fig. 5(a). #### Ii-D2 Covers for both ends The gripper/camera module and Joint 1 with the connectors are located at each end of PAPRAS. When PAPRAS is fully folded around Joint 3 and locked with the locking mechanism, these parts lie one upon another as shown in Fig. 5(a). Both parts are relatively feeble to impact and need to be fixed in order to not move around Joints 2 and 5. To prevent damage and lock their positions, a cover system was created. Two parts of the cover system were 3D-printed and a cloth strap was used to connect the two parts. Fig. 5(b) shows a folded PAPRAS (short version) with the camera and socket covers attached. ## III Electrical Hardware The electrical hardware for PAPRAS consists of three parts: the wiring harness on each arm, the connection on each mount, and the control box for each application. ### _Arm Wiring Harness_ The motors are all connected to an RS-485 serial communication bus, which is daisy chained up the arm to each motor and the gripper. 
The Joint 1-4 motors each require external power, while the Joint 5-6 motors and the gripper are powered by the RS-485 voltage. There is also a USB-C cable that runs the length of a PAPRAS arm to connect to a USB camera mounted on the gripper. All these wires are routed to two 10-pin male connectors located at the base of the arm, which mate with connectors in the mounts. ### _Mount Connection_ Each mount has a custom PCB (F in Fig. 4(d)) which consists of the connections responsible for distributing power to the arm and transferring data to and from the arm. The PCB has two 10-pin female connectors that mate with the matching set on each arm. Of the twenty pins, four are used for the RS-485 serial communication with the motors. Eight more pins are used for the external power connections required by the Joint 1-4 motors. The final eight pins are used for the USB 3.1 connection between the computer in the control box and the USB camera on the gripper. The PCB also contains connectors that lead back to the control box - an XT connector for power, a 4-pin Molex connector for serial communication, and a USB 3.1 Type A receptacle. ### _Control Box_ Each application has a single control box regardless of how many mounts and arms are involved. The control box contains a power supply, a controlling computer, an RS-485 adapter (U2D2), an emergency stop, and another custom PCB. The power supply connects to AC mains and outputs 24V DC power, which is routed through the emergency stop switch before running to the mount(s) to power the arm motors. The power supply also feeds a buck regulator on the PCB (independent from the emergency stop) which outputs 19V to power the computer. This setup allows for the e-stop to safely cut power to the arms without dropping power to the controlling computer. The computer connects to the U2D2 over USB, which converts the USB signal into the RS-485 signal required to interface with the motors. The serial nature of the communication protocol allows multiple mounts to connect in parallel with a single U2D2. The computer also can connect directly to the USB receptacle on the mount(s) to communicate with the camera on the gripper. For mobile applications, the power supply is replaced with a 6S LiPo battery. The battery power is used for both the arm motors and the controlling computer. The emergency stop switch and the communication setup are identical to fixed applications. This change keeps the mobile platform untethered and free to move to any location. Fig. 7 shows a control box for a fixed application as well as the setup for a mobile application using Boston Dynamic's Spot. ## IV Software Architecture PAPRAS's software architecture is built on an open-source framework and optimized for low-latency multiagent-based distributed manipulator control. Developed in C++, the architecture runs on ROS Noetic with the Ubuntu 20.04 Linux distribution. Low-level programming for motor sub-routine calls employs the Dynamixel SDK. High-level programming of the robot, such as the planning and perception pipelines, is accomplished through the incorporation of various ROS packages. A breakdown of the complete flow of the software Fig. 7: Control box for a fixed PAPRAS application (a) and setup for an untethered mobile application (b). The power source (blue), controlling computer (green), and custom PCB (red) are highlighted. pipeline is shown in Fig. 8, including user operation, perception, planning, control, simulation, and hardware. 
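At the lowest level of this pipeline, joint commands reach the motors through the Dynamixel SDK over the RS-485 bus. The sketch below uses the SDK's Python binding purely for illustration (the PAPRAS implementation is in C++), and the device name, baud rate, and control-table addresses are placeholders that depend on the specific Dynamixel-P model and setup.

```python
from dynamixel_sdk import PortHandler, PacketHandler

# Placeholder values: consult the control table of the actual Dynamixel-P motor.
DEVICE_NAME = '/dev/ttyUSB0'   # U2D2 serial port (assumption)
PROTOCOL_VERSION = 2.0
ADDR_TORQUE_ENABLE = 512       # assumed address, model dependent
ADDR_GOAL_POSITION = 564       # assumed address, model dependent
DXL_ID = 1

port = PortHandler(DEVICE_NAME)
packet = PacketHandler(PROTOCOL_VERSION)
port.openPort()
port.setBaudRate(1000000)      # assumed baud rate

# Enable torque, then command a goal position (in encoder ticks, arbitrary example value).
packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)
packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, 50000)

port.closePort()
```

In PAPRAS, the equivalent calls are issued from the C++ hardware interface through the U2D2 adapter, with the controller manager enforcing joint limits on top of these raw commands.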
### _Global ROS Parameter Server_ The Global ROS Parameter Server, shown in 1 of Fig. 8, is a core component of the software architecture that stores data and settings that are shared across all active nodes. It includes the URDF and SRDF files, a list of controllers, hardware configurations, and joint limits. This data is used to initialize PAPRAS demo applications and is critical for the proper functioning of the robot. The URDF defines the environment, components and geometric structure of the robot arms, while the SRDF assigns planning groups, poses and collision checking information. Joint groups consist of the chain of links from the first link to the end-effector link, and can also be broken into sub-groups. Default poses like 'init','rest', 'open' and 'close' are assigned for testing and safe positioning, and the SRDF can tweak the collision checking information to reduce trajectory validation time. ### _User-based Operation and Monitoring_ As shown in 2 of Fig. 8, users can operate the system in three ways: using a 3D marker in Rviz, with joystick or keyboard inputs, or by running a command script which contains joint angles or cartesian-based end-effector poses for each step of a given task. The user can command robots to reach desired configurations either by sending a command to the planning pipeline (IV-D) to generate a valid trajectory, or alternatively by sending the command directly to the controller manager (IV-E) to move directly to the goal position. The process is monitored with the help of RQT and Rviz plugins which provide debugging tools during development and validation testing. ### _Perception Pipeline_ The perception pipeline, shown in 3 of Fig. 8, is responsible for obtaining data from the camera on the gripper of each arm, with a sampling rate of 15Hz. Fig. 9 demonstrates the RGBD image and point cloud data of a table environment. This data is used in various perception algorithms to give the robot increased autonomy and decision-making capabilities. For example, the DOPE (Deep Object Pose Estimation) algorithm can recognize objects and estimate their 6 DoF pose [16]. This pose is recorded as a transformation from the camera frame to the object frame, \(T_{co}\). This data is used by the pipeline to create an occupancy map, which helps the robot to build a semantic belief state of its surroundings. Additionally, OpenPose human skeleton tracking can be used to extract corresponding joints from the 2D image [17]. This information can then be retargeted to the robot arm. This technology is demonstrated as a demo application in Section V-B. All of these perception algorithms provide the robot with increased control over its environment. ### _Planning Pipeline_ The planning pipeline, shown in 3 of Fig. 8, uses MoveIt to manage the planning scene and trajectory generation. The planning scene is used to represent the environment around the robot arms and their current state by processing the joint states of each motor in the arm. Objects of interest can be added to the planning scene manually or by using a 2D or 3D camera to detect them. By using the move group interface, requests can be made to the planning scene to move to a certain pose defined in either joint space, cartesian space, or by an object to interact with. Goal poses must be defined in the base frame Fig. 8: Overview of software architecture. Fig. 
9: Visual of planned robot motion (top left), RGB (top right), point cloud (mid left), and depth data (mid right), motion retargeting (bot left), object pose estimation (bot right). of the arm. To calculate the goal pose for interacting with an object, the transform \(T_{co}\) found by the perception pipeline must be transformed into the correct frame with the equation \(T_{bo}=T_{bc}T_{co}\), where \(T_{bo}\) is the goal pose for the arm to interact with the object, and \(T_{bc}\) is the pose of the camera in the base frame of the arm, calculated through the forward kinematics defined by the URDF and current joint angles. In order to determine the correct joint angles from a given cartesian space goal, the TRAC-IK numerical inverse kinematics solver [18] is then invoked. The goal state is then defined in joint space and a trajectory to reach it from the current state is found by running OMPL's RRT-Connect planner [19, 20], taking into account any constraints that have been specified. Collision checking is performed using FCL [21] at each time step of the trajectory to ensure that the returned trajectory is safe. Once the trajectory has been generated, it is sent to the controller manager (IV-E). ### _Controller Manager_ The controller manager, shown in 4 of Fig. 8, provides a real-time compatible control loop that runs a rate of 8 ms, and the infrastructure to load, unload, start, and stop controllers. It enables controllers to access joint state information and execute commands from a single interface. The manager is able to read joint states from the hardware and send commands to it. The joint position controller is utilized to send position commands to the arm's six joints. The joint trajectory controller is used for planned trajectory messages, which is done with the help of its follow joint trajectory action server. This action server allows for tracking of trajectory execution, passing goals to the controller, and reporting success when done. The trajectory may also be constrained or aborted if the constraints are broken. Lastly, an effort controller is used for the grippers because current-based position control is needed at the embedded level control. ### _Hardware and Simulation Interfaces_ The hardware interface, shown in 6 of Fig. 8, uses the Dynamixel SDK to communicate with the motor controllers through U2D2. Before communication is established, the hardware configuration file is used to relate the motor IDs to the arm configuration. This enables the hardware interface to send commands to the correct actuators and access their parameters. During the execution of a task, the controller manager sends velocity, position, and torque commands to the motors. The Dynamixel SDK then applies these commands to the motors to move them to the desired setpoints and enforce joint limits. To ensure joint limits are not exceeded, the controller manager continuously monitors the encoder positions of the motors, and if the setpoint is outside the joint limits, the motor is commanded to stop. The simulation environment Gazebo 11 is used to simulate multiple manipulators in complex scenarios. It is loaded using environment-specific URDF files, which contain mass, inertia, and joint configuration information. Gazebo is bridged with ROS using the gazebo_ros_lib plugin, allowing the controller manager to interface with the motor input and sensor output. The simulator's physics engine is used to validate the goal action by evaluating the plan. 
The state of the virtual environment can be synchronized with the real world by updating the simulation through the perception pipeline. As shown in the data flow of 7 in Fig. 8, the robot can use the updated belief state to spawn 3D models in simulation and planning scenes. ### _Communication between Distributed Computing Units_ As demonstrated in Fig. 9(a), PAPRAS uses distributed computing capabilities to allow communication between distributed computing units. Using the master ROS node on the main operating computer as the host machine, all other running processes can be configured with their IP and master URI addresses for the connection. The host machine runs the high-level and computationally heavy nodes such as the user or AI-based task commands as well as the perception and planning algorithms. Furthermore, multiple PAPRAS arms can be controlled from each client machine and its movements can be synchronized with other client machines. The main host machine used is the Lambda Dual GPU Workstation, while the client machines are Intel NUCs composed of an Intel Core i3-10110U processor, 16GB DDR4 RAM, and a 256GB SSD. ## V Hardware experiments ### _Validations_ Hardware experiments were performed in a kitchen environment to validate the functions and performance of PAPRAS. In Fig. 11, a docking mount (Fig. 5) and a control box (Fig. 7a) Fig. 10: PAPRAS applications. were installed between the dishwasher and the kitchen sink. The mount is located right below the kitchen top facing perpendicular to the surface of the dishwasher door and the cabinet door, as planned in Fig. 2. With this mount location, PAPRAS is able to reach the objects in the sink and the bottom drawer of the dishwasher. We used a 2.27 kg (5 lbs) dumbbell for the payload test. First, 300 hundred random poses near the boundary of the PAPRAS workspace were selected in simulation (Fig. (a)a). From the 300 poses, we selected 10 poses in which the motor torques were close to the maximum torques. A motion was planned between the 10 poses while avoiding collisions. PAPRAS was able to track the planned motion in both simulation and hardware experiments. Please refer to the supplementary video to see the hardware experiment. ### _Applications_ PAPRAS offers a modular framework for quick and efficient implementation of the system in various environments. Fig. (b)b shows how to create a new demonstration. First, a URDF file needs to be created that details the model of the environment and the location(s) and orientation(s) of the PAPRAS mount(s). A MoveIt package needs to be generated with appropriate inter-link collisions disabled and planning groups set up. ROS launch and YAML configuration files need to be created along with a YAML file outlining desired poses and actions. Lastly, a C++ script uses the Mission Plan to control the arms and gripper. In general, all steps are relatively straightforward and require minimal effort. As a result, PAPRAS allows for the rapid deployment of new demonstrations. Our system demonstrated scenarios such as dishwashing, coffee making, dressing, and dual-arm motions in Fig. 12. #### Vi-B1 Sink to Dishwasher In the kitchen environment, the goal was to move a set of dirty dishes from the sink to the dishwasher. To do this, we used the same setting as in the payload test. 
A command script was used to perform the task of localizing and mapping the sink, detecting objects, estimating each item's pose, choosing the most cost-efficient object to pick, picking an object from the sink, moving the object to the dishwasher, and placing the object in the dishwasher. #### Vi-B2 Coffee Making For our coffee-making demonstration, we made a dinner table with three arm mounts in Fig. (a)a and used a cup, an electric kettle, a pour-over coffee cone, and ground coffee. Our command script runs the two stages of the demo task which consists of pouring periodically from the kettle into the pour-over coffee cone, then serving the coffee into a cup without spilling. #### Vi-B3 Quadruped PAPRAS can also be extended to mobile robots, such as Spot. In this application, two arm mounts were attached to the front end of Spot to pick and place objects between the table and sink, where mobility is needed. A command script coordinates arm control operations with mobile navigation while communicating with the Spot SDK. #### Vi-B4 Cage In the cage demo, four arm mounts were placed on the four vertical beams of the suit-up environment. Here, all four arms were simultaneously controlled by a single client computer to assist the subject with putting on a jacket. The task planning pipeline starts with the back arms holding the end sleeves, bringing the jacket forward for the two front arms, helping the subject step in, and finally bringing down the jacket as the back arms go back to the rest position. This setup enables automated dressing while ensuring safety with real-time multi-agent motion planning. The estimated collision box of the subject seen in Fig. 13 was used to ensure that the arms always avoid contact with the user. #### Vi-B5 Dual Arm The dual arm demo showcases the potential of human-robot interaction and collaboration in real-world scenarios. The platform is built with a single stand, two arm mounts, and a client computer. A camera records the motion of a lead demonstrator, and OpenPose human skeleton tracking is used to directly map the joint angle values from the human to the robot. The mapping is sent as raw action commands to Fig. 11: Payload check in simulation and experiments. Fig. 12: Execution of coffee making, quadruped, cage, and dual arm demos. Fig. 13: Visualization of cage with collision box. the joint trajectory controller. In addition to the HRI aspect, the dual arm highlights the group communication capabilities between distributed systems. We enable synchronized control of multiple dual arm stands by connecting each client machine to the host machine, as described in IV-G. This communication allows the movements of each robotic arm to be synchronized with other client machines, creating a highly coordinated display. ## VI Conclusion and Future Work This paper presented PAPRAS, a pluggable robotic arm system for home and human-robot collaboration. By analyzing the target task spaces and utilizing 3D printing and structure optimization, we were able to accomplish the lightweight and high-payload arm design. A locking mechanism was embedded in PAPRAS for better portability and a docking mount was implemented for the plug-and-play function. We built the PAPRAS software architecture based on an open-source framework and optimized for low-latency multiagent-based distributed manipulator control. To show PAPRAS's ease of use and efficiency, we developed a process to create new demonstrations. 
Simulations and hardware experiments were conducted in various demonstrations to validate the hardware and software design. PAPRAS successfully performed sink-to-dishwasher manipulation, coffee making, mobile manipulation on Spot, and a suit-up demo. For future work, we will continue developing applications in human environments; furniture, home appliances, and other types of mobile-base robots are potential platforms for PAPRAS. Furthermore, we plan to optimize the workspace of PAPRAS based on 3D data from the target task. As stated in Section II-A, collecting spatial data, such as the trajectories of objects in 3D space, is important for the robot to perform a task. To this end, we will record object trajectories and human motions using a motion capture system, from which the possible end-effector trajectories for the sequential movements can be derived. Solving various target tasks by jointly optimizing the number of arms, the locations of the mounts, and the trajectories of the objects will be an interesting design optimization problem. Computational coordination methods, such as [22], can be used to find design parameters and motion trajectories for the robotic system simultaneously.
2310.06777
Information Content Exploration
Sparse reward environments are known to be challenging for reinforcement learning agents. In such environments, efficient and scalable exploration is crucial. Exploration is a means by which an agent gains information about the environment. We expand on this topic and propose a new intrinsic reward that systematically quantifies exploratory behavior and promotes state coverage by maximizing the information content of a trajectory taken by an agent. We compare our method to alternative exploration-based intrinsic reward techniques, namely Curiosity Driven Learning and Random Network Distillation. We show that our information-theoretic reward induces efficient exploration and outperforms these alternatives in various games, including Montezuma's Revenge, a task known to be difficult for reinforcement learning. Finally, we propose an extension that maximizes information content in a discretely compressed latent space, which boosts sample efficiency and generalizes to continuous state spaces.
Jacob Chmura, Hasham Burhani, Xiao Qi Shi
2023-10-10T16:51:32Z
http://arxiv.org/abs/2310.06777v1
# Information Content based Exploration ###### Abstract Sparse reward environments are known to be challenging for reinforcement learning agents. In such environments, efficient and scalable exploration is crucial. Exploration is a means by which an agent gains information about the environment; we expand on this topic and propose a new intrinsic reward that systemically quantifies exploratory behaviour and promotes state coverage by maximizing the information content of a trajectory taken by an agent. We compare our method to alternative exploration-based intrinsic reward techniques, namely Curiosity Driven Learning (CDL) and Random Network Distillation (RND). We show that our information-theoretic reward induces efficient exploration and outperforms in various games, including Montezuma's Revenge - a known difficult task for reinforcement learning. Finally, we propose an extension that maximizes information content in a discretely compressed latent space which boosts sample-efficiency and generalizes to continuous state spaces. Machine Learning, Reinforcement Learning 2016). CDL uses a form of state dynamics prediction error to entice the agent to visit unfamiliar states (Burda et al., 2018). Similarly, RND uses random features as a learning target for a curiosity model, the error of which is used as an intrinsic reward; neural network prediction error is observed to be a good proxy for novelty and hence quantifies exploration progress (Burda et al., 2018). We contribute to the family of curiosity-inspired exploration methods by proposing an information theory-driven intrinsic reward that induces effective exploration policies without introducing auxiliary models or relying on approximations of environment dynamics. Information Content-based Exploration (ICE) introduces an intrinsic reward that maximizes information gain in the state space. In each episode of interaction with the environment, we aggregate trajectory statistics to approximate the entropy of the state visitation distribution and reward the agent for relative improvements to this measure. This approach formalizes the exploration process as seeking low-density states but can easily accommodate prior knowledge of the environment by replacing entropy with KL-divergence to a desired state density, similar to (Lee et al., 2020). We run several experiments comparing ICE's performance against RND and CDL using A3C (Mnih et al., 2016) as our base RL algorithm to show that ICE significantly outperforms RND and CDL in various environments. We also observe that ICE exploration exhibits trajectories akin to depth-first search. In most environments, this offers the best opportunity to find extrinsic rewards, where the agent must decisively commit to a path through the environment rather than dithering locally via action entropy. Finally, we discuss extensions to our approach that generalize to continuous state spaces by maximizing information content on a hashed latent space extracted from an auto-encoder architecture. ## 2 Background We briefly cover A3C in section 2.1, and introduce the Source Coding Theorem which provides the theoretical justification for using entropy in the state space for exploration in ICE. In 2.3 and 2.4, we cover Curiosity Driven Learning and Exploration by Random Network Distillation algorithms. 
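Before detailing these methods, the exploration-difficulty argument sketched in Fig. 2 can be checked with a toy Monte-Carlo simulation: estimate how often a purely random 1-D walk of length N ever reaches distance K from the origin. The values of K and N below are arbitrary illustrative choices, not the settings derived in Appendix A.2.

```python
import numpy as np

rng = np.random.default_rng(0)

def reach_probability(K, N, n_episodes=5000):
    """Estimate P(a random 1-D walk of length N ever reaches distance K from the origin)."""
    steps = rng.choice([-1, 1], size=(n_episodes, N))
    paths = np.cumsum(steps, axis=1)
    return float(np.mean(np.max(np.abs(paths), axis=1) >= K))

N = 400
for K in (5, 10, 20, 40, 80):
    print(K, reach_probability(K, N))  # estimated success probability drops sharply with K
```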
### Actor Critic Algorithm We consider a parametric policy \(\pi_{\theta}\in\Pi:=\{\pi_{\theta}:\theta\in\Theta\}\) interacting with a discounted Markov Decision Process \(\mathcal{M}=\)(\(\mathcal{S},\mathcal{A},\gamma,\mathcal{P},r\)). At each time step \(t\), the policy out-puts an action \(a_{t}\sim\pi_{\theta}(\cdot|s_{t})\) that leads to the next state \(s_{t+1}\) according to the transition kernel \(\mathcal{P}(s_{t+1}|s_{t},a_{t})\). The environment also provides an extrinsic reward \(r_{t}\sim(s_{t},a_{t})\). The objective is to maximize the expected discounted cumulative reward \(\mathcal{J}(\pi)=\mathbb{E}_{\pi}(\sum_{t=0}^{\mathcal{T}}\gamma^{t}r_{t})\). The critic is an auxiliary model with parameters \(\phi\in\Phi\) that reduces the variance of the policy gradient by approximating the value function: \(\tilde{V}_{\phi}^{\pi}(s_{t})\approx\mathbb{E}_{\pi}[\sum_{k}\gamma^{k}r_{t+k +1}|s_{t}]\). The Actor-Critic method maximizes the objective \(\mathcal{J}\) by optimizing the model's weights based on value loss, policy loss, and entropy loss, which have the form \[L_{value,t} =\alpha_{value}(r_{t}+\gamma V_{\phi}(s_{t+1})-V_{\phi}(s_{t}))^{2} \tag{1}\] \[L_{policy,t} =\alpha_{policy}(-A_{t}\log\pi_{\theta}(a_{t}|s_{t}))\] \[L_{entropy,t} =\alpha_{entropy}(-\mathbb{H}(\pi_{\theta}(\cdot|s_{t})))\] where \(\alpha_{value},\alpha_{policy},\alpha_{entropy}\) are the weight coefficients of the three losses, \(\pi_{\theta}(a_{t}|s_{t})\) is the probability of the selected action policy, \(\mathbb{H}(\pi_{\theta}(\cdot|s_{t}))\) is the entropy of the policy. In addition, \(k\)-step Advantage estimate \(A_{t}\) is defined to be \[A_{t}=\sum_{i=0}^{k-1}(\gamma^{i}r_{t+i})+\gamma^{k}V_{\phi}(s_{t+k})-V_{\phi} (s_{t}) \tag{2}\] In the case where the intrinsic reward is present, the reward \(r_{t}\) is the weighted sum of intrinsic and extrinsic reward, \[r_{t}=r_{t}^{extrinsic}+\beta r_{t}^{intrinsic} \tag{3}\] where \(\beta\) is the weight coefficient of the intrinsic reward. The k-step state distribution induced by a policy \(\pi\) is given by: \[d_{k,\pi}(s)=\mathbb{E}_{\begin{subarray}{c}s_{1}\sim\rho_{0}(\cdot)\\ a_{t}\sim\pi(\cdot|s_{t})\\ s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\end{subarray}}[\frac{1}{k}\sum_{t= 1}^{k}1(s_{t}=s)] \tag{4}\] ### Source Coding Theorem Given a list of discrete elements \(L=\{l_{0},l_{1},...,l_{t}\},l_{i}\in\mathbb{R}^{1}\), the Source Coding Theorem states that \(L\) can be compressed Figure 2: A random exploration strategy works with fairly high probability if the solution can be achieved in a small distance traveled from the origin. As can be seen above, as the distance from origin \(K\) increases, the probability of success decreases exponentially for the same episode length \(N\). See A.2 in the Appendix for derivation. into no less than \(H_{t}^{L}\times(t+1)\) bits, where \(H_{t}^{L}\) is the entropy of \(L\) up to step \(t\). 
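As a small numerical illustration of this bound (a sketch, using the same short example sequence that reappears in Section 3.2), the empirical entropy of an observed sequence directly gives the minimum number of bits needed to encode it:

```python
import numpy as np
from collections import Counter

def entropy_bits(sequence):
    """Shannon entropy (in bits) of the empirical distribution of a sequence."""
    counts = np.array(list(Counter(sequence).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

L = ['a', 'a', 'b', 'a', 'c']   # example list of discrete elements
H = entropy_bits(L)             # about 1.371 bits per element
print(H, H * len(L))            # lower bound of about 6.86 bits for the whole list
```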
**Theorem 2.1** (Source Coding Theorem (Shannon, 1948), \(D=1\)).: _A single random variable (D=1) with entropy \(H\) can be compressed into at least \(H\) bits without risk of information loss._ We note that the joint entropy is sub-additive, with equality when the random variables are independent (Aczel et al., 1974): \[H(X_{1},...,X_{n})\leq\sum_{i}H(X_{i}) \tag{5}\] We also note that up to a constant, the entropy of a discrete random variable \(X:\mathcal{X}\rightarrow\{1,...,K\}\) distributed according to \(p\) is given by \(-D_{KL}(p,\mu)\) where \(\mu\) is the uniform distribution on \(\mathcal{X}\), and \(D_{KL}\) is the KL divergence: \[D_{KL}(p,\mu) =\sum_{k=1}^{K}p(X=k)log\frac{p(X=k)}{K^{-1}} \tag{6}\] \[=logK+\sum_{k=1}^{K}p(X=k)logp(X=k)\] (7) \[=logK-\mathbb{H}(X) \tag{8}\] Hence, maximizing the state distribution entropy is equivalent to minimizing the reverse KL to the uniform distribution over state. ### Curiosity Driven Learning (CDL) This branch of exploration provides an intrinsic reward based on the agent's familiarity with the current state; curiosity-driven Exploration by Self-supervised Prediction (Pathak et al., 2017) is one of the popular approaches. It formulates the intrinsic reward by using three neural networks to estimate state encoding, forward dynamics, and inverse dynamics. The first neural network \(f_{encode}(s)\) encodes \(s_{t}\mapsto g_{t}\) and \(s_{t+1}\mapsto g_{t+1}\). Then a second neural network \(f_{forward}(g_{t},a_{t})\) estimates the next state encoding \(\hat{g}_{t+1}\). Lastly, the third neural network \(f_{inverse}(g_{t},g_{t+1})\) estimates the transition action distribution \(\hat{a_{t}}\). The training loss and intrinsic reward have the form \[L_{forward_{t}} =\alpha_{forward}MSE(g_{t+1}-\hat{g_{t+1}})\] \[L_{inverse_{t}} =\alpha_{inverse}CrossEntropy(\hat{a_{t}},a_{t}) \tag{9}\] \[r_{intrinsic_{t}} =\beta L_{inverse_{t}}\] The agent will naturally favor states that it cannot accurately predict the transition action from \(s_{t}\) to \(s_{t+1}\). ### Exploration by Random Network Distillation (RND) Exploration by Random Network Distillation (Burda et al., 2018b) follows the same design philosophy as Curiosity Driven Learning. It formulates the intrinsic reward by using two neural networks. The first neural network \(f_{random}(s)\), encodes \(s_{t}\mapsto g_{t}\). It is randomly initialized and will not get updated. The second neural network \(f_{encode}(s)\), encodes \(s_{t}\mapsto\hat{g}_{t}\). The training loss and intrinsic reward have the form \[L_{encode_{t}}=\alpha_{encode}MSE(g_{t+1},\hat{g_{t+1}}) \tag{10}\] \[r_{intrinsic_{t}}=\beta L_{inverse_{t}}\] where \(\alpha_{encode}\) and \(\beta\) are coefficients. This method encourages the agent to explore in states where \(f_{encode}\) cannot accurately predict the random encoding. ## 3 Information Content-based Exploration (ICE) ### Motivation While interacting with the environment, we view the t-step state sequence \(s_{1},...,s_{t}\) as a 1-sample Monte-Carlo approximation to the t-step state distribution \(\hat{d}_{t,\pi}(s)\approx d_{t,\pi}(s)\) induced by the agent \(\pi\). Supposing that our state space \(\mathcal{S}\) is isomorphic to \(\textbf{K}^{D}:=\{1,2,..,K\}^{D}\), then we can compute 1 the empirical entropy of the t-step state sequence \(\hat{d}_{t,\pi}(s)\): Footnote 1: An efficient vectorized implementation can be found in the supplemental code. 
\[\mathbb{H}(\hat{d}_{t,\pi}(s))=-\sum_{x\in\textbf{K}^{D}}\hat{p}(x)log\hat{p}(x) \tag{11}\] in \(\mathcal{O}(t\cdot K^{D})\). For computational reasons, we view each state trajectory as composed of \(D\)_independent_ state elements (e.g., frames of a video made up of \(D\) pixels) which reduces the complexity to \(\mathcal{O}(t\cdot K\cdot D)\): \[\sum_{x\in\textbf{K}^{D}}\hat{p}(x)log\hat{p}(x)\approx\sum_{d\in D}\sum_{k \in\textbf{K}}\hat{p}(x_{d}=k)log\hat{p}(x_{d}=k) \tag{12}\] We provide an intrinsic reward to the agent based on the _relative improvement_ of this measure across time steps. Thus, we optimize for the approximate derivative of state entropy. We view several benefits to using the relative improvement measure: * Intrinsic stochasticity (e.g. persistent white noise) has high Shannon entropy but low fundamental information content (since the data is non-compressible). The relative improvement measure nullifies such entropy sources. * Let \(\delta_{t}\geq 0\) be the sub-additivity gap introduced by assuming our state-sequence \(s_{1},...,s_{t}\) is independent. If \(\delta_{t}\) is relatively constant across time then the relative improvement measure cancels out the overestimation of state entropy introduced by our independence assumption. In 3.2 we give an efficient algorithm for computing the intrinsic ICE reward, then, in 3.4 we discuss a principled approach to state discretization, which enables us to compute ICE in continuous state spaces. ### Algorithm Let us define a _trajectory_, \(X_{t}\), to be the list of states collected from the same episode up to step \(t\), \(X_{t}=[s_{0},s_{1},...,s_{t}]\), where each state \(s_{t}\) is a collection of _state elements_, \(s_{t}=[s_{t}^{0},s_{t}^{1},...,s_{t}^{D-1}]\in\mathbb{R}^{D}\). To collect the total information in a trajectory, we first calculate the information content received over each _state element_ in a trajectory. Consider an arbitrary state element \(d\) in a trajectory up to step \(t\), \(X_{t}^{d}=[s_{0}^{d},s_{1}^{d},...,s_{t}^{d}]\). We can calculate the count for each unique value within \(X_{t}^{d}\). \[q_{t}^{d}=CountUnique(X_{t}^{d}) \tag{13}\] Next, we can calculate the probability of each unique value occurrence within \(X_{t}^{d}\). \[p_{t}^{d}=\frac{q_{t}^{d}}{t+1} \tag{14}\] For example, if \(X_{t=4}^{d}=[a,a,b,a,c]\), then \(q_{t=4}^{d}=[3,1,1]\) and \(p_{t=4}^{d}=[3/5,1/5,1/5]\). We can then obtain the entropy for \(X_{t}^{d}\) by applying Shannon's entropy to \(p_{t}^{d}\). \[H_{t}^{d}=Entropy(p_{t}^{d})=-\sum_{k}p_{t}^{d}[k]\log_{2}p_{t}^{d}[k] \tag{15}\] \(H_{t}^{d}\) measures the information content of the \(d^{th}\) state element trajectory. The information content of the full trajectory can be calculated by summing over all state elements. \[H_{t}=\sum_{d}H_{t}^{d} \tag{16}\] We then denote \(r_{t}^{intrinsic}\) as \[r_{t}^{intrinsic}=H_{t}-H_{t-1} \tag{17}\] Intuitively, \(r_{t}^{intrinsic}\) represents the amount of additional information \(s_{t}\) brings to the trajectory \([s_{0},s_{1},...,s_{t-1}]\). Note that \(\sum r_{t}^{intrinsic}=H_{T}\), which is the total information content of the entire trajectory. A numerical example of \(r_{t}^{intrinsic}\) calculation can be found in Appendix A.3. This formulation of the \(r^{intrinsic}\) requires no estimation, as it can be calculated directly from the trajectory of states. By maintaining count arrays of size \(D\times K\), we can efficiently implement Alg. 1 using vectorized CPU or GPU operations. 
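A minimal vectorized numpy sketch of this count-and-entropy update is given below. It is only an illustration of Eqs. 13-17, not the authors' supplementary implementation; states are assumed to be integer arrays of length \(D\) with values in \(\{0,...,K-1\}\).

```python
import numpy as np

class ICEReward:
    """Running information content of a state trajectory (Eqs. 13-17)."""

    def __init__(self, D, K):
        self.counts = np.zeros((D, K))   # q: per-element value counts
        self.t = 0                       # number of states seen so far
        self.H_last = 0.0

    def _entropy(self):
        p = self.counts / self.t                       # Eq. 14, shape (D, K)
        with np.errstate(divide='ignore', invalid='ignore'):
            terms = np.where(p > 0, p * np.log2(p), 0.0)
        return float(-terms.sum())                     # Eqs. 15-16, summed over all D elements

    def step(self, state):
        """Add one state (int array of length D) and return the intrinsic reward (Eq. 17)."""
        self.counts[np.arange(len(state)), state] += 1
        self.t += 1
        H = self._entropy()
        r_int = H - self.H_last
        self.H_last = H
        return r_int

# usage: ice = ICEReward(D=40 * 40, K=256); r_intrinsic = ice.step(obs.flatten())
```

Each `step` call adds one state to the running counts and returns the marginal information gain, matching the per-step intrinsic reward of Alg. 1.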
See the supplementary Material for our implementation. ``` Input: Model \(M\) repeat Input: state \(s_{0}\), \(H_{last}\gets 0\), \(t\gets 0\), \(done\gets False\) Input: unique count dictionary \(q\)\(q\gets Update(q,s_{0})\), from Eq.13 \(H_{last}\gets CalculateICE(q)\), from Eq.16 while not \(done\)do \(a_{t}\gets M(s_{t})\) \(s_{t+1},r_{t}^{extrinisc},done\gets env.step(a_{t})\) \(q\gets Update(q,s_{t+1})\) \(H_{current}\gets CalculateICE(q)\) \(r_{t}^{intrinsic}\gets H_{current}-H_{last}\) \(r_{t}\gets r_{t}^{extrinisc}+\beta r_{t}^{intrinsic}\) \(H_{last}\gets H_{current}\) \(t\gets t+1\) endwhile Update model \(M\) until Model \(M\) Converge ``` **Algorithm 1** Information Content-based Intrinsic Reward ### State Exploration vs Action Exploration ICE is a state space exploration algorithm and assumes the environment to be an observable Markov Decision Process. This assumption is necessary because ICE measures the information content of the observable state trajectory. ICE encourages the agent to efficiently explore the trajectory containing the most information content at the cost of disincentivizing the agent from pursuing trajectories Figure 3: A trajectory is the collection of all states traversed by the agent in sequence. State element trajectory is the collection of elements from the same position in all states in a trajectory. with low information. Therefore, ICE offers no guarantee that every state in the decision process will be visited. On the other hand, action space exploration (Eq.1) encourages the agent to take actions based on uniform distribution. Albeit inefficient, it does guarantee that the agent will eventually visit all states in an environment. State space exploration is fundamentally different than action space exploration. They are complements to each other. In the case where the agent is driven only by the action of space exploration, the agent can reach all reachable states, but the expected amount of time required can be very long. On the other hand, an agent driven only by state space exploration can reach more distinct states in significantly less amount of time, but it is no longer guaranteed to reach all reachable states. The probability of reaching a reward is proportional to the percent of all possible states reached. As shown in Fig.4, The agent with high state exploration and low action exploration can potentially reach rewards in a shorter amount of time but with a possibility that it may never reach all the rewards. Therefore, it is important to balance the weighting of these two types of explorations in order to explore optimally, illustrated in (Fig.4) and 4.3. ### ICE in a Learned Latent Space A trajectory's information content depends on how we choose to discretize the state space; naturally, different discretization methods will change the absolute information content. Selecting the appropriate discretization is only possible with prior knowledge of the environment. We propose to address this by using an auto-encoder to learn a compressed, low-dimensional discrete representation upon which information is maximized. Computing ICE on the latent manifold decouples the ICE formulation from the dimensionality of the observation space. The information bottleneck principle (Tishby et al., 1999) allows us to trade between _sufficiency_ and _minimality_ in our discrete representation. 
Since the KL divergence is parameterization-invariant, we can be sure that insofar as our auto-encoder learns an invertible representation of the state space, computing ICE in latent space should lead to desirable exploration behaviour as in discrete settings. We validate this procedure in 4.4. #### 3.4.1 Motivation We put \(\mathcal{D}=\{d:\mathcal{S}\rightarrow\mathbb{R}_{\geq 0}\}\) to be the set of distributions over the state space, and we consider functionals \(\mathcal{R}(\cdot)\) that describe how desirable a given state distribution is. If we take \(\mathcal{R}(d)=\mathbb{H}(d)\) on a discrete space we recover the formulation of 3.2. More generally, if we have some target distribution \(d^{*}\), we can consider the reverse KL: \[\mathcal{R}_{d^{*}}(\cdot)=D_{KL}(\cdot,d^{*}) \tag{18}\] We consider the family of measurable surjective maps: \(\mathcal{F}:\{\mathcal{S}\rightarrow\mathcal{Z}\}\), where \(\mathcal{Z}\) is a discrete space of dimensions \(k\leq dim(\mathcal{S})\). The inverse set map partitions \(\mathcal{S}\), giving rise to the following relation: \[d_{1}\sim^{f}d_{2}\iff f_{*}(d_{1})=f_{*}(d_{2}) \tag{19}\] where \(f_{*}()\) is the push-forward measure on \(\mathcal{Z}\). We let: \[[\mu]_{f}=\{d\in\mathcal{D}:d\sim^{f}\mu\}=f^{-1}(Unif_{z}) \tag{20}\] denote the subset of state distributions whose projection onto \(\mathcal{Z}\) by \(f\) result in the uniform distribution. Then we put: \[\mathcal{R}_{f}(\cdot)=D_{KL}(\cdot,[\mu]_{f})=\mathbb{H}[f(\cdot)] \tag{21}\] In words, we think about \(f\) as a filter that represents various invariants on our state space \(\mathcal{S}\). All state distribution that map uniformly on \(\mathcal{Z}\) by \(f\) are considered equally desirable, and we reward trajectories with high information content from the _lens_ of \(f\). #### 3.4.2 Learning the latent Representation We let \(\mathcal{Z}\) arise as a discrete bottleneck \(\mathcal{S}\rightarrow^{e}\mathcal{Z}\rightarrow^{d}\mathcal{S}^{\prime}\) from aiming to reconstruct state as best as possible, thereby Figure 4: As we increase the action exploration and decrease the state exploration, it takes significantly longer to reach the same amount of distinct states (blue line). In the case where we decide to only apply state exploration, the agent will be able to efficiently follow the trajectory with very rich information content, at the cost of never visiting states in the low information content trajectories. As a result, such an agent will almost never reach all the states (red line). inducing a representation \(\mathcal{Z}\) that maximizes mutual information with \(\mathcal{S}\). A smoothly parameterizes encoder (e.g. neural network) will ensure that similar states get mapped to similar codes. Hence, maximizing information content on \(\mathcal{Z}\) will further encourages diverse trajectories on \(\mathcal{S}\). To discretize the latent space, we use _locality sensitive hashing_ (LSH) (Zhao et al., 2014). LSH is helpful because state dimensions that are highly coupled (from an information-theoretic view), will be hashed to the same latent code by our auto-encoder whose projections to a lower dimension filter out small perturbations. On the other hand, a novel experiences will project to different latent subspace and thus will be prescribed a unique hash code. This approach decouples the ICE procedure from the state-space dimensionality, and can be directly applied to continuous spaces. 
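A minimal sketch of such a SimHash-style discretization is shown below; the dense latent size D and hash length k are illustrative choices, and the full pipeline with the auto-encoder and injected noise is described next.

```python
import numpy as np

rng = np.random.default_rng(0)

D, k = 128, 16                      # dense latent size and hash length (illustrative)
A = rng.standard_normal((k, D))     # fixed Gaussian projection matrix

def simhash(latent):
    """Map a dense latent vector to a k-bit binary code: sign(A @ latent)."""
    return tuple(np.sign(A @ latent).astype(int))

# Nearby latents tend to share a code; distant ones usually do not.
z1 = rng.standard_normal(D)
z2 = z1 + 0.01 * rng.standard_normal(D)
print(simhash(z1) == simhash(z2))   # usually True for such a small perturbation
```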
We follow the discretization pipeline similar to (Tang et al., 2017), using a _Simhash_(Charikar, 2002) scheme with a auto-encoder. Namely, for an encoder \(e:\mathcal{S}\rightarrow\mathbb{R}^{D}\), a decoder \(d:\mathbb{R}^{D}\rightarrow\mathcal{S}\), fixed stochastic matrix \(A\in\mathbb{R}^{k\times D}\) with \(A_{ij}\sim\mathcal{N}(0,1)\), we retrieve the discrete latent code for state \(s\) according to: \[z(s)=sgn(A\sigma(\mathcal{U}_{(-a,a)}+e(s)))\in\{-1,1\}^{k} \tag{22}\] where \(\sigma(\cdot)\) denotes element-wise application of the sigmoid function, and \(\mathcal{U}_{(-a,a)}\) is the injection of uniform noise to force the latent space to behave discretely (Hubara et al., 2016), (Maddison et al., 2016). The encoder and decoder are parameterized by convolution neural networks, jointly trained with the reinforcement learning policy on a lagged updated schedule to ensure the latent codes are stable while retaining the ability to adjust based on exploratory progress of the agent. The auto-encoder optimizes a log likelihood term and an auxiliary loss that ensures unused latent bits take on an arbitrary binary value, preventing spurious latent code fluctuations (Tang et al., 2017): \[\mathcal{L}(\{s_{i}\}_{i=1}^{N})=-\frac{1}{N}\sum_{i=1}^{N}[logp(s_{i})-\frac{ \lambda}{K}\sum_{i=1}^{D}g(s_{i})] \tag{23}\] \[g(s_{i}):=min\{(1-e(s_{i}),e(s_{i})\} \tag{24}\] The algorithm for running ICE on a learned latent space is given in Alg.2. We note that using learned hash codes involves minimal changes to the base ICE algorithm, requiring only that (1) states pass through the encoder and _SimHash_ prior to computing ICE reward, and (2) a buffer of approximately reconstructed state trajectories is stored to periodically updating the auto-encoder based on Eq.23. ``` Input: Model \(M\), Encoder \(e:\mathcal{S}\rightarrow\mathbb{R}^{D}\), Decoder \(d:\mathbb{R}^{D}\rightarrow\mathcal{S}\), Fixed Gaussian matrix \(A\in\mathbb{R}^{k\times D}\). Buffer \(\mathcal{B}\) repeat Input: state \(s_{0}\), \(H_{last}\gets 0\), \(t\gets 0\), \(done\gets False\) Input: unique count dictionary \(q\)\(z_{0}\gets Encode(s_{0},e,A)\), from Eq.22 \(\hat{s_{0}}\gets Decode(s_{0},z_{0},d)\) \(\mathcal{B}\leftarrow(s_{0},z_{0},\hat{s_{0}})\) add to buffer \(q\gets Update(q,z_{0})\), from Eq.13 \(H_{last}\gets CalculateICE(q)\), from Eq.16 while not\(done\)do \(a_{t}\gets M(s_{t})\) \(s_{t+1},r_{t}^{extrinsic},done\gets env.step(a_{t})\) \(z_{t+1}\gets Encode(s_{t+1},e,A)\), from Eq.22 \(s_{t+1}\gets Decode(s_{t+1},z_{t+1},d)\) \(\mathcal{B}\leftarrow(s_{t+1},z_{t+1},s_{t+1})\) add to buffer \(q\gets Update(q,z_{t+1})\) \(H_{current}\gets CalculateICE(q)\) \(r_{t}^{intrinsic}\gets H_{current}-H_{last}\) \(r_{t}\gets r_{t}^{extrinsic}+\beta r_{t}^{intrinsic}\) \(H_{last}\gets H_{current}\) \(t\gets t+1\) endwhile Update model \(M\) if iteration mod \(ae_{update}=0\)then update encoder \(e\) and decoder \(d\) with data from buffer \(\mathcal{B}\) according to Eq. 23 endif until Model \(M\) Converge ``` **Algorithm 2** Information Content-based Intrinsic Reward in Learned Latent Space ## 4 Experiment We compare information content based intrinsic reward with several other methods from the literature described in Section 2. The selected environments include Grid-World, along with a group of sparse reward Atari games. All of the environments in this experiment are reduced to \(40\times 40\) dimensionality and discretized using 255 gray scale range. 
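Such a preprocessing step can be sketched as follows; OpenCV is used here only for illustration, and the exact resizing and interpolation choices of the original experiments are assumptions.

```python
import cv2
import numpy as np

def preprocess(frame):
    """Convert an RGB game frame to a 40x40 array of integer gray levels in [0, 255]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    small = cv2.resize(gray, (40, 40), interpolation=cv2.INTER_AREA)
    return small.astype(np.uint8)

# example: obs = preprocess(rgb_frame)  # yields the discrete state fed to the ICE counts
```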
We use a single convolution neural network with LSTM layers and seperate value and policy heads to parameterize the actor and critic. Details are given in A.5. The unique count matrix \(Q\) (Eq.13) is preset to be \(Zeros(40,40,C)\) at the beginning of each trajectory, where \(C\) is the number of unique values for each pixel. The algorithm will dynamically update the matrix \(P\) and then calculate the trajectory information content after each step. ### Grid-World The state space for grid world is a \(40\times 40\) grid (Fig.8 left). The agent starts at the (0,0) position and it can take action from \(\{up,down,left,right\}\). The grid visited by the agent is labeled as 1, otherwise 0, the extrinsic reward by the environment is always 0, and the environment terminates after 400 steps. As shown in (Fig.5), the agent is able to explore up to 300 distinct states in every trajectory with ICE enabled, which is significantly more than the case when ICE is disabled. The trajectory information content also increases as the number of distinct states increases. ### Atari We compare ICE with A3C, CDL, and RND on several environments from Atari. Firstly we compare the methods of Pong and Breakout, Fig.6. The performance of ICE is slightly worse than A3C. This is because these two environments both have very rich reward systems. They do not necessarily require an exploration signal in order to solve. In these environments, the intrinsic rewards are noise rather than a signal to the model. Nonetheless, ICE will not degrade the performance by a lot as long as we keep it to a relatively small magnitude. As a result, ICE is able to solve both environments efficiently. The second set of environments is Pong with Sparse Reward, Super Mario, and Montezuma, Fig.7. These are challenging for RL algorithms due to their extremely sparse reward system. In Pong with Sparse Reward, all of the rewards were accumulated and provided to the model at the last step. In Super Mario, the agent gets no reward during learning and gets properly rewarded during evaluation (see A.4 for more details). In Montezuma, the agent will only get a positive reward if the character reaches a key. These environments require the agent to precisely follow an action trajectory in order to solve the problem. In all cases, the ICE agent outperformed the base A3C agent. The learning for ICE is also very efficient as focuses on efficiently exploring the state space. ICE also displays interesting behaviors in the Montezuma Revenge environment. In the early stages, the agent focuses on exploring distinct states as it receives no rewards. As soon as it lands on a key, it quickly shifts its focus towards maximizing the extrinsic reward. Simultaneously, the intrinsic reward quickly degrades. Shortly after, the agent tries to regain the intrinsic reward while still being able to reach the key. As shown in the right column of Fig.6,7, ICE is very correlated with the environment's reward system, because Atari games such as Pong and Breakout have deterministic and observable state spaces, which gives a clear measure of information content using ICE. In general, ICE is a very robust exploration method because it is not very sensitive to hyper-parameters and is not prediction-based. ### Grid-World with Wall This is a special case of the Grid-World to illustrate the idea of 3.3. The basic setup is the same as described in 4.1, with differences shown in Fig.8. The block of the wall marked in black represents states the agent cannot visit. 
On top of this, the grids marked in blue are unobservable, as they will always be filled with 0, and the grids marked in yellow are standard (turns to 1 if visited, else 0). The agent will get a positive reward and terminate the game if it has reached the green grid. The agent with ICE is efficiently exploring yellow grids, but it is having trouble exploring blue grids and cannot land on the green grid. This is because the agent quickly converged to take the path in yellow grids, as it can generate higher information content compared to a path in blue grids. On the other hand, the agent without ICE is able Figure 5: ICE motivates agent to reach a significantly higher number of distinct states which is correlated with an increase in information content. Measured in a no-reward grid-world environment. Figure 6: Performance on simple Atari Environments, cumulative reward (left), information content (right) to land at the green grid after a while of random search (Fig.9). This is a limitation of ICE that can be overcome by simply including action exploration. ### ICE in a Learned Latent Space We evaluate how well ICE performs when computed on a learned latent space, following the procedure of 3.4.2, on several environments including Pong, Breakout, Montezuma's revenge and Starpilot. The encoder consists of 3 convolution blocks followed by a linearity outputting a \(128\)-dimensional dense representation which is hashed into \(k=16\)-dimensional latent codes. The decoder consists of 3 deconvolution blocks. We use \(\lambda=0.5\) for the auxiliary auto-encoder loss, and update the auto-encoder every 3 updates of the base policy. This architecture is inspired by the representation learning stack from (Tang et al., 2017), which aim to achieve high reconstruction accuracy to help stabilize the learned latent space, and thereby make our ICE reward more consistent. The extrinsic reward learning curves are shown in 3.4.2. We observe that working in a compressed representation can be beneficial, even for state spaces that are already discrete (e.g. images). In Pong and Starpilot, ICE in latent space improves the speed of learning, suggesting that the bottlenecked representation helps regularize against sporadic noise in our state trajectories. In Montezuma's revenge, optimizing for ICE in latent space exhibits asymptotically better extrinsic rewards, suggesting that maximizing information content on the compressed representation is better able to capture semantically meaningful subspaces worth exploring. We believe that these initial results are promising, and we are motivated to further experiment with ICE rewards on various representation spaces. ## 5 Conclusion We have introduced a new intrinsic reward based on the information content of trajectories measured using Shannon's Entropy. Our method is amenable to vectorized implementation and has shown robustness in various observable MDP's. ICE motivates the agent to explore diversified trajectories, thereby increasing its sample-efficiency in sparse environ Figure 8: Setup of grid world (left) and grid world with wall (right). The agent will always start at the top-left corner. Yellow grids are observable, turn to 1 when visited. Blue grids are non-observable, always filled with 0. Black grids are walls that agents cannot step into. Green grid gives a positive reward to the agent. Figure 7: Performance on Sparse Reward Environments, cumulative reward (left), information content (right). 
ments where the probability of receiving a reward is directly proportional to the number of distinct visited states. We discussed a discretization scheme (Sec. 3.4) based on locality-sensitive hashing that maximizes information content on a compressed latent space, generalizing the ICE reward to continuous state spaces. We also examined the limitations of motivating agents to traverse trajectories with a large number of distinct states in Sec. 4.3. Figure 9: The agent with ICE explores the yellow grids efficiently but is not able to reach the green grid. The agent without ICE finds the green grid after a few rounds of random search and quickly converges to this path. ## 6 Future Work The idea of this work is loosely related to depth-first search (DFS). By analogy, action-state exploration is a cyclic breadth-first search (BFS). We believe there is an opportunity to improve ICE by dynamically shifting between DFS and BFS behaviour as the agent interacts with the environment. We also posit that changing the formulation to incorporate information _across_ multiple episodes can yield better results. For example, we can maximize the information content of the current trajectory while minimizing its mutual information with previous experience; we leave this to future work. Finally, we have observed promising results using ICE on a learned latent representation. In future work, we wish to further validate this process by evaluating it with different representations such as the _successor representation_ (Machado et al., 2023), and by rigorously testing sensitivity to the hash dimension on continuous spaces. ## Software and Data We include an A3C and a PPO implementation of ICE in the Supplementary Material.
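To illustrate the locality-sensitive-hashing discretization of Sec. 3.4 used in the latent-space experiments of Sec. 4.4, the following is a minimal SimHash-style sketch in the spirit of Tang et al. (2017). The encoder/decoder, the auxiliary reconstruction loss, and the exact information-content formula are omitted; `dim=128` and `k=16` mirror the reported setup, while everything else (names, the entropy-over-codes signal) is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

class SimHashCodes:
    """Map dense latent vectors to k-bit binary codes via signs of random projections."""
    def __init__(self, dim=128, k=16, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((k, dim))            # fixed random projection matrix

    def code(self, z):
        return tuple((self.A @ z > 0).astype(np.int8))    # k-bit hash of latent z

def latent_information_content(code_counts):
    """Shannon entropy (in bits) of the empirical distribution over visited hash codes."""
    counts = np.array(list(code_counts.values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# usage: accumulate hash codes of encoder outputs along one trajectory
hasher, code_counts = SimHashCodes(dim=128, k=16), {}
for _ in range(400):
    z = np.random.standard_normal(128)                    # stand-in for the encoder output
    c = hasher.code(z)
    code_counts[c] = code_counts.get(c, 0) + 1
ic = latent_information_content(code_counts)              # intrinsic signal on hashed codes
```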
2310.09444
Tackling Heterogeneity in Medical Federated learning via Vision Transformers
Optimization-based regularization methods have been effective in addressing the challenges posed by data heterogeneity in medical federated learning, particularly in improving the performance of underrepresented clients. However, these methods often lead to lower overall model accuracy and slower convergence rates. In this paper, we demonstrate that using Vision Transformers can substantially improve the performance of underrepresented clients without a significant trade-off in overall accuracy. This improvement is attributed to the Vision transformer's ability to capture long-range dependencies within the input data.
Erfan Darzi, Yiqing Shen, Yangming Ou, Nanna M. Sijtsema, P. M. A van Ooijen
2023-10-13T23:36:48Z
http://arxiv.org/abs/2310.09444v2
# Tackling Heterogeneity in Medical Federated learning via Aligning Vision Transformers ###### Abstract Optimization-based regularization methods have been effective in addressing the challenges posed by data heterogeneity in medical federated learning, particularly in improving the performance of underrepresented clients. However, these methods often lead to lower overall model accuracy and slower convergence rates. In this paper, we demonstrate that using Vision Transformers can substantially improve the performance of underrepresented clients without a significant trade-off in overall accuracy. This improvement is attributed to the Vision transformer's ability to capture long-range dependencies within the input data. ## 1 Introduction Optimization-based methods have emerged as potent solutions to tackle data heterogeneity in federated setting. These methods are effective at mitigating discrepancies arising from variations in data sizes, sample numbers, or distributions across different client nodes. Despite the general effectiveness of these optimization methods, challenges specific to medical imaging in federated learning settings remain formidable. In the realm of medical imaging, heterogeneity can manifest in a myriad of ways. These include variations in imaging modalities, different prevalence rates of specific diseases, and distinct patterns in medical datasets among hospitals. Such variations culminate in a setting of non-identical and independently distributed (non-i.i.d.) data across client nodes. This statistical heterogeneity has proven to significantly impede federated learning process. For example, heterogeneous data environments are especially challenging in specialized applications like diabetic retinopathy [18], pancreas segmentation [28], and prostate cancer classification [8], as well as in broader contexts like bone age prediction and real-world federated brain tumor segmentation [35][33]. Such heterogeneous distributions often result in reduced diagnostic accuracy and introduce fairness concerns, particularly disadvantage underrepresented hospitals. Addressing the complexities introduced by data heterogeneity is thus critical for the successful deployment of federated learning models in healthcare applications. Existing federated learning methods, most notably Federated Averaging (FedAvg), face significant limitations in effectively handling heterogeneous settings [35]. This has prompted various studies to explore alternative solutions that typically employ optimization techniques, such as modified training heuristics or objective functions. Techniques like SplitAvg [35], adaptive learning [32], hierarchical clustering [5], and proximal learning [11] offer promising avenues but come with their own sets of challenges. These challenges include substantial computational complexity, the potential for overfitting due to multi-layer optimization, and constraints to specific data types. Such limitations restrict their effectiveness across a broad range of medical imaging applications. For instance, a recently proposed top-performing algorithm employs general clustering optimization in every federated round. However, it only achieves a marginal 3% improvement over the baseline performances of FedAvg and FedAP [32], while demanding an order of magnitude more computational resources. This significantly complicates its practical applicability. 
These challenges are commonly attributed to the inherent difficulties associated with the heterogeneity problem, casting doubts on the practical utility of these models. This raises a fundamental question: do we really need to pay such a high price to mitigate the issues arising from heterogeneity? #### Our Contributions We introduce the Federated Multi-Head Alignment (FedMHA) approach. This method suggests that focusing on the multi-head attention mechanism in Vision Transformers as the alignment objective can lead to improved accuracy and fairness in heterogeneous settings. The attention model's ability to handle long-range, high-dimensional distributions across diverse clients underpins this improvement. The multi-head attention mechanism's intrinsic capabilities mean that aligning it can directly affect the representation of data across clients, perhaps more than other components. This study is driven by two main objectives. First, we address the challenges of fairness in the context of data heterogeneity in federated learning applied to medical imaging. Second, we aim to design a federated learning algorithm that achieves high accuracy levels without resorting to overly intricate optimization design to address the issue. Instead, we consider harnessing model architecture components to address heterogeneity. Based on these objectives, our key contributions are: * **Improved Fairness:** Aligning the multi-head attention mechanism in Vision Transformers between global and local models offers potential solutions to challenges posed by data heterogeneity, especially for underrepresented datasets. * **Enhanced Accuracy:** Our approach has consistently demonstrated superior accuracy compared to other contemporary methods. We have evaluated our model against various federated learning techniques across different levels of heterogeneity. This evaluation provides a reference for future research in federated learning for medical imaging. ## 2 Problem setting and background **Federated learning and heterogeneity** Federated learning has emerged as a decentralized approach for preserving data privacy and confidentiality while enabling models to learn from multiple data sources [34]. In federated learning, each client owns a local private dataset \(D_{i}\) drawn from distribution \(\mathbb{P}_{i}(x,y)\), where \(x\) and \(y\) denote the input features and corresponding class labels, respectively. Usually, clients share a model \(\mathcal{F}(\omega;x)\) with the same architecture and hyperparameters. This model is parameterized by learnable weights \(\omega\) and input features \(x\). The objective function of FedAvg [17] is: \[\arg\min_{\omega}\sum_{i=1}^{m}\frac{|D_{i}|}{N}\mathcal{L}_{S}(\mathcal{F}( \omega;x),y), \tag{1}\] where \(\omega\) is the global model's parameters, \(m\) denotes the number of clients, \(N\) is the total number of instances over all clients, \(\mathcal{F}\) is the shared model, and \(\mathcal{L}_{S}\) is a general definition of any supervised learning task (e.g., a cross-entropy loss). In a real-world FL environment, each client may represent a mobile phone with a specific user behavior pattern or a sensor deployed in a particular location, leading to statistical and/or model heterogeneous environment. In the statistical heterogeneity setting, \(\mathbb{P}_{i}\) varies across clients, indicating heterogeneous input/output space for \(x\) and \(y\). For example, \(\mathbb{P}_{i}\) on different clients can be the data distributions over different subsets of classes. 
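As a concrete reference for the FedAvg objective in Eq. (1), the following is a minimal PyTorch-style sketch of one communication round with weighted averaging of client updates. Model definitions, the local training routine, and client sampling are omitted, floating-point parameters are assumed, and all names are illustrative rather than taken from any particular implementation.

```python
import copy
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client state dicts with weights |D_i| / N, as in Eq. (1)."""
    total = float(sum(client_sizes))
    return {
        key: sum((n / total) * state[key].float()
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

def fedavg_round(global_model, client_loaders, local_train):
    """One communication round: broadcast, local training, weighted aggregation."""
    states, sizes = [], []
    for loader in client_loaders:
        local_model = copy.deepcopy(global_model)      # client starts from the global weights
        local_train(local_model, loader)               # e.g. a few epochs of supervised training
        states.append(local_model.state_dict())
        sizes.append(len(loader.dataset))              # |D_i|
    global_model.load_state_dict(fedavg_aggregate(states, sizes))
    return global_model
```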
In the model heterogeneity setting, \(\mathcal{F}_{i}\) varies across clients, indicating different model architectures and hyperparameters. For the \(i\)-th client, the training procedure is to minimize the loss as defined below: \[\arg\min_{\omega_{1},\omega_{2},\ldots,\omega_{m}}\sum_{i=1}^{m}\frac{|D_{i} |}{N}\mathcal{L}_{S}(\mathcal{F}_{i}(\omega_{i};x),y). \tag{2}\] Most existing methods cannot handle the heterogeneous settings above well. In particular, the fact that \(\mathcal{F}_{i}\) has a different model architecture would cause \(\omega_{i}\) to have a different format and size. Thus, the global model's parameter \(\omega\) cannot be optimized by averaging \(\omega_{i}\). **Regularization in federated learning** Regularization is often employed in optimization tasks to mitigate the risk of overfitting by incorporating a penalty term to the loss function. In the context of federated learning, this is particularly useful for controlling the complexity of the global model. One popular approach is FedProx [11], which extends the FedAvg algorithm by appending a proximal term to the local optimization objective. Specifically, each client \(i\) aims to minimize: \[\arg\min_{\omega_{i}}\mathcal{L}_{S}(\mathcal{F}_{i}(\omega_{i};x),y)+\frac{ \mu}{2}\left\|\omega_{i}-\omega\right\|^{2} \tag{3}\] Here, \(\mathcal{L}_{S}(\mathcal{F}_{i}(\omega_{i};x),y)\) represents the local loss for client \(i\) (as defined in Eq. 2), \(\omega\) are the global model parameters, \(\omega_{i}\) are the local model parameters for client \(i\), and \(\mu\) is the regularization parameter. The server updates the global model \(\omega\) in a manner similar to FedAvg: \[\omega=\sum_{i=1}^{m}\frac{|D_{i}|}{N}\omega_{i} \tag{4}\] Various other techniques like FOLB [19], MOON [10], and FedSplit [21] also leverage regularization to ensure that local models do not deviate significantly from the global model. However, regularization based methods like FedProx limit the global representation of models, a crucial aspect in federated learning, particularly when dealing with non-i.i.d data. This limitation stems from the predominant evaluation of these models in environments emphasizing localized structures and spatial hierarchies, primarily due to the reliance on convolution-based models. Such constraints in addressing non-i.i.d data distributions lead to the exploration of more personalized FL solutions, such as FedBN [14], FedPer [2], and pFedMe [3]. FL, with its privacy-preserving capabilities, has found utility in numerous medical tasks[9; 30; 20; 6; 23]. Notable applications of FL in medical imaging are seen in multi-institutional brain tumor segmentation[12; 27], breast density classification[24],MRI reconstruction[6] and fMRI analysis[15]. Challenges presented by non-i.i.d. data in medical imaging, however, remain unresolved[23], as Non-i.i.d. data largely impacts FedAvg algorithm's convergence speed[13; 25]. **Vision Transformers** Dosovitskiy et al.'s Vision Transformers [4] have set benchmarks in computer vision and medical image analysis [16; 7; 26]. Swin Transformers [16] enhance Vision transformers by adopting hierarchical architecture with patch merging and relative position embedding. In the medical field, Vision transformers have been intergrated in the U-shaped CNN architectures [7; 36]. Yet, both UNETR and nnFormer, despite their respective merits, have computational limitations due to the constraints of fixed token size and limited receptive field of CNN layers, respectively. 
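Before moving to the proposed method, the FedProx local objective of Eq. (3) can be sketched as follows. This is a minimal illustration assuming a cross-entropy task loss and floating-point parameters; the default \(\mu=0.5\) mirrors the value used later in Sec. 4.2 but is otherwise arbitrary.

```python
import torch

def fedprox_local_loss(local_model, global_params, batch, mu=0.5,
                       criterion=torch.nn.CrossEntropyLoss()):
    """Local FedProx objective of Eq. (3): task loss + (mu / 2) * ||w_i - w||^2."""
    x, y = batch
    task_loss = criterion(local_model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()           # squared distance to the global weights
               for p, g in zip(local_model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox

# usage: global_params = [p.detach().clone() for p in global_model.parameters()]
```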
## 3 Global-Local Encoder Alignment ### Image representation in Vision Transformers The Vision Transformer [29; 4] is a prominent architecture for vision tasks that primarily relies on Multi-Head Self-Attention (MHSA) to model long-range dependencies among input features. Given an input tensor \(\mathbf{X}\in\mathbb{R}^{H\times W\times C}\) where \(H\), \(W\), and \(C\) are the height, width, and the feature dimension, we first reshape \(\mathbf{X}\) and define the query \(\mathbf{Q}\), key \(\mathbf{K}\), and value \(\mathbf{V}\) as follows: \[\mathbf{X}\in\mathbb{R}^{H\times W\times C} \rightarrow\mathbf{X}\in\mathbb{R}^{(H\times W)\times C}, \tag{5}\] \[\mathbf{Q} =\mathbf{X}\mathbf{W}^{q}, \mathbf{K} =\mathbf{X}\mathbf{W}^{k}, \mathbf{V} =\mathbf{X}\mathbf{W}^{v},\] where \(\mathbf{W}^{q}\in\mathbb{R}^{C\times C}\), \(\mathbf{W}^{k}\in\mathbb{R}^{C\times C}\), and \(\mathbf{W}^{v}\in\mathbb{R}^{C\times C}\) represent the linear transformation weight matrices, which are trainable. Assuming the input and output share the same dimensions, the traditional MHSA can be expressed as: \[\mathbf{A}=\mathrm{Softmax}(\mathbf{Q}\mathbf{K}^{\mathrm{T}}/\sqrt{d}) \mathbf{V}, \tag{6}\] in which \(\sqrt{d}\) means an approximate normalization, and the \(\mathrm{Softmax}\) function is applied to the rows of the matrix. We simplify the discussion by omitting the concept of multiple heads. In 6, the matrix product of \(\mathbf{Q}\mathbf{K}^{\mathrm{T}}\) computes the pairwise similarity between tokens. Then, each new token is derived from a combination of all tokens based on their similarity. Following the computation of MHSA, a residual connection is added to facilitate optimization, as shown below: \[\mathbf{X}\in\mathbb{R}^{(H\times W)\times C} \rightarrow\mathbf{X}\in\mathbb{R}^{H\times W\times C}, \tag{7}\] \[\mathbf{A}^{\prime} =\mathbf{A}\mathbf{W}^{p}+\mathbf{X},\] in which \(\mathbf{W}^{p}\in\mathbb{R}^{C\times C}\) is a trainable weight matrix for feature projection. Lastly, a multilayer perceptron (MLP) is employed to enhance the representation: \[\mathbf{Y}=\mathrm{MLP}(\mathbf{A}^{\prime})+\mathbf{A}^{\prime}, \tag{8}\] where \(\mathbf{Y}\) denotes the output of a transformer block. It is evident that the computational complexity of MHSA (6) is \[\Omega(\mathrm{MHSA})=3HWC^{2}+2H^{2}W^{2}C. \tag{9}\] Similarly, the space complexity (memory consumption) also includes the term of \(O(H^{2}W^{2})\). As commonly known, \(O(H^{2}W^{2})\) could become very large for high-resolution inputs.This limits the applicability of transformers for vision tasks. **Alignment via regularization** The high computational complexity of MHSA as shown in equation (9) becomes a severe challenge in large-scale vision tasks. Moreover, in a federated learning scenario, we confront another pivotal issue: the statistical heterogeneity among different local models. Specifically, each local model may learn distinct features due to varying data distribution across clients. If not handled appropriately, this heterogeneity can hinder the global model's performance. A surrogate function could be devised that approximates the local behavior of the objective function, yet is simpler to minimize. Let \(f(x)\) represent a function. At a given point \(x=y\), we can express its quadratic approximation as: \[f(y)+\nabla f(y)^{T}(x-y)+\frac{1}{2\mu}|x-y|^{2}, \tag{10}\] where \(\nabla f(y)\) is the gradient of the function \(f\) at \(y\) and \(\mu\) is a positive scalar, representing the step size. 
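For reference, the single-head attention computation of Eqs. (5)–(8) above can be sketched as follows. Multiple heads are omitted, as in the exposition; the shapes, initialisation, and MLP width are illustrative assumptions rather than the architecture used in the experiments.

```python
import torch

def attention_block(X, Wq, Wk, Wv, Wp, mlp):
    """Single-head version of Eqs. (5)-(8): X is (H*W, C); W* are (C, C) weight matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                                     # Eq. (5)
    d = Q.shape[-1]
    A = torch.softmax(Q @ K.transpose(-2, -1) / d ** 0.5, dim=-1) @ V   # Eq. (6)
    A_res = A @ Wp + X                                                   # Eq. (7): projection + residual
    return mlp(A_res) + A_res                                            # Eq. (8): MLP with residual

# usage with illustrative shapes: H = W = 14 tokens per side, C = 64 channels
H, W, C = 14, 14, 64
X = torch.randn(H * W, C)
Wq, Wk, Wv, Wp = (torch.randn(C, C) * C ** -0.5 for _ in range(4))
mlp = torch.nn.Sequential(torch.nn.Linear(C, 4 * C), torch.nn.GELU(), torch.nn.Linear(4 * C, C))
out = attention_block(X, Wq, Wk, Wv, Wp, mlp)                            # (H*W, C)
```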
In the context of our federated learning setting, this translates into a quadratic upper-bound for the local loss function \(F_{k}(w)\) around the global weights \(w^{t}\): \[F_{k}(w)\leq F_{k}(w^{t})+\nabla F_{k}(w^{t})^{T}(w-w^{t})+\frac{1}{2\mu}|| \mathbf{W}^{q,k}-\mathbf{W}^{q,G}||_{2}^{2}, \tag{11}\] where \(\nabla F_{k}(w^{t})\) is the gradient of the local loss function at \(w^{t}\) and \(\mu\) is a positive scalar representing the step size. This regularization term is analogous to the attention score in MHSA, controlling the contribution of each feature to the final representation. The norm \(||\mathbf{W}^{q,k}-\mathbf{W}^{q,G}||_{2}^{2}\) represents the Euclidean distance between the local and global model's query matrices. This regularization term aligns the local model to the global model in the query space. The resulting objective function becomes: \[\min_{w}h_{k}(w;\;w^{t})=F_{k}(w)+\frac{\mu}{2}||\mathbf{W}^{q,k}-\mathbf{W}^{ q,G}||_{2}^{2}, \tag{12}\] where the term \(||\mathbf{W}^{q,k}-\mathbf{W}^{q,G}||_{2}^{2}\) represents the squared Euclidean norm, aligning the local query matrix \(\mathbf{W}^{q,k}\) to the global one \(\mathbf{W}^{q,G}\). This term constrains the update of the local models, mitigating the issue of statistical heterogeneity. Here, the regularization term is analogous to the MHSA operation, where the contribution of each query (feature) to the final output depends on the similarity between the query and key. As in MHSA, where the attention weights are computed considering all tokens, here, the regularization term takes into account the whole model parameters. However, unlike MHSA, which calculates the similarity between tokens, here we calculate the distance between the local and global model's query matrices. This regularization strategy mirrors the attention mechanism in the Vision transformers. In the next section, we extend the alignment to more matrices of the vision transformers, resulting in more added terms. ### Multi-Head Encoder Alignment Mechanism (FedMHA) First, let's define the weight matrices of the local model \(M_{i}\) and the global model \(M_{G}\) as \(\mathbf{W}^{q}i\), \(\mathbf{W}^{k}i\), \(\mathbf{W}^{v}i\), and \(\mathbf{W}^{p}i\) for client \(i\), and \(\mathbf{W}^{q}G\), \(\mathbf{W}^{k}G\), \(\mathbf{W}^{v}G\), and \(\mathbf{W}^{p}G\) for the global model, respectively. Now, we can reformulate the equations (6)-(8) for each client \(i\) as: \[\mathbf{Q}_{i}=\mathbf{X}\mathbf{W}_{i}^{q},\qquad\mathbf{K}_{i}=\mathbf{X} \mathbf{W}_{i}^{k},\qquad\mathbf{V}_{i}=\mathbf{X}\mathbf{W}_{i}^{v} \tag{13}\] \[\mathbf{A}_{i}=\mathrm{Softmax}(\mathbf{Q}_{i}\mathbf{K}_{i}^{\mathrm{T}}/ \sqrt{d})\mathbf{V}_{i}, \tag{14}\] \[\mathbf{A^{\prime}}_{i}=\mathbf{A}_{i}\mathbf{W}_{i}^{p}+\mathbf{X}_{i}, \tag{15}\] \[\mathbf{Y}_{i}=\mathrm{MLP}(\mathbf{A^{\prime}}_{i})+\mathbf{A^{\prime}}_{i}, \tag{16}\] where \(\mathbf{Y}_{i}\) denotes the output of a transformer block for the local model \(M_{i}\). With the MHEA method, we aim to minimize the difference between each local model encoder's weights and the global model encoder's weights. To do this, we calculate the L2 difference between each local layer's weights and the corresponding global layer's weights. Let's denote the L2 difference between the local and global layers as \(L_{i}^{k}\) for client \(k\) and layer \(i\). 
For the \(Q\), \(K\), and \(V\) weight matrices, we compute the L2 difference as follows: \[L_{i,Q}^{k}=|\mathbf{W}_{i}^{q,k}-\mathbf{W}_{i}^{q,G}|^{2},Li,K^{k} =|\mathbf{W}_{i}^{k,k}-\mathbf{W}_{i}^{k,G}|^{2},Li,V^{k}=|\mathbf{ W}_{i}^{v,k}-\mathbf{W}_{i}^{v,G}|^{2}, \tag{17}\] For the MLP layers, let's denote the weight matrices as \(\mathbf{W}_{i}^{MLP,k}\) for client \(k\) and \(\mathbf{W}_{i}^{MLP,G}\) for the global model. We compute the L2 difference for the MLP layers as follows: \[L_{i,MLP}^{k}=|\mathbf{W}_{i}^{MLP,k}-\mathbf{W}_{i}^{MLP,G}|^{2} \tag{18}\] Next, we incorporate these L2 differences into the local objective function for each client \(k\) and layer \(i\). The modified local objective function for client \(k\) and layer \(i\) would be: \[\min_{w}h_{k}^{i}(w;\;w^{t})=F_{k}^{i}(w)+\frac{\mu}{2}\left(L_{i,Q}^{k}+L_{i,K}^{k}+L_{i,V}^{k}+L_{i,MLP}^{k}\right), \tag{19}\] This local objective function includes both the local loss function \(F_{k}^{i}(w)\) and the L2 difference between the local and global layers. The MHEA term encourages the local updates to stay close to the initial global model, addressing the issue of statistical heterogeneity and safely incorporating variable amounts of local work. The federated learning process continues for multiple rounds, with the global model sending its updated parameters to the local clients and receiving their updated parameters until a specified convergence criterion is met. ## 4 Data and experimental setup ### Dataset and Pre-processing We utilized the IQ-OTH/NCCD Lung Cancer dataset from the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases. Collected in 2019, this dataset comprises 1190 CT scan slice images from 110 distinct instances. Each instance contains multiple slices. The CT scan images cover a window width ranging from 350 to 1200 HU and are categorized into three types: benign, malignant, and normal[31]. The images are representative of a diverse patient demographic, capturing a broad spectrum of pathological conditions. For the purpose of our experiment, the dataset was partitioned across ten clients. Each client received a different number of samples, simulating a genuine federated learning environment. For pre-processing, we standardized the images to a consistent size of 224 x 224 pixels and applied common data augmentation techniques like random rotations and horizontal flipping, as advocated in works like [1]. These steps align with standard practices, especially for this dataset. The choice of this dataset was influenced by its heterogeneity, with variations across gender, age, and health conditions, as noted by the original authors. ### Model Architectures and Training Our study compared the convolutional neural network (ConvNet5) and a pre-trained vision transformer in a federated learning setting. The ConvNet5 architecture consists of five convolutional layers, each followed by batch normalization and ReLU activation function. These are succeeded by max-pooling layers and two fully connected layers with dropout to prevent overfitting. The dataset allocation across clients was accomplished using a Latent Dirichlet Allocation (LDA) based data splitter, which is graphically represented in Figure 1. We used one A100 GPU in combination with the PyTorch framework for our experiments. We utilized the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.01 and implemented gradient clipping with a max value of 5.0 to avoid exploding gradients. 
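A minimal sketch of the FedMHA local objective of Eqs. (17)–(19) is given below, with the alignment penalty summed over all matched encoder layers. The parameter-name substrings used to select the attention and MLP weights are an assumption (they follow common ViT implementations, not necessarily the model used here), and \(\mu=0.5\) is illustrative. In the local update one would backpropagate this loss, clip gradients at 5.0, and take an SGD step with learning rate 0.01, matching the settings described above.

```python
import torch

ALIGN_KEYS = ("attn.qkv", "attn.proj", "mlp")   # assumption: substrings that identify the
                                                # encoder's Q/K/V, projection, and MLP weights

def mha_alignment_penalty(local_model, global_state, keys=ALIGN_KEYS):
    """Sum of squared L2 distances between local and global encoder weights (Eqs. 17-18)."""
    penalty = 0.0
    for name, p in local_model.named_parameters():
        if any(k in name for k in keys):
            penalty = penalty + ((p - global_state[name].detach()) ** 2).sum()
    return penalty

def fedmha_local_loss(local_model, global_state, batch, mu=0.5,
                      criterion=torch.nn.CrossEntropyLoss()):
    """Local objective of Eq. (19): task loss + (mu / 2) * alignment penalty."""
    x, y = batch
    task_loss = criterion(local_model(x), y)
    return task_loss + 0.5 * mu * mha_alignment_penalty(local_model, global_state)
```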
For FedProx method, the proximity coefficient, \(\mu\), was set at 0.5. ### Evaluation metrics The performance of the models was evaluated using metrics such as accuracy, number of correct predictions, and loss. For each client \(i\) with local dataset \(D_{i}\), we define the accuracy as: \[\text{Acc}_{i}=\frac{1}{|D_{i}|}\sum_{x\in D_{i}}\mathbb{I}\left(y(x)=\mathcal{ F}(\omega_{i};x)\right) \tag{20}\] Where \(y(x)\) is the true label of instance \(x\) and \(\mathbb{I}(\cdot)\) is the indicator function, returning 1 if the model's prediction matches the true label, and 0 otherwise. Given the globally trained model, denoted by \(\mathcal{F}(\omega_{\text{global}};x)\), the accuracy of this model on each client's test dataset \(D_{i}^{\text{test}}\) can be calculated. The worst accuracy of the global model, when evaluated across all clients, is then: \[\text{Lowest Acc}_{\text{global}}=\min_{i}\left(\frac{1}{|D_{i}^{\text{test}}| }\sum_{x\in D_{i}^{\text{test}}}\mathbb{I}\left(y(x)=\mathcal{F}(\omega_{ \text{global}};x)\right)\right) \tag{21}\] Figure 1: Client Data Distribution Variability at Different Heterogeneity Levels. The plot illustrates the variance in data distribution among clients as the heterogeneity levels, denoted by \(\alpha_{\text{LDA}}\) values, alter. A longer vertical axis at lower \(\alpha_{\text{LDA}}\) values signifies increased variability, while a wider and shorter plot at higher \(\alpha_{\text{LDA}}\) values suggests diminished variability. This metric captures the scenario where the globally trained model has its poorest accuracy across client test datasets. ## 5 Results ### Fairness in heterogenous settings We evaluate the impact of our proposed model on enhancing local models for underrepresented clients as well as for all clients in terms of test accuracy improvement over different rounds. Figure 2 demonstrates the difference between using a Multi-head encoder alignment mechanism (solid blue curve) and local SGD training (dashed orange curve). We observed that in highly heterogeneous settings (i.e. lower \(\alpha_{\mathrm{LDA}}\) values), the improvement brought about by our model was more noticeable.The local model typically outperformed traditional settings after the first round, indicating both higher accuracy and rapid convergence rates for our approach, as depicted in the provided figures. To compare our proposed method with other federated learning algorithms, we trained the models for 10 rounds with LDA value of 0.2. Each method was evaluated using a cross-entropy loss function for each round. The results were then averaged based on the number of samples per client using a weighted averaging approach. Figure 3 provides a comparative analysis of the average loss across various federated learning settings over the initial 10 rounds. As expected, the global model with a centralized data delivers the best performance. Following the global model, FedMHA method outperforms the other federated learning algorithms. FedProx and FedAvg methods exhibit lower performance, with the FedBN approach was the least satisfactory among the considered federated learning algorithms. ### Evaluation for minority clients In this section, we analyze the performance of minority clients in our proposed FedMHA as shown in Figure 4. The purpose of this study is to highlight the potential struggles of minority clients in heterogeneous data environments. 
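As an illustration of the evaluation setup, the sketch below shows a Dirichlet-based (LDA) label-skew splitter consistent with Figure 1 (lower \(\alpha_{\mathrm{LDA}}\) yields more heterogeneous clients) together with the worst-client accuracy of Eq. (21). The paper specifies only that the splitter is LDA-based, so this concrete form is an assumption and all names are illustrative.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, alpha=0.2, seed=0):
    """Split sample indices across clients with per-class proportions drawn from Dirichlet(alpha)."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))        # smaller alpha -> more skew
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

def lowest_global_accuracy(per_client_correct, per_client_total):
    """Eq. (21): worst accuracy of the global model over the client test sets."""
    return min(c / t for c, t in zip(per_client_correct, per_client_total))
```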
We trained the models independently, and then evaluated them on Figure 2: Accuracy analysis of Multi-head encoder alignment mechanism (solid blue curves) vs. Local Stochastic Gradient Descent (SGD) (dashed orange curves) training across various heterogeneity levels. The graph shows higher accuracy improvement in higher heterogeneity levels (i.e. lower \(\alpha_{\mathrm{LDA}}\)) a benchmark global dataset. Each client was trained on their own local dataset, and subsequently tested against a global dataset. Table 1 provides a comparison of various federated learning methods, including our proposed FedMHA method, under different heterogeneity levels, represented by varying \(\alpha_{\mathrm{LDA}}\) values. The table highlights the average accuracies achieved after 5 rounds of federated learning.A detailed analysis of these results reveals that while all models generally improve their performance as the \(\alpha_{\mathrm{LDA}}\) value increases (corresponding to a more homogeneous data distribution), the FedMHA outperforms all other models, particularly in low \(\alpha_{\mathrm{LDA}}\) values. We conduct a side-by-side comparison with other models to analyze their personalization for various models, as shown in Figure 5. The experiments are carried out at different alpha levels and measured Figure 4: Comparative loss analysis of our proposed Multi-head encoder alignment mechanism against Local SGD Training in a range of heterogeneity settings. Improved loss reduction is observed in highly heterogeneous environments when incorporating MHA, reflecting the effectiveness of our proposed FedMHA method. Figure 3: Comparative analysis of the average loss across various federated learning settings over the initial 10 rounds. This showcases the trajectory of client loss, with the global setting employed as thebenchmark. the average test accuracy for all clients. Our model is represented by the blue area, while the other models are depicted with a yellow area, and the overlapping area in green. Each dot in the figure represents the mean accuracy across all clients for each level of heterogeneity. The area stretching from bottom to top illustrates the range of accuracy for the ten clients involved, with a narrower area signifying a fairer model. The top line of our model is also higher, indicating that the performance of our method is better in Vision Transformers for various clients, particularly in the 0.1 setting. The maximum accuracy achieved for these clients is around 0.4%, while the minimum is close to zero. As the alpha value increases, the area becomes narrower, signifying that the personalization benefits are more pronounced for better-performing clients. To get a better understanding of the fairness, we explore the effect of three components of our training process. **Weighted averaging boosts the effect of alignment** The first component of our investigation targets the effect of the averaging paradigm. A comparison has been made between using weighted averaging, where updates from each client are weighted by the size of their respective training sample, and a more straightforward scenario where all updates are given equal weight. The results are shown in Table 2. FedMHA shows the most noticeable enhancements when weighted averaging is employed. **Effect of Number of Clients** As the second component, we investigate the impact of the number of clients on the performance of federated learning systems. 
A clear correlation emerges between the number of clients and the efficacy of federated learning models. As shown in Table 2, the performance improvements associated with FedMHA, both with and without weighted averaging, span an accuracy range of 67.67% to 84.09%. This range contrasts the range of 21.36% to 80.91% for the other models. **Heterogeneity intensifies minority clients' underperformance** The third part of our investigation looks into the influence of alignment loss on the performance of federated learning models. Here, \(\alpha_{\mathrm{LDA}}\) values ranging from 0.1 to 0.9 were implemented to evaluate improvements in fairness. This analysis aims to alleviate the loss experienced by worst-performing, typically underrepresented, clients. Our approach resulted in marked enhancements, especially in settings of high heterogeneity, as shown in Figure 6. Incorporating alignment loss in the local objective functions led to a boost \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Method** & \(\alpha_{\mathrm{IDA}}\) =0.1 & \(\alpha_{\mathrm{IDA}}\) =0.2 & \(\alpha_{\mathrm{IDA}}\) =0.3 & \(\alpha_{\mathrm{IDA}}\) =0.5 & \(\alpha_{\mathrm{IDA}}\) =0.7 & \(\alpha_{\mathrm{IDA}}\) =0.8 & \(\alpha_{\mathrm{IDA}}\) =0.9 \\ \hline \hline FedAvg [17] & \(62.03\)\% & \(65.55\)\% & \(80.24\)\% & \(80.03\)\% & \(72.19\)\% & \(85.79\)\% & \(84.94\)\% \\ FedAvg (ResNet50) [22] & \(52.73\)\% & \(61.90\)\% & \(69.25\)\% & \(76.88\)\% & \(76.06\)\% & \(85.20\)\% & \(84.05\)\% \\ FedBN [14] & \(47.89\)\% & \(65.97\)\% & \(60.93\)\% & \(63.45\)\% & \(61.15\)\% & \(74.05\)\% & \(74.10\)\% \\ FedProx [11] & \(47.18\)\% & \(67.42\)\% & \(62.99\)\% & \(72.72\)\% & \(71.00\)\% & \(83.89\)\% & \(80.43\)\% \\ **FedMHA (ours)** & \(67.09\)\% & \(74.23\)\% & \(70.12\)\% & \(81.84\)\% & \(77.96\)\% & \(87.76\)\% & \(83.99\)\% \\ \hline Local & \(18.08\)\% & \(17.85\)\% & \(28.60\)\% & \(46.54\)\% & \(50.77\)\% & \(36.67\)\% & \(54.74\)\% \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of federated learning methods (Local, FedMHA, FedAvg [17], FedAvg ResNet [22], FedBN [14], FedProx [11]) under different levels of data heterogeneity represented by varying \(\alpha_{\mathrm{LDA}}\) values. The average accuracies were calculated after 5 rounds of federated learning. Figure 5: Comparison of fairness in Federated Learning strategies across heterogeneity levels. Dots represent the mean accuracy for each level. The vertical stretch signifies the accuracy range for the 10 clients, with a narrower area indicating a fairer model. Our method (blue) generally outperforms other models (green), particularly in the 0.1 setting. in local training generalization and fairness for Federated Averaging.Despite achieving satisfactory performance on training data in ideal conditions, it was observed that minority clients generally underperform in settings with high levels of data heterogeneity. Using attention layers to gather global representations from all clients and then aligning them shows great promise for improving model fairness. This likely improvement is due to the ability of attention mechanisms to effectively capture information, posing a key point of reconsideration for using convolutional layers as the main architecture in current FL algorithms [11][10]. This suggests a need for more focus on Vision Transformers in future updates and improvements. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Method** & \multicolumn{3}{c}{**W/O Weighted Averaging**} & \multicolumn{3}{c}{**With Weighted Averaging**} \\ \cline{2-7} & **2 Clients** & **5 Clients** & **8 Clients** & **2 Clients** & **5 Clients** & **8 Clients** \\ \hline FedAvg [17] & \(63.67\)\% & \(61.51\)\% & \(61.70\)\% & \(79.55\)\% & \(76.36\)\% & \(76.36\)\% \\ FedAvg (ResNet50) [22] & \(60.72\)\%\(\downarrow\) & \(64.96\)\%\(\uparrow\) & \(64.26\)\%\(\uparrow\) & \(76.36\)\%\(\downarrow\) & \(80.91\)\%\(\uparrow\) & \(80.00\)\%\(\uparrow\) \\ FedProx[11] & \(45.73\)\%\(\downarrow\) & \(59.69\)\%\(\downarrow\) & \(60.58\)\%\(\downarrow\) & \(58.18\)\%\(\downarrow\) & \(74.55\)\%\(\downarrow\) & \(75.91\)\%\(\downarrow\) \\ FedBN [14] & \(31.12\)\%\(\downarrow\) & \(54.05\)\%\(\downarrow\) & \(61.35\)\%\(\downarrow\) & \(38.64\)\%\(\downarrow\) & \(65.45\)\%\(\downarrow\) & \(74.55\)\%\(\downarrow\) \\ **FedMHA (ours)** & \(67.70\)\%\(\uparrow\) & \(67.67\)\%\(\uparrow\) & \(69.38\)\%\(\uparrow\) & \(83.64\)\%\(\uparrow\) & \(83.64\)\%\(\uparrow\) & \(84.09\)\%\(\uparrow\) \\ \hline Local & \(26.20\)\% & \(26.20\)\% & \(26.20\)\% & \(21.36\)\% & \(21.36\)\% & \(21.36\)\% \\ \hline \hline \end{tabular} \end{table} Table 2: Analysis of various federated learning models (FedMHA, FedAvg [17], FedAvg ResNet [22], FedProx [11], FedBN [14], Local) with and without weighted averaging across different numbers of clients (2, 5, 8). Evaluations are made with FedAvg [17] as baseline, with improvements and declines represented by \(\uparrow\) and \(\downarrow\) respectively. Figure 6: Impact of alignment loss on model performance. We compare the scenarios where the alignment term is retained in the local objective functions versus its removal. Models with the alignment term exhibit lower loss. Conclusion In this paper, we have presented and evaluated a federated learning approach that leverages Vision Transformers and multi-head attention mechanisms to effectively handle data heterogeneity in distributed settings. Our experiments, conducted on lung cancer CT scans, demonstrate that combining optimization based approaches with vision transformer modules, outperforms existing federated learning models, particularly in scenarios with high data heterogeneity. The success of our approach in medical imaging underscores its potential in facilitating collaboration among healthcare institutions while preserving data privacy. Our analysis also highlights the importance of considering client data distribution and sample size during model aggregation, as a means to improve the overall accuracy. It encourages further research on employing vision transformers in heterogenous environments.The results have implications for the medical domain, where accurate diagnosis and treatment planning are paramount. Future work could focus on further enhancing fairness among clients and addressing potential scalability issues in large-scale federated learning scenarios. Additionally, exploring the use of alignment methods and vision transformers in other medical application domains could provide valuable insights into its generalizability and adaptability in the broader healthcare context. There are a few limitations to consider for our work. While our approach showcases the benefits of data heterogeneity handling, it doesn't address potential trade-offs related to computational overhead or communication costs. 
Our focus on accuracy and fairness as the primary metrics might also overlook other important aspects such as latency, or model compactness. Lastly, the real-world deployment of such algorithms may encounter challenges that are not captured in the controlled environment of our experiments. ## Acknowledgement This research is supported by KWF Kankerbestrijding and the Netherlands Organisation for Scientific Research (NWO) Domain AES, as part of their joint strategic research programme: Technology for Oncology IL. The collaboration project is co-funded by the PPP allowance made available by Health Holland, Top Sector Life Sciences & Health, to stimulate public-private partnerships.
2302.06566
Characterizing the VPN Ecosystem in the Wild
With the shift to working remotely after the COVID-19 pandemic, the use of Virtual Private Networks (VPNs) around the world has nearly doubled. Therefore, measuring the traffic and security aspects of the VPN ecosystem is more important now than ever. It is, however, challenging to detect and characterize VPN traffic since some VPN protocols use the same port number as web traffic and port-based traffic classification will not help. VPN users are also concerned about the vulnerabilities of their VPN connections due to privacy issues. In this paper, we aim at detecting and characterizing VPN servers in the wild, which facilitates detecting the VPN traffic. To this end, we perform Internet-wide active measurements to find VPN servers in the wild, and characterize them based on their vulnerabilities, certificates, locations, and fingerprinting. We find 9.8M VPN servers distributed around the world using OpenVPN, SSTP, PPTP, and IPsec, and analyze their vulnerability. We find SSTP to be the most vulnerable protocol with more than 90% of detected servers being vulnerable to TLS downgrade attacks. Of all the servers that respond to our VPN probes, 2% also respond to HTTP probes and therefore are classified as Web servers. We apply our list of VPN servers to the traffic from a large European ISP and observe that 2.6% of all traffic is related to these VPN servers.
Aniss Maghsoudlou, Lukas Vermeulen, Ingmar Poese, Oliver Gasser
2023-02-13T18:09:54Z
http://arxiv.org/abs/2302.06566v1
# Characterizing the VPN Ecosystem in the Wild ###### Abstract With the increase of remote working during and after the COVID-19 pandemic, the use of Virtual Private Networks (VPNs) around the world has nearly doubled. Therefore, measuring the traffic and security aspects of the VPN ecosystem is more important now than ever. VPN users rely on the security of VPN solutions, to protect private and corporate communication. Thus a good understanding of the security state of VPN servers is crucial. Moreover, properly detecting and characterizing VPN traffic remains challenging, since some VPN protocols use the same port number as web traffic and port-based traffic classification will not help. In this paper, we aim at detecting and characterizing VPN servers in the wild, which facilitates detecting the VPN traffic. To this end, we perform Internet-wide active measurements to find VPN servers in the wild, and analyze their cryptographic certificates, vulnerabilities, locations, and fingerprints. We find 9.8M VPN servers distributed around the world using OpenVPN, SSTP, PPTP, and IPsec, and analyze their vulnerability. We find SSTP to be the most vulnerable protocol with more than 90% of detected servers being vulnerable to TLS downgrade attacks. Out of all the servers that respond to our VPN probes, 2% also respond to HTTP probes and therefore are classified as Web servers. Finally, we use our list of VPN servers to identify VPN traffic in a large European ISP and observe that 2.6% of all traffic is related to these VPN servers. + Footnote †: journal: Journal of LaTeX Templates ## 1 Introduction Virtual Private Networks (VPNs) provide secure communication mechanisms, including encryption and tunneling, enabling users to circumvent censorship, to access geo-blocked services, or to securely access an organization's resources remotely. The COVID-19 pandemic changed Internet traffic dramatically. Studies investigating the impact of the COVID-19 pandemic on Internet traffic show that streaming traffic being tripled around the world due to remote work, remote learning, and entertainment services [11, 27, 31]. VPN traffic has been no exception to this major traffic shift. After the COVID-19 pandemic, the VPN traffic observed in a large European IXP nearly doubled [18]. In a campus network, even a more dramatic increase of 20x has been reported [27], which shows a prominent growth of remote work and e-learning. Additionally, several articles find that remote work is here to stay [21; 48]. According to recent statistics from SurfShark [44], 31% of all Internet users use VPNs. In order to facilitate network planning and traffic engineering, Internet Service Providers (ISPs) have an interest in understanding the network applications being used by their clients, and how these applications behave in terms of traffic patterns and volume. Therefore, detecting and characterizing VPN traffic is an important task for ISPs. Certain VPN protocols use known port numbers for their operation, e.g. port number 4500 is used for IPsec, and port number 1723 is used for SSTP. Thus, the traffic using protocols over the known port numbers can easily be detected as VPN traffic. However, some VPN protocols, e.g. SSTP, and in some occasions, OpenVPN use port number 443 which is commonly used for secure web applications. This makes it challenging to distinguish between web and VPN traffic. Moreover, VPN users might share sensitive private or corporate data over VPN connections. 
As the number of cyber attacks has almost doubled after the pandemic [9], it makes Internet users even more aware of their privacy and the security of their VPN connections. Therefore, investigating the vulnerabilities of the VPN protocols helps to highlight existing shortcomings in VPN security. Previous studies focused on detecting VPN traffic using machine learning [15; 39], or DNS-based approaches [2; 18]. Some studies have also analyzed the commercial VPN ecosystem [29; 47]. However, to the best of our knowledge, this is the first work which conducts active measurements to detect and characterize VPN servers in the wild. In this paper, we aim to detect, characterize, and analyze the deployment of VPN servers in the Internet using active measurements along with passive VPN traffic analysis. Specifically, this work makes the following main contributions: * **VPN server deployment:** We perform active measurements to the complete IPv4 address space and an IPv6 hitlist for 4 different VPN protocols both in UDP and TCP. We find around 9.8 million IPv4 addresses and 2.2 thousand IPv6 addresses responsive to our probes. * **VPN security evaluation:** We analyze the detected IP addresses in terms of TLS vulnerabilities, certificates, and geolocation. We observe that the United States is the most common location among our detected IP addresses. We also find that more than 90% of SSTP servers are vulnerable to a TLS attack and nearly 7% of the certificates are expired. * **VPN traffic analysis:** We analyze passive traffic traces from a large European ISP, we find that 2.6% of the traffic uses our list of VPN servers as either source or destination address. Moreover, we use rDNS data along with DNS records from a large European ISP to compare our results with previous work looking into VPN classification [18]. We find that using our methodology, we find 4 times more VPN servers in the wild. * **VPN probing tool:** We develop new modules for ZGrab2 [52] to send customized VPN probes. We make these modules publicly available [56] to foster further research in the VPN ecosystem. ## 2 Background VPNs establish cryptographically secured tunnels between different networks and can be used to connect private networks over the public network. Thus, a proper VPN connection should be encrypted in order to prevent eavesdropping and tampering of VPN traffic. The tunneling mechanism of a VPN connection also provides privacy since the traffic is encapsulated. Therefore, users remotely accessing a private network appear to be directly connected. While the exact tunneling process varies depending on the underlying VPN protocol, it is quite common to categorize VPNs in two different groups: * **Site-to-site VPNs**: In this configuration, a VPN is used to connect two or more networks of geographically distinct sites. This is common for companies with branches in different locations. * **Remote access VPNs**: This kind of VPN connection is mainly used by individual end-users in order to connect to a private network. ### VPN Usage The usage of VPNs has evolved over the past three decades. David Crawshaw [13] gives a very comprehensive overview of how and why VPNs changed over the years. While in the earlier days of the Internet, they were primarily used by companies to connect their geographically distinct offices, VPNs nowadays provide a variety of use cases for individuals as well and are used by millions of end-users around the globe. 
Use cases include: * **Privacy preservation**: The encrypted VPN tunnels provide end-users the means to preserve their privacy. * **Censorship circumvention/accessing geo-blocked content**: Specific services might be censored in some countries or geographically restricted. By connecting to a VPN server in a different country, it is still possible to access such content since it would now appear as if the user was located in a different country. * **Remote access**: It is common to use VPNs to remotely access restricted resources or to connect with an organization's network. This usage scenario has gained importance especially during the COVID-19 pandemic among employees and students alike due to remote working. Different usage patterns, the general understanding of the functionality of VPNs, and awareness of potential risks vary between different demographic groups. Dutkowska-Zuk et al. [17] studied how and why people from different demographic backgrounds use VPN software primarily comparing the general population with students. They found that the general population is more likely to rely on free, commercial VPN solutions to protect their privacy. Students, on the other hand, rather resort to VPN software for remote access or to circumvent censorship and access geographically blocked services with an increased use of institutional VPNs. Generally, they found that, while most VPN users are concerned about their privacy, they are less concerned about data collection by VPN companies. Especially during the COVID-19 pandemic, VPNs increasingly gained significance. The pandemic and the resulting lockdowns caused many employees and students to work and study remotely from home. Feldmann et al. [18] analyzed the effect of the lockdowns on the Internet traffic. Their work included the analysis of how VPN traffic shifted during the pandemic. They detected a traffic increase of over 200% for VPN servers identified based on their domain with increased traffic even after the first lockdowns. These findings highlight the rising significance of VPNs. With progressing digitalization, VPN traffic can be expected to increase even further. ### VPN Protocols We want to cover as many protocols as possible including some of the most prominent ones like OpenVPN and IPsec. The functionality of a VPN connection establishment varies depending on the underlying VPN protocol. Table 1 gives an overview of all the VPN protocols we consider with general information on their underlying protocols. Among them, especially PPTP, which was the first actual VPN protocol standardized in 1999 (see RFC 2637 [22]), can be considered rather outdated and it is not recommended to be used anymore [13, 38]. WireGuard is the most modern protocol at the moment. It is much more simplistic than, e.g., OpenVPN or IPsec and incorporates state-of-the-art cryptographic principles. ## 3 Methodology In this section, we introduce our methodology for our passive and active measurements. We perform Internet-wide measurements in order to detect VPN servers in the wild and create hit lists of identified VPN servers. 
Based on those results, we conduct follow-up measurements to fingerprint the VPN servers and further \begin{table} \begin{tabular}{l l l l l} \hline \hline VPN protocol & Transport protocol & Port & (D)TLS-based & Server detection possible \\ \hline IPsec/L2TP & UDP & 500 & ✗ & ✓ \\ OpenVPN & UDP \& TCP & 1194, 443 & ✓ & partially \\ SSTP & TCP & 443 & ✓ & ✓ \\ PPTP & TCP & 1723 & ✗ & ✓ \\ AnyConnect & UDP \& TCP & 443 & ✓ & ✗ \\ WireGuard & UDP & 51820 & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of VPN protocols showing the transport protocol, port, (D)TLS encryption, and possible detection. analyze them in terms of security. Finally, we look for the detected IP addresses in the traffic from a large European ISP to find out the amount of VPN traffic. ### VPN Server Detection Our measurements to detect VPN servers include the whole IPv4 address range as well as over 530 million non-aliased IPv6 addresses from the _IPv6 Hitlist Service_[20, 58]. We send out the connection initiation requests that are used in the connection establishments of the different VPN protocols. For UDP-based protocols, we use ZMap [53] (ZMapv6 for IPv6 [12]), a transport-layer network scanner, to directly send out UDP probes. If the VPN protocol is TCP-based, we first use ZMap to find targets with the respective open TCP ports using TCP SYN-scans. We then use ZGrab2 [52] to send out the actual VPN requests. ZGrab2 works on the application layer. It can be used complementary to ZMap for more involved scans. It also allows us to implement custom modules needed for our VPN requests over TCP and TLS. We identify an address as a VPN server based on the responses we receive to our initiation requests. If the parsed response satisfies the format of the expected VPN response, the target is classified as a VPN server. To completely detect the VPN ecosystem for a specific server, we might have to take several server configurations into account and perform multiple measurements for a single protocol accordingly. Apart from that, for some protocols and configurations, we require knowledge of cryptographic key material which we do not have since we perform measurements in the wild. Therefore, we cannot detect the entirety of the VPN ecosystem with our method. The last column of Table 1 summarizes for which protocol we are able to detect VPN servers. When OpenVPN servers specify the so-called _tls-auth_ directive, an HMAC signature is required in all control messages. This means that we can only craft requests without HMACs and hence detect only a subset of all OpenVPN servers. As mentioned above, for some protocols, it might also be necessary to consider different configurations. For IPsec, e.g., we suggest seven different cipher suites in the initiation request. Apart from that, we have to specify a key exchange method in the OpenVPN requests. Out of the two possible key exchange methods, namely _key method 1_ and _key method 2_, key method 1 is considered insecure and is therefore deprecated [41]. We therefore specify key method 2 in our initiation requests and then perform a follow-up scan where we suggest the deprecated key exchange method to identified OpenVPN servers to investigate how many of them might still support key method 1. ### TLS Analysis For the TLS-based VPN protocols, which include SSTP and OpenVPN over TCP, we perform follow-up measurements to further fingerprint the servers and assess them in terms of security. 
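To make the detection logic of Sec. 3.1 concrete, the sketch below shows a single-protocol example for PPTP: send a minimal Start-Control-Connection-Request and classify the target as a PPTP responder only if the reply parses as the expected Start-Control-Connection-Reply. This is a simplified illustration based on RFC 2637, not the custom ZGrab2 modules used in the paper; the capability and hostname field values are placeholders, and any such probing should follow the ethical scanning practices discussed in Sec. 3.5.

```python
import socket
import struct

MAGIC = 0x1A2B3C4D  # PPTP magic cookie (RFC 2637)

def pptp_sccrq():
    """Minimal PPTP Start-Control-Connection-Request (field values simplified)."""
    return struct.pack("!HHIHHHHIIHH64s64s",
                       156, 1, MAGIC, 1, 0,        # length, control msg, cookie, SCCRQ, reserved
                       0x0100, 0,                  # protocol version 1.0, reserved
                       1, 1, 0, 0,                 # framing/bearer capabilities, channels, firmware
                       b"probe", b"probe")         # host name, vendor string (placeholders)

def looks_like_pptp(ip, port=1723, timeout=5.0):
    """Classify an address as a PPTP responder if the reply parses as an SCCRP."""
    try:
        with socket.create_connection((ip, port), timeout=timeout) as s:
            s.sendall(pptp_sccrq())
            reply = s.recv(200)
    except OSError:
        return False
    if len(reply) < 10:
        return False
    _, msg_type, cookie, ctrl_type = struct.unpack("!HHIH", reply[:10])
    return msg_type == 1 and cookie == MAGIC and ctrl_type == 2    # 2 = SCCRP
```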
For that, we collect TLS certificates of the VPN servers to analyze them for expiry, check for self-signed certificates and investigate how many of them are snake oil certificates. We characterize a certificate as a snake oil certificate if the common name (CN) of the subject and issuer are both specified as _localhost_ or _user.local_. For the certificates signed by a Certificate Authority (CA), we collect the most common issuing organizations. We gather domain names corresponding to the responsive IP addresses using reverse DNS (rDNS) look-ups, and collect certificates with and without the Server Name Indication (SNI) extension using these domain names and compare them against each other. SNI can be used by the client in the TLS handshake in order to specify a hostname for which a connection should be established. This might be necessary in cases where multiple domain names are hosted on a single address. Finally, we test if the servers are susceptible to the _Heartbleed_[50] vulnerability as well as a series of TLS downgrade attacks. The Heartbleed attack is based on the Heartbeat Extension [49] of the OpenSSL library. In TLS downgrade attacks, we try to force a server to establish a connection using an outdated SSL/TLS version or using insecure cipher suites by suggesting those outdated primitives in the TLS handshake. Table 8 summarizes all the vulnerabilities and their requirements, i.e., what we have to test for or the version or cipher suite to which we try to downgrade the TLS connection. For instance, in order to check if a server is vulnerable to the FREAK attack, we suggest any SSL/TLS version and only RSA_EXPORT cipher suites in the TLS handshake. ### Fingerprinting We try to infer more information on the VPN servers based on our connection initiation requests as well as from follow-up measurements in order to further categorize them. One aspect we examine is the server software deployment. For SSTP and PPTP, we can extract information on the software vendor directly from the responses to our initiation requests. Furthermore, we perform OS detection measurements on a subset of 1000 VPN servers for each protocol using _Nmap_[34], a network scanner that can be used for network discovery among other things. We use Nmap's _fast_ option and target 100 instead of 1000 ports to decrease runtime and parse the results for the most common open ports and OS guesses. With those results, we can learn more about the VPN server infrastructure and potential other services running on the same servers. ### VPN Traffic Analysis The active measurement in Section 3.1 provides us with a list of IP addresses, namely VPN hitlist, which are responsive to at least one VPN protocol initiation request. We look for these IP addresses in the DNS records gathered from the DNS resolvers at a large European ISP during a 1-hour period to learn about the domain names these IP addresses are associated with. We do not expect to find all the detected IP addresses in these DNS responses. Therefore, for any remaining IP address, we use reverse DNS resolution to find the corresponding domain names. Then, we look for the IP addresses from our VPN hitlist on over a week of network flow data from the ISP to find out the amount of traffic associated with the VPN hitlist and compare the results with a port-based VPN traffic detection, and also a state-of-the-art approach. ### Ethical Considerations **Active scanning.** We follow best current practices [28, 43] to avoid potential harm to the networks we scan. 
We make sure that our proper IP address has a meaningful DNS PTR record pointing to our Web server which allows for requesting an opt-out from being scanned. We also limit our scanning rate and perform probing in a randomized order. We plan to notify the VPN providers about their servers' vulnerabilities. **ISP data.** All the data related to the ISP is processed on the ISP's premises. We do not copy, transfer, or store any data outside the dedicated servers that the ISP uses for its NetFlow analysis. ## 4 Active Measurements of the VPN Server Ecosystem In this section, we go through the results from our Internet-wide active measurement using different VPN protocols. We discuss the characteristics of the responsive servers such as geographical locations, VPN protocols, etc. Then, we analyze their vulnerabilities and try to fingerprint them based on the gathered information. Figure 1: Cumulative distribution of number of ASes (left) and number of countries (right) corresponding to the responsive IPs. ### Responsive Servers In total, we find 9,817,450 responsive IPv4 addresses with our probes that we can identify as VPN servers. **rDNS.** We investigate the reverse DNS records corresponding to the responsive IPv4 addresses. We aggregate results on the second-level domain and sort them based on the number of responsive IPs that they correspond to. We find that all the top 10 domain names belong to telecommunication companies (e.g., Open Computer Network, a large Japanese ISP, and Telstra, an Australian telecommunications company). Next, we filter all rDNS records which contain _vpn_ in their second-level domain names in order to detect commercial VPN providers. We find a single domain related to PacketHub which manages IP addresses for several companies, including NordVPN, a major commercial VPN provider. This domain name ranks 60th among all rDNS second level domains. **AS analysis**. Figure 1 shows the distribution of ASes to which our responsive IP addresses belong. The responsive IP addresses are originated by 49625 and 334 ASes in total, while top 10 ASes contribute to 22% and 38% of the IP addresses, for IPv4 and IPv6 respectively, as shown in Figure 1. Top 10 ASes for IPv4 responsive addresses are all telecommunication companies, while out of the top 10 ASes for IPv6 responsive addresses, 8 are telecommunication companies and 2 are academy-related ASes. Tables 2 and 3 further summarize the top 10 AS numbers as well as the AS names or organizations and the number of VPN servers that are registered within the respective AS. As can be seen, most top ASes are large ISP networks. Moreover, we investigate the top ASes for commercial VPN providers. As shown by Ramesh et al. [47] it is quite common for commercial VPN providers to use shared infrastructure. 27 providers, including popular companies such as NordVPN, Norton Secure VPN, or Mozilla VPN, use the same AS, namely AS 9009 operated by M247 Ltd. This AS is also visible in our measurements and it ranks 14th with 74,894 identified VPN servers (0.76% of all addresses). Furthermore, Ramesh et al. [47] find that some IP blocks in AS 16509 (Amazon) are shared across Norton Secure VPN and SurfEasy VPN. AS 16509 lands on rank 20 of our list being shared by almost 60,000 VPN servers (0.6% of all addresses). Another AS known to be used by VPN providers is AS 60068--again operated by M247 Ltd.-- which is used by NordVPN and CyberGhost VPN. It ranks on place 178 of our list with 6,898 VPN servers (0.07% of all addresses). 
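The per-country aggregation behind Figure 2 can be sketched as follows, assuming the MaxMind GeoLite2 country database accessed through the `geoip2` Python package; the database filename and the handling of unresolvable addresses are illustrative assumptions, not the paper's exact tooling.

```python
import collections
import geoip2.database
import geoip2.errors

def country_counts(ips, mmdb_path="GeoLite2-Country.mmdb"):
    """Count responsive addresses per country ISO code using the GeoLite2 country database."""
    counts = collections.Counter()
    with geoip2.database.Reader(mmdb_path) as reader:
        for ip in ips:
            try:
                counts[reader.country(ip).country.iso_code] += 1
            except geoip2.errors.AddressNotFoundError:
                counts[None] += 1                  # address not present in the database
    return counts
```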
Overall, we find that although the top ASes are dominated by large ISPs, a considerable number of VPN servers are located in ASes used by commercial VPN providers. \begin{table} \begin{tabular}{r l r} \hline \hline AS number & AS name & VPN servers \\ \hline 4134 & ChinaNet & 515,830 \\ 7922 & Comcast & 356,327 \\ 1221 & Telstra & 257,821 \\ 3320 & Deutsche Telekom & 242,433 \\ 4766 & Korea Telecom & 228,863 \\ 4713 & NTT Communications & 145,286 \\ 7018 & AT\&T & 137,698 \\ 4837 & China Unicom & 133,861 \\ 3462 & HiNet & 119,612 \\ 20115 & Charter Communications & 97,109 \\ \hline \hline \end{tabular} \end{table} Table 2: IPv4: AS numbers, AS names and number of VPN servers belonging to the ASes. **Geolocation**. We use the GeoLite Country Database [51] to determine the location of the responsive IP addresses. Figure 2 shows a heatmap of the number of responsive IPv4 addresses per country. We observe that responsive IP addresses are scattered all over the world, in total over 241 and 52 countries for IPv4 and IPv6, respectively. However, 64% and 86% of IP addresses belong to the top 10 countries for IPv4 and IPv6 respectively. The top 3 countries contributing to IPv4 responsive addresses are the United States, China, and UK, while the top 3 countries for IPv6 are the United States, Japan, and Germany. ### VPN Protocols We are able to detect servers for IPsec, PPTP, OpenVPN without tls-auth, and SSTP. Table 4 summarizes our findings. Our IPsec UDP probes yield by far the most responsive VPN servers. It might seem surprising that we find such a large number of PPTP servers in contrast to OpenVPN and SSTP considering that PPTP is far more outdated and OpenVPN is one of the most prominent VPN protocols. However, we have to keep in mind that we can only detect a subset of the whole OpenVPN ecosystem since some configurations require knowledge of cryptographic key material as explained in Section 3.1. Apart from that, SSTP can only be used for remote access connections, whereas PPTP used to be the most widely deployed VPN protocol. We can assume that a large number of the detected PPTP servers are quite outdated, yet still running. Out of the around 1.4 million OpenVPN servers, 1,011,178 were detected over UDP and 482,956 over TCP. Considering that the TCP version of OpenVPN is generally considered a fallback option, this disparity is to be expected. Figure 3 visualizes the intersection of those two address sets in a Venn diagram. We can see that the majority of the servers support only a single transport protocol. **Overlap between protocols**. In the next step, we compare the IP address sets for the four protocols to depict their intersections and to find out how many of the servers support more than one VPN protocol. Figure 4 summarizes those findings in an upset plot. The horizontal bars on the left visualize the sizes of the four protocol sets. The vertical bars represent the different intersections and the sets to be considered are indicated by the black dots below the vertical bars. The first bar on the left, e.g., represents the number of VPN servers supporting both PPTP and IPsec with roughly 550,000 servers making up around 5.7% of the whole detected VPN server ecosystem. The second bar on the right, on the other hand, represents the number of servers supporting all four protocols, which is close to zero with only around 2.8 thousand servers.
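The per-protocol overlaps summarized in Figure 4 boil down to set intersections over the responsive address sets. The following is a minimal sketch of that computation; the file names are illustrative placeholders for the per-protocol scan output (one responsive IP address per line) and are not part of our toolchain.

```python
# Minimal sketch: per-protocol overlaps behind the upset plot.
# The input file names are illustrative placeholders.
from itertools import combinations

PROTOCOLS = ["ipsec", "pptp", "openvpn", "sstp"]

def load_set(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

sets = {p: load_set(f"responsive_{p}.txt") for p in PROTOCOLS}
all_servers = set().union(*sets.values())
print(f"total detected VPN servers: {len(all_servers)}")

# Sizes of all pairwise and higher-order intersections,
# e.g. the PPTP & IPsec bar or the all-four-protocols bar.
for r in range(2, len(PROTOCOLS) + 1):
    for combo in combinations(PROTOCOLS, r):
        inter = set.intersection(*(sets[p] for p in combo))
        share = 100 * len(inter) / len(all_servers)
        print(f"{' & '.join(combo)}: {len(inter)} ({share:.2f}%)")
```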
\begin{table} \begin{tabular}{l c} \hline \hline VPN protocol & Detected servers \\ \hline IPsec & 7,008,298 \\ PPTP & 2,424,317 \\ OpenVPN & 1,436,667 \\ SSTP & 187,214 \\ \hline \hline \end{tabular} \end{table} Table 4: Number of detected VPN servers per protocol. Figure 3: Intersection of OpenVPN UDP and TCP servers. We can see that the majority of all VPN servers support only one of the four protocols we consider in this work. Since commercial VPN providers usually offer a variety of different VPN protocols to choose from, it is possible that a large percentage of the servers supporting several protocols are commercial. This might be the case especially for the ones supporting three or four protocols. We investigate the rDNS records corresponding to the servers supporting all the four protocols, and find that there are no commercial VPN provider in the top 10 second-level domains. All in all, we find that commercial VPN providers account for only a fraction of the entire VPN server ecosystem considering the supported protocols. **Different protocol versions**. Some VPN protocols might include different versions or configurations, like OpenVPN, for instance. We therefore try to trigger VPN responses from OpenVPN servers suggesting the outdated key exchange method key method 1. We also try to trigger responses with random HMAC signatures. We find that only 84 of the roughly 1.4 million servers accept our random signature. Apart from that, none of the detected servers support the insecure key exchange method. While most of the servers ignored our requests, we still received around 6,500 responses specifying the default key exchange method key method 2. We can therefore conclude that key method 1 is truly deprecated in the OpenVPN ecosystem. Figure 4: VPN protocol summary: Number of detected VPN servers for each protocol and the intersection between all protocols. ### Security Analysis **TLS certificate analysis**. We collect TLS certificates for the TLS-based VPN servers which include SSTP and OpenVPN over TCP and consider only unique certificates. For that, we compare the certificate fingerprints, i.e., the unique identifier of the certificate, to make sure we do not consider the same certificate more than once. Some certificates, however, do not include a fingerprint. Therefore, the number of certificates that we analyze in the end might be higher than the number of unique certificates. For OpenVPN, we find 129,143 unique certificates with a fingerprint for 312,095 servers. The most frequently occurring certificate is collected over 10,000 times and is issued for _www.update.microsoft.com_. For SSTP, there are 104,988 fingerprints for 184,047 servers. We detect a certificate issued for _*.vpnauction.com_ 2561 times and one for _*.trust.zone_ 1194 times. These are commercial VPN providers that seem to use the same certificate for all of their VPN servers. While we are able to collect certificates for nearly all of the SSTP servers, we only receive TLS certificates for around 65% of the detected OpenVPN servers. This is most likely caused by the fact that OpenVPN performs a variation of the standard TLS handshake during connection establishment. Therefore, some of the servers might not respond when trying to initiate a regular TLS handshake. Table 5 summarizes the results of the certificate analysis and contains the number of certificates that we analyzed after filtering out unique certificates and certificates without fingerprints. 
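The deduplication and classification steps behind Table 5 can be sketched as follows. The sketch assumes the collected certificates are available as PEM blobs and uses the pyca/cryptography package for parsing; the helper name is illustrative, and it omits the signature check that distinguishes self-signed from merely self-issued certificates.

```python
# Sketch: deduplicate certificates by fingerprint and classify them as
# expired, self-issued, or snake oil (subject and issuer CN both set to
# localhost or user.local). Library choice (pyca/cryptography) and the
# function name are illustrative; a full self-signed check would also
# verify the signature against the certificate's own public key.
from datetime import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.x509.oid import NameOID

SNAKE_OIL_CNS = {"localhost", "user.local"}

def common_name(name):
    attrs = name.get_attributes_for_oid(NameOID.COMMON_NAME)
    return attrs[0].value if attrs else None

def classify(pem_bytes):
    cert = x509.load_pem_x509_certificate(pem_bytes)
    subject_cn = common_name(cert.subject)
    issuer_cn = common_name(cert.issuer)
    return {
        # SHA-256 fingerprint, used to drop duplicate certificates.
        "fingerprint": cert.fingerprint(hashes.SHA256()).hex(),
        "expired": cert.not_valid_after < datetime.utcnow(),
        "self_issued": cert.subject == cert.issuer,
        "snake_oil": subject_cn in SNAKE_OIL_CNS
                     and issuer_cn in SNAKE_OIL_CNS,
    }
```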
We detect a large number of self-issued or self-signed certificates for both protocols. Out of the self-issued certificates, we characterize only around 4.7% as snake oil certificates for SSTP and close to zero for OpenVPN with around 0.4%. However, 33% of the self-issued SSTP certificates contain _softether_, an open-source and multi-protocol VPN software, in the CN fields. 13% specify an IPv4 address in the CN sections. Upon looking at the organization field, we find over 21,000 different organizations where almost 14,000 specify no organization at all. For the OpenVPN certificates, we find that around 77% of the self-issued certificates include the _Fireware web CA_ as CNs specifying _WatchGuard_ as organization. For the rest, we detect more than 21,000 different organizations. \begin{table} \begin{tabular}{l l l} \hline \hline & OpenVPN TCP & SSTP \\ \hline Expired & 6080 (3.8\%) & 13,370 (9\%) \\ Self-issued & 109,965 (69\%) & 39,889 (28\%) \\ Self-signed & 109,825 (69\%) & 34,725 (24\%) \\ \hline All certificates & 158,705 & 143,517 \\ \hline \hline \end{tabular} \end{table} Table 5: Expired, self-issued, and self-signed TLS certificates for OpenVPN and SSTP. Looking at the organization fields of the CA-signed certificates, we can learn more about the signing authorities. Considering SSTP, we filter out 2502 different organizations for almost 100,000 CA-signed certificates. Table 7 contains the top five organizations accounting for 87% of all signings. We examine the issuer CNs for the certificates that do not specify an organization, yet we could not find any meaningful information with 7,531 different issuers and the most frequently occurring CN being _CA_ with 159 signings. For OpenVPN, the organizations are a lot more heterogeneous with 14,548 organizations in total. The top five organizations in Table 6 account for only around 50% of all signings. Since we also detect a quite significant number of expired certificates, we examine the date of their expiry more thoroughly. Figure 5 shows ECDFs for the time that has passed since the dates of expiry and the 15th of August, 2022. In general, over half of the SSTP certificates expired over a year ago. For OpenVPN it is even around 70%. It is possible that those certificates belong to outdated, forgotten VPN servers. **TLS vulnerability analysis**. The results of our TLS vulnerability analysis for the TLS-based VPN protocols can be found in the last two columns of Table 8 where we count the occurrences of susceptible servers. We detect a larger number of vulnerable servers for RC4, Poodle and ROBOT for both protocols, yet only a few outliers for the rest. SSTP is much more likely to show signs of vulnerability for all three attacks with over 90% of the servers being susceptible to ROBOT. This is most likely caused by the fact that SSTP is based on an outdated version of SSL and highlights why SSTP is not recommended to be used anymore. **The effect of not using SNI**. As we target only IP addresses in our follow-up TLS measurements without the SNI extension, we want to investigate the effect of not using SNI. Therefore, we first perform an rDNS resolution for our IP addresses and find 259,910 domain names for about 480,000 OpenVPN TCP servers and 86,630 domain names for roughly 180,000 SSTP servers. We now collect certificates with the SNI extension and then re-run the TLS scans without SNI for the respective addresses for whose domains we could gather certificates. 
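For the plain-TLS case (SSTP; our OpenVPN scans go through customized ZGrab2 modules instead), this comparison amounts to two handshakes per server, one with and one without the SNI extension, followed by a fingerprint comparison. A minimal sketch using Python's standard ssl module, with timeouts and error handling trimmed:

```python
# Sketch: fetch a server's certificate with and without SNI and compare
# SHA-256 fingerprints. Certificate validation is deliberately disabled,
# since we only want the raw certificate, not a trust decision.
import hashlib
import socket
import ssl

def cert_fingerprint(ip, port=443, sni_hostname=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni_hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def sni_mismatch(ip, hostname, port=443):
    # True if the certificate served for `hostname` (SNI) differs from
    # the default certificate served without SNI.
    return (cert_fingerprint(ip, port, sni_hostname=hostname)
            != cert_fingerprint(ip, port, sni_hostname=None))
```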
Table 9 shows the results of the comparison of those two types of certificates and the number of certificates we could collect. Two certificates mismatch when the fingerprints differ. We then compare different fields and summarize the mismatch occurrences in the table. If those fields match and the certificate has only been renewed, we do not count it as a mismatch. While the results for both protocols are similar, relatively speaking, we find more mismatches for SSTP. About 3% mismatch for OpenVPN, whereas for SSTP 5.5% mismatch. To confirm that those mismatches are caused by using SNI in the TLS handshakes, we perform a second measurement without SNI and compare the certificates with the other non-SNI results. Without SNI, we find less than half as many mismatches for SSTP and more than three times fewer mismatches for OpenVPN. \begin{table} \begin{tabular}{l l l l l l} \hline \hline & TLS version & Cipher suites & Other requirements & OpenVPN & SSTP \\ \hline RC4 [5] & All & RC4 & None & 32,294 & 84,892 \\ Heartbleed [50] & All & All & OpenSSL Heartbeat & 232 & 10 \\ Poodle [40] & SSL 3.0 & All & None & 7,005 & 24,917 \\ FREAK [16] & All & RSA\_EXPORT & None & 31 & 1 \\ Logjam [3] & All & DHE/512-bit export & None & 8 & 0 \\ DROWN [8] & SSLv2 & All & None & 0 & 0 \\ ROBOT [10] & All & TLS\_RSA & None & 95,301 & 174,986 \\ Raccoon [37] & TLS \(\leq 1.2\) & TLS\_DH & None & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Table 8: Requirements for TLS vulnerabilities and number of vulnerable servers per protocol. Considering the overall number of certificates from our large-scale measurements compared to the ones we collected with SNI and keeping in mind the mismatches we detected in the two non-SNI measurements, we can conclude that not using SNI affects less than 1% of the certificates for both protocols and the effect is therefore negligible. ### Fingerprinting **Server Software**. For SSTP and PPTP, we can infer the server-side software from the responses we receive to our initiation requests. For SSTP, we find that around 80% of all detected servers use _Microsoft HTTPAPI 2.0_. Around 19% use _MikroTik-SSTP_ and less than 1% use something else or specify nothing at all. However, the PPTP vendor software is a lot more heterogeneous compared to SSTP. Table 10 shows the different software vendors we detect in the VPN server responses. While there are four prominent vendors, over 15% of the PPTP servers rely on 183 different types. This can have potential security implications on the PPTP ecosystem. Assuming there was some kind of new vulnerability, the rollout of a security update to counter this vulnerability would be significantly slower compared to SSTP with fewer software vendors. A similar phenomenon where vendor fragmentation leads to slower update rollout can also be observed in the Android ecosystem. Thomas et al. [54] showed that almost 60% of all devices ran insecure Android versions in July 2015. This share declines only slowly after the discovery of a major vulnerability. They found out that the bottleneck of this issue lies with the manufacturers and results in 87.7% of all devices being exposed to at least 11 critical vulnerabilities. Jones et al. [26] considered manufacturers between 2015 and 2019 and further showed that the median latency of a security update is 24 days with an additional latency of 11 days before an end-user update. **Nmap OS detection and port scans**. 
In our Nmap OS detection measurements, we first have a look at the most common ports for all four protocols. Figure 6 summarizes the most frequently occurring open ports. As expected, the default HTTP(S) ports 443 and 80 are the most common ports, with the exception of the PPTP servers for which the default PPTP port TCP/1723 obviously is the most widely used port. \begin{table} \begin{tabular}{l l l} \hline \hline & OpenVPN & SSTP \\ \hline SNI Certificates & 84,212 & 45,405 \\ no SNI Certificates & 81,379 & 45,026 \\ Certificate Mismatches & 2491 & 2515 \\ Authority Key ID Mismatches & 2051 & 1463 \\ Subject Key ID Mismatches & 2407 & 2379 \\ Subject SANs Mismatches & 2008 & 1677 \\ Issuer CN Mismatches & 1933 & 1476 \\ Subject CN Mismatches & 2021 & 1627 \\ \hline \hline \end{tabular} \end{table} Table 9: Comparison of certificates collected with and without SNI. As Ramesh et al. [47] pointed out, specific open ports do not pose security risks by themselves, yet they might still be abused in order to identify and exploit particular services [23]. For the OS detection, we filter out the first guesses for every target and look at the most common OSes and version ranges: * **IPsec:** We receive 48 unique first guesses for 126 hosts out of 722 responsive IPsec servers. Out of those, 40 guess the Linux Kernel ranging from version 2.6.32-3.10. In general, Linux is the most common OS with 67 guesses. However, Microsoft was barely guessed as an OS vendor with only nine guesses. * **PPTP:** For 792 responsive hosts, Nmap was able to guess an OS for 216 addresses with 56 unique guesses. Linux was once again the primary occurrence. Out of those guesses, 88 specified Linux 2.6.32-3.10, where the majority lies below version 3.2, however. As for IPsec, we have very few results for Microsoft with only 15 guesses. For the PPTP servers, there were more hardware guesses compared to the other protocols with 36 guesses specifying some kind of hardware device. * **OpenVPN:** The most frequent guesses are almost exclusively Linux again, with 33 unique guesses for 89 out of the 763 responsive hosts. 39 specify Linux ranging from 3.2-4.11, i.e., the versions are not quite as outdated as for PPTP and IPsec. We received only a single guess for Microsoft products. * **SSTP:** The SSTP scans result in 44 different guesses for 178 out of 948 responsive hosts. This time, we have more results for Microsoft products with a total of 49 guesses. The most prominent vendor is Linux again, however, with 101 guesses where 53 range from Linux versions 2.6.32-3.10. \begin{table} \begin{tabular}{l r} \hline \hline Vendor & Percentage \\ \hline Linux & 32.3\% \\ MikroTik & 30.6\% \\ Draytek & 21.1\% \\ Microsoft & 6.9\% \\ Cananian & 2.0\% \\ Fortinet PPTP & 1.4\% \\ Yamaha Corporation & 1.4\% \\ Cisco Systems, Inc. & 1.2\% \\ \hline Others (162) & 3.2\% \\ \hline \hline \end{tabular} \end{table} Table 10: Software vendors for detected PPTP servers. ### IPv6 **VPN server detection**. Targeting roughly 530 million IPv6 addresses in our ZMapv6 port scans, we could detect 1,195,510 responsive hosts on port TCP/443 which we target in our follow-up ZGrab2 scans for SSTP and OpenVPN over TCP. We could not find any responsive addresses on port TCP/1723, the default PPTP port. Since port TCP/1723 is used exclusively for PPTP and the protocol is very outdated, it is not too surprising that there are no IPv6 servers supporting PPTP. Apart from that, we do not get any responses on the UDP ports 500 (IPsec) and 1194 (OpenVPN over UDP).
Out of the roughly 1.2 million hits on port TCP/443, we could identify 2070 addresses as OpenVPN servers and 949 as SSTP servers with a total of 2221 VPN servers supporting IPv6. While those results seem very low, we have to keep in mind that the rollout of IPv6 is still very slow in general. IPv6 is also not yet supported by most commercial VPN providers. As also observed in IPv4 results in Section 4.2, none of the OpenVPN servers accepted our OpenVPN key method 1 requests with only 11 servers still responding with the secure key exchange method. Additionally, of the overall IPv6 VPN servers we detect, around 36% support both protocols, i.e., compared to IPv4, the overlap is higher. Investigating the rDNS records corresponding to the responsive IPv6 addresses, we observe that the top 10 domains belong to hosting providers, cloud providers, and research networks. Similar to the IPv4 results, we do not find a domain name belonging to a commercial VPN provider among the top 10 domains. By filtering second-level domains to match _*vpn*_ we find the commercial VPN provider WhiteLabel VPN, ranking 25th among the top domains. Figure 6: Heatmap of most frequently detected open ports per VPN server. Therefore, we infer that most of the VPN servers that support IPv6 are, in fact, not commercial VPN providers. **TLS certificate analysis**. The results of the TLS certificate analysis are similar to IPv4. We could collect certificates for around 75% of the identified OpenVPN servers with 816 unique fingerprints. Combined with the certificates that do not contain a fingerprint, we analyze a total of 1882 certificates. We collected certificates for every SSTP server resulting in 747 certificates after filtering out 207 unique fingerprints. Less certificates are expired this time with only 3.3% for OpenVPN and 2.1% for SSTP. This time, only 29% of the OpenVPN certificates are self-signed. For SSTP, more certificates are self-signed for the IPv6 servers with over 70% of all certificates. Out of those, we characterize roughly 2% as snake oil certificates for both protocols. Furthermore, about two thirds of the self-signed certificates for both protocols were issued by softether. When examining the signing organizations for the CA-signed certificates, we find that around 85% (709 certificates) of the OpenVPN certificates are signed by Let's Encrypt with a total of 43 organizations. For SSTP, around 73% are signed by Let's Encrypt (153 certificates). Here, we find a total of only 16 organizations. **TLS vulnerability analysis**. The results of the TLS vulnerability analysis are very similar to the IPv4 VPN servers. For both protocols, we are only able to detect vulnerable servers for the same three prominent attacks as for the IPv4 analysis. Out of the 2070 OpenVPN servers, 31% are vulnerable to RC4 biases, 6% to Poodle and 74% to Robot. When analyzing the 949 SSTP servers, we find that 67% are vulnerable to RC4 biases, 13% to the Poodle attack and roughly 98% to ROBOT. While the results are similar to our large-scale measurements, we can conclude that the VPN servers supporting IPv6 are much more likely to show any signs of vulnerability with the vast majority being vulnerable to the ROBOT attack. **The effect of not using SNI**. The rDNS measurements for the IPv6 servers resulted in 410 domain names for SSTP and 813 domain names for OpenVPN over TCP. Again, we first collect TLS certificates using the SNI extension and then try the same without SNI and compare the results. 
We find that only around 3% of the certificates for both protocols mismatch in terms of fingerprints and important certificate fields including authority and subject key IDs, subject SANs, and CNs. When comparing those results by running a second TLS scan without SNI, we find that only around 2.5% of the OpenVPN and less than 1% of the SSTP certificates differ. Considering the overall number of certificates, the effect of not using SNI is even less significant compared to IPv4 and is therefore negligible. **VPN server software**. Since we could not analyze the PPTP server software ecosystem this time, we can only compare the results for SSTP. The results are similar again with 91% of the SSTP servers specifying the Microsoft HTTP API 2.0. However, the rest did not specify any vendor, i.e., the IPv6 SSTP servers seem not to use MikroTik-SSTP, with Microsoft being the only vendor. **Nmap OS detection and port scans**. As for IPv4, we perform Nmap measurements on the detected IPv6 VPN servers including 1000 random OpenVPN TCP servers and all 949 SSTP servers. Out of those servers, 874 OpenVPN servers and 852 SSTP servers are responsive. The most commonly used open port is TCP/443 with 838 occurrences (96%) for OpenVPN and 852 (97%) for SSTP. Compared to IPv4, the number of open HTTPS ports is much higher for OpenVPN. Here, we have to keep in mind that we can only consider OpenVPN servers over TCP. Thus, this disparity is to be expected. The second most frequently open port for both protocols, in contrast to IPv4, is TCP/22, the default SSH port. This port occurs 245 times (28%) for OpenVPN and even 391 times (46%) for SSTP. Other common ports for both protocols are ports TCP/8000 for OpenVPN (21%) and TCP/80 accounting for around 17% of the open ports for both protocols. We receive more OS guesses for the IPv6 servers compared to IPv4. As was the case for IPv4, we filter out the first guesses for every target: * **OpenVPN**: The measurement results in only four unique guesses for a total of 481 hosts. 93% specify Linux with 416 guessing Linux 3.X and 33 guessing version 2.6. Only 19 predictions include a Microsoft OS and only 13 an Apple product. * **SSTP**: For SSTP, there are five unique predictions for 406 addresses. The majority specifies Linux again with 91%. Out of those, 333 guesses specify Linux version 3.X and only 36 specify version 2.6. Microsoft OSes are predicted 36 times and only a single guess specifies a macOS. In contrast to IPv4, Nmap was able to predict an OS for a much larger percentage of our targets with an OS guess for almost half of the targets. Additionally, the predictions are a lot more homogeneous. Linux is again the most prominent vendor; however, the predicted versions are not quite as outdated as for the IPv4 servers. ## 5 Passive VPN Traffic Analysis It is important for network operators and ISPs to gain insight into the volume and daily patterns of VPN traffic. In a previous study, Feldmann et al. [18] detect VPN traffic based on the domain names corresponding to the IP addresses observed in the traffic. They exclude any domain name that starts with _www._ as well as any domain name that does not contain _*vpn*_ to the left of the public suffix. Finally, they consider the remaining domain names as VPN domain names and count the traffic that relates to these domain names as VPN traffic.
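This domain-based filter is easy to state precisely; the sketch below is illustrative and, for brevity, approximates the public suffix by the last label, whereas a faithful implementation would consult the Public Suffix List [45].

```python
# Sketch: the domain-based VPN filter of Feldmann et al. A domain is
# treated as a VPN domain if it does not start with "www." and contains
# "vpn" to the left of the public suffix. The public-suffix split here is
# a naive stand-in (last label only); a real implementation would use the
# Public Suffix List.
def is_vpn_domain(domain: str) -> bool:
    domain = domain.lower().rstrip(".")
    if domain.startswith("www."):
        return False  # excluded as a web server
    left_of_suffix = ".".join(domain.split(".")[:-1])
    return "vpn" in left_of_suffix

# Examples: is_vpn_domain("vpn3.example.net") -> True
#           is_vpn_domain("www.openvpn.net")  -> False
```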
To compare our methodology with the state of the art, we apply the methodology used by Feldmann et al. [18] to our results. We use DNS responses gathered by DNS resolvers at a large European ISP, and look for those DNS responses that include the IP addresses from our VPN hitlist. We find 13% of the IP addresses from the VPN hitlist in the above-mentioned DNS responses. Therefore, we complement our DNS data with reverse DNS look-ups for all the remaining IP addresses. To refine the reverse DNS results, we exclude any domain names containing any order of the corresponding IP address bytes or octets in decimal or hexadecimal format. Overall, we end up with the domain names corresponding to 23.6% of the IP addresses from the VPN hitlist. Then, we apply the methodology used by Feldmann et al. to the resulting domain names, i.e., we extract those domain names that contain _*vpn*_ on the left side of the public suffix [45], while excluding any domain starting with www. to exclude web servers. We observe that this methodology captures only 4.8% of our VPN hitlist. Therefore, our approach can detect 4 times more VPN servers compared to the methodology by Feldmann et al. Finally, we look at a one-week snapshot of all the network flow traffic from the large European ISP to find out the amount of traffic that can be attributed to VPN. To this end, we compare the amount of VPN traffic detected with three methodologies: 1. _VPN Hitlist_: the methodology proposed in this paper, i.e. sending active probes, including the responsive IP addresses in a hitlist, excluding those IP addresses that answer web requests, i.e. HTTP GET requests, then measuring the traffic volume originated by or destined to these IP addresses. 2. _Port-based_: this methodology captures the traffic only based on port numbers, considering traffic with port numbers 500 (IPsec), 4500 (IPsec), 1194 (OpenVPN), 1701 (L2TP), and 1723 (PPTP), both on UDP and TCP, as VPN traffic. 3. _Domain-based_: the methodology proposed by Feldmann et al., i.e. filtering domain names based on certain keywords, then measuring the traffic volume originated by or destined to the IP addresses corresponding to these domain names. Figure 7 shows the traffic volume considered as VPN traffic by each of the above-mentioned methodologies. The solid black line shows the total amount of VPN traffic detected by any of the three approaches. The dashed line shows the total traffic volume in the ISP. The left Y axis shows the VPN traffic volume (including all the three approaches), and the right Y axis shows the total ISP traffic volume. All the traffic values are normalized. While normalizing, we keep the ratio between the VPN traffic and total traffic intact. Therefore, comparing the left and right axis values shows that the total traffic is roughly 25 times as much as all VPN traffic. Compared to the _Port-based_ approach, we detect twice as much traffic, and compared to the _Domain-based_ approach, we detect 8 times as much using the _VPN Hitlist_. The mean VPN traffic volume detected by all three approaches is 4.1% of the mean total ISP traffic over the week, with _VPN Hitlist_ contributing 2.6%, _Port-based_ 1.3%, and _Domain-based_ 0.3%. Looking at the overlap between every two approaches, we find that only 2.7% of all the traffic detected by all three approaches is detected both by _VPN Hitlist_ and _Domain-based_. We observe 1.2% overlap between the traffic detected by _VPN Hitlist_ and _Port-based_ approaches.
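The _Port-based_ baseline in the list above reduces to a membership test on the flow's port numbers. A minimal sketch over generic flow records (the field names are illustrative):

```python
# Sketch: the Port-based baseline. A flow is counted as VPN traffic if
# either endpoint uses one of the well-known VPN port numbers, over UDP
# or TCP. Flow-record field names are illustrative.
VPN_PORTS = {
    500,   # IPsec (IKE)
    4500,  # IPsec NAT traversal
    1194,  # OpenVPN
    1701,  # L2TP
    1723,  # PPTP
}

def is_vpn_flow(src_port: int, dst_port: int) -> bool:
    return src_port in VPN_PORTS or dst_port in VPN_PORTS

def vpn_traffic_volume(flows) -> int:
    # flows: iterable of (src_port, dst_port, byte_count) tuples
    return sum(n for sp, dp, n in flows if is_vpn_flow(sp, dp))
```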
We observe a diurnal pattern in the VPN traffic detected by all of the three approaches. We find that the VPN traffic pattern on weekdays differs from that on weekends. It peaks at noon on weekdays and at night on weekends, while the total ISP traffic always follows the same pattern, i.e. peaks at night. This could indicate that VPN traffic is mostly work-related during the week and mostly entertainment-related over the weekend. The amount of VPN traffic detected by the _Domain-based_ approach is much lower on weekends than on weekdays, which could indicate that the _Domain-based_ approach detects mostly work-related VPN servers. Figure 7: Normalized VPN traffic volume for different traffic detection techniques. We investigate the domain names corresponding to the traffic we detect using our approach and find that _vpn._, _mail._, _www._, and _remote._ are among the most common prefixes to the left of the public suffix part of the domain names, with _vpn._ being the most common prefix. The fact that we observe _mail._ and _www._ might be either re-use of the same domain name for other purposes by the network operators, or a mislabeling effect from our approach caused by these servers not answering our HTTP GET requests. Also, looking at the DNS records corresponding to the IP addresses from our hitlist, using FlowDNS, a system to correlate DNS and NetFlow data at scale [36], we find that 5 out of the top 10 domains are related to commercial VPN providers and the rest are CDN domains. We observe that the most common source port/destination port combination is 4500/4500, which belongs to IPsec, followed by port number 1194, which is registered for OpenVPN, and 1193, which is used for VPN in practice [42]. We also observe that 51820/51820 and 1337/1337, which belong to the WireGuard protocol, are among the top port number pairs observed in the traffic detected by our approach. Port 51820 also falls into the range of ephemeral port numbers (49152 to 65535), which can be temporarily used by many applications. However, due to the prominent presence of this port number in our results, accompanied by port 1337, we infer that we likely detect some WireGuard traffic even though our active measurement approach cannot scan the WireGuard protocol; this is most likely due to the co-existence of multiple protocols on one VPN server. Traffic related to the WireGuard protocol contributes 8.6% of the VPN traffic detected by our VPN hitlist, while contributing only 2% of the traffic detected by the _Domain-based_ approach. ## 6 Discussion In this work, we detect VPN servers in the wild by sending Internet-wide active probes using different VPN protocols. We can distinguish between VPN servers and Web servers by excluding those servers that respond to a Web request. We compare the amount of traffic detected by our approach and two other approaches over a week of traffic from a large European ISP and find that the approach proposed by this work detects many more VPN servers than the state-of-the-art domain-based approach. In addition, our approach benefits from detecting VPN servers that do not use any domain name, and can also detect VPN traffic on unusual ports, as long as these servers answer VPN probes on the usual VPN port numbers.
Also, to the best of our knowledge, this is the first work to perform an Internet-wide active measurement of VPN servers in the wild. **VPN hitlist.** We send active probes according to the specification of VPN protocols including SSTP, PPTP, OpenVPN, and IPsec to the whole IPv4 address space and to an IPv6 hitlist. We make our list of detected VPN servers, namely the VPN hitlist, publicly available at vpnecosystem.github.io. This VPN hitlist can be useful for network operators to find out about the amount and patterns of VPN traffic in their networks. The VPN hitlist can also be used by fellow researchers to investigate different behaviors of the VPN servers and VPN traffic, e.g. investigating actual attacks on these servers. **Security.** We also investigate the security of the OpenVPN and SSTP protocols in terms of different security aspects, including the Heartbleed attack, TLS certificate security, and TLS downgrade attacks. We find that SSTP servers use expired certificates 3x more often than OpenVPN servers. We also find that 90% of the SSTP servers are vulnerable to the ROBOT attack. Therefore, we find SSTP to be the most vulnerable protocol. This strikingly high percentage of vulnerable servers for some of the protocols shows that the VPN server ecosystem is not as secure as some users believe it to be. Therefore, we hope that our analysis can highlight the security risks of using each VPN protocol and also help network operators choose the right VPN protocols for their networks. **Limitations.** Our approach builds upon receiving answers from the servers in the wild and therefore has its limitations. If there is a VPN protocol which uses a pre-shared key in the first VPN request and does not respond otherwise, we are unable to detect it. Examples of such VPN protocols are WireGuard and Cisco AnyConnect. Therefore, we are unable to detect any VPN server which offers only these two protocols. However, we observe that 8.6% of the detected traffic is related to WireGuard which might be due to multiple protocols being served by one VPN server. In addition, certain VPN servers might only work on non-registered port numbers for better anonymization. Since in our work we only send probes to the port numbers registered for the VPN protocols by IANA [7], we cannot detect VPN servers that work on unusual port numbers. Therefore, our list of detected VPN servers is limited to those using the supported VPN protocols and working on their registered port numbers. **Future work.** In the future, our work can be complemented by including more port numbers in the active scans. Results from previous studies on predicting the services across all ports [25] can be used together with our approach to gain more coverage. Despite the above-mentioned limitations, our proposed approach detects many more VPN servers compared to the state-of-the-art domain-based approach, and also, to the best of our knowledge, is the first work to perform an Internet-wide active measurement of VPN servers in the wild. **Reproducibility.** We make our analysis code and data [19], customized ZGrab2 modules [56], and our VPN hitlist publicly available3 for fellow researchers to be able to reproduce our work and build upon it. Footnote 3: [https://vpnecosystem.github.io/](https://vpnecosystem.github.io/) ## 7 Related Work VPN traffic classification is an open research problem, particularly challenging due to its encrypted nature. There are several studies trying to tackle this problem using machine learning approaches.
Some are able to categorize the traffic into VPN and non-VPN only [32], and some provide more detailed subcategories [57, 4, 59]. Zou et al. [59] identify encrypted traffic by combining a deep neural network to extract features of single packets and a recurrent network to analyze features of the traffic flow based on features of three consecutive packets. Though the model classifies some traffic incorrectly regarding sub-categories, it could achieve almost 99% accuracy when only considering VPN and non-VPN traffic. Alfayoumi et al. [4], on the other hand, also consider time-related features and subdivide traffic by also identifying applications. All of these works require previously captured unencrypted VPN traffic to train. Previous studies have also tried to detect VPN traffic using the DNS records corresponding to the IP addresses observed in the traffic [18]. In this paper, we propose a different approach, i.e. Internet-wide active measurements, to detect VPN servers in the Internet. Internet-wide measurements have been previously applied for several intents including finding IPv6 responsive addresses [20], responsive IPs to abnormal traffic [35], the usage of DNS over encryption [33], and so on. However, to the best of our knowledge, this is the first work applying active measurements to detect VPN servers in the wild and detecting the traffic based on a VPN hitlist. Investigating the security of the VPN servers is also an interesting research problem which is already addressed by several studies. For example, Xue et al. investigate the possibility and practicality of fingerprinting OpenVPN flows [1]. Tolley et al. investigate the vulnerability of known VPN servers to spoofed traffic [55]. Crawshaw [13] addresses vulnerabilities that come with some of the protocols themselves, such as outdated cryptographic cipher suites used in PPTP. In his proposal for WireGuard [14], Donenfeld talks about disadvantages in current popular VPN protocols. _VPNalyzer_ requires a tool to be installed on the user's device to measure and collect data on the active VPN connections in terms different security aspects including data leakage, open ports, and DNSSEC validation [47]. Appelbaum et al. also identified vulnerabilities of commercial and public online VPN servers [6]. A large body of literature also exists that empirically examines TLS vulnerabilities including self-signed root CA injection to intercept TLS connection [24; 46], and improper implementation of the protocol making version downgrade attacks possible even with new TLS 1.3 [30]. We mainly focus on potential vulnerabilities that come with VPN protocols which are built on top of SSL/TLS. Thus, we investigate SSL/TLS related features of those protocols. For some identified OpenVPN servers, we can also make assumptions on their security based on information we can infer about their server configurations. All the previous works study the security of known VPN servers, while in this paper, we measure the vulnerability of our detected VPN server in the Internet. ## 8 Conclusion In this paper, we performed the first Internet-wide active measurement on the VPN server ecosystem for OpenVPN, SSTP, PPTP, and IPsec both in IPv4 and IPv6 to detect VPN servers in the wild. We detected 9.8 million VPN servers distributed globally. 10% of the detected VPN servers offered more than one VPN protocol with very few serving all the four protocols we studied. 
We also sent active Web probes to the detected VPN servers and observed that 2% were both VPN and Web servers. Analyzing the TLS-based VPN protocols, i.e. OpenVPN and SSTP, we found that SSTP was the most vulnerable to a version downgrade attack, and that OpenVPN servers had the most self-signed and self-issued certificates. We also tried to fingerprint the detected VPN servers in terms of server software vendors and operating systems. Finally, using our VPN hitlist, excluding the servers that were both VPN and Web servers, we observed that VPN traffic constitutes 2.6% of the total traffic volume in a large European ISP, which is 8x as much as that detected by a state-of-the-art domain-based approach, and twice as much as that of the trivial port-based approach. We publish our VPN hitlist, our customized ZGrab2 modules for VPN scans, and the code for our analysis for future researchers and network operators to use.
2308.13656
Trace formulas revisited and a new representation of KdV solutions with short-range initial data
We put forward a new approach to Deift-Trubowitz type trace formulas for the 1D Schrodinger operator with potentials that are summable with the first moment (short-range potentials). We prove that these formulas are preserved under the KdV flow whereas the class of short-range potentials is not. Finally, we show that our formulas are well-suited to study the dispersive smoothing effect.
Alexei Rybkin
2023-08-25T20:05:04Z
http://arxiv.org/abs/2308.13656v2
# Trace formulas revisited and a new representation of KdV solutions with short-range initial data ###### Abstract. We put forward a new approach to Deift-Trubowitz type trace formulas for the 1D Schrodinger operator with potentials that are summable with the first moment (short-range potentials). We prove that these formulas are preserved under the KdV flow whereas the class of short-range potentials is not. Finally, we show that our formulas are well-suited to study the dispersive smoothing effect. Key words and phrases:trace formula, KdV equation, Hankel operator 2020 Mathematics Subject Classification: 34L25, 37K15, 47B35 The author is supported in part by the NSF grant DMS-2009980. We dedicate this paper to Vladimir Marchenko on the occasion of his centennial birthday. This paper is also dedicated to the memory of Vladimir Zakharov who has just left us. ## 1. Introduction We are concerned with the Cauchy problem for the Korteweg-de Vries (KdV) equation \[\begin{cases}\partial_{t}q-6q\partial_{x}q+\partial_{x}^{3}q=0,\quad x\in \mathbb{R},t\geq 0\\ q(x,0)=q(x).\end{cases} \tag{1.1}\] As is well-known, (1.1) is the first nonlinear evolution PDE solved in the seminal 1967 Gardner-Greene-Kruskal-Miura paper [7] by the method which is now referred to as the inverse scattering transform (IST). Conceptually, the IST is similar to the Fourier method but is based on the direct/inverse scattering (spectral) theory for the 1D Schrodinger operator \(\mathbb{L}_{q}=-\partial_{x}^{2}+q(x)\). Explicit formulas, however, are in short supply and trace formulas are among a few available. Historically, for short-range potentials \(q\left(x\right)\) (i.e. summable with the first moment) such a formula (see (5.7)) was put forward by Deift-Trubowitz in [5] in the late 70s (we call it the Deift-Trubowitz trace formula). However, no adaptation of the trace formula (5.7) to the solution \(q\left(x,t\right)\) to (1.1) is offered in [5] and, to the best of our knowledge, it has not been done in the literature. We emphasize that, as we show below (see Corollary 7.3), \(q\left(x,t\right)\) need not be short-range for \(t>0\) and therefore the approach of [5] breaks down in a serious way. In this contribution we put forward an elementary approach to Deift-Trubowitz type trace formulas which is based on Hardy space arguments and Hankel operators. This way we generate trace type formulas like (5.2), (5.4), (5.8), (5.12) that may serve different purposes. E.g. (5.2) remains valid for certain long-range potentials (will be done elsewhere), (5.12) is a convenient starting point to include time evolution \(q\left(x,t\right)\) under the KdV flow (section 6), and (5.8) is well-suited for subtle analysis of the gain of regularity (aka dispersive smoothing) phenomenon for the KdV equation (section 7). Our trace formulas are made of the essentially same ingredients as (5.7) and therefore equivalent to the latter, which is demonstrated in Appendix. Note that in our [12], [13], [14] we rely on the Dyson formula (aka the second log determinant formula) for extension of the IST to initial data \(q\left(x\right)\) that is essentially arbitrary at \(-\infty\) but still short-range at \(+\infty\). Comparing with the Dyson formula considerations, our trace approach is more robust for analysis of KdV solutions (see Remark 7.2). To the best of our knowledge Theorem 7.1 is new. Note that for periodic potentials the trace formula was studied in great detail the 70s by McKean-Moerbeke [21], Trubowitz [27] and many others (see e.g. 
[10] for a nice historic review) before (5.7). It was generalized by Craig in [4] in the late 80s to arbitrary bounded continuous potentials (the so-called Craig's trace formula). In the 90s Gesztesy et al [9] developed a general approach to Craig type trace formulas based on the Krein trace formula (the "true" trace formula) under the only condition of essential boundedness from below. The general trace formula put forward in [9] yields previously known ones. In the 2000s we [24] introduced a new way of generating trace-type formulas that is not based upon Krein's trace formula but rests on the Titchmarsh-Weyl theory for second order differential equations and asymptotics of the Titchmarsh-Weyl m-function. The approach is quite elementary and essentially free of any conditions. Recently, in Binder et al [1] Craig's trace formulas was used in the KdV context to address some open problems related to almost periodic initial data. The paper is organized as follows. In section 2 we introduce our notations Section 3 is devoted to basics of Hardy spaces and Hankel operator our approach is based upon. In section 4 we review the classical direct/inverse scattering theory for Schrodinger operators on the line using the language of Hankel operators. Section 5 is where our trace formulas are introduced. We do not claim their originality but believe that the approach is new. In Section 6 we derive a representation for the solution to the KdV equation with short-range initial data. To the best of our knowledge it is new. In the final section 7 we demonstrate how our trace formula for the KdV is well-suited for the analysis of dispersive smoothing. The approach builds upon our recent [14] and suggests an effective way to understanding how the KdV flow trades the decay of initial data for gain of regularity. In Appendix we demonstrate that the Deift-Trubowitz trace formula is actually a "nonlinearization" of ours. ## 2. Notations Our notations are quite standard: * Unless otherwise stated, all integrals are Lebesgue and, as is commonly done, we drop limits of integration if the integral (absolutely convergent) is over the whole line. For convergent integrals that are not absolutely convergent we always use the Cauchy principal value \[\left(PV\right)\int=\lim_{a\to\infty}\int_{-a}^{a}.\] * \(\chi_{S}\) is the characteristic function of a (measurable) set \(S\). * As usual, \(L^{p}\left(S\right),\ 0<p\leq\infty,\) is the Lebesgue space on a (measurable) set \(S\). If \(S=\mathbb{R}\) then we abbreviate \(L^{p}\left(\mathbb{R}\right)=L^{p}\). We include \(L^{p}\) in the family of weighted \(L^{p}\) spaces defined by \[L^{p}_{\alpha}=\left\{f\ |\ \int\left|f\left(x\right)\right|^{p}\left\langle x \right\rangle^{\alpha}\mathrm{d}x<\infty\right\},\ \ \alpha>0.\] where \(\left\langle x\right\rangle=\sqrt{1+x^{2}}\). The class \(L^{1}_{1}\) is basic to scattering theory for 1D Schrodinger operators (short-range potentials). * \(\left\|\cdot\right\|_{X}\) stands for a norm in a Banach space \(X\). The most common space is \(X=L^{2}\). We merely write \(\left\|\cdot\right\|\) in this case and also \[\left\|f\right\|^{2}=\left\langle f,f\right\rangle\ \text{where}\ \left\langle f,g\right\rangle=\int f\left(x\right) \overline{g}\left(x\right)\mathrm{d}x.\] * We write \(x\simeq y\) if \(x=Cy\) for some universal constant \(C\); \(x\lesssim_{a}y\) if \(x,y\geq 0\) and \(x\leq C\left(a\right)y\) with a positive \(C\) dependent on \(a\). We drop \(a\) if \(C\) is a universal constant. 
* We do not distinguish between classical and distributional derivatives. * A statement \(A_{\pm}\) means two separate statements: \(A_{-}\) and \(A_{+}\). ## 3. Hardy spaces and Hankel operators To fix our notation we review some basics of Hardy spaces and Hankel operators following [22]. A function \(f\) analytic in \(\mathbb{C}^{\pm}=\left\{z\in\mathbb{C}:\pm\operatorname{Im}z>0\right\}\) is in the Hardy space \(H^{p}_{\pm}\) for some \(0<p\leq\infty\) if \[\left\|f\right\|_{H^{p}_{\pm}}^{p}\overset{\mathrm{def}}{=}\sup_{y>0}\left\|f (\cdot\pm iy)\right\|_{p}<\infty.\] We set \(H^{p}=H^{p}_{+}.\) It is a fundamental fact of the theory of Hardy spaces that any \(f\left(z\right)\in H^{p}_{\pm}\) with \(0<p\leq\infty\) has non-tangential boundary values \(f\left(x\pm\mathrm{i}0\right)\) for almost every (a.e.) \(x\in\mathbb{R}\) and \[\left\|f\right\|_{H^{p}_{\pm}}=\left\|f\left(\cdot\pm\mathrm{i}0\right) \right\|_{L^{p}}=\left\|f\right\|_{L^{p}}. \tag{3.1}\] Classes \(H^{\infty}_{\pm}\) and \(H^{2}_{\pm}\) will be particularly important. \(H^{\infty}_{\pm}\) is the algebra of uniformly bounded in \(\mathbb{C}^{\pm}\) functions and \(H^{2}_{\pm}\) is the Hilbert space with the inner product induced from \(L^{2}\). It is well-known that \(L^{2}=H^{2}_{+}\oplus H^{2}_{-}\), the orthogonal (Riesz) projection \(\mathbb{P}_{\pm}\) onto \(H^{2}_{\pm}\) being given by \[\left(\mathbb{P}_{\pm}f\right)(x)=\pm\frac{1}{2\pi\mathrm{i}}\lim_{\varepsilon \to 0+}\int\frac{f(s)\mathrm{d}s}{s-\left(x\pm\mathrm{i}\varepsilon\right)}=: \pm\frac{1}{2\pi\mathrm{i}}\int\frac{f(s)\ \mathrm{d}s}{s-\left(x\pm\mathrm{i}0\right)}. \tag{3.2}\] Observe that the Riesz projections can also be rewritten in the form \[\left(\mathbb{P}_{\pm}f\right)(x)=\left(\widetilde{\mathbb{P}}_{\pm}f\right) (x)\mp\frac{1}{2\pi\mathrm{i}}\int\frac{f(s)}{s+\mathrm{i}}\mathrm{d}s, \tag{3.3}\] where \[\left(\widetilde{\mathbb{P}}_{\pm}f\right)(x):=\left(x+\mathrm{i}\right) \left(\mathbb{P}_{\pm}\frac{f}{\cdot+\mathrm{i}}\right)(x)\] is well-defined for any \(f\in L^{\infty}\). This representation is very important in what follows. If \(f\in L^{2}\) then \(\mathbb{P}_{-}f\) is by definition in \(H^{2}_{-}\) but of course not in \(L^{1}\). However under a stronger decay condition we have the following statement. **Lemma 3.1**.: _If \(\left\langle x\right\rangle f\left(x\right)\in L^{2}\) then_ \[\left(PV\right)\int\mathbb{P}_{-}f=\frac{1}{2}\int f. 
\tag{3.4}\] Proof.: Note first that if \(\left\langle x\right\rangle f\left(x\right)\in L^{2}\) then \(f\) is of course integrable as one sees from \[\int\left|f\right|=\int\left|\left\langle x\right\rangle f\left(x\right)\right|\frac{\mathrm{d}x}{\left\langle x\right\rangle}\leq\left\|\left\langle\cdot\right\rangle f\right\|\left\|\left\langle\cdot\right\rangle^{-1}\right\|<\infty.\] It follows then that for a finite \(a>0\) we have \[\int_{-a}^{a}\mathbb{P}_{-}f=\left\langle\mathbb{P}_{-}f,\chi_{\left|\cdot\right|\leq a}\right\rangle=\left\langle f,\mathbb{P}_{-}\chi_{\left|\cdot\right|\leq a}\right\rangle\quad\text{(by the self-adjointness of }\mathbb{P}_{-}\text{)}.\] Computing \(\mathbb{P}_{-}\chi_{\left|\cdot\right|\leq a}\) explicitly from (3.2) and letting \(a\to\infty\) now yields (3.4). We now define the Hankel operator on \(H^{2}\). Let \((\mathbb{J}f)(x)=f(-x)\) be the operator of reflection.
Given \(\varphi\in L^{\infty}\) the operator \(\mathbb{H}(\varphi):H^{2}\to H^{2}\) given by the formula \[\mathbb{H}(\varphi)f=\mathbb{J}\mathbb{P}_{-}\varphi f,\quad f\in H_{+}^{2}, \tag{3.7}\] is called the Hankel operator with symbol \(\varphi\). Clearly \(\left\|\mathbb{H}(\varphi)\right\|\leq\left\|\varphi\right\|_{L^{\infty}}\), \(\mathbb{H}(\varphi)\) is self-adjoint if \((\mathbb{J}\varphi)(x)=\overline{\varphi\left(x\right)}\) (this is always our case), \(\mathbb{H}(\varphi)=0\) if \(\varphi\) is a constant, and \[\mathbb{H}(\varphi) =\mathbb{H}(\widetilde{\mathbb{P}}_{-}\varphi) \tag{3.8}\] \[=\mathbb{H}(\mathbb{P}_{-}\varphi)\ (\text{if }\varphi\in L^{2} \cap L^{\infty}).\] The relevance of the Hankel operator in our setting is on the surface as the Marchenko operator, the cornerstone of the IST, is a Hankel operator. However, while in the literature on integrable systems it is rarely used in the form (3.7), we find it particularly convenient due, among others, to the property (3.8), which is less transparent in the integral representation. Finally we note that reliance on the theory of Hankel operator in the study of completely integrable systems has recently picked up momentum (see e.g. [2], [6], [8], [11], [19] and the references cited therein). ## 4. Overview of short-range scattering Unless otherwise stated all facts are taken from [20]. Through this section we assume that \(q\) is short-range, i.e. \(q\in L_{1}^{1}\). Associate with \(q\) the full line Schrodinger operator \(\mathbb{L}_{q}=-\partial_{x}^{2}+q(x)\). As is well-known, \(\mathbb{L}_{q}\) is self-adjoint on \(L^{2}\) and its spectrum consists of \(J\) simple negative eigenvalues \(\{-\kappa_{j}^{2}:1\leq j\leq J\}\), called bound states (\(J=0\) if there are no bound states), and two fold absolutely continuous component filling \((0,\infty)\). There is no singular continuous spectrum. Two linearly independent (generalized) eigenfunctions of the a.c. spectrum \(\psi_{\pm}(x,k),\ k\in\mathbb{R}\), can be chosen to satisfy \[\psi_{\pm}(x,k)=e^{\pm\mathrm{i}kx}+o(1),\ \partial_{x}\psi_{\pm}(x,k)\mp \mathrm{i}k\psi_{\pm}(x,k)=o(1),\ \ x\to\pm\infty. \tag{4.1}\] The function \(\psi_{\pm}\), referred to as right/left Jost solution of the Schrodinger equation \[\mathbb{L}_{q}\psi=k^{2}\psi, \tag{4.2}\] is analytic for \(\mathrm{Im}\,k>0\). It is convenient to introduce \[y_{\pm}\left(k,x\right):=e^{\mp\mathrm{i}kx}\psi_{\pm}\left(x,k\right)-1,\] \((1+y_{\pm}\left(k,x\right)\) is sometimes referred to as the Faddeev function), which is \(H^{2}\) for each \(x\). Since \(q\) is real, \(\overline{\psi}_{\pm}\) also solves (4.2) and one can easily see that the pairs \(\{\psi_{+},\overline{\psi}_{+}\}\) and \(\{\psi_{-},\overline{\psi}_{-}\}\) form fundamental sets for (4.2). Hence \(\psi_{\mp}\) is a linear combination of \(\{\psi_{\pm},\overline{\psi}_{\pm}\}\). We write this fact as follows \[T(k)\psi_{\mp}(x,k)=\overline{\psi_{\pm}(x,k)}+R_{\pm}(k)\psi_{\pm}(x,k),\quad k \in\mathbb{R}, \tag{4.3}\] where \(T\) and \(R_{\pm}\), are called transmission, right/left reflection coefficients respectively. The function \(T\left(k\right)\) is meromorphic for \(\mathrm{Im}\,k>0\) with simple poles at \(\left(\mathrm{i}\kappa_{j}\right)\) and continuous for \(\mathrm{Im}\,k=0\). Generically, \(T\left(0\right)=0\). The reflection coefficient \(R_{\pm}\left(k\right)\in L^{2}\) but need not admit be analytic. In the context of the IST Zakharov-Faddeev trace formulas [29] (conservation laws) play very important role. 
For Schwartz potentials \(q\) there are infinitely many of them. Explicitly, \[\frac{8}{\pi}\int\log\left(1-\left|R\left(k\right)\right|^{2}\right)^{-1}\mathrm{d}k=\int q+\sum\kappa_{n}\quad\text{(first trace formula)} \tag{4.4}\] \[\frac{8}{\pi}\int k^{2}\log\left(1-\left|R_{\pm}\left(k\right)\right|^{2}\right)^{-1}\mathrm{d}k=\int q^{2}-\frac{16}{3}\sum\kappa_{n}^{3}\quad\text{(second trace formula)} \tag{4.5}\] It is shown in the recent paper [15] that (4.4) holds for any \(q\in L^{1}\), each term being finite. Since \(\left|R_{\pm}\left(k\right)\right|\leq 1\) and \[\log\left(1-\left|R_{\pm}\left(k\right)\right|^{2}\right)^{-1}\geq\left|R_{\pm}\left(k\right)\right|^{2},\] one concludes that \(R_{\pm}\left(k\right)\in L^{2}\) for \(q\in L^{1}\). The second one (4.5) holds for \(q\in L^{1}\cap L^{2}\) [17] and readily implies that \(kR_{\pm}\left(k\right)\in L^{2}\). Note that Zakharov-Faddeev trace formulas are not directly related to the trace formulas we discuss in the Introduction, but they too are related to traces of certain operators. The identities (4.3) are totally elementary but serve as a basis for inverse scattering theory and for this reason they are commonly referred to as basic scattering relations. As is well-known (see, e.g. [20]), the triple \(\left\{R_{\pm},\left(\kappa_{j},c_{\pm,j}\right)\right\}\), where \(c_{\pm,j}=\left\|\psi_{\pm}(\cdot,\mathrm{i}\kappa_{j})\right\|^{-1}\), determines \(q\) uniquely and is called the scattering data for \(\mathbb{L}_{q}\). We emphasize that in order to come from an \(L_{1}^{1}\) potential the scattering data \(\left\{R_{\pm},\left(\kappa_{n},c_{\pm,n}\right)\right\}\) must satisfy some conditions known as Marchenko's characterization [20]. The actual process of solving the inverse scattering problem necessary for the IST is historically based on the Marchenko theory (also known as Faddeev-Marchenko or Gelfand-Levitan-Marchenko). In fact, this procedure is quite transparent from the Hankel operator point of view. Indeed, replacing \(\psi_{\pm}\) in (4.3) with \(y_{\pm}\) and applying the operator \(\mathbb{J}\mathbb{P}_{-}\), a straightforward computation [12] leads to \[y_{\pm}+\mathbb{H}(\varphi_{\pm})y_{\pm}=-\mathbb{H}(\varphi_{\pm})1,\text{ (Marchenko's equation)} \tag{4.6}\] where \(\mathbb{H}(\varphi_{\pm})\) is the Hankel operator (3.7) with symbol \[\varphi_{\pm}\left(k,x\right)=\underset{n=1}{\overset{N}{\sum}}\frac{-\mathrm{i}c_{\pm,n}^{2}e^{\mp 2\kappa_{n}x}}{k-\mathrm{i}\kappa_{n}}+R_{\pm}\left(k\right)e^{\pm 2\mathrm{i}kx}, \tag{4.7}\] and \(\mathbb{H}(\varphi_{\pm})1\) is understood as \[\mathbb{H}(\varphi_{\pm})1=\mathbb{J}\mathbb{P}_{-}\varphi_{\pm}=\mathbb{P}_{+}\mathbb{J}\varphi_{\pm}=\mathbb{P}_{+}\overline{\varphi}_{\pm}.\] We call (4.6) the Marchenko equation as its Fourier image is the Marchenko integral equation. It is proven in [12, Theorem 8.2] that \(I+\mathbb{H}(\varphi_{\pm})\) is positive definite and therefore \[y_{\pm}=-\left[I+\mathbb{H}(\varphi_{\pm})\right]^{-1}\mathbb{H}(\varphi_{\pm})1\in H^{2}. \tag{4.8}\] Thus, given data \(\left\{R_{\pm},\left(\kappa_{j},c_{\pm,j}\right)\right\}\) we compute \(\varphi_{\pm}\) by (4.7) and form the Hankel operator \(\mathbb{H}(\varphi_{\pm})\). The function \(y_{\pm}\left(k,x\right)\) is found by (4.8). The potential \(q\left(x\right)\) can then be recovered in a few ways. Our method is, of course, to apply a suitable trace formula, which we derive in the next section.
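As a quick illustration of this scheme (anticipating the trace formula (5.4) derived in the next section), consider the reflectionless case \(R_{+}=0\) with a single bound state \(-\kappa^{2}\) and norming constant \(c=c_{+,1}\); the computation below is routine and only serves to show the machinery at work. By (4.7), \(\varphi_{+}\left(k,x\right)=-\mathrm{i}c^{2}e^{-2\kappa x}/\left(k-\mathrm{i}\kappa\right)\) and \(\mathbb{H}(\varphi_{+})\) is rank one: for \(f\in H^{2}\) \[\left(\mathbb{H}(\varphi_{+})f\right)\left(k\right)=\frac{\mathrm{i}c^{2}e^{-2\kappa x}}{k+\mathrm{i}\kappa}f\left(\mathrm{i}\kappa\right).\] The Marchenko equation (4.6) is then solved by \(y_{+}\left(k,x\right)=-\mathrm{i}a\left(x\right)/\left(k+\mathrm{i}\kappa\right)\) with \[a\left(x\right)=\frac{c^{2}e^{-2\kappa x}}{1+c^{2}e^{-2\kappa x}/\left(2\kappa\right)},\] so that \(\int\mathrm{Re}\,y_{+}\left(k,x\right)\mathrm{d}k=-\pi a\left(x\right)\) and the trace formula (5.4) gives \[q\left(x\right)=2a^{\prime}\left(x\right)=-2\kappa^{2}\operatorname{sech}^{2}\kappa\left(x-x_{0}\right),\qquad e^{2\kappa x_{0}}=\frac{c^{2}}{2\kappa},\] i.e. the familiar one-soliton potential.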
Since many of our proofs below are based on limiting arguments we need to understand in what sense scattering data converges as we approximate \(q\) in the \(L_{1}^{1}\). In particular the following statement plays an important role. **Proposition 4.1**.: _If \(q_{n}\left(x\right)\) converges in \(L_{1}^{1}\) to \(q\left(x\right)\) then the sequence of reflection coefficients \(R_{\pm,n}\left(k\right)\) corresponding to \(q_{n}\left(x\right)\) converges in \(L^{2}\) to \(R_{\pm}\left(k\right).\)_ Proof.: We consider the \(+\) case only and we suppress \(+\) sign. We use the following a priori estimates (see e.g. [5]) \[\left|y_{-}\left(x,k\right)\right|\lesssim_{q}\left\langle x\right\rangle/ \left\langle k\right\rangle \tag{4.9}\] \[\left|y_{-}\left(x,k\right)-y_{-,n}\left(x,k\right)\right|\lesssim_{q}\left\langle x \right\rangle\left\|q-q_{n}\right\|_{L_{1}^{1}} \tag{4.10}\] \[\left|T\left(k\right)-T_{n}\left(k\right)\right|\lesssim_{q}\left|k\right|^{- 1}\left\|q-q_{n}\right\|_{L_{1}^{1}}. \tag{4.11}\] Consider \(\left\|R-R_{n}\right\|^{2}\) and rewrite it as (\(\varepsilon\) is any) \[\left\|R-R_{n}\right\|^{2} =\left\|\left(R-R_{n}\right)\chi_{\left|\cdot\right|\leq \varepsilon}\right\|^{2}+\left\|\left(R-R_{n}\right)\chi_{\left|\cdot\right|> \varepsilon}\right\|^{2} \tag{4.12}\] \[\leq 8\varepsilon+\left\|\left(R-R_{n}\right)\chi_{\left|\cdot \right|>\varepsilon}\right\|^{2}.\] It follows from the general formula [5] \[R\left(k\right) =\frac{T\left(k\right)}{2\mathrm{i}k}\int e^{-2\mathrm{i}kx}q \left(x\right)\left(1+y_{-}\left(x,k\right)\right)\mathrm{d}x \tag{4.13}\] that \[R\left(k\right)-R_{n}\left(k\right) =\frac{T\left(k\right)-T_{n}\left(k\right)}{2\mathrm{i}k}\int e ^{-2\mathrm{i}kx}q\left(x\right)\left(1+y_{-}\left(x,k\right)\right)\mathrm{d}x\] \[+\frac{T_{n}\left(k\right)}{2\mathrm{i}k}\int e^{-2\mathrm{i}kx }\left(q\left(x\right)-q_{n}\left(x\right)\right)\mathrm{d}x\] \[+\frac{T_{n}\left(k\right)}{2\mathrm{i}k}\int e^{-2\mathrm{i}kx }q\left(x\right)\left(y_{-}\left(x,k\right)-y_{-,n}\left(x,k\right)\right) \mathrm{d}x\] \[=I_{1}\left(k\right)+I_{2}\left(k\right)+I_{3}\left(k\right)\] and hence \[\left\|\left(R-R_{n}\right)\chi_{\left|\cdot\right|>\varepsilon}\right\|\leq \left\|I_{1}\chi_{\left|\cdot\right|>\varepsilon}\right\|+\left\|I_{2}\chi_{ \left|\cdot\right|>\varepsilon}\right\|+\left\|I_{3}\chi_{\left|\cdot\right|> \varepsilon}\right\|.\] Estimate each term separately. 
For \(\left\|I_{1}\chi_{\left|\cdot\right|>\varepsilon}\right\|\) we have \[\left\|I_{1}\chi_{\left|\cdot\right|>\varepsilon}\right\|^{2}\lesssim_{q}\int\left|q\right|\left(1+\left|y_{-}\left(x,k\right)\right|\right)\mathrm{d}x\cdot\int_{\left|k\right|>\varepsilon}\left|\frac{T\left(k\right)-T_{n}\left(k\right)}{k}\right|^{2}\mathrm{d}k\] \[\lesssim_{q}\left\|q-q_{n}\right\|_{L_{1}^{1}}^{2}\int_{\left|k\right|>\varepsilon}k^{-4}\mathrm{d}k\ \ \text{(by (4.9) and (4.11))}\] \[\lesssim\varepsilon^{-3}\left\|q-q_{n}\right\|_{L_{1}^{1}}^{2}.\] Thus \[\left\|I_{1}\chi_{\left|\cdot\right|>\varepsilon}\right\|\lesssim_{q}\varepsilon^{-3/2}\left\|q-q_{n}\right\|_{L_{1}^{1}}.\] For \(\left\|I_{2}\chi_{\left|\cdot\right|>\varepsilon}\right\|\) we have \[\left\|I_{2}\chi_{\left|\cdot\right|>\varepsilon}\right\|^{2}\leq\left\|q-q_{n}\right\|_{L^{1}}^{2}\int_{\left|k\right|>\varepsilon}\left|\frac{T_{n}\left(k\right)}{k}\right|^{2}\mathrm{d}k\lesssim\frac{1}{\varepsilon}\left\|q-q_{n}\right\|_{L_{1}^{1}}^{2}\] and hence \[\left\|I_{2}\chi_{\left|\cdot\right|>\varepsilon}\right\|\lesssim_{q}\varepsilon^{-1/2}\left\|q-q_{n}\right\|_{L^{1}}.\] Finally, for \(\left\|I_{3}\chi_{\left|\cdot\right|>\varepsilon}\right\|\) one has in a similar manner \[\left\|I_{3}\chi_{\left|\cdot\right|>\varepsilon}\right\|\lesssim_{q}\varepsilon^{-1/2}\sup_{k}\left|\int e^{-2\mathrm{i}kx}q\left(x\right)\left(y_{-}\left(x,k\right)-y_{-,n}\left(x,k\right)\right)\mathrm{d}x\right|\] and hence \[\left\|I_{3}\chi_{\left|\cdot\right|>\varepsilon}\right\|\lesssim_{q}\varepsilon^{-1/2}\left\|q-q_{n}\right\|_{L_{1}^{1}}.\] One can now see that each \(\left\|I_{j}\chi_{\left|\cdot\right|>\varepsilon}\right\|\), \(j=1,2,3\), vanishes as \(\left\|q-q_{n}\right\|_{L_{1}^{1}}\) does and hence, since \(\varepsilon\) is arbitrary, it follows from (4.12) that \[\left\|R-R_{n}\right\|\to 0,\quad n\to\infty.\] Note that the question in what sense the reflection coefficient converges when we approximate the potential in a certain way is a subtle one [23]. Finally we observe that \(\psi_{\pm},y_{\pm},T,R_{\pm}\) as functions of \(k\) (momentum) satisfy \[\left(\mathbb{J}f\right)\left(k\right)=f\left(-k\right)=\overline{f}\left(k\right)\ \ \ \ \text{(symmetry property).} \tag{4.14}\] ## 5. Trace formulas In this section we put forward a new approach to generating Deift-Trubowitz type trace formulas. It is based on Hardy spaces and Hankel operators. **Theorem 5.1**.: _Suppose that \(q\in L^{1}\) and_ \[Q_{+}\left(x\right):=\int_{x}^{\infty}q\left(s\right)\mathrm{d}s,\quad Q_{-}\left(x\right):=\int_{-\infty}^{x}q\left(s\right)\mathrm{d}s.\] _Let \(\psi_{\pm}(x,k)\) be the right/left Jost solutions and_ \[y_{\pm}\left(k,x\right)=e^{\mp\mathrm{i}kx}\psi_{\pm}(x,k)-1.\] _If for all real \(x\)_ \[2\mathrm{i}ky_{\pm}\left(k,x\right)+Q_{\pm}\left(x\right)\in H^{2}, \tag{5.1}\] _then for any \(\alpha>0\) and a.e. \(x\)_ \[q\left(x\right)=\mp\frac{2}{\pi}\partial_{x}\int\mathrm{Re}\,\frac{y_{\pm}\left(k,x\right)}{k+\mathrm{i}\alpha}k\mathrm{d}k\text{ (trace formula).} \tag{5.2}\] _If for every real \(x\)_ \[y_{\pm}\left(\cdot,x\right)\in H^{2} \tag{5.3}\] _then (5.2) simplifies to read_ \[q\left(x\right)=\mp\frac{2}{\pi}\partial_{x}\int\mathrm{Re}\,y_{\pm}\left(k,x\right)\mathrm{d}k. \tag{5.4}\] Proof.: Note first that both Jost solutions exist for \(q\in L^{1}\) (not only for \(L_{1}^{1}\)). 
Multiplying (5.1) by \(\mathrm{i}/\left(k+\mathrm{i}\alpha\right)\in H^{2}\) (\(\alpha>0\)) and recalling that a product of two \(H^{2}\) functions is in \(H^{1}\), we have \[\frac{2k}{k+\mathrm{i}\alpha}y_{\pm}\left(k,x\right)-\frac{\mathrm{i}}{k+ \mathrm{i}\alpha}Q_{\pm}\left(x\right)\in H^{1}. \tag{5.5}\] But it is well-known that \[f\in H^{1}\Longrightarrow\int f\left(k+\mathrm{i}0\right)\mathrm{d}k=0 \tag{5.6}\] and therefore \[\int\left[\frac{2k}{k+\mathrm{i}\alpha}y_{\pm}\left(k,x\right)-\frac{\mathrm{i}}{k +\mathrm{i}\alpha}Q_{\pm}\left(x\right)\right]\mathrm{d}k=0.\] For its real part we have \[\int\left\{\mathrm{Re}\left[\frac{2k}{k+\mathrm{i}\alpha}y_{\pm}\left(k,x \right)\right]-\frac{\alpha}{k^{2}+\alpha^{2}}Q_{\pm}\left(x\right)\right\} \mathrm{d}k=0,\] which can be rearranged to read \[\pi Q_{\pm}\left(x\right)=2\int\mathrm{Re}\left[\frac{y_{\pm}\left(k,x\right)} {k+\mathrm{i}\alpha}\right]k\mathrm{d}k\] and (5.2) follows upon differentiating in \(x\). We show now (5.4). To this end, we just split (5.2) as \[\int\mathrm{Re}\,\frac{ky_{\pm}\left(k,x\right)}{k+\mathrm{i} \alpha}\mathrm{d}k =\int\mathrm{Re}\,y_{\pm}\left(k,x\right)\mathrm{d}k\] \[+\alpha\,\mathrm{Im}\int\frac{y_{\pm}\left(k,x\right)}{k+ \mathrm{i}\alpha}\mathrm{d}k\] and observe that by (5.6) the second integral on the right hand side is zero (both \(y_{\pm}\) and \(1/\left(k+\mathrm{i}\alpha\right)\) are in \(H^{2}\)). **Remark 5.2**.: _Under the condition \(q\in L_{1}^{1}\) the following formula is proven in [5] (only \(+\) sign is considered):_ \[q\left(x\right) =-4\underset{n=1}{\overset{N}{\sum}}\kappa_{n}c_{+,n}^{2}\psi_{+ }\left(x,\mathrm{i}\kappa_{n}\right)^{2} \tag{5.7}\] \[+\frac{2\mathrm{i}}{\pi}\left(PV\right)\int R_{+}\left(k\right) \psi_{+}\left(x,k\right)^{2}k\mathrm{d}k.\text{ (Deift-Trubowitz trace formula)}\] _Visually it is very different from (5.4) (\(\psi_{+}\left(x,k\right)\) appears in (5.7) squared whereas in (5.4) it does not). One can however show that (5.4) implies (5.7). We demonstrate this fact in Appendix. Theorem 5.1 is an extension of (5.7) as it accepts certain singularities of \(\psi_{\pm}\left(k,x\right)\) at \(k=0\). The latter may occur if \(q\notin L_{1}^{1}\). Thus following the terminology of [5] we may refer to our (5.2) and (5.4) as trace formulas._ The next statement offers a version of (5.7) that is linear with respect to the Jost solution \(\psi_{\pm}\). **Corollary 5.3**.: _Suppose \(q\in L_{1}^{1}\cap L^{2}\) and let \(\left\{R_{\pm},\kappa_{n},c_{\pm,n}\right\}\) be its scattering data. Then_ \[q\left(x\right)=\pm\partial_{x}\left\{2\underset{n=1}{\overset{N}{\sum}}c_{ \pm,n}^{2}e^{\mp\kappa_{n}x}\psi_{\pm}(x,\mathrm{i}\kappa_{n})+\frac{1}{\pi} \int e^{\pm\mathrm{i}kx}R_{\pm}\left(k\right)\psi_{\pm}(x,k)\mathrm{d}k \right\}. \tag{5.8}\] Proof.: As is well-known (see e.g. [20]), for \(q\in L^{1}\) \[y_{\pm}\left(k,x\right)=\frac{\mathrm{i}}{2k}Q_{\pm}\left(x\right)+O\left(k^{- 2}\right),\quad k\to\pm\infty, \tag{5.9}\] and furthermore \(y_{\pm}\left(k,x\right)\) is bounded at \(k=0\) for \(q\in L_{1}^{1}\). It immediately follows that the condition (5.1) is satisfied. Also, the condition (5.3) holds due to (4.8). Therefore (5.4) holds for short-range \(q\). To show (5.8) we turn to the Marchenko (4.6). 
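As a quick numerical sanity check of (5.4) (an aside of ours, not used in the argument that continues below), take the reflectionless data \(\{R=0,(\kappa,c)\}\), for which \(y_{+}(k,x)=-\mathrm{i}\kappa\left(1-\tanh\kappa(x-x_{0})\right)/(k+\mathrm{i}\kappa)\) with \(x_{0}=\frac{1}{2\kappa}\log\frac{c^{2}}{2\kappa}\); the right-hand side of (5.4) should then reproduce the one-soliton potential \(-2\kappa^{2}\operatorname{sech}^{2}\kappa(x-x_{0})\). All grid parameters below are arbitrary.

```python
import numpy as np

kappa, c2 = 1.0, 3.0
x0 = np.log(c2/(2*kappa)) / (2*kappa)
k = np.linspace(-300, 300, 60001); dk = k[1] - k[0]

def integral_Re_y(x):                                   # int Re y_+(k, x) dk
    y = -1j*kappa*(1 - np.tanh(kappa*(x - x0))) / (k + 1j*kappa)
    return y.real.sum() * dk

xs = np.linspace(-4.0, 4.0, 401)
I = np.array([integral_Re_y(x) for x in xs])
q_trace = -(2/np.pi) * np.gradient(I, xs[1] - xs[0])    # right-hand side of (5.4)
q_exact = -2*kappa**2 / np.cosh(kappa*(xs - x0))**2
print(np.max(np.abs(q_trace - q_exact)))                # small (dominated by the k-cutoff)
```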
Applying the operator of reflection \(\mathbb{J}\) to this equation and recalling the symmetry property (4.14) we have \[\overline{y}_{\pm}+\mathbb{P}_{-}(\varphi_{\pm}y_{\pm})=-\mathbb{P}_{-}\varphi_{ \pm},\] which together with (4.6) yield \[2\operatorname{Re}y_{\pm}=-\mathbb{J}\mathbb{P}_{-}(\varphi_{\pm}y_{\pm})- \mathbb{P}_{-}(\varphi_{\pm}y_{\pm})-\mathbb{J}\mathbb{P}_{-}\varphi_{\pm}- \mathbb{P}_{-}\varphi_{\pm}.\] Since obviously \[\int_{-a}^{a}\mathbb{J}f=\int_{-a}^{a}f\] we have \[\int_{-a}^{a}\operatorname{Re}y_{\pm}=-\int_{-a}^{a}\mathbb{P}_{-}\varphi_{\pm }-\int_{-a}^{a}\mathbb{P}_{-}\left(\varphi_{\pm}y_{\pm}\right). \tag{5.10}\] Consider each term on the right hand side of (5.10). Observing that \(\left(k-\mathrm{i}\kappa_{n}\right)^{-1}\in H_{-}^{2}\) and hence \(\mathbb{P}_{-}\left(k-\mathrm{i}\kappa_{n}\right)^{-1}=\left(k-\mathrm{i} \kappa_{n}\right)^{-1}\), we have \[\mathbb{P}_{-}\varphi_{\pm} =\mathbb{P}_{-}\left[\sum_{n=1}^{N}\!\frac{-\mathrm{i}c_{\pm,n}^ {2}e^{\mp 2\kappa_{n}x}}{k-\mathrm{i}\kappa_{n}}\right]+\mathbb{P}_{-}\left[R_{ \pm}\left(k\right)e^{\pm 2\mathrm{i}kx}\right]\] \[=\sum_{n=1}^{N}\!\frac{-\mathrm{i}c_{\pm,n}^{2}e^{\mp 2\kappa_{n} x}}{k-\mathrm{i}\kappa_{n}}+\mathbb{P}_{-}\left[R_{\pm}\left(k\right)e^{\pm 2 \mathrm{i}kx}\right]\] and thus \[\int_{-a}^{a}\mathbb{P}_{-}\varphi_{\pm} =\int_{-a}^{a}\sum_{n=1}^{N}\!\frac{-\mathrm{i}c_{\pm,n}^{2}e^{ \mp 2\kappa_{n}x}}{k-\mathrm{i}\kappa_{n}}\mathrm{d}k+\int_{-a}^{a}\mathbb{P}_ {-}\left[R_{\pm}\left(k\right)e^{\pm 2\mathrm{i}kx}\right]\mathrm{d}k\] \[=-\mathrm{i}\!\sum_{n=1}^{N}\!c_{\pm,n}^{2}e^{\mp 2\kappa_{n} x}\int_{-a}^{a}\frac{\mathrm{d}k}{k-\mathrm{i}\kappa_{n}}+\int_{-a}^{a} \mathbb{P}_{-}\left[R_{\pm}\left(k\right)e^{\pm 2\mathrm{i}kx}\right]\mathrm{d}k. \tag{5.11}\] Pass in (5.11) now to the limit as \(a\to\infty\). By (3.6) \[\lim_{a\to\infty}\int_{-a}^{a}\frac{\mathrm{d}k}{k-\mathrm{i}\kappa_{n}}=\lim_ {a\to\infty}\int_{-a}^{a}\frac{\mathrm{d}k}{k-\mathrm{i}}=\pi\mathrm{i}\] and hence substituting this into (5.11) one has \[\left(PV\right)\int_{-a}^{a}\mathbb{P}_{-}\varphi_{\pm} =\pi\!\sum_{n=1}^{N}\!c_{\pm,n}^{2}e^{\mp 2\kappa_{n}x}\] \[+\left(PV\right)\int_{-a}^{a}\mathbb{P}_{-}\left[R_{\pm}\left(k \right)e^{\pm 2\mathrm{i}kx}\right]\mathrm{d}k.\] It remains to evaluate the integral on the right hand side. As we have shown in Section 4, \(\left\langle k\right\rangle R_{\pm}\left(k\right)\in L^{2}\). 
By Lemma 3.1 then \[\left(PV\right)\int_{-a}^{a}\mathbb{P}_{-}\left[R_{\pm}\left(k\right)e^{\pm 2 \mathrm{i}kx}\right]\mathrm{d}k=\frac{1}{2}\int R_{\pm}\left(k\right)e^{\pm 2 \mathrm{i}kx}\mathrm{d}k\] and finally \[\left(PV\right)\int\mathbb{P}_{-}\varphi_{\pm} =\pi{\sum\limits_{n=1}^{N}}c_{\pm,n}^{2}e^{\mp 2\kappa_{n}x}\] \[+\frac{1}{2}\int R_{\pm}\left(k\right)e^{\pm 2{\rm i}kx}{\rm d}k.\] Similarly, \[\left(PV\right)\int\mathbb{P}_{-}\left(\varphi_{\pm}y_{\pm}\right)=\frac{1}{2 }\int\varphi_{\pm}y_{\pm}\] and from (5.10) we obtain \[\int\mathop{\rm Re}\nolimits y_{\pm} =-\int\mathbb{P}_{-}\varphi_{\pm}-\int\mathbb{P}_{-}\left( \varphi_{\pm}y_{\pm}\right)\] \[=-\pi{\sum\limits_{n=1}^{N}}c_{\pm,n}^{2}e^{\mp 2\kappa_{n}x}- \frac{1}{2}\int R_{\pm}\left(k\right)e^{\pm 2{\rm i}kx}{\rm d}k\] \[-\frac{1}{2}\int\varphi_{\pm}y_{\pm}.\] Inserting this into (5.4) one has \[q\left(x\right) =\mp\frac{2}{\pi}\partial_{x}\int\mathop{\rm Re}\nolimits y_{\pm }\left(k,x\right){\rm d}k\] \[=\pm\partial_{x}\left\{2{\sum\limits_{n=1}^{N}}c_{\pm,n}^{2}e^{ \mp 2\kappa_{n}x}+\int R_{\pm}\left(k\right)e^{\pm 2{\rm i}kx}\frac{{\rm d}k}{ \pi}+\int\varphi_{\pm}\left(k,x\right)y_{\pm}\left(k,x\right)\frac{{\rm d}k}{ \pi}\right\}.\] It remains to evaluate the last integral on the right hand side. By the Cauchy formula \[\int\varphi_{\pm}y_{\pm} =\int{\sum\limits_{n=1}^{N}}\frac{-{\rm i}c_{\pm,n}^{2}e^{\mp 2 \kappa_{n}x}}{k-{\rm i}\kappa_{n}}y_{\pm}\left(k,x\right)\] \[+\int R_{\pm}\left(k\right)e^{\pm 2{\rm i}kx}y_{\pm}\left(k,x \right){\rm d}k\] \[=2\pi{\sum\limits_{n=1}^{N}}c_{\pm,n}^{2}e^{\mp 2\kappa_{n}x}y_{ \pm}\left({\rm i}\kappa_{n},x\right)+\int R_{\pm}\left(k\right)e^{\pm 2{\rm i}kx }y_{\pm}\left(k,x\right){\rm d}k\] and hence \[q\left(x\right)=\pm\partial_{x}\left\{2{\sum\limits_{n=1}^{N}}c_{\pm,n}^{2}e^ {\mp 2\kappa_{n}x}\left[1+y_{\pm}\left({\rm i}\kappa_{n},x\right)\right]+\int R _{\pm}\left(k\right)e^{\pm 2{\rm i}kx}\left[1+y_{\pm}\left(k,x\right)\right] \frac{{\rm d}k}{\pi}\right\}.\] Recalling that \(y_{\pm}\left(k,x\right)=e^{\mp{\rm i}kx}\psi_{\pm}(x,k)-1\) we finally obtain \[q\left(x\right)=\pm\partial_{x}\left\{2{\sum\limits_{n=1}^{N}}c_{\pm,n}^{2}e^ {\mp\kappa_{n}x}\psi_{\pm}(x,{\rm i}\kappa_{n})+\int e^{\pm{\rm i}kx}R_{\pm} \left(k\right)\psi_{\pm}(x,k)\frac{{\rm d}k}{\pi}\right\},\] which is (5.8). **Remark 5.4**.: _It follows from (5.9) that \(y_{\pm}\left(k,x\right)\notin L^{1}\) but \(\mathop{\rm Re}\nolimits y_{\pm}\left(k,x\right)\in L^{1}\)._ An important corollary of Theorem 5.1 is the following **Theorem 5.5**.: _Suppose that \(q\in L_{1}^{1}\). Let \(\left\{R,\kappa_{n},c_{n}\right\}\) be its right scattering data and \(\psi_{0}(x,k)\) be the right Jost solution corresponding to the data \(\left\{R,\varnothing\right\}\). 
Denote_ \[\boldsymbol{\Psi}_{0}\left(x\right)=\left(\psi_{0}\left(x,\mathrm{i}\kappa_{n} \right)\right),\quad\boldsymbol{C}=\mathrm{diag}\left(c_{n}^{2}\right).\] _Then_ \[q\left(x\right) =q_{0}\left(x\right) \tag{5.12}\] \[+2\partial_{x}\boldsymbol{\Psi}_{0}\left(x\right)\left( \boldsymbol{C}^{-1}+\int_{x}^{\infty}\boldsymbol{\Psi}_{0}\left(s\right)^{T} \boldsymbol{\Psi}_{0}\left(s\right)\mathrm{d}s\right)^{-1}\boldsymbol{\Psi}_ {0}\left(x\right)^{T},\] _where \(q_{0}\left(x\right)\) admits the following representations_ \[q_{0}\left(x\right) =2\partial_{x}\left\{\int\mathrm{Re}\left[1-e^{-\mathrm{i}kx} \psi_{0}(x,k)\right]\frac{\mathrm{d}k}{\pi}\right\}\] \[=\partial_{x}\left(PV\right)\int e^{\mathrm{i}kx}R\left(k\right) \psi_{0}(x,k)\frac{\mathrm{d}k}{\pi}\] \[=\partial_{x}^{2}\int\frac{e^{2\mathrm{i}kx}-1}{2\mathrm{i}k}R \left(k\right)\frac{\mathrm{d}k}{\pi}+\partial_{x}\int e^{2\mathrm{i}kx}R \left(k\right)y_{0}\left(k,x\right)\frac{\mathrm{d}k}{\pi}.\] Proof.: We merely combine the formula (5.4) from Theorem 5.1 and the version of the binary Darboux transformation from our [26] \[q\left(x\right)=q_{0}\left(x\right)-2\partial_{x}^{2}\log\det\left( \boldsymbol{C}^{-1}+\int_{x}^{\infty}\boldsymbol{\Psi}_{0}^{T}\left(s\right) \boldsymbol{\Psi}_{0}\left(s\right)\right),\] where \(q_{0}\left(x\right)\) is the potential corresponding to \(\left\{R,\varnothing\right\}\). Indeed, by the Jacobi formula on differentiation of determinants one has \[\partial_{x}\log\det\left(\boldsymbol{C}^{-1}+\int_{x}^{\infty} \boldsymbol{\Psi}_{0}^{T}\left(s\right)\boldsymbol{\Psi}_{0}\left(s\right)\right)\] \[=-\boldsymbol{\Psi}_{0}\left(x\right)\left(\boldsymbol{C}^{-1}+ \int_{x}^{\infty}\boldsymbol{\Psi}_{0}\left(s\right)^{T}\boldsymbol{\Psi}_{0} \left(s\right)\mathrm{d}s\right)^{-1}\boldsymbol{\Psi}_{0}\left(x\right)^{T}.\] We will demonstrate below that the trace formula (5.12) is convenient for limiting arguments. Of course, a similar formula holds for the left scattering data. ## 6. Trace formula and KdV solutions In this section we show that our trace formulas yield new representations for solutions to the KdV equation with short-range initial data. Note that the condition \(q\left(x\right)\in L_{1}^{1}\) alone does not guaranty that \(q\left(x,t\right)\in L_{1}^{1}\) for \(t>0\) (see Corollary 7.3) and therefore (5.4) does not apply. We cannot even be sure that (5.1) holds for \(q\left(x,t\right).\) To overcome the problems we employ some limiting arguments. Through the rest of the paper we use the following convenient notation \[\xi_{x,t}\left(k\right):=\exp\mathrm{i}\left(8k^{3}t+2kx\right).\] While highly oscillatory on the real line, this function has a rapid decay along \(\mathbb{R}+\mathrm{i}a\) for any \(a>0\). 
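Formula (5.12) is easy to test numerically in the reflectionless case \(q_{0}=0\), \(\psi_{0}(x,\mathrm{i}\kappa_{n})=e^{-\kappa_{n}x}\). The sketch below (an illustration of ours; the two \(\kappa_{n}\), \(c_{n}^{2}\) and all grid parameters are arbitrary) evaluates (5.12) for two bound states and checks that the two lowest eigenvalues of \(-\partial_{x}^{2}+q\), computed by finite differences, are indeed \(-\kappa_{n}^{2}\).

```python
import numpy as np

kap = np.array([0.7, 1.3]); c2 = np.array([1.0, 2.5])     # two bound states
x = np.linspace(-12, 12, 1500); dx = x[1] - x[0]
G = np.add.outer(kap, kap)                                # kappa_i + kappa_j

def F(xv):      # Psi0 (C^{-1} + int_x^inf Psi0^T Psi0 ds)^{-1} Psi0^T with Psi0 = (e^{-kap_n x})
    row = np.exp(-kap*xv)
    M = np.diag(1/c2) + np.exp(-G*xv)/G
    return row @ np.linalg.solve(M, row)

q = 2*np.gradient(np.array([F(xv) for xv in x]), dx)      # formula (5.12) with q_0 = 0

# eigenvalue check: -psi'' + q psi = E psi with Dirichlet walls at the ends of the grid
H = (2*np.eye(len(x)) - np.eye(len(x), k=1) - np.eye(len(x), k=-1))/dx**2 + np.diag(q)
print(np.linalg.eigvalsh(H)[:2], np.sort(-kap**2))        # the two lists should nearly coincide
```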
**Theorem 6.1**.: _If \(q\left(x\right)\in L_{1}^{1}\) and \(\left\{R,\kappa_{j},c_{j}\right\}\) are the associated right scattering data then the Cauchy problem for the KdV equation (1.1) with initial data \(q\left(x\right)\) can be represented by_ \[q\left(x,t\right) =q_{0}\left(x,t\right) \tag{6.1}\] \[+2\partial_{x}\mathbf{\Psi}_{0}\left(x,t\right)\left(\boldsymbol{ C}\left(t\right)^{-1}+\int_{x}^{\infty}\mathbf{\Psi}_{0}\left(s,t\right)^{T} \mathbf{\Psi}_{0}\left(s,t\right)\mathrm{d}s\right)^{-1}\mathbf{\Psi}_{0} \left(x,t\right)^{T},\] _where_ \[\mathbf{\Psi}_{0}\left(x,t\right)=\left(\psi_{0}\left(x,t,\mathrm{i}\kappa_{j }\right)\right),\quad\boldsymbol{C}\left(t\right)=\left(c_{j}\exp 8\kappa_{j}^{3}t \right),\] \[\psi_{0}\left(x,t,k\right)=e^{\mathrm{i}kx}\left[1+y_{0}\left(k,x,t\right) \right],\] \(y_{0}\left(\cdot,x,t\right)\) _is the \(H^{2}\) solution of the singular integral equation_ \[y+\mathbb{H}(R\xi_{x,t})y=-\mathbb{H}(R\xi_{x,t})1,\] \[q_{0}\left(x,t\right)=\partial_{x}\left\{\left(PV\right)\int R\left(k\right) \xi_{x,t}\left(k\right)\frac{\mathrm{d}k}{\pi}+\int R\left(k\right)\xi_{x,t} \left(k\right)y_{0}\left(k,x,t\right)\frac{\mathrm{d}k}{\pi}\right\}. \tag{6.2}\] Proof.: For Schwarz \(q\left(x\right)\) there is nothing to prove as \(q\left(x,t\right)\) is also a Schwarz function. Since KdV is well-posed in any Sobolev space \(H^{-\varepsilon}\) with \(0<\varepsilon\leq 1\) (see e.g. [18]) and \(L_{1}^{1}\subset H^{-\varepsilon}\), for any sequence of (real) Schwarz functions \(q_{n}\left(x\right)\) approximating \(q\left(x\right)\) in \(L_{1}^{1}\) the sequence of \(q_{n}\left(x,t\right)\) converges in \(H^{-\varepsilon}\) to \(q\left(x,t\right)\), the solution to (1.1) with the initial profile \(q\left(x\right)\). Thus, we only need to compute \(\lim_{n\to\infty}q_{n}\left(x,t\right)\). Note that convergence of norming constants is somewhat inconvenient to deal with but results of our recent [26] offers a simple detour of this circumstance. Take the scattering data \(\left\{R,\varnothing\right\}\) (i.e. no bound states) and construct by (5.4) the corresponding potential \[q_{0}\left(x\right)=-2\partial_{x}\int\mathrm{Re}\,y_{0}\left(k,x\right) \mathrm{d}k.\] Since by construction \(\mathbb{L}_{q_{0}}\) is positive, \(q_{0}\) is the Miura transformation \[q_{0}\left(x\right)=\partial_{x}r\left(x\right)+r\left(x\right)^{2}\] of some real \(r\in L_{\mathrm{loc}}^{2}\)[16]. Choose a sequence \(\left(r_{n}\right)\) of Schwarz function such that the sequence \(q_{0,n}=r_{n}^{2}+\partial_{x}r_{n}\) approximates \(q_{0}\) in \(L_{1}^{1}\). As is well-known, each \(R_{n}\left(k\right)\) is also Schwarz and so is \(q_{0,n}\left(x,t\right)\) for \(t\geq 0\). Therefore, by (5.8) and recalling that \(\psi_{n}=e^{\mathrm{i}kx}\left(1+y_{n}\right)\) we have \[q_{0,n}\left(x,t\right) =\partial_{x}\int R_{n}\left(k\right)\xi_{x,t}\left(k\right) \left[1+y_{n}\left(k,x,t\right)\right]\frac{\mathrm{d}k}{\pi}\] \[=\partial_{x}^{2}\int\frac{\xi_{x,t}\left(k\right)-1}{2\mathrm{i }k}R_{n}\left(k\right)\frac{\mathrm{d}k}{\pi}+\partial_{x}\int R_{n}\left(k \right)\xi_{x,t}\left(k\right)y_{n}\left(k,x,t\right)\frac{\mathrm{d}k}{\pi}\] \[=:q_{0,n}^{\left(1\right)}\left(x,t\right)+q_{0,n}^{\left(2\right) }\left(x,t\right)\] Here we have used a well-known regularization of the Fourier integral. This representation is convenient for passing to the limit as \(n\to\infty\). By Proposition 4.1 the sequence of reflection coefficients \(R_{n}\) converges in \(L^{2}\) to \(R\). 
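Before completing the proof, here is a reflectionless consistency check of (6.1) (ours, not part of the argument). We read \(\boldsymbol{C}(t)\) as the diagonal matrix with entries \(c_{j}^{2}e^{8\kappa_{j}^{3}t}\), i.e. the squared time-evolved norming constants, which matches the convention of (5.12) and the phase \(\xi_{x,t}\); with that reading, (6.1) with one bound state and \(R=0\) produces \(-2\kappa^{2}\operatorname{sech}^{2}\kappa(x-4\kappa^{2}t-x_{0})\), a soliton travelling at speed \(4\kappa^{2}\). All numerical parameters are arbitrary.

```python
import numpy as np

kappa, c2 = 1.0, 3.0
x0 = np.log(c2/(2*kappa)) / (2*kappa)
x = np.linspace(-10, 30, 2000); dx = x[1] - x[0]

def q_kdv(t):
    b = c2*np.exp(8*kappa**3*t) * np.exp(-2*kappa*x)      # one bound state, R = 0
    return 2*np.gradient(b/(1 + b/(2*kappa)), dx)         # formula (6.1) with q_0 = 0

for t in (0.0, 2.0, 5.0):
    soliton = -2*kappa**2 / np.cosh(kappa*(x - 4*kappa**2*t - x0))**2
    print(t, np.max(np.abs(q_kdv(t) - soliton)))          # small for every t
```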
Passing to a subsequence, if needed, we may assume that \(R_{n}\to R\) a.e. Clearly \(R_{n}\xi_{x,t}\to R\xi_{x,t}\) a.e. too. But then, as is well-known (can also be easily shown), the corresponding sequence of Hankel operators \(\mathbb{H}(R_{n}\xi_{x,t})\) converges to \(\mathbb{H}(R\xi_{x,t})\) in the strong operator topology. Since (all) \(I+\mathbb{H}(R_{n}\xi_{x,t})\) and \(I+\mathbb{H}(R\xi_{x,t})\) are positive definite [12, Theorem 8.2] for all \(x,t\) we conclude that in \(H^{2}\) \[y_{n}=-\left(I+\mathbb{H}(R_{n}\xi_{x,t})\right)^{-1}\mathbb{H}(R_{n}\xi_{x,t})1\to-\left(I+\mathbb{H}(R\xi_{x,t})\right)^{-1}\mathbb{H}(R\xi_{x,t})1=:y_{0},\quad n\to\infty,\] where \(y_{0}\left(\cdot,x,t\right)\in H^{2}\). Therefore, for all \(x,t\) \[\int\frac{\xi_{x,t}\left(k\right)-1}{2\mathrm{i}k}R_{n}\left(k\right)\frac{\mathrm{d}k}{\pi}\to\int\frac{\xi_{x,t}\left(k\right)-1}{2\mathrm{i}k}R\left(k\right)\frac{\mathrm{d}k}{\pi}\] and \[\int R\left(k\right)\xi_{x,t}\left(k\right)y_{n}\left(k,x,t\right)\frac{\mathrm{d}k}{\pi}\to\int R\left(k\right)\xi_{x,t}\left(k\right)y_{0}\left(k,x,t\right)\frac{\mathrm{d}k}{\pi}.\] Thus we conclude that for each \(t\geq 0\) \[w^{*}-\lim_{n\to\infty}q_{0,n}^{\left(1\right)}\left(x,t\right)=\partial_{x}^{2}\int\frac{\xi_{x,t}\left(k\right)-1}{2\mathrm{i}k}R\left(k\right)\frac{\mathrm{d}k}{\pi}\] \[w^{*}-\lim_{n\to\infty}q_{0,n}^{\left(2\right)}\left(x,t\right)=\partial_{x}\int R\left(k\right)\xi_{x,t}\left(k\right)y_{0}\left(k,x,t\right)\frac{\mathrm{d}k}{\pi}\] and (6.2) follows. Performing the binary Darboux transformation [26] we arrive at (6.1). **Remark 6.2**.: _Performing in (6.1) the inverse binary Darboux transformation [26], we can conclude that we also have_ \[q\left(x,t\right)=-\frac{2}{\pi}\partial_{x}\int\mathrm{Re}\,y\left(k,x,t\right)\mathrm{d}k \tag{6.3}\] _but cannot claim that this integral is absolutely convergent as it was in (5.4). This of course would be true if the asymptotic (5.9) held for \(q\left(x,t\right)\). The problem with (5.9) is that the error in (5.9) depends on \(\left\|q\left(\cdot,t\right)\right\|_{L^{1}}\), which need not be finite. We however conjecture that the integral in (6.3) is indeed absolutely convergent but we can no longer use tools and estimates from the short-range scattering theory._ **Corollary 6.3**.: _The Deift-Trubowitz trace formula (5.7) holds for \(t>0\) (not only for \(t=0\))._ Proof.: Indeed, the approximating sequence \(q_{n}\left(x,t\right)\) that corresponds to the sequence \(\left\{R_{n},\kappa_{j},c_{j}\right\}\), where \(R_{n}\) is the same as constructed in the proof of Theorem 6.1, will do the job. The only question is why the first term in (5.7) holds for \(t>0\). This easily follows from our arguments. Indeed, since \(y_{n}\left(\cdot,x,t\right)\to y\left(\cdot,x,t\right)\) in \(H^{2}\) we also have uniform convergence for \(\mathrm{Im}\,k>0\) on compacts. Therefore, \(\psi_{n}\left(\mathrm{i}\kappa_{j},x,t\right)\to\psi\left(\mathrm{i}\kappa_{j},x,t\right)\) for all \(x,t\). **Remark 6.4**.: _Extension of the Deift-Trubowitz trace formula (5.7) to KdV solutions would have been a hard problem back in the 70s, as the breakthrough in the understanding of wellposedness in the \(L^{2}\) based Sobolev spaces with negative indices only occurred after the seminal 1993 Bourgain paper [3], where wellposedness was proven in \(L^{2}\). With no well-posedness at hand we cannot use limiting arguments even if \(q\left(x\right)\in L^{1}_{0}\cap L^{2}\)._ ## 7. 
How KdV trades decay for smoothness The goal of this section is to show how the results of the previous section could be useful in understanding the phenomenon of dispersive smoothing (aka gain of regularity). **Theorem 7.1**.: _If \(q\left(x\right)\in L_{1}^{1}\cap L^{2}\) then \(q\left(x,t\right)\in L_{\mathrm{loc}}^{\infty}\cap L^{2}\) for \(t>0\)._ Proof.: We first note that we may assume that the negative spectrum is absent. I.e. \(\mathbb{L}_{q}\) is positive. Split our initial profile as \[q=q_{-}+q_{+},\quad q_{\pm}:=\left.q\right|_{\mathbb{R}_{\pm}}.\] We may assume that \(\mathbb{L}_{q_{+}}\) is positive as possible appearance of a negative eigenvalue could only lead to minor technical complications. We use the following representation from [14]: \[R\left(k\right)=\phi_{1}\left(k\right)+\phi_{2}\left(k\right)+A\left(k\right), \tag{7.1}\] where \[\phi_{1}\left(k\right)=\frac{T_{0}\left(k\right)}{2\mathrm{i}k}\widehat{q} \left(k\right),\] \[\phi_{2}\left(k\right)=\frac{T_{0}\left(k\right)}{\left(2\mathrm{i}k\right)^{ 2}}\widehat{p}\left(k\right),\] \[\widehat{f}\left(k\right):=\int_{0}^{\infty}e^{-2\mathrm{i}ks}p\left(s\right) \mathrm{d}s;\] \(T_{0}\in H^{\infty}\) is the transmission coefficient for \(q_{+}\); \(p\) is the derivative of an absolutely continuous function and \[\left|p\left(x\right)\right|\lesssim_{\left\|q_{+}\right\|_{L^{1}}}\left|q \left(x\right)\right|+C\int_{x}^{\infty}\left|q\right|,\ \ x\geq 0; \tag{7.2}\] and \(A\in H^{\infty}\) (which form is not important). Note that \(\frac{T_{0}\left(k\right)}{2\mathrm{i}k}\) remains bounded at \(k=0\) as well as \(\frac{T_{0}\left(k\right)}{\left(2\mathrm{i}k\right)^{2}}\widehat{p} \left(k\right)\). It follows from (6.2) that \[q_{0}\left(x,t\right) =\partial_{x}\left\{\left(PV\right)\int R\left(k\right)\xi_{x,t} \left(k\right)\frac{\mathrm{d}k}{\pi}+\int R\left(k\right)\xi_{x,t}\left(k \right)y_{0}\left(k,x,t\right)\frac{\mathrm{d}k}{\pi}\right\}\] \[=\partial_{x}\left(PV\right)\int R\left(k\right)\xi_{x,t}\left(k \right)\frac{\mathrm{d}k}{\pi}\] \[+\int R\left(k\right)\left[\partial_{x}\xi_{x,t}\left(k\right) \right]y_{0}\left(k,x,t\right)\frac{\mathrm{d}k}{\pi}\] \[+\int R\left(k\right)\xi_{x,t}\left(k\right)\left[\partial_{x}y_ {0}\left(k,x,t\right)\right]\frac{\mathrm{d}k}{\pi}\] \[=:q_{1}\left(x,t\right)+q_{2}\left(x,t\right)+q_{3}\left(x,t \right). \tag{7.3}\] Consider each term separately. By (7.1) for \(q_{1}\left(x,t\right)\) we have \[q_{1}\left(x,t\right)=q_{11}\left(x,t\right)+q_{12}\left(x,t\right)+q_{13} \left(x,t\right)\] where \[q_{1n}\left(x,t\right):=\partial_{x}\left(PV\right)\int\xi_{x,t}\left(k \right)\phi_{n}\left(k\right)\frac{\mathrm{d}k}{\pi},\quad n=1,2\] \[q_{13}\left(x,t\right)=\partial_{x}\left(PV\right)\int\xi_{x,t}\left(k\right)A \left(k\right)\frac{\mathrm{d}k}{\pi}.\] The simplest term is \(q_{13}\). Since \(\xi_{x,t}\left(k+\mathrm{i}a\right)\) (and all its \(x\)-derivatives) rapidly decays along \(\mathbb{R}+\mathrm{i}a\) for any \(a>0\) we deform the contour of integration to \(\mathbb{R}+\mathrm{i}a\) that provides a rapid convergence of the integral (the original integral need not be absolutely convergent). The term \(q_{12}\) is also easy. 
Indeed, since \(\phi_{2}\) is clearly in \(L^{1}\) we have \[q_{12}\left(x,t\right) =\partial_{x}\int\phi_{2}\left(k\right)\xi_{x,t}\left(k\right) \frac{\mathrm{d}k}{\pi}\] \[=\int\frac{T_{0}\left(k\right)}{2\mathrm{i}k}\widehat{p}\left(k \right)\xi_{x,t}\left(k\right)\frac{\mathrm{d}k}{\pi}.\] It remains to show that this integral is absolutely convergent. It follows from (7.2) that \[\left\|p\right\| \lesssim_{\left\|q+\right\|_{L^{1}}}\left\|q\right\|+C\left(\int _{0}^{\infty}\left(\int_{x}^{\infty}\left|q\right|\right)^{2}\mathrm{d}x \right)^{1/2}\] \[\leq\left\|q\right\|+C\left(\int_{0}^{\infty}x\left|q\left(x \right)\right|\left(\int_{x}^{\infty}\left|q\right|\right)\mathrm{d}x\right)^ {1/2}\qquad\left(q\in L^{1}_{1}\right)\] \[\leq\left\|q\right\|+C\left\|q\right\|_{L^{1}}^{1/2}\left\|q \right\|_{L^{1}_{1}}^{1/2}<\infty\] and hence \(\widehat{p}\in L^{2}\). Therefore, \(q_{12}\left(x,t\right)\) is locally bounded for \(t\geq 0\). (In fact, continuous). Consider the remaining term \(q_{11}\left(x,t\right)\). In order to proceed we need first to regularize the improper integral. It cannot be done by merely deforming \(\mathbb{R}\) to \(\mathbb{R}+\mathrm{i}a\) as is done for \(q_{3}\) since \(\widehat{q}\left(k\right)\) need not admit analytic continuation into the upper half plane. To detour this circumstance we define \(\widehat{q}\left(k\right)\) by \[\widehat{q}\left(\overline{k}\right)=\int_{0}^{\infty}e^{-2\mathrm{i} \overline{k}s}q\left(s\right)\mathrm{d}s,\quad\mathrm{Im}\,k\geq 0,\] and apply the Cauchy-Green formula for the strip \(0\leq\mathrm{Im}\,k\leq 1\). We have \[q_{11}\left(x,t\right) =\partial_{x}\left(PV\right)\int\xi_{x,t}\left(k\right)\frac{T_{ 0}\left(k\right)}{2\mathrm{i}k}\widehat{q}\left(\overline{k}\right)\frac{ \mathrm{d}k}{\pi}\] \[=\partial_{x}\int_{0\leq\mathrm{Im}\,k\leq 1}\xi_{x,t}\left(k \right)\frac{T_{0}\left(k\right)}{2\mathrm{i}k}\partial_{\overline{k}} \widehat{q}\left(\overline{k}\right)\frac{\mathrm{d}u\mathrm{d}v}{\pi^{2}} \quad\left(k=u+\mathrm{i}v\right)\] \[+\partial_{x}\int_{\mathbb{R}+\mathrm{i}}\xi_{x,t}\left(k\right) \frac{T_{0}\left(k\right)}{2\mathrm{i}k}\widehat{q}\left(\overline{k}\right) \frac{\mathrm{d}k}{\pi}\] \[=:q_{111}\left(x,t\right)+q_{112}\left(x,t\right).\] The second term \(q_{112}\left(x,t\right)\) is treated the same way as \(q_{13}\left(x,t\right)\) and one immediately concludes that \(q_{112}\left(x,t\right)\) are bounded (in fact, smooth) for \(t>0\). Turn to \(q_{111}\). 
We rearrange it by observing that the double integral is absolutely convergent and the order of integration may be interchanged: \[q_{111}\left(x,t\right) =\partial_{x}\int_{0\leq\operatorname{Im}k\leq 1}\xi_{x,t}\left(k \right)\frac{T_{0}\left(k\right)}{2\mathrm{i}k}\left[\int_{0}^{\infty}\left(-2 \mathrm{i}s\right)e^{-2\mathrm{i}\overline{k}s}q\left(s\right)\mathrm{d}s \right]\frac{\mathrm{d}u\mathrm{d}v}{\pi^{2}}\] \[=-\partial_{x}\int_{0}^{\infty}sq\left(s\right)\left[\int_{0\leq \operatorname{Im}k\leq 1}\xi_{x,t}\left(k\right)\frac{T_{0}\left(k\right)}{k}e^{-2 \mathrm{i}\overline{k}s}\frac{\mathrm{d}u\mathrm{d}v}{\pi^{2}}\right]\mathrm{d}s\] \[=-\mathrm{i}\int_{0}^{\infty}\left[\int_{0\leq\operatorname{Im}k \leq 1}e^{-2vs}\xi_{x-s,t}\left(k\right)T_{0}\left(k\right)\frac{\mathrm{d}u \mathrm{d}v}{\pi^{2}}\right]sq\left(s\right)\mathrm{d}s,\quad\left(\overline{ k}=k-2\mathrm{i}v\right)\] \[\simeq\int_{0}^{\infty}\left\{\int_{0}^{1}e^{-2vs}I\left(s-x,t \right)\mathrm{d}v\right\}sq\left(s\right)\mathrm{d}s,\] where \[I\left(s-x,t\right) :=\int_{\mathbb{R}+\mathrm{i}v}\xi_{x-s,t}\left(k\right)T_{0} \left(k\right)\mathrm{d}k\] \[=\int_{\mathbb{R}+\mathrm{i}}\xi_{x-s,t}\left(k\right)T_{0}\left( k\right)\mathrm{d}k\] is independent of \(v\geq 0\). Thus \[q_{111}\left(x,t\right) \simeq\partial_{x}\int_{0}^{\infty}\left\{\int_{0}^{1}e^{-2vs} \mathrm{d}v\right\}I\left(s-x,t\right)sq\left(s\right)\mathrm{d}s\] \[=\int_{0}^{\infty}\frac{1-e^{-2s}}{2}I\left(s-x,t\right)q\left(s \right)\mathrm{d}s\] \[=\int_{0}^{\infty}I\left(s-x,t\right)\frac{1-e^{-2s}}{2}q\left(s \right)\mathrm{d}s.\] It remains to study the behavior of \(I\left(s-x,t\right)\) as \(s\to+\infty\). Since \(T_{0}\left(k\right)=1+O\left(k^{-1}\right)\) as \(k\to\infty\) and \(x\) is fixed we only need to worry about \[I_{0}\left(s,t\right)=\int_{\mathbb{R}+\mathrm{i}}\xi_{-s,t}\left(k\right) \mathrm{d}k,\] which is closely related to the Airy function. For the reader convenience we offer a direct treatment. Rewrite \[\xi_{-s,t}\left(k\right)=\exp\mathrm{i}\left[\Omega S\left(\lambda\right) \right],\] where \(S\left(\lambda\right)=\lambda^{3}/3-\lambda\) and \[\omega=2\left(s/3t\right)^{1/2},\Omega=3t\left(s/3t\right)^{3/2},\lambda=k/\omega\] Noticing that we need not adjust the contour of integration, we then have \[I_{0}\left(s,t\right):=\omega\int_{\mathbb{R}+\mathrm{i}}e^{\mathrm{i}\Omega S \left(\lambda\right)}\mathrm{d}\lambda. \tag{7.4}\] Apparently, the phase \(S\left(\lambda\right)=\lambda^{3}/3-\lambda\) has stationary points at \(\lambda=\pm 1\) and we need to deform the contour in (7.4) to pass through points \(\lambda=\pm 1\). We denote such a contour \(\Gamma\). To apply the steepest descent we need to make sure that \(\exp\mathrm{i}\left[\omega S\left(\lambda\right)\right]\) decay on \(\Gamma\) away from \(\pm 1\). To this end \(\Gamma\) must be in the lower half plane between points \(-1\) and \(1\). Noticing that \(\Omega=\left(3t/8\right)\omega^{3}\), \(\omega=O\left(s^{1/2}\right)\) by the steepest descent method (see e.g. [28]) one has \[I_{0}\left(s,t\right) =\omega\int_{\Gamma}e^{\mathrm{i}\Omega S\left(\lambda\right)} \mathrm{d}\lambda=\omega O\left(\Omega^{-1/2}\right),\quad\Omega\to+\infty,\] \[=O\left(s^{-1/4}\right),\quad s\to+\infty.\] Thus \(q_{111}\left(x,t\right)\) is bounded for \(t>0\) (even if \(q\left(x\right)\) decays slower than \(L^{1}\)). All four pieces \(q_{1}\left(x,t\right)\) is made of are bounded and so is \(q_{1}\left(x,t\right)\). 
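The oscillatory integral \(I_{0}(s,t)\) above is, up to normalization, an Airy function. Assuming the standard representation \(\int e^{\mathrm{i}(8tk^{3}-2sk)}\mathrm{d}k=2\pi(24t)^{-1/3}\operatorname{Ai}\!\big(-2s\,(24t)^{-1/3}\big)\) (our reading of the integral, not a formula from the text), the claimed \(O(s^{-1/4})\) decay can be observed numerically:

```python
import numpy as np
from scipy.special import airy

t = 1.0
s = np.linspace(50, 5000, 400)
a = (24*t)**(1/3)
I0 = 2*np.pi/a * airy(-2*s/a)[0]          # airy(z)[0] is Ai(z)
print(np.max(np.abs(I0) * s**0.25))       # stays bounded: |I0(s,t)| = O(s^{-1/4}) as s -> +infinity
```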
There is now only one term \(q_{3}\) left in (7.3) to analyze. We are done if we show that \(\partial_{x}y_{0}\in H^{2}\). Differentiating \[y_{0}+\mathbb{H}y_{0}=-\mathbb{H}1,\quad\mathbb{H}:=\mathbb{H}\left(R\xi_{x,t }\right),\] in \(x\) one has \[\partial_{x}y_{0}+\mathbb{H}\left(\partial_{x}y_{0}\right)=-\partial_{x} \mathbb{H}1-\left(\partial_{x}\mathbb{H}\right)y_{0}.\] Thus \[\partial_{x}y_{0}=-\left(I+\mathbb{H}\right)^{-1}\left[\left(\partial_{x} \mathbb{H}\right)1+\left(\partial_{x}\mathbb{H}\right)y_{0}\right].\] It follows that we only need to show that \(\left(\partial_{x}\mathbb{H}\right)1\in H^{2}\) and \(\partial_{x}\mathbb{H}\) is a bounded operator. Note first that \[\left(\partial_{x}\mathbb{H}\right)=\mathbb{H}\left(2\mathrm{i}kR\xi_{x,t} \right).\] Since \(kR\left(k\right)\in L^{2}\) (from the second Zakharov-Faddeev trace formula), \[\left(\partial_{x}\mathbb{H}\right)1=\mathbb{J}\mathbb{P}_{-}\left(2\mathrm{ i}kR\left(k\right)\xi_{x,t}\left(k\right)\right)\in H^{2}\] as desired. The proof of boundedness of \(\left(\partial_{x}\mathbb{H}\right)\) is a bit more complicated. By (7.1) we have \[\mathbb{H}=\mathbb{H}_{1}+\mathbb{H}_{2}+\mathbb{H}_{3}\] where \[\mathbb{H}_{n}:=\mathbb{H}\left(\phi_{n}\xi_{x,t}\right),n=1,2,\quad\mathbb{ H}_{3}:=\mathbb{H}\left(A\xi_{x,t}\right).\] For \(n=1,2\) both \(\mathbb{H}_{n}\) admit a direct differentiation in \(x\). Indeed, one can easily see that \[\partial_{x}\mathbb{H}_{n}=\mathbb{H}\left(\phi_{n}\partial_{x}\xi_{x,t} \right)=\mathbb{H}\left(2\mathrm{i}k\phi_{n}\xi_{x,t}\right),n=1,2.\] Since \(q,p\in L^{1}\) \[2\mathrm{i}k\phi_{1}\left(k\right)=T_{0}\left(k\right)\widehat{q}\left(k \right)\in L^{\infty} \tag{7.5}\] and \[2\mathrm{i}k\phi_{2}\left(k\right)=\frac{T_{0}\left(k\right)}{2\mathrm{i}k} \widehat{p}\left(k\right)\in L^{\infty}\] and hence the operators \(\partial_{x}\mathbb{H}\left(\phi_{n}\right),n=1,2\), are bounded. To differentiate \(\mathbb{H}_{3}\) we need first to use (3.8). One has \[\mathbb{H}=\mathbb{H}\left(\bar{\mathbb{P}}_{-}\left(R\xi_{x,t}\right)\right).\] But \[\mathbb{P}_{-}\left[A\left(k\right)\xi_{x,t}\left(k\right)\right] =-\frac{1}{2\mathrm{i}\pi}\int\frac{A\left(\lambda\right)\xi_{x,t} \left(\lambda\right)}{\lambda-\left(k-\mathrm{i}0\right)}\mathrm{d}\lambda\] \[=-\frac{1}{2\mathrm{i}\pi}\int_{\mathbb{R}+\mathrm{i}}\frac{A \left(\lambda\right)\xi_{x,t}\left(\lambda\right)}{\lambda-k}\mathrm{d}\lambda,\] where the integral is absolutely convergent, and therefore we may differentiate under the integral sign \[\partial_{x}\mathbb{P}_{-}\left[A\left(k\right)\xi_{x,t}\left(k\right)\right]=- \frac{1}{2\mathrm{i}\pi}\int_{\mathbb{R}+\mathrm{i}}\frac{2\mathrm{i}\lambda A \left(\lambda\right)\xi_{x,t}\left(\lambda\right)}{\lambda-k}\mathrm{d}\lambda,\] which is well-defined and bounded. Consequently, \(\partial_{x}\mathbb{H}_{3}\) is a bounded operator and so is \(\partial_{x}\mathbb{H}\). Thus, indeed \(\partial_{x}y_{0}\in H^{2}\). **Remark 7.2**.: _Theorem 7.1 of [13], which proof is based on the Dyson formula, relates smoothness of \(q\left(x,t\right)\) with the decay of \(q\left(x\right)\). In particular, it follows from that result that if \(q\left(x\right)\in L_{3/2}^{1}\cap L^{2}\) then \(q\left(x,t\right)\in L_{\mathrm{loc}}^{\infty}\cap L^{2}\) for \(t>0.\) Stronger decay is due to the fact the Dyson formula involves \(\det\left(I+\mathbb{H}\right)\), which use requires to analyze differentiability of \(\mathbb{H}\) in trace norm. (The latter is also technically much more involved. 
It was our attempt to dispose of trace norm considerations that led us to our trace formulas, which require uniform norms only._ The following important consequence directly follows from Theorem 7.1 and invariance of the KdV with respect to \(\left(x,t\right)\rightarrow\left(-x,-t\right)\). **Corollary 7.3**.: _The class \(L_{1}^{1}\) is not preserved under the KdV flow._ Proof.: Suppose to the contrary that \(L_{1}^{1}\) is preserved under the KdV flow. I.e. if \(q\left(x\right)\in L_{1}^{1}\) then \(q\left(x,t\right)\in L_{1}^{1}\) for any \(t\). Take \(q\left(x\right)\in L_{1}^{1}\cap L^{2}\) but \(q\left(x\right)\notin L^{\infty}\) and fix \(t_{0}>0\) By Theorem 7.1, \(q_{0}\left(x\right):=q\left(x,t_{0}\right)\in L_{\mathrm{loc}}^{\infty}\cap L^ {2}\). Take \(q_{0}\left(x\right)\) as new initial data. By our assumption it is also in \(L_{1}^{1}\). Thus \(q_{0}\left(x\right)\in L_{\mathrm{loc}}^{\infty}\cap L_{1}^{1}\cap L^{2}\). But this leads us to a contradiction as \(q_{0}\left(x,t_{0}\right)=q\left(-x\right)\) was not assumed locally bounded. In the conclusion we mention that much more general and precise statements can be made regarding how the KdV solutions gain regularity (smoothness) in exchange for loss of decay. We plan on showing elsewhere how the results of [13], [14], and [25] may be improved to optimal statements. ## 8. Appendix We demonstrate that the Deift-Trubowitz trace formula is actually a "nonlinearization" of our trace formulas. Assume for simplicity that there are no bound states (non-empty negative spectrum merely complicates the computations) and do our computation for the \(+\) sign only. The reader who has been able to get to this point should be able to follow the calculations below. Denoting \(\mathbb{H}=\mathbb{H}\left(R\xi_{x,t}\right)\), \(h:=\mathbb{H}1\), \(1_{a}:=\chi_{\left|\cdot\right|\leq a}\), we have \[\pi q =-2\partial_{x}\int\mathrm{Re}\,y=\frac{1}{\pi}\partial_{x}\int \mathrm{Re}\left(I+\mathbb{H}\right)^{-1}h\] \[=2\partial_{x}\int\mathrm{Re}\left(I+\mathbb{H}\right)^{-1}h\] \[=-2\int\left(I+\mathbb{H}\right)^{-1}\left(\partial_{x}\mathbb{H }\right)\left(\mathbb{I}+\mathbb{H}\right)^{-1}h+2\int\left(I+\mathbb{H} \right)^{-1}\partial_{x}h\] \[=:q_{1}+q_{2}.\] For \(q_{1}\) we have \[q_{1}=-2\lim_{a\rightarrow\infty}\left\langle\left(I+\mathbb{H}\right)^{-1} \left(\partial_{x}\mathbb{H}\right)\left(\mathbb{I}+\mathbb{H}\right)^{-1}h,1_ {a}\right\rangle.\] For the inner product one has \[\left\langle\left(I+\mathbb{H}\right)^{-1}\left(\partial_{x}\mathbb{H }\right)\left(\mathbb{I}+\mathbb{H}\right)^{-1}h,\mathbb{P}_{+}1_{a}\right\rangle\] \[=-\left\langle\left(\partial_{x}\mathbb{H}\right)y,\left(I+ \mathbb{H}\right)^{-1}\mathbb{P}_{+}1_{a}\right\rangle\] \[=-\left\langle\left(\partial_{x}\mathbb{H}\right)y,\mathbb{P}_{+} \left[1_{a}-\left(I+\mathbb{H}\right)^{-1}\mathbb{H}\mathbb{P}_{+}1_{a}\right]\right\rangle\] \[=-\left\langle\left(\partial_{x}\mathbb{H}\right)y,\mathbb{P}_{+} 1_{a}\right\rangle+\left\langle\left(\partial_{x}\mathbb{H}\right),\left(I+ \mathbb{H}\right)^{-1}\mathbb{H}\mathbb{P}_{+}1_{a}\right\rangle.\] Passing to the limit yields \[q_{1}=2\int\left(\partial_{x}\mathbb{H}\right)y+\left\langle\left(\partial_{x }\mathbb{H}\right)y,y\right\rangle.\] One may now see how "nonlinear" dependence on \(y\) in (5.7) comes about. Indeed, the second term \(\left\langle\left(\partial_{x}\mathbb{H}\right)y,y\right\rangle\) is a quadratic form. 
For \(q_{2}\) we similarly have \[q_{2} =2\int\left(I+\mathbb{H}\right)^{-1}\partial_{x}h\] \[=2\lim_{a\to\infty}\left\langle\left(I+\mathbb{H}\right)^{-1} \left(\partial_{x}h\right),\mathbb{P}_{+}1_{a}\right\rangle\] \[=2\lim_{a\to\infty}\left\langle\partial_{x}h,\mathbb{P}_{+}1_{a}- \left(I+\mathbb{H}\right)^{-1}\mathbb{H}\mathbb{P}_{+}1_{a}\right\rangle\] \[=\int\partial_{x}h+\left\langle\partial_{x}h,y\right\rangle.\] Since \[\partial_{x}\mathbb{H}f\left(k\right)\!=\!\!2\mathrm{i}\mathbb{P}_{-}\left[kR \left(k\right)e^{2\mathrm{i}kx}f\left(k\right)\right]\] we have \[\pi q =q_{1}+q_{2}\] \[=\left\langle\left(\partial_{x}\mathbb{H}\right)y,y\right\rangle+ 2\int\left(\partial_{x}\mathbb{H}\right)y+\left\langle\partial_{x}h,y\right\rangle +2\int\partial_{x}h\] \[=2\mathrm{i}\int kR\left(k\right)e^{2\mathrm{i}kx}y\left(k,x \right)^{2}\mathrm{d}k\] \[+4\mathrm{i}\int kR\left(k\right)e^{2\mathrm{i}kx}y\left(k,x \right)\mathrm{d}k+2\mathrm{i}\int kR\left(k\right)e^{2\mathrm{i}kx}\mathrm{ d}k\] \[=2\mathrm{i}\int kR\left(k\right)e^{2\mathrm{i}kx}\left[1+y\left( k,x\right)\right]^{2}\mathrm{d}k\] \[=2\mathrm{i}\int kR\left(k\right)\psi\left(x,k\right)^{2}\mathrm{ d}k\] and (5.7) with \(c_{n}=0\) follows.
2303.06129
Single-branch Network for Multimodal Training
With the rapid growth of social media platforms, users are sharing billions of multimedia posts containing audio, images, and text. Researchers have focused on building autonomous systems capable of processing such multimedia data to solve challenging multimodal tasks including cross-modal retrieval, matching, and verification. Existing works use separate networks to extract embeddings of each modality to bridge the gap between them. The modular structure of their branched networks is fundamental in creating numerous multimodal applications and has become a defacto standard to handle multiple modalities. In contrast, we propose a novel single-branch network capable of learning discriminative representation of unimodal as well as multimodal tasks without changing the network. An important feature of our single-branch network is that it can be trained either using single or multiple modalities without sacrificing performance. We evaluated our proposed single-branch network on the challenging multimodal problem (face-voice association) for cross-modal verification and matching tasks with various loss formulations. Experimental results demonstrate the superiority of our proposed single-branch network over the existing methods in a wide range of experiments. Code: https://github.com/msaadsaeed/SBNet
Muhammad Saad Saeed, Shah Nawaz, Muhammad Haris Khan, Muhammad Zaigham Zaheer, Karthik Nandakumar, Muhammad Haroon Yousaf, Arif Mahmood
2023-03-10T18:48:40Z
http://arxiv.org/abs/2303.06129v1
# Single-branch Network for Multimodal Training ###### Abstract With the rapid growth of social media platforms, users are sharing billions of multimedia posts containing audio, images, and text. Researchers have focused on building autonomous systems capable of processing such multimedia data to solve challenging multimodal tasks including cross-modal retrieval, matching, and verification. Existing works use separate networks to extract embeddings of each modality to bridge the gap between them. The modular structure of their branched networks is fundamental in creating numerous multimodal applications and has become a defacto standard to handle multiple modalities. In contrast, we propose a novel single-branch network capable of learning discriminative representation of unimodal as well as multimodal tasks without changing the network. An important feature of our single-branch network is that it can be trained either using single or multiple modalities without sacrificing performance. We evaluated our proposed single-branch network on the challenging multimodal problem (face-voice association) for cross-modal verification and matching tasks with various loss formulations. Experimental results demonstrate the superiority of our proposed single-branch network over the existing methods in a wide range of experiments. Code: [https://github.com/msadasheet/SBNet](https://github.com/msadasheet/SBNet) Multimodal data, Two-branch networks, Face-voice association, Cross-modal verification and matching ## I Introduction Recent years have seen a surge in multimodal data containing various modalities. Generally, users combine image, text, audio, or video data to express opinions on social media platforms. The combinations of these media types have been extensively studied to solve several multimodal tasks including cross-modal retrieval [1, 2], cross-modal verification [3, 4], multimodal named entity recognition [5, 6], visual question answering [7, 8], image captioning [9, 10], and multimodal classification [11, 12, 13]. In the existing multimodal systems, neural network based mappings have been commonly used to bridge the semantic gap between multiple modalities by building a joint representation. For example, separate independent networks are leveraged to predict features of each modality and a supervision signal is employed to learn joint representation in a two-branch network [14, 15, 16, 17, 18, 19, 2, 4]. In addition, multimodal systems have leveraged Transformers to learn joint representation in a two-branch setting [20, 21]. The modular nature of the two-branch network is instrumental in developing various multimodal applications and has become a common practice to process multiple modalities. However, embeddings extracted from modality-specific networks share many semantic similarities. For example, the gender, nationality, and age of speakers are represented by their audio and visual signatures [15, 16]. Therefore a fundamental question arises: _can multimodal joint representations be learned with only a single-branch network?_ To investigate it, we introduce **S**ingle **B**ranch **N**etwork (SBNet), a method to learn discriminative joint representation for multimodal tasks. 
The proposed SBNet consists of the following three components: 1) embedding extraction of each modality with task-specific pre-trained networks, 2) a series of modality-invariant fully connected layers in a single-branch to learn joint multimodal representations, and 3) an extensive evaluation of proposed single-branch network after employing different loss formulations. By leveraging the fact that extracted embeddings share the same semantics, our formulation processes multiple modalities with a single branch to learn joint representations for multimodal tasks. This also makes it possible to learn unimodal representations without modifying the underlying network. Fig. 1 compares a typical two-branch network with our proposed single-branch network. We use the same input features under various loss formulations for both single-branch and two-branch networks on a popular multimodal task, namely face-voice (F-V) association. Nagrani Fig. 1: (Top) The existing two-branch networks employ independent modality-specific branches to learn a joint representation from the embeddings of modality X and Y. (Bottom) In contrast, the proposed single-branch network leverages only one branch to learn similar representations. et al. [22] introduced F-V association to multimodal community with the creation of a large-scale audio-visual dataset (VoxCeleb1 [22]). In addition, Nawaz et al. [2] extended F-V association by analyzing the impact of language on the association. Since then, it has gained significant research interest [3, 4, 15, 16, 17, 23, 24, 25, 26, 27, 28, 29]. All these existing methods also leverage two-branch networks to establish the correlation between faces and voices with the help of cross-modal verification and matching tasks. Thus, F-V association is considered a suitable benchmark application for comparison among single and two-branch networks. In addition, F-V association datasets provide unimodal tasks, for example, speaker verification which is important to showcase the capability of our single-branch network to train on unimodal or multimodal data without changing the underlying network. We summarize our key contributions as follows: 1) We propose a single-branch network to learn multimodal discriminative joint representations. 2) We present rigorous comparisons of our single with two-branch networks under same inputs and various loss formulations on cross-modal verification and matching tasks on a large-scale VoxCeleb1 dataset. Experimental results show that the single-branch network outperforms two-branch network on F-V association task. Further, we note that our method performs favourably against the existing state-of-the-art methods on the F-V association task. 3) We perform a thorough ablation study to analyze the impact of different components. ## II Overall Framework We develop a single-branch network to learn a joint representation for a multimodal task of F-V association (see Fig. 2). ### _Single-branch Network_ Our aim is to learn joint representation to perform a multimodal task that establishes an association between the faces and voices of different speakers in cross-modal verification and matching. Given that we have \(N\) occurrences of face-voice pairs, \(\mathcal{D}=\{(x_{i}^{f},x_{i}^{v})\}_{i=1}^{N}\), where \(x_{i}^{f}\) and \(x_{i}^{v}\) are the face and voice samples of the \(i_{th}\) occurrence, respectively. 
Each face-voice pair \((x_{i}^{f},x_{i}^{v})\) has a label \(y_{i}\in\{0,1\}\), where \(y_{i}=1\) if a pair belongs to the same speaker and \(y_{i}=0\) if it belongs to different speakers. Cross-modal learning aims at mapping both faces and voices into a common but discriminative embedding space, where they are adequately aligned and occurrences from the same identity are nearby while those from a different identity are far apart. Previous works approach the problem by assuming that modalities have different representations and structures, and have therefore leveraged independent modality-specific branches to learn discriminative joint representation [2, 3, 4, 15, 16, 17]. We, on the other hand, approach the problem by assuming shared semantics, such as gender, nationality and age for each modality. To this end, the face and voice embeddings are extracted from modality-specific networks. Precisely, face embeddings (\(\mathbf{b}_{i}\in\mathbb{R}^{F}\)) and voice embeddings (\(\mathbf{e}_{i}\in\mathbb{R}^{V}\)) are extracted from the penultimate layers of pre-trained CNNs [30, 31]. Afterwards, face (\(\mathbf{b}_{i}\)) and voice embeddings (\(\mathbf{e}_{i}\)) are projected with a single modality-invariant network consisting of two fully-connected layers with ReLU activation function to a new \(d\)-dimensional embedding space \(\mathbf{I}_{i}\). In addition, batch normalization [32] is applied right after the last linear layer. Using this configuration, it becomes unnecessary to create a separate branch for each modality because embeddings taken from pre-trained CNNs share the same semantics. In other words, our SBNet learns representation irrespective of the input modality. Moreover, the proposed configuration is useful for learning unimodal representation with the same two fully connected layers for either audio or face recognition tasks. ### _Two-branch Network._ In order to fairly compare our SBNet, we also use a well known two-branch network [3, 15, 16] for comparison purposes. In this configuration, face and voice embeddings are first extracted from pre-trained CNNs, denoted as \(\mathbf{b}_{i}\) and \(\mathbf{e}_{i}\) respectively. Afterwards, face and voice embeddings are input to two independent branches, with each modality specific branch \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\) respectively. Both \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\) are then L2 normalized and later fused, e.g., using an attention mechanism [3, 16], to obtain \(\mathbf{l}_{i}\). Fig. 2: (a) Two independent modality-specific embedding networks to extract features (left) and a conventional two-branch network (right) having **two** independent modality-specific branches to learn discriminative joint representations of the multimodal task. (b) Proposed network with a **single** modality-invariant branch. ### _Loss Formulations_ In this section, we formally overview several existing loss formulations typically employed in existing face-voice association methods. **Fusion and Orthogonal Projection.** It imposes orthogonality constraints on the fused embeddings to explicitly minimize intra-identity variation while maximizing inter-identity separability [3, 16, 33]. These constraints complement better with the innate angular characteristic of cross entropy (CE) loss. 
The loss is formulated as follows: \[\mathcal{L}_{OC}=1-\sum_{i,j\in B,y_{i}=y_{j}}\langle\mathbf{l}_{i},\mathbf{l}_{j}\rangle+\left|\sum_{i,k\in B,y_{i}\neq y_{k}}\langle\mathbf{l}_{i},\mathbf{l}_{k}\rangle\right|, \tag{1}\] where \(\langle.,.\rangle\) is the cosine similarity operator, and \(B\) represents the mini-batch size. The first term in Eq. 1 ensures intra-identity compactness, while the second term enforces inter-identity separation. Note that the cosine similarity involves the normalization of fused embeddings, thereby projecting them to a unit hyper-sphere: \[\langle\mathbf{l}_{i},\mathbf{l}_{j}\rangle=\frac{\mathbf{l}_{i}.\mathbf{l}_{j}}{\|\mathbf{l}_{i}\|_{2}.\|\mathbf{l}_{j}\|_{2}}. \tag{2}\] The joint loss formulation, comprising \(\mathcal{L}_{CE}\) and \(\mathcal{L}_{OC}\), is: \[\mathcal{L}=\mathcal{L}_{CE}+\alpha\mathcal{L}_{OC}, \tag{3}\] where \(\alpha\) balances the contribution of the two terms in \(\mathcal{L}\). **Center Loss.** It simultaneously learns class centers from features in a mini-batch and penalizes the distance between each class center and the corresponding features [1]. Recently, Nawaz et al. [24] introduced a Deep Latent Space architecture, leveraging the center loss, to extract audio-visual information and bridge the gap between the modalities. The loss is formulated as follows: \[\mathcal{L}_{C}=\frac{1}{2}\sum_{i=1}^{b}\left\|\mathbf{l}_{i}-\mathbf{c}_{y_{i}}\right\|_{2}^{2} \tag{4}\] Here \(\mathbf{c}_{y_{i}}\) denotes the \(y_{i}\)-th class center of the features. Wen et al. [34] observed that the center loss alone may degrade to zero; thus, it is jointly trained with the CE loss as follows: \[\mathcal{L}=\mathcal{L}_{CE}+\alpha_{c}\mathcal{L}_{C} \tag{5}\] A scalar value \(\alpha_{c}\) is used for balancing the center loss and the CE loss. **Git Loss.** It improves the center loss by maximizing the distance between features belonging to different classes (push) while keeping features of the same class compact (pull) [35]. The loss is formulated as follows: \[\mathcal{L}_{G}=\sum_{i,j=1,i\neq j}^{b}\frac{1}{1+\left\|\mathbf{l}_{i}-\mathbf{c}_{y_{j}}\right\|_{2}^{2}} \tag{6}\] Git loss is jointly trained with the center loss and the CE loss as follows: \[\mathcal{L}=\mathcal{L}_{CE}+\alpha_{c}\mathcal{L}_{C}+\alpha_{g}\mathcal{L}_{G} \tag{7}\] Scalar values \(\alpha_{c}\) and \(\alpha_{g}\) are used for balancing the center loss, the Git loss and the CE loss. ## III Experiments **Training Details and Dataset.** We train our method on a Quadro \(\mathrm{P5000}\) GPU for \(50\) epochs with a batch size of \(128\), using the Adam optimizer with an exponentially decaying learning rate (initialised to \(10^{-5}\)). We extract face and voice embeddings from Facenet [30] and Utterance Level Aggregation [31]. We perform experiments on _cross-modal verification_ and _cross-modal matching_ tasks on the large-scale VoxCeleb1 dataset [22]. We follow the same train, validation and test split configurations as used in [15] to evaluate on _unseen-unheard_ identities. We report results on standard verification metrics, i.e., area under the ROC curve (AUC) and equal error rate (EER). ### _Results_ **Comparison with two-branch network.** We compare our single-branch network with a two-branch network under various loss formulations typically employed in face-voice association, including _fusion and orthogonal projection_[3, 16], _center loss_[16, 24, 34] and _Git loss_[16, 35]. In addition, we evaluate the single-branch network on the verification task for the face and voice modalities. 
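A compact PyTorch sketch tying together the single modality-invariant branch of Sec. II-A and the objective of Eqs. (1)–(3) (our own reading, not the authors' released code): the branch is two fully-connected layers with a ReLU and batch normalization after the last linear layer. The input dimension, \(d\) and the number of identities are illustrative placeholders, the sketch assumes the face and voice embeddings share the same dimensionality, and the pair sums of Eq. (1) are taken literally over the mini-batch (practical implementations often normalize them by the number of pairs).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SBNet(nn.Module):
    """Single branch shared by both modalities (placeholder dimensions)."""
    def __init__(self, in_dim=512, d=128, n_ids=1000):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(in_dim, d), nn.ReLU(),
            nn.Linear(d, d),
            nn.BatchNorm1d(d),                 # batch norm right after the last linear layer
        )
        self.classifier = nn.Linear(d, n_ids)  # identity logits for the CE term

    def forward(self, emb):                    # emb: a face OR a voice embedding
        l = self.branch(emb)
        return l, self.classifier(l)

def fop_loss(logits, l, labels, alpha=1.0):
    z = F.normalize(l, dim=1)                  # Eq. (2): project onto the unit hyper-sphere
    sim = z @ z.t()                            # pairwise cosine similarities <l_i, l_j>
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    l_oc = 1.0 - sim[same].sum() + sim[~same].sum().abs()    # Eq. (1)
    return F.cross_entropy(logits, labels) + alpha * l_oc    # Eq. (3)

net = SBNet()
emb = torch.cat([torch.randn(8, 512), torch.randn(8, 512)])  # face and voice embeddings in one batch
labels = torch.randint(0, 1000, (8,)).repeat(2)              # same identities for both modalities
l, logits = net(emb)
fop_loss(logits, l, labels).backward()
```

Because the same `branch` processes either modality, the module can be trained on face-only, voice-only or mixed batches without any architectural change.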
Table I reveals that our proposed SBNet outperforms two-branch network on all loss formulations. In addition, we examine effects of Gender (G), Nationality (N), Age (A) and their combination (GNA) separately, which influence both face and voice verification (Table II). SBNet achieves consistently better performance on G, N, A, and the combination (GNA) with all loss formulations. Furthermore, we compare our SBNet against two-branch network with various loss functions on a cross-modal matching task, \(1:n_{c}\) with \(n_{c}=2,4,6,8,10\) in Fig. 3. We see that it outperforms the counterpart two-branch network with Center and Git loss for all values of \(n_{c}\) whereas comparable performance with FOP. **Comparison with state-of-the-art.** Table III compares our method against existing state-of-the-art (SOTA) works (DIMNet [26], Learnable Pins [15], MAV-Celeb [2], Deep Latent Space [24], Multi-view Approach [36], Adversarial-Metric \begin{table} \begin{tabular}{|l|c|c c|c c|} \hline Train/Test Paradigm & Loss & \multicolumn{2}{c|}{Single-branch} & \multicolumn{2}{c|}{Two-branch} \\ \hline & & EER & AUC & EER & AUC \\ \hline \hline Face+Voice & & **27.5** & 79.7 & 28.0 & **80.0** \\ Voice Only & FOP & 8.6 & 97.1 & - & - \\ Face Only & & 14.4 & 93.1 & - & - \\ \hline Face+Voice & & **25.8** & **81.6** & 31.4 & 75.3 \\ Voice Only & Center & 9.6 & 96.7 & - & - \\ Face Only & & 13.1 & 93.9 & - & - \\ \hline Face+Voice & & **25.7** & **82.4** & 31.3 & 75.5 \\ Voice Only & Git & 9.6 & 96.7 & - & - \\ Face Only & & 13.1 & 93.9 & - & - \\ \hline \end{tabular} \end{table} TABLE I: Unimodal and cross-modal verification results on our SBNet and two-branch network using same underlying architecture with various loss formulations. Learning [28], Disentangled Representation Learning [27], and Voice-face Discriminative Network [37]). Our SBNet demonstrates comparable performance while utilizing only single branch and similar training paradigm. Among the three losses we compare, Git loss outperforms the others two losses. We further compare our SBNet with SOTA works on cross-modal matching (Fig. 4). In particular, we perform the \(1:n_{c}\) matching tasks, where \(n_{c}=2,4,6,8,10\), and report the results. Our SBNet outperforms SVHF, Learnable PIN, and Deep Latent Space by a noticeable margin whereas performs comparably with the rest of the compared methods. **Ablation study and analysis.** We also analyze the impact of input order during training (Table IV). Several choices are possible including randomly selecting either one of the modalities within an epoch, passing each modality for a portion of epoch (e.g., half epoch face half epoch voice - HeFHeV), or training several epochs on either of the modality before switching to the other (e.g., one epoch voice one epoch face - VFVF). This also indicates that our single-branch network does not suffer the catastrophic forgetting. To evaluate it further under more challenging scenarios, we experimented by passing one modality for several epochs. Results in Table V demonstrate that our approach retains the performance. ## IV Conclusion We presented a novel single-branch network to learn both unimodal and multimodal representations. The single-branch uses a series of fully connected layers to extract embeddings of modalities from independent modality specific networks and learns discriminative representations. 
Our SBNet outperformed the standard two-branch network under all loss formulations on the cross-modal verification and matching tasks, and performed favourably against existing SOTA methods on face-voice association tasks.
2301.10132
The superadditivity effects of quantum capacity decrease with the dimension for qudit depolarizing channels
Quantum channel capacity is a fundamental quantity for understanding how well quantum information can be transmitted or corrected when subjected to noise. However, it is generally not known how to compute such quantities, since the quantum channel coherent information is not additive for all channels, implying that it must be maximized over an unbounded number of channel uses. This leads to the phenomenon known as superadditivity, which refers to the fact that the regularized coherent information of $n$ channel uses exceeds the one-shot coherent information. In this article, we study how the gain in quantum capacity of qudit depolarizing channels relates to the dimension of the systems considered. We make use of an argument based on the no-cloning bound to prove that the possible superadditive effects decrease as a function of the dimension for this family of channels. In addition, we prove that the capacity of the qudit depolarizing channel coincides with the coherent information when $d\rightarrow\infty$. We also discuss the private classical capacity and obtain similar results. We conclude that when high dimensional qudits experiencing depolarizing noise are considered, the coherent information of the channel is not only an achievable rate but essentially the maximum possible rate for any quantum block code.
Josu Etxezarreta Martinez, Antonio deMarti iOlius, Pedro M. Crespo
2023-01-24T16:54:09Z
http://arxiv.org/abs/2301.10132v4
The superadditivity effects of quantum capacity decrease with the dimension for qudit depolarizing channels ###### Abstract Quantum channel capacity is a fundamental quantity in order to understand how good can quantum information be transmitted or corrected when subjected to noise. However, it is generally not known how to compute such quantities, since the quantum channel coherent information is not additive for all channels, implying that it must be maximized over an unbounded number of channel uses. This leads to the phenomenon known as superadditivity, which refers to the fact that the regularized coherent information of \(n\) channel uses exceeds one-shot coherent information. In this letter, we study how the gain in quantum capacity of qudit depolarizing channels relates to the dimension of the systems considered. We make use of an argument based on the no-cloning bound in order to proof that the possible superaditive effects decrease as a function of the dimension for such family of channels. In addition, we prove that the capacity of the qudit depolarizing channel coincides with the coherent information when \(d\rightarrow\infty\). We conclude that when high dimensional qudits experiencing depolarizing noise are considered, the coherent information of the channel is not only an achievable rate but essentially the maximum possible rate for any quantum block code. ## I Introduction Classical communications were revolutionized when Claude Shannon introduced the noisy-channel coding theorem in his groundbreaking work _A Mathematical Theory of Communication_[1]. In such theorem, Shannon introduced the concept of channel capacity, which refers to the maximum coding rate for which asymptotically error-free communications are possible over a noisy channel. The consequences to this result are momentous since it establishes the limit, in terms of rate, for which error correction makes sense and, thus, the target that coding theorists should seek when designing their codes. The computation of such quantity results to be simple due to the fact that the classical mutual information is additive, implying that the regularization over \(n\) channel uses needed to compute the capacity of the channel results in a single-letter formula, i.e. in the optimization of such quantity over a single use of the channel [1]. The development of quantum information theory followed the steps of Shannon, introducing the concept of quantum channel capacity similarly to its classical counterpart, i.e. the maximum quantum coding rate for communication/correction (note that in the quantum setting the noise can arise from temporal evolution) with error rates vanishing asymptotically when quantum information is subjected to noise. 
In general, the computation of the quantum channel capacity, \(\mathcal{C}_{\mathrm{Q}}\), is based on the following regularization [2; 3]: \[\mathcal{C}_{\mathrm{Q}}(\mathcal{N})=\lim_{n\rightarrow\infty}\frac{1}{n} \mathcal{Q}_{\mathrm{coh}}(\mathcal{N}^{\otimes n}), \tag{1}\] where \(\mathcal{N}\) denotes the quantum channel and \(\mathcal{Q}_{\mathrm{coh}}\) refers to the channel coherent information defined as \[\begin{split}\mathcal{Q}_{\mathrm{coh}}(\mathcal{N})& =\max_{\rho}I_{\mathrm{coh}}(\mathcal{N},\rho)\\ &=\max_{\rho}S(\mathcal{N}(\rho))-S(\mathcal{N}^{c}(\rho)),\end{split} \tag{2}\] with \(I_{\mathrm{coh}}(\mathcal{N},\rho)\) the channel coherent information when state \(\rho\) is the input, \(S\) the von Neumann entropy and \(\mathcal{N}^{c}\) is a complementary channel to the environment. However, in stark contrast to its classical counterpart, the channel coherent information has been proven not to be additive in general [2; 4], implying that the regularization in equation (1) involves optimizing over an infinite parameter space. Given two arbitrary quantum channels \(\mathcal{N}_{1}\), \(\mathcal{N}_{2}\), the most one can say about the coherent channel information of the parallel channel \(\mathcal{N}_{1}\otimes\mathcal{N}_{2}\) is \(\mathcal{Q}_{\mathrm{coh}}(\mathcal{N}_{1}\otimes\mathcal{N}_{2})\geq\mathcal{ Q}_{\mathrm{coh}}(\mathcal{N}_{1})+\mathcal{Q}_{\mathrm{coh}}(\mathcal{N}_{2})\). When strict inequality holds, the channels are said to exhibit superadditivity, otherwise are said to have additive coherent information [5]. Explicit examples of superadditivity have been found for several classes of quantum channels [2; 4; 6; 7; 8; 9; 10]. Therefore, an important question to be answered is what types of channels have additive channel coherent information so that their capacity reduces to single-letter expressions, i.e., \(\mathcal{C}_{\mathrm{Q}}(\mathcal{N})=\max_{\rho}I_{\mathrm{coh}}(\mathcal{N}, \rho)\). At the time of writing, quantum channels with additive channel coherent information belong to the classes of degradable [2; 11; 13], conjugate degradable [11; 12] and less noisy than the environment [11] channels. The quantum capacities of antidegradable, conjugate antidegradable and entanglement-binding channels are also single-letter characterized, but they are equal to zero [2; 4; 11; 13]. The depolarizing channel is a widely used quantum channel model in order to describe the noise that quantum information experiences [14]. This channel is characterized by the depolarizing probability, \(p\), and its quantum channel capacity is still unknown even if it is the simplest and most symmetric nonunitary quantum channel. In general, \(d\)-dimensional depolarizing channels (those acting on \(d\)-dimensional quantum states referred as qudits) are antidegradable for \(p\geq\frac{d}{2(d+1)}\), while they do not belong to any of the classes of channels previously mentioned for \(p<\frac{d}{2(d+1)}\)[15]. Several upper bounds on the quantum capacity of \(d\)-dimensional depolarizing channels for the non-trivial parameter region have been derived [16; 17; 18; 19; 20]. However, the quantum capacity of the family of \(d\)-dimensional depolarizing channels remains a mystery for such region. In this letter, we study how the potential superadditivity effects of the quantum channel capacity relate to the dimension of the depolarizing channel. 
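As a concrete illustration of Eq. (2), the following minimal NumPy sketch evaluates the one-shot coherent information of a qubit depolarizing channel at the maximally mixed input; the Pauli Kraus decomposition used here, and the fact that \(S(\mathcal{N}^{c}(\rho))\) can be read off as the entropy of the joint channel-reference output of a purification, are standard facts not spelled out in the text, and the result matches the closed form (4) given in the next section.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def qubit_depolarizing_coherent_info(p):
    """I_coh(N, rho) of Eq. (2) for the qubit depolarizing channel at rho = I/2.

    (1-p)rho + p*I/2 has Kraus operators sqrt(1-3p/4) I and sqrt(p/4) X, Y, Z.
    Sending one half of a maximally entangled pair through the channel purifies
    the maximally mixed input, so the entropy of the joint output equals the
    entropy of the environment, i.e. S(N^c(rho)).
    """
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0]).astype(complex)
    kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * K for K in (X, Y, Z)]

    phi = np.zeros(4, dtype=complex)
    phi[0] = phi[3] = 1 / np.sqrt(2)                     # |Phi+> on (input, reference)
    rho_AR = np.outer(phi, phi.conj())
    rho_B = sum(K @ (0.5 * I) @ K.conj().T for K in kraus)                        # N(I/2)
    rho_BR = sum(np.kron(K, I) @ rho_AR @ np.kron(K, I).conj().T for K in kraus)  # (N x id)(Phi+)
    return von_neumann_entropy(rho_B) - von_neumann_entropy(rho_BR)

print(qubit_depolarizing_coherent_info(0.1))   # ~0.497 qubits per channel use
```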
We provide an argument based on the no-cloning bound in order to study how the quantum capacity gain (defined as the difference between the quantum capacity and the channel coherent information) caused by potential coherent information superadditivity relates to the dimension of the depolarizing channel. We conclude that such possible capacity gain is a monotonically decreasing function with the dimension and, thus, that the superadditive effects are less and less important when the dimension of the depolarizing channels increases. In addition, we determine that for the extremal case in which the dimension of the system is let to grow indefinitely (in the limit where the qudit becomes a quantum oscillator, i.e., a bosonic mode [21]), the depolarizing channel capacity coincides with the channel coherent information. ## II Qudit depolarizing channels The \(d\)-dimensional or qudit depolarzing channel, \(\Lambda_{p}^{d}:\mathcal{H}_{d}\rightarrow\mathcal{H}_{d}\), is the completely-positive, trace preserving (CPTP) map defined as [16] \[\Lambda_{p}^{d}(\rho)=(1-p)\rho+p\mathrm{Tr}(\rho)\frac{I_{d}}{d}, \tag{3}\] where the density matrices \(\rho\) are the so-called qudits or quantum states operating over a \(d\) dimensional Hilbert space \(\mathcal{H}_{d}\), \(I_{d}/d\) refers to the maximally mixed state of dimension \(d\) and \(p\in[0,1]\) refers to the depolarizing probability. Consequently, the operation of the qudit depolarizing channel leaves the state uncorrupted with probability \(1-p\) while transforming it to the maximally mixed state with probability \(p\). The channel coherent information, \(\mathcal{Q}_{\mathrm{coh}}\), defined in equation (2), for this channel is [16] \[\begin{split}&\mathcal{Q}_{\mathrm{coh}}(\Lambda_{p}^{d})=\max \left\{0,\log_{2}d\right.\\ &+\left(1-p\frac{d^{2}-1}{d^{2}}\right)\log_{2}\left(1-p\frac{d^ {2}-1}{d^{2}}\right)\\ &\left.+p\frac{d^{2}-1}{d^{2}}\log_{2}\left(\frac{p}{d^{2}} \right)\right\},\end{split} \tag{4}\] with units of qubits per channel use. It provides a lower bound for the quantum channel capacity, \(C_{\mathrm{Q}}(\mathcal{N})\geq\mathcal{Q}_{\mathrm{coh}}(\mathcal{N})\). Note that by changing the \(\log_{2}\) in the above expression by \(\log_{d}\), the units of \(\mathcal{Q}_{\mathrm{coh}}(\Lambda_{p}^{d})\) are qudits per channel use. For the sake of notation we will denote the channel coherent information in such units by \(\mathcal{Q}_{\mathrm{coh}}^{d}(\Lambda_{p}^{d})\). Recall that for \(p<\frac{d}{2(d+1)}\) the channel does not belong to any of the classes with proven additive channel coherent information [22], implying that the quantum channel capacity is not known and may exhibit superadditivity gains. In fact, these gains have been obtained in previous works [2; 6; 7; 9]. Several techniques have been developed in order to obtain upper bounds for the quantum channel capacity of \(d\)-dimensional depolarizing channels [16; 17; 18; 19; 20]. Each of those upper bounds are tighter depending on the region of depolarizing probability considered in \(p\in\left[0,\frac{d}{2(d+1)}\right]\). The tighter upper bound is usually obtained by using the fact that the convex hull of the upper bounds is itself an upper bound [17]. 
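The closed form (4) is straightforward to evaluate numerically; a minimal sketch (NumPy, valid for \(0<p\leq 1\)):

```python
import numpy as np

def q_coh_qubits(p, d):
    """Channel coherent information of the qudit depolarizing channel, Eq. (4),
    in qubits per channel use (0 < p <= 1)."""
    q = p * (d**2 - 1) / d**2
    val = np.log2(d) + (1 - q) * np.log2(1 - q) + q * np.log2(p / d**2)
    return max(0.0, float(val))

def q_coh_qudits(p, d):
    """The same quantity in qudits per channel use, i.e. divided by log2(d)."""
    return q_coh_qubits(p, d) / np.log2(d)

# The d = 2 value (~0.497 at p = 0.1) agrees with the Kraus-based computation sketched earlier.
print(q_coh_qubits(0.1, 2), q_coh_qudits(0.1, 4))
```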
However, for the purposes of this work, we will consider the so called no-cloning bound, \(\mathcal{Q}_{\mathrm{nc}}\), which upper bounds the quantum channel capacity of qudit depolarizing channels as [17] \[C_{\mathrm{Q}}(\Lambda_{p}^{d})\leq\mathcal{Q}_{\mathrm{nc}}(\Lambda_{p}^{d} )=\left(1-2p\frac{d+1}{d}\right)\log_{2}d, \tag{5}\] with units qubits per channel use. Note that the expression of \(\mathcal{Q}_{\mathrm{nc}}(\Lambda_{p}^{d})\) in qudits per channel use reduces to \[\mathcal{Q}_{\mathrm{nc}}^{d}(\Lambda_{p}^{d})=\left(1-2p\frac{d+1}{d}\right). \tag{6}\] ## III Superadditivity gain As explained in the previous section, the potential superadditive nature of the coherent information may lead to quantum channel capacities that are higher than the one-shot channel coherent information. In other words, there exists a gain in quantum channel capacity if several quantum channel uses are considered. In order to quantify this gain we define the superadditivity gain, \(\xi\), as \[\xi(\mathcal{N})=\mathcal{C}_{\mathrm{Q}}(\mathcal{N})-\mathcal{Q}_{\mathrm{ coh}}(\mathcal{N}), \tag{7}\] which gives the additional qubits per channel that the channel capacity has when compared the achievable rate of the channel coherent information. Clearly, if the coherent information of the channel is additive, then \(\xi(\mathcal{N})=0\). Knowledge about the quantum channel capacity is needed in order to compute the superadditivity gain in equation (7) and, as stated before, the quantum capacity of qudit depolarizing channels is still unknown. However, upper bounds on such quantity can be obtained using the upper bounds derived in [16; 17; 18; 19; 20]. For the purposes of this work we will upper bound the superadditivity gain by using the no-cloning bound as \[\xi_{\mathrm{nc}}(\Lambda_{p}^{d})=\mathcal{Q}_{\mathrm{nc}}(\Lambda_{p}^{d})- \mathcal{Q}_{\mathrm{coh}}(\Lambda_{p}^{d})\geq\xi(\Lambda_{p}^{d}). \tag{8}\] The units in the above expression are qubits per channel use. However, we will study the capacity gain with qudits per channel use units in order to have a fair comparison of the extra capacity that is obtained via superadditive effects. In this way, we will be able to see how many more qudits per channel use can be potentially obtained due to superadditive effects, which is more fair to compare those effects for different dimensions, since operating in more dimensions trivially implies that more information (in terms of qubits) can be encoded in a single quantum state. For example, consider \(d_{1}<d_{2}\) and assume that their superadditivity gains in qudits per channel use for both cases is the same. That is, \(\xi(\Lambda_{p}^{d_{1}})=\xi(\Lambda_{p}^{d_{2}})=g\). However, these gains become \(g\log_{2}(d_{1})<g\log_{2}(d_{2})\) when expressed in qubits per channel use, making the impression that the capacity of for \(d_{2}\) increases more. Note that whenever qudit error correction codes are constructed, their coding rate will have logical qudits per physical qudits units, implying that the extra rate obtained via superadditivity should be quantified in such terms. Therefore, in what follows, the units of the superadditive gains will be given in qudits per channel use, that is \[\xi_{\mathrm{nc}}(\Lambda_{p}^{d})=Q_{\mathrm{nc}}^{d}(\Lambda_{p}^{d})-Q_{ \mathrm{coh}}^{d}(\Lambda_{p}^{d})\geq\xi(\Lambda_{p}^{d}). \tag{9}\] ## IV Superadditivity effects of quantum capacity decrease as a function of the dimension We now provide the main result of this letter. 
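Both the bound (6) and the gain (9) are immediate to evaluate; the sketch below (self-contained, so the Eq. (4) helper is repeated) tabulates \(\xi_{\mathrm{nc}}\) for a few dimensions at fixed \(p\). The decreasing trend in the output is precisely what Theorem 1 below establishes, and both the bound and the coherent information approach \(1-2p\) as \(d\) grows (cf. Corollary 1).

```python
import numpy as np

def q_coh_qudits(p, d):
    """Eq. (4) divided by log2(d), repeated here so this block runs on its own."""
    q = p * (d**2 - 1) / d**2
    v = 1.0 + ((1 - q) * np.log2(1 - q) + q * np.log2(p / d**2)) / np.log2(d)
    return max(0.0, float(v))

def q_nc_qudits(p, d):
    """No-cloning upper bound, Eq. (6), in qudits per channel use."""
    return 1.0 - 2.0 * p * (d + 1) / d

def xi_nc(p, d):
    """Upper bound on the superadditivity gain, Eq. (9)."""
    return q_nc_qudits(p, d) - q_coh_qudits(p, d)

p = 0.05
for d in (2, 4, 32, 2**20):
    print(d, xi_nc(p, d))   # ~0.140, ~0.103, ~0.054, ~0.014: the room for superadditivity shrinks
```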
**Theorem 1**.: _Let \(d_{l}\) be an arbitrary positive integer equal to or higher than \(2\) and \(p_{0}^{d_{l}}\in\mathbb{R}\) defined as_ \[p_{0}^{d_{l}}=\min_{p}\left(\left\{p\in\left(0,\frac{d_{l}}{2(d_{l}+1)}\right):Q_{\mathrm{coh}}^{d_{l}}(\Lambda_{p}^{d_{l}})=0\right\}\right). \tag{10}\] _That is, \(p_{0}^{d_{l}}\) is the smallest depolarizing probability that makes the coherent information of the \(d_{l}\)-dimensional depolarizing channel equal to zero. Then, for any depolarizing probability \(p\) in the range \(p\in(0,p_{0}^{d_{l}})\), the superadditivity gain, \(\xi_{\mathrm{nc}}(\Lambda_{p}^{d})\), in qudits per channel use units, is a monotonically decreasing function of the channel dimension, \(d\), for \(d\geq d_{l}\)._ Proof.: To prove the theorem, we must prove that \[\frac{\partial\xi_{\mathrm{nc}}(\Lambda_{p}^{d})}{\partial d}<0,\quad\forall p\in\left(0,p_{0}^{d_{l}}\right). \tag{11}\] Thus, the derivative of \(\xi_{\mathrm{nc}}(\Lambda_{p}^{d})\) over the dimension in the range \(p\in\left(0,p_{0}^{d_{l}}\right)\) is \[\begin{split}\frac{\partial\xi_{\mathrm{nc}}(\Lambda_{p}^{d})}{\partial d}&=-1-4p\frac{p}{d}+4p\frac{(d^{2}-1)}{d^{3}}-p\frac{(d^{2}-1)\log_{2}\left(\frac{p}{d^{2}}\right)}{d^{2}\log_{2}d}\\ &-\frac{(1-p\frac{d^{2}-1}{d^{2}})\log_{2}\left(1-p\frac{d^{2}-1}{d^{2}}\right)}{\log_{2}(d)}\\ &=-4p\frac{p}{d}+4p\frac{(d^{2}-1)}{d^{3}}-Q_{\mathrm{coh}}^{d}(\Lambda_{p}^{d})<0.\end{split} \tag{12}\] The last inequality follows from the fact that \(4p\frac{p}{d}>4p\frac{(d^{2}-1)}{d^{3}},\forall d\) (this inequality reduces to \(\frac{1}{d}>\frac{1}{d}-1\), which is true for all \(d>0\)), and the fact that \(\forall d\geq d_{l}\), \(Q_{\mathrm{coh}}^{d}(\Lambda_{p}^{d})\geq 0\), since \(p_{0}^{d}\) increases with \(d\) and we are considering the range \(p\in\left(0,p_{0}^{d_{l}}\right)\). Figure 1: **No-cloning superadditivity gain as a function of depolarizing probability \(p\).** Channel dimensions \(d\in\{2,2^{2},2^{5},2^{20}\}\) are plotted. Figure 1 graphically shows the results of this theorem. It plots the no-cloning superadditivity gain versus the depolarizing probability, \(p\), for four different \(d_{l}\) dimensions. For a given \(d_{l}\), the vertical dashed lines give the value of the corresponding \(p_{0}^{d_{l}}\). Note that Theorem 1 states that, for an initial dimension \(d_{l}\), the no-cloning superadditive gain \(\xi_{\mathrm{nc}}(\Lambda_{p}^{d})\) is a decreasing function of the dimension \(d\geq d_{l}\) in the depolarizing probability range \(p\in(0,p_{0}^{d_{l}})\). Additionally, the upper limit of such range, \(p_{0}^{d_{l}}\), increases with respect to the initial dimension in consideration. This value saturates to \(1/2\) when the dimension of the system is left to grow indefinitely since
For example, see that in Figure 1 we can change from \(d_{l}=2\) to \(d_{l}=4\) once we reach \(p_{0}^{d_{l}=2}\), and the gain will still be decreasing for \(d>d_{l}=4\). This can be done each time we reach a particular \(d_{l}\). Thus, we effectively prove that whenever the dimension of the system increases, the room left for superadditive effects in qudits per channel use units decreases. Note also that the region \(p\in(0,1/2)\) is actually the only region where superadditivity may happen for every \(d\)-dimensional depolarizing channels since for \(p=0\) there is no noise, implying that \(C_{\mathrm{Q}}^{d}(\Lambda_{p}^{d})=1\), while for \(p>1/2\) every qudit depolarizing channel is antidegradable since \(\lim_{d\to\infty}d/(2(d+1))=1/2\). Figure 2 showcases the decrease of the no-cloning superadditive gain for different depolarizing probabilities \(p\in\{0.01,0.05,0.1,0.2,0.25\}\) as a function of the dimension of the system considered. Two important conclusions are derived from Theorem 1, which are clearly appreciated in the above two figures. The first conclusion is that whenever quantum systems of high dimensions are corrupted by the operation of a qudit depolarizing channel, the non-additive behaviour of the coherent information is less relevant. That is, the potential superadditivity gain in terms of qudits per channel use decreases. This is an important result for the depolarizing channel since it implies that for very high dimensional systems, the channel coherent information and the quantum channel capacity will be close together. Note that tighter bounds than the no-cloning bound can be used to bound the superadditivity gain, implying that the actual gain will be much smaller. This yields to the second conclusion which states that for high dimensional systems, the capacity of the depolarizing channel is close to the single-letter coherent information of the channel, that is, one can state that \(C_{\mathrm{Q}}^{d}(\Lambda_{p}^{d})\approx Q_{\mathrm{coh}}^{d}(\Lambda_{p}^{ d})\). Therefore, we can conclude that for such high dimensional systems, random block codes on the typical subspace of the optimal input (for the one-shot coherent information) will essentially achieve quantum channel capacity [2; 23]. We have observed that the superadditive behaviour of coherent information loses importance when the dimensions of the qudit depolarizing channel increase. In particular, in the limit, when \(d\) is let to be infinite, the qudit becomes a quantum oscillator or bosonic mode [21], and the quantum channel capacity of the \(\infty\)-dimensional or bosonic depolarizing channel is given by \(1-2p\), as it is shown in the following Corollary. **Corollary 1**.: _The quantum channel capacity of the \(\infty\)-dimensional or bosonic depolarizing channel is_ \[C_{\mathrm{Q}}^{d}(\Lambda_{p}^{\infty})=Q_{\mathrm{coh}}^{d}(\Lambda_{p}^{ \infty})=1-2p, \tag{14}\] _with bosonic modes per channel use units for \(p\in[0,1/2]\) and \(0\) for \(p\in[1/2,1]\)._ Proof.: We use a sandwich argument to prove the corollary. We know from equation (13) that the coherent information of the depolarizing channel has the following asymptotic behaviour in the region \(p\in[0,1/2]\) \[C_{\mathrm{Q}}^{d}(\Lambda_{p}^{\infty})\geq Q_{\mathrm{coh}}^{d}(\Lambda_{p}^ {\infty})=\lim_{d\to\infty}Q_{\mathrm{coh}}^{d}(\Lambda_{p}^{d})=1-2p. 
\tag{15}\] In addition, if we study the asymptotic behaviour of the no-cloning bound in equation (6), then \[C_{\mathrm{Q}}^{d}(\Lambda_{p}^{\infty})\leq\lim_{d\to\infty}\left(1-2p\,\frac{d+1}{d}\right)=1-2p, \tag{16}\] which completes the sandwich and, thus, \[C_{\mathrm{Q}}^{d}(\Lambda_{p}^{\infty})=Q_{\mathrm{coh}}^{d}(\Lambda_{p}^{\infty})=1-2p. \tag{17}\] For the complementary region, \(p\in[1/2,1]\), we know that this channel is antidegradable. Therefore, the quantum channel capacity vanishes. Figure 2: **No-cloning superadditivity gain as a function of dimension and depolarizing probability.** We plot the superadditivity gain in terms of qudits per channel use as a function of the dimension of the depolarizing channel for \(p\in\{0.01,0.05,0.1,0.2,0.25\}\). Consequently, it can be seen that the superadditive nature of the coherent information of the qudit depolarizing channel is lost when the dimension of the system is let to grow indefinitely, i.e. \(\xi(\Lambda_{p}^{\infty})=0,\ \forall p\). This result is especially interesting since it is an example of a channel that does not belong to the degradable or conjugate degradable classes, yet shows channel coherent information with an additive behaviour. ## V Conclusion In this letter we have studied how the potential superadditivity effects of the quantum capacity of the qudit depolarizing channel relate to the dimension of the quantum systems under consideration. We proved that whenever the dimension of the \(d\)-dimensional depolarizing channel increases, the potential gain in terms of qudits per channel use decreases. This is an important result since it implies that for very high dimensional systems the channel coherent information and the quantum channel capacity will be very similar for the depolarizing channel, which in turn implies that random block codes on the typical subspace of the optimal input will be capacity achieving. We also observed that when \(\infty\)-dimensional or bosonic depolarizing channels are considered, the coherent information turns out to be an additive quantity, making the superadditivity gain vanish for all depolarizing probabilities. We have conducted this analysis of the reduction of superadditivity effects for depolarizing channels, but we consider that this type of technique can be used to study how superadditivity behaves in high dimensions for other quantum channels. In this way, it could be determined whether the gain in qudits per channel use decreases with the dimension for every quantum channel, which would imply that the search for such effects should be restricted to low dimensional quantum channels. ## VI Data availability The data that supports the findings of this study is available from the corresponding authors upon reasonable request. ## VII Code availability The code that supports the findings of this study is available from the corresponding authors upon reasonable request. ## VIII Acknowledgements This work was supported by the Spanish Ministry of Economy and Competitiveness through the ADELE project (Grant No.
PID2019-104958RB-C44), by the Spanish Ministry of Science and Innovation through the project Few-qubit quantum hardware, algorithms and codes, on photonic and solid-state systems (PLEC2021-008251), by the Ministry of Economic Affairs and Digital Transformation of the Spanish Government through the QUANTUM ENIA project call - QUANTUM SPAIN project, and by the European Union through the Recovery, Transformation and Resilience Plan - NextGenerationEU within the framework of the Digital Spain 2025 Agenda. ## IX Author contributions J.E.M. conceived the research. J.E.M. proved the results. J.E.M., A.deM.iO. and P.M.C. analyzed the results and drew the conclusions. J.E.M., A.deM.iO. and P.M.C. wrote the manuscript.
2310.16202
Existence of solution to a system of PDEs modeling the crystal growth inside lithium batteries
The life-cycle of electric batteries depends on a complex system of interacting electrochemical and growth phenomena that produce dendritic structures during the discharge cycle. We study herein a system of 3 partial differential equations combining an Allen--Cahn phase-field model (simulating the dendrite-electrolyte interface) with the Poisson--Nernst--Planck systems simulating the electrodynamics and leading to the formation of such dendritic structures. We prove novel existence, uniqueness and stability results for this system and use it to produce simulations based on a finite element code.
Omar Lakkis, Alexandros Skouras, Vanessa Styles
2023-10-24T21:39:16Z
http://arxiv.org/abs/2310.16202v1
# Existence of solution to a system of PDEs modeling the crystal growth inside lithium batteries ###### Abstract. The life of electric batteries depend on a complex system of interacting electrochemical and growth phenomena that produce dendritic structures during the discharge cycle. We study herein a system of 3 partial differential equations combining an Allen-Cahn phase-field model (simulating the dendrite-electrolyte interface) with the Poisson-Nernst-Planck systems simulating the electrodynamics and leading to the formation of such dendritic structures. We prove novel existence, uniqueness and stability results for this system and use it to produce simulations based on a finite element code. ## 1. Introduction As humanity aims at reaching "net-zero" by 2050, replacing fossil fuels with renewable alternatives has spurred the research in storage technologies. One of the favored technologies, the _lithium battery_ is the object of intense scrutiny by scientists and engineers [Akolkar, b, Chen et al., b, Cogswell, Ely et al., Hong and Viswanathan, Mu et al., a, Okajima et al., Yurkiv et al., e.g.]. One of the important phenomena in a Li-metal battery, i.e., a battery with a metallic lithium electrode, is the _electrodeposition_ process during which solid structures known as _dendrites_ may grow attached to the electrode into the electrolyte solution and lead to the battery's deterioration and ultimately its failure [Okajima et al., Akolkar, a, Liang and Chen, Chen et al., b, Yurkiv et al.]. Our aim herein is a rigorous analytical and computational study of a partial differential equation (PDE) based model that describes the electrodeposition process and captures the dynamics of resulting dendritic growth phenomena. The model we study consolidates various models already addressed by the enginnering community [Chen et al., b, Liang and Chen, Liang et al.], with some modifications resulting in a system of PDEs which we rigorously prove to be well-posed. The resulting system, which we fully describe in section 2.1, is captured by three interacting PDEs: 1. a nonlinear anisotropic _Allen-Cahn_ equation, thus _phase-field_ type PDE, with a forcing term that accounts for the Butler-Volmer electrochemical reaction kinetics, which models the concentration of Li atoms; 2. a reaction-diffusion _Nernst-Planck_ equation, which describes the dynamics of the concentration of Li\({}^{+}\); and 3. a _Poisson_ type equation that describes the electric potential that drives the dynamics of the Li\({}^{+}\) ions. An early application of phase-field modeling to electrochemistry was introduced in Guyer et al. [a]. In this work a new model was proposed, derived by a free energy functional that includes the electrostatic effect of charged particles leads to rich interactions between concentration, electrostatic potential and phase stability. This model was further studied by the same people in Guyer et al. [c,b]. These papers gave motivation, mostly to engineers, to study the model and use it to describe lithium batteries. There is extensive literature with papers using variations of the model described by Guyer et al. [a] together with numerical simulations that are trying to capture the behavior of the Li-ion or Li-metal batteries in two spatial dimensions [Okajima et al., Liang et al., Akolkar, a, Zhang et al., a, Liang and Chen, Ely et al., Akolkar, b, Chen et al., b, Cogswell, Yurkiv et al., Hong and Viswanathan, Liu and Guan, Mu et al., a, e.g.] 
and in three [Mu et al., b, e.g.], using a Nernst-Planck-Poisson system coupled with an anisotropic phase-field equation. The study of anisotropy in surfaces and interfaces goes back many decades, for example, in Hoffman and Cahn, a vector function was introduced as an alternative to the scalar function with which the anisotropic free energy of surfaces was usually described. Kobayashi pioneered research on anisotropic dendritic crystal growth phenomena since the mid 80s. He first introduced an anisotropic phase-field model to show the minimal set of factors which can yield a variety of typical dendritic patterns. This and later works were focused more on the solidification of pure material in two and three spatial dimensions [Kobayashi, a,b]. The proposed model was proven to be able to describe realistic dendritic patterns numerically. In two spatial dimensions, the anisotropy occurs from the derivation of the energy. In three spatial dimensions, though, the anisotropy was given by an artificial term rather than from the deriviation of the energy, in order to reduce significantly the computational cost of the phase-field equation. Other works address numerical simulations more thoroughly for two spatial dimensions in Wheeler et al.. A better attempt on numerical simulations in three spatial dimensions was shown in Karma and Rappel. It was the first attempt that actually computed the anisotropic diffusion tensor. It was not until 1993 that analytical results on these models were established. In McFadden et al. an asymptotic analysis in the sharp-interface limit of the model studied in Kobayashi [a], Wheeler et al. was established, including an anisotropic mobility. In Wheeler and McFadden, a \(\xi\)-vector formulation of anisotropic phase-field models was introduced, as in Hoffman and Cahn, in an attempt to investigate the free-boundary problem approached in the sharp interface limit of the phase-field model used to compute three-dimensional dendrites. Mathematical analysis on different viewpoints of the former mentions, yet significantly useful on expanding the knowledge on the anisotropic phase-field models, has been developed in Elliott and Schatzle [a,b], Taylor and Cahn. In the early 2000s, there were three papers, Burman and Rappaz, Burman et al., Burman and Picasso, that treated a coupled system of PDEs, including a phase-field equation with anisotropy, based on the model that was proposed by Warren and Boettinger for the dendritic growth and microsegregation patterns in a binary alloy at constant temperature. However, the proposed anisotropic diffusion tensor was implemented specifically for two spatial dimensions and so the analytical and numerical results were restricted in \(\mathbb{R}^{2}\). In Graser et al. the same model was studied and the analytical results were expanded in three spatial dimensions. In the same paper different time discretizations were studied for their stability. A 3D implementation of the anisotropy was introduced considering the regularized \(\ell^{1}\)-norm [Graser et al., Example 2.2], yet different than in Karma and Rappel, where the 3D anisotropy is given under the same principles that Kobayashi first introduced. More recently, in Li et al. new numerical schemes were introduced, but the numerical results that were presented were in two spatial dimensions. The study of the anisotropic phase-field equations is still in progress and one crucial challenge is the development of efficient numerical algorithms for three spatial dimensions. In Zhang et al. 
[b] numerical algorithms for the anisotropic Allen-Cahn equation with precise nonlocal mass conservation were proposed in this direction. To our best knowledge, there are no rigorous analysis results for the system of PDEs. The Nernst-Planck-Poisson system is often coupled with the Navier-Stokes equation [Zhang and Yin, Bothe et al., e.g.], or studied on its own [Kato, Chen et al., a, e.g.]. In one occasion, Liu and Eisenberg, the proposed model, called Poisson-Nernst-Planck-Fermi, is replacing the Poisson equation with a fourth order Cahn-Hilliard type PDE, but it is not sharing many similarities with the model in this paper. The purpose of this work is to establish well-posedness results for the Nernst-Planck-Poisson system, coupled with the anisotropic phase-field equation, and to provide with numerical simulations of the dendritic crystal growth. Our goal in this paper, is to show that the Nernst-Planck-Poisson system, coupled with the anisotropic phase-field equation, that was introduced in Liang et al., Liang and Chen, Chen et al. [b], is well posed. A source of techniques we use is found in Burman and Rappaz. The difference in our paper is that the solution of the Poisson equation is used to give the vector field in the convection term of the convection-diffusion PDE, in comparison with the model of Burman and Rappaz where they produce the vector field from the solution of the phase-field equation. We have also added forcing terms to our reaction-diffusion PDE and to the Poisson equation, which are dependent on the order parameter. Our forcing term in the Allen-Cahn PDE is also coming from the Butler-Volmer electrochemical reaction kinetics, which give quite different numerical simulations both at the isotropic and the anisotropic cases. For more details, see Section 5. The remainder of this paper is organised as follows. In Section 2 we present a rescaling of the equations (2.1a),(2.1b) and (2.1c), as well as a weak formulation of the system. We also introduce the anisotropy tensor and its properties. In Section 3 we will present the main result of this paper which is an existence result, using the Rothe's method, for a weak solution of the system. In Section 5 we present some numerical simulations. ## 2. The PDE model and its weak formulation In this section we present the model for the Lithium batteries. In Section 2.1 we start by presenting the equations that consist our PDE system. In Section 2.2 we "nondimensionalize" the model, so that we work without units and perform correct and efficient computations later. In Section 2.3 we pass to a weak formulation of the model. In Section 2.4 we introduce the anisotropic diffusion tensor, its derivation and useful properties. At last, we present the main result of this paper, in Section 2.5. ### A Nernst-Planck-Poisson-Allen-Cahn model for Lithium batteries Let \(\Omega\subseteq\mathbb{R}^{d},d=2,3\), be a bounded domain with Lipschitz boundary and \([0,T^{*}]\) is the time interval, with \(T^{*}>0\) being a target "final" time. The model under study consists of an initial-boundary value problem for the three unknowns \(u,c,\phi:\Omega\times[0,T^{*}]\rightarrow\mathbb{R}\) respectively indicating the phase-field variable (denoting the concentration of crystallized Li atoms), the concentration of Li\({}^{+}\) ions, and the electric potential. The model we consider is an adjusted version of the model that was introduced in Chen et al. [b]. 
Our addition is a relaxation parameter, denoted by \(\epsilon\), which is useful for keeping the numerical results in the appropriate range of values. The rest remain the same, but with a different set of values. See Section 5 for more information on the numerical results. The model takes the form: \[\epsilon^{2}\partial_{t}u=\nabla\cdot[\boldsymbol{A}(\nabla u)\nabla u ]-\gamma g^{\prime}(u)-\kappa h^{\prime}(u)\big{(}\operatorname{e}^{\frac{(1- \alpha)nF_{\text{in}}}{RT}}-c\operatorname{e}^{\frac{-\alpha nF_{\text{in}}}{RT }}\big{)}, \tag{2.1a}\] \[\partial_{t}c=\nabla\cdot\big{[}D(u)\nabla c+D_{1}(u,c)\frac{nF}{RT }\nabla\phi\big{]}-\mu\partial_{t}u,\] (2.1b) \[\nabla\cdot\big{[}\sigma(u)\nabla\phi\big{]}=nFc_{s}\partial_{t}u, \tag{2.1c}\] in \(\Omega\times[0,T^{*}]\), where \(\gamma,\kappa,\mu>0\), \(\alpha\in[0,1]\) is a parameter that affects the symmetry of the magnitude of the forcing term in (2.1a), \(n,F,R,T\) respectively are the valence of the chemical reaction of the \(\operatorname{Li}^{+}\) with the electrons \(\operatorname{e}^{-}\), the Faraday number, the gas constant and the temperature. The activation overpotential is given by \(\eta_{a}\) and it is negative, \(c_{s}>0\) represents the site density of Li-metal, \(g,h,\sigma\) and \(D\) are continuously differentiable functions and represent the double-well potential, the primitive of the double-well potential, the effective conductivity and the effective diffusion coefficient respectively and are formulated as follows, \[g(s)=\left\{\begin{array}{ll}s^{2}(1-s)^{2}/4,&0\leq s\leq 1,\\ 0,&\text{elsewhere},\end{array}\right. \tag{2.2}\] \[h(s)=\left\{\begin{array}{ll}1,&s>1,\\ s^{3}(6s^{2}-15s+10),&0\leq s\leq 1,\\ 0,&s<0,\end{array}\right. \tag{2.3}\] \[D(s)=D^{e}h(s)+D^{\text{s}}(1-h(s))\text{ for }s\in\mathbb{R}, \tag{2.4}\] \[\sigma(s)=\sigma^{e}h(s)+\sigma^{s}(1-h(s))\text{ for }s\in\mathbb{R}, \tag{2.5}\] where \(D^{\text{e}},D^{\text{s}}>0\) are the diffusion in the electrode and the electrolyte solution respectively. The same applies to \(\sigma^{\text{e}},\sigma^{\text{s}}>0\) for the conductivity. The double-well potential and its primitive are Lipschitz continuous functions in the way they are defined. The effective diffusion and conductivity functions are also monotone functions when \(0\leq u\leq 1\). The fact that the above coefficients are positive imply that these functions are also positive for these values of \(u\). Also, we define \(D_{1}:\mathbb{R}^{2}\to\mathbb{R}\) as \[D_{1}(w,s)=\left\{\begin{array}{ll}D_{\text{min}},&s>1,\\ D(w)s,&0\leq s\leq 1,\\ 0,&s<0,\end{array}\right. \tag{2.6}\] where \(D_{\text{min}}:=\min_{s\in\mathbb{R}}D(s)\). Although our analysis applies to general domains, \(\Omega\), we will focus on the specific cylindrical geometry \(\Omega=[0,L_{1}]\times B\subset\mathbb{R}^{d}\), \(d=2,3\), with \(L_{1}>0\), \(B\in\mathbb{R}^{d-1}\) e.g., \(B:=[0,L_{2}]^{d-1}\) for some \(L_{2}>0\) and boundary \(\partial\Omega=\Gamma_{1}\cup\Gamma_{2}\cup\Gamma^{\prime}\), with \(\Gamma_{1},\Gamma_{2}\) representing the segments of the boundary where we consider (sometimes two different) Dirichlet boundary conditions, and \(\Gamma^{\prime}\) where we take Neumann boundary conditions. In particular, in \(d=2\), we have \[\Gamma_{1} :=\left\{\mathbf{x}=(x,\mathbf{y})\in\Omega:\;x=0\right\}, \tag{2.7}\] \[\Gamma_{2} :=\left\{\mathbf{x}=(x,\mathbf{y})\in\Omega:\;x=L_{1}\right\}\] (2.8) \[\Gamma :=\Gamma_{1}\cup\Gamma_{2}\text{ and }\Gamma^{\prime}=\partial \Omega\smallsetminus\Gamma. 
\tag{2.9}\] The boundary conditions for (2.1a),(2.1b) and (2.1c) are the following, \[\mathbf{n}_{\Omega}\!\cdot\!\mathbf{A}(\nabla u)\nabla u=0\text{ on } \partial\Omega\times[0,T^{*}], \tag{2.10a}\] \[\mathbf{n}_{\Omega}\!\cdot\!D(u)\nabla c=\mathbf{n}_{\Omega}\! \cdot\!D_{1}(u,c)\nabla\phi\text{ on }\partial\Omega\times[0,T^{*}],\] (2.10b) \[\mathbf{n}_{\Omega}\!\cdot\!\sigma(u)\nabla\phi=0\text{ on } \Gamma^{\prime}\times[0,T^{*}],\] \[\phi=\phi_{-}\text{ on }\Gamma_{1}\times[0,T^{*}],\] (2.10c) \[\phi=0\text{ on }\Gamma_{2}\times[0,T^{*}],\] with \(\phi_{-}\in[-2,0)\) and \(\mathbf{n}_{\Omega}\) denoting the outward normal vector to \(\Omega\). For the equation (2.1b) we use the natural boundary conditions. Since the system is time dependent, we need to implement initial values for the order parameter and the concentration of \(\mathrm{Li}^{+}\). We choose \[u(\mathbf{x},0)=u_{0}(\mathbf{x})\text{ in }\Omega \tag{2.11a}\] \[c(\mathbf{x},0)=c_{0}(\mathbf{x})\text{ in }\Omega \tag{2.11b}\] so that \(|u_{0}(\mathbf{x})|\leq 1\) and \(|c_{0}(\mathbf{x})|\leq 1\). ### Rescaling We now rescale, by "nondimensionalizing" it, the system of PDEs (2.1a)-(2.1c), to ease the mathematical analysis and the associated numerical simulations. We start with the rescaling of the system (2.1a)-(2.1c). We set \(\tilde{T}=T^{*}/\Delta t_{0}\) and \(\tilde{L}_{i}=L_{i}/l_{0}\), \(i=1,2\), where \(l_{0},\Delta t_{0}\) are length and time reference values for the model and using Table 1 in Chen et al. [b] we define, \[\mathbf{x}=l_{0}\tilde{\mathbf{x}},\,t=\Delta t_{0}\tilde{t},\,D(u)= \frac{l_{0}^{2}}{\Delta t_{0}}\tilde{D}(u),\,\sigma(u)=\frac{l_{0}^{2}c_{0}F^{ 2}}{\Delta t_{0}RT}\tilde{\sigma}(u),\\ \phi=\frac{RT}{F}\tilde{\phi},\,\nabla\phi=\frac{RT}{l_{0}F} \nabla\tilde{\phi},\,\Delta\phi=\frac{RT}{l_{0}^{2}}\Delta\tilde{\phi}, \tag{2.12}\] \[\tilde{\Omega}\times[0,\tilde{T}]=[0,\tilde{L}_{1}]\times[0,\tilde{L}_{2}] \times[0,\tilde{T}]. \tag{2.13}\] We introduce the following coefficients \[C_{1}=\kappa\,\mathrm{e}^{-(1-a)nF|\eta_{a}|/RT} \tag{2.14a}\] \[C_{2}=\kappa\,\mathrm{e}^{anF|\eta_{a}|/RT}. \tag{2.14b}\] For the consistency of our problem, we consider the following cutoff function of the coefficient of the forcing term \[m(c)=\left\{\begin{array}{ll}C_{1}-C_{2},&\;\;\;c>1,\\ C_{1}-cC_{2},&\;\;0\leq c\leq 1,\\ C_{1},&\;\;c<0.\end{array}\right. \tag{2.15}\] Using the above rescaling, and dropping the tilde notation for ease of presentation, the dimensionless form of (2.1a)-(2.1c) is given by \[\epsilon^{2}\partial_{t}u=\nabla\cdot[\mathbf{A}(\nabla u)\nabla u]- \gamma g^{\prime}(u)+m(c)h^{\prime}(u)\text{ in }\Omega\times[0,T], \tag{2.16a}\] \[\partial_{t}c=\nabla\cdot\big{[}D(u)\nabla c+D_{1}(u,c)\nabla \phi\big{]}-\mu\partial_{t}u\text{ in }\Omega\times[0,T],\] (2.16b) \[\nabla\cdot\big{[}\sigma(u)\nabla\phi\big{]}=\nu_{1}\partial_{t}u \text{ in }\Omega\times[0,T], \tag{2.16c}\] together with the initial and boundary data (2.10a)-(2.10c) and (2.11a)-(2.11b). ### The weak formulation We next reformulate system the NPPAC system of 2.1 in weak form. To this aim, we briefly recall the functional space set-up, refering for details to standard texts [e.g. Evans]. We write \(\mathrm{L}_{p}(Q;\mathscr{X})\), for an open domain \(Q\in\mathbb{R}^{q}\) and a normed vector space \(\mathscr{X}\), as the space of (Lebesgue) \(p\)-summable functions \(\varphi:Q\to\mathscr{X}\), equipped with a norm such that \(\|\varphi\|_{\mathrm{L}_{p}(Q;\mathscr{X})}^{p}:=\int_{Q}\|\varphi\|_{ \mathscr{X}}^{p}\). 
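For later reference (and for the simulations of Section 5), the nonlinearities (2.2)-(2.6) and the cutoff (2.15) can be collected in one short sketch; the coefficients \(D^{\mathrm{e}},D^{\mathrm{s}},\sigma^{\mathrm{e}},\sigma^{\mathrm{s}},C_{1},C_{2}\) are left as parameters, since their values are taken from Chen et al. [b].

```python
import numpy as np

def g(s):
    """Double-well potential (2.2)."""
    s = np.asarray(s, dtype=float)
    return np.where((s >= 0) & (s <= 1), s**2 * (1 - s)**2 / 4.0, 0.0)

def h(s):
    """Interpolation polynomial (2.3); clipping reproduces the constant branches."""
    s = np.clip(s, 0.0, 1.0)
    return s**3 * (6 * s**2 - 15 * s + 10)

def D(s, D_e, D_s):
    """Effective diffusion coefficient (2.4)."""
    return D_e * h(s) + D_s * (1 - h(s))

def sigma(s, sigma_e, sigma_s):
    """Effective conductivity (2.5)."""
    return sigma_e * h(s) + sigma_s * (1 - h(s))

def D1(w, s, D_e, D_s):
    """Coefficient (2.6); D_min = min(D_e, D_s) since h takes values in [0, 1]."""
    s = np.asarray(s, dtype=float)
    return np.where(s > 1, min(D_e, D_s), np.where(s < 0, 0.0, D(w, D_e, D_s) * s))

def m(c, C1, C2):
    """Cutoff reaction coefficient (2.15) built from the Butler-Volmer constants (2.14)."""
    c = np.asarray(c, dtype=float)
    return np.where(c > 1, C1 - C2, np.where(c < 0, C1, C1 - c * C2))
```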
We write \(\mathrm{L}_{p}(Q)^{k}\) when \(\mathscr{X}=\mathbb{R}^{k}\). When \(p=2\) and \(\mathscr{X}\) has an inner product, the space \(\mathrm{L}_{2}(Q;\mathscr{X})\) is also equipped with inner product \((\varphi,\psi)_{\mathrm{L}_{2}(Q;\mathscr{X})}:=\int_{Q}\left(\varphi,\psi \right)_{\mathscr{X}}\) for each \(\varphi,\psi\in\mathrm{L}_{2}(Q;\mathscr{X})\). Generalising, we denote Sobolev spaces of weakly differentiable functions \(\phi:Q\to\mathscr{X}\) up to order \(m\in\mathbb{N}\) with \(p\)-summable derivatives by \(\mathrm{W}_{p}^{m}(Q;\mathscr{X})\) with leading seminorm \(|\phi|_{\mathrm{W}_{p}^{m}(\Omega;\mathscr{X})}:=\|\operatorname{D}^{m}\! \phi\|_{\mathrm{L}_{p}(Q;\mathscr{X})}\) and full norm \(\|\phi\|_{\mathrm{W}_{p}^{m}(\Omega;\mathscr{X})}^{p}:=\sum_{j=0}^{m}|\phi|_{ \mathrm{W}_{p}^{j}(\Omega;\mathscr{X})}^{p}\). When \(p=2\) we shorten notation \(\mathrm{W}_{2}^{m}(Q;\mathscr{X})\) to \(\mathrm{H}^{m}(Q;\mathscr{X})\). As for the Lebesgue spaces, when \(\mathscr{X}=\mathbb{R}^{k}\) we write \(\mathrm{W}_{p}^{m}(Q)^{k}\) and \(\mathrm{H}^{m}(Q)^{k}\) and drop \(k=1\). For a Hilbert Sobolev space with Dirichlet boundary condition on \(\Sigma\subseteq\partial Q\) we write \(\mathrm{H}_{0|\Sigma}^{m}(Q)\) and \(\mathrm{H}_{0}^{m}(Q)\) if \(\Sigma=\partial Q\). First, we reformulate the nonhomogeneous Dirichlet boundary conditions for both (2.16b) and (2.16c). To this end we introduce the decomposition \[\bar{\phi}=\phi-\phi_{-}(L_{1}-x/L_{1}). \tag{2.17}\] It is clear that \(\phi_{-}(L_{1}-x/L_{1})x\) has boundary conditions that coincide with the boundary conditions (2.10c) of \(\phi\). A direct calculation leads to \[\nabla\cdot\big{[}\sigma(u)\nabla((L_{1}-x/L_{1})\phi_{-})\big{]}=-\nu_{2} \sigma^{\prime}(u)\partial_{x}u, \tag{2.18}\] where \(\nu_{2}=(\phi_{-}/L_{1})\). So we get \[\nabla\cdot\big{[}\sigma(u)\nabla\bar{\phi}\big{]}=\nu_{1}\partial _{t}u-\nu_{2}\sigma^{\prime}(u)\partial_{x}u\text{ in }\Omega\times[0,T],\] \[\mathbf{n}_{\Omega}\cdot\!\sigma(u)\nabla\bar{\phi}=0\text{ on } \Gamma^{\prime}\times[0,T], \tag{2.19}\] \[\bar{\phi}|_{\Gamma}=0\text{ on }\Gamma\times[0,T].\] For simplicity of presentation we drop the bar notation for \(\phi\) and the system in its final form is the following: \[\epsilon^{2}\partial_{t}u=\nabla\cdot[\mathbf{A}(\nabla u)\nabla u]- \gamma g^{\prime}(u)+m(c)h^{\prime}(u)\text{ in }\Omega\times[0,T], \tag{2.20a}\] \[\mathbf{n}_{\Omega}\cdot\!\mathbf{A}(\nabla u)\nabla u=0\text{ on } \partial\Omega\times[0,T],\] (2.20b) \[u(0)=u_{0}\text{ in }\Omega,\] \[\partial_{t}c=\nabla\cdot\big{[}D(u)\nabla c+D_{1}(u,c)\nabla\phi \big{]}-\mu\partial_{t}u\text{ in }\Omega\times[0,T],\] \[\mathbf{n}_{\Omega}\cdot\!D(u)\nabla c=\mathbf{n}_{\Omega}\cdot \!D_{1}(u,c)\nabla\phi\text{ on }\partial\Omega\times[0,T],\] (2.20c) \[c(0)=c_{0}\text{ in }\Omega,\] \[\nabla\cdot\big{[}\sigma(u)\nabla\phi\big{]}=\nu_{1}\partial_{t}u -\nu_{2}\sigma^{\prime}(u)\partial_{x}u\text{ in }\Omega\times[0,T],\] \[\mathbf{n}_{\Omega}\cdot\!\sigma(u)\nabla\phi=0\text{ on } \Gamma^{\prime}\times[0,T],\] (2.20d) \[\phi|_{\Gamma}=0\text{ on }\Gamma\times[0,T],\] with the initial data as described in (2.11a) and (2.11b) and with \[g^{\prime}(s)=\left\{\begin{array}{ll}s(s-1)(s-\frac{1}{2}),&0\leq s\leq 1, \\ 0,&\text{elsewhere},\end{array}\right. \tag{2.21}\] \[h^{\prime}(s)=\left\{\begin{array}{ll}30s^{2}(s-1)^{2},&0\leq s\leq 1, \\ 0,&\text{elsewhere}.\end{array}\right. 
\tag{2.22}\] A weak formulation of the above problem is given as follows: Let \(u(0)=u_{0}\in\mathrm{H}^{1}(\Omega)\) and \(c(0)=c_{0}\in\mathrm{L}_{2}(\Omega)\). Then, find \((u,c,\phi)\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))^{2}\times(\mathrm{L }_{2}([0,T];\mathrm{H}^{1}_{0|\Gamma}(\Omega)))\) with \((\partial_{t}u,\partial_{t}c)\in\big{(}\,\mathrm{L}_{2}([0,T];\mathrm{L}_{2}( \Omega))\big{)}^{2}\) such that \[\epsilon^{2}(\partial_{t}u,v)+(\mathbf{A}(\nabla u)\nabla u,\nabla v)+ \gamma(g^{\prime}(u),v)=(m(c)h^{\prime}(u),v), \tag{2.23a}\] \[(\partial_{t}c,\chi)+(D(u)\nabla c,\nabla\chi)+(D_{1}(u,c)\nabla \phi,\nabla\chi)=-\mu(\partial_{t}u,\chi),\] (2.23b) \[(\sigma(u)\nabla\phi,\nabla\eta)=(f(u),\eta), \tag{2.23c}\] for all \((v,\chi,\eta)\in\mathrm{H}^{1}(\Omega)^{2}\times\mathrm{H}^{1}_{0|\Gamma}(\Omega)\), where \(f(u)=-\nu_{1}\partial_{t}u+\nu_{2}\sigma^{\prime}(u)\partial_{x}u\). ### Anisotropic diffusion tensor We use an anisotropic diffusion tensor for the order parameter that is related to the anisotropic function \(a\), as proposed by Karma and Rappel, is given by \[a(\mathbf{p})=a_{0}(1-3\delta)\left(1+\frac{4\delta}{1-3\delta}\frac{\sum_{j=1}^{d }{p_{j}}^{4}}{\left|\mathbf{p}\right|^{4}}\right) \tag{2.24}\] for some material dependent parameters \(a_{0},\delta>0\). This leads to the diffusion term of the phase-field equation, which is obtained from the anisotropic Dirichlet energy \[J_{a}(w)=\int_{\Omega}\frac{a^{2}(\nabla w)}{2}|\nabla w|^{2}, \tag{2.25}\] with \(w\in\mathrm{H}^{1}(\Omega)\). In two spatial dimensions, this functional is proven in Burman and Rappaz to be strictly convex in \(w\) for all \(w\in\mathrm{H}^{1}(\Omega)\) under the condition of \(\delta<\delta_{0}=1/15\). The anisotropic Dirichlet energy is differentiable and its derivative is an operator \(J_{a}^{\prime}:\mathrm{H}^{1}(\Omega)\to\mathrm{H}^{1}(\Omega)^{\prime}\) that exists for every \(v\in\mathrm{H}^{1}(\Omega)\) and is given by \[J_{a}^{\prime}(w)[v]=\int_{\Omega}\mathbf{A}(\nabla w)\nabla w\cdot\nabla v, \tag{2.26}\] for all \(v\in\mathrm{H}^{1}(\Omega)\). The tensor-valued field \(\mathbf{A}(\mathbf{p})\) of \(\mathbf{p}\in\mathbb{R}^{d}\smallsetminus\{\mathbf{0}\}\), which models the anisotropy of the growing crystals due to the Lithium's cubic crystalline structure as in Karma and Rappel, Burman and Rappaz, is defined by \[\mathbf{A}(\mathbf{p})=a(\mathbf{p})\nabla a(\mathbf{p})\mathbf{p}^{\intercal}+a^{2} (\mathbf{p})\mathbf{I}\\ =16a(\mathbf{p})\delta|\mathbf{p}|^{-6}\left[p_{i}p_{j}({p_{i}}^{2}|\mathbf{ p}|^{2}-\sum_{k=1}^{d}{p_{k}}^{4})\right]_{i=1,\ldots,d}^{j=1,\ldots,d}+a^{2}( \mathbf{p})\mathbf{I}. \tag{2.27}\] Since the matrix \(\mathbf{A}\) has all its entries bounded, the derivative is bounded by the following upper and lower bounds \[c_{A}|w|^{2}_{\mathrm{H}^{1}(\Omega)}\leq\int_{\Omega}\mathbf{A}(\nabla w)|\nabla w |^{2}\leq C_{A}|w|^{2}_{\mathrm{H}^{1}(\Omega)}. \tag{2.28}\] Due to the convexity of the tensor, the following inequality holds \[J_{a}(w)-J_{a}(v)\leq J_{a}^{\prime}(w)[w-v]=\int_{\Omega}\mathbf{A}(\nabla w) \nabla w\nabla(w-v). \tag{2.29}\] Also, the mapping \(w\in\mathrm{H}^{1}(\Omega)\mapsto J^{\prime}_{a}(w)\in\mathrm{H}^{1}(\Omega)^{\prime}\) is monotone and hemicontinuous. We refer to Burman and Rappaz for further details on the proofs of the properties (2.28) and (2.29). ### Main result We present here the main result of this paper. 
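Before the main result is stated, note that the anisotropy (2.24) and the tensor (2.27) are easy to evaluate pointwise; a minimal sketch, in which \(a_{0}\) and \(\delta\) are placeholder values and \(\nabla a\) is approximated by central differences rather than taken from the expanded formula in (2.27):

```python
import numpy as np

def a(p, a0=1.0, delta=0.05):
    """Anisotropy function (2.24); delta < 1/15 keeps the 2D Dirichlet energy strictly convex."""
    p = np.asarray(p, dtype=float)
    return a0 * (1 - 3 * delta) * (1 + (4 * delta / (1 - 3 * delta))
                                   * np.sum(p**4) / np.linalg.norm(p)**4)

def A(p, a0=1.0, delta=0.05, step=1e-6):
    """Anisotropic diffusion tensor (2.27): a(p) grad a(p) p^T + a(p)^2 I, for p != 0."""
    p = np.asarray(p, dtype=float)
    grad = np.array([(a(p + step * e, a0, delta) - a(p - step * e, a0, delta)) / (2 * step)
                     for e in np.eye(p.size)])
    return a(p, a0, delta) * np.outer(grad, p) + a(p, a0, delta)**2 * np.eye(p.size)

p = np.array([1.0, 0.3])
print(p @ A(p) @ p, p @ p)   # the quadratic form stays comparable to |p|^2, cf. (2.28)
```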
### Theorem (Existence) For every \((u_{0},c_{0})\in(\mathrm{H}^{1}(\Omega))^{2}\) and \(T>0\) there exist \[u\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\cap\mathrm{H}^{1}([0,T]; \mathrm{L}_{2}(\Omega))\cap\mathrm{L}_{\infty}([0,T];\mathrm{L}_{2}(\Omega)) \cap\mathrm{L}_{\infty}([0,T];\mathrm{L}_{\infty}(\Omega)),\] \[c\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\cap\mathrm{H}^{1}([0,T]; \mathrm{L}_{2}(\Omega))\cap\mathrm{L}_{\infty}([0,T];\mathrm{L}_{2}(\Omega))\] and \[\phi\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega)),\] such that the vector \((u,c,\phi)\) is a unique weak solution that satisfies \((u(0),c(0))=(u_{0},c_{0})\) and (2.23a)-(2.23c) for almost every \(t\in[0,T]\) and every \((v,\eta)\in\mathrm{H}^{1}(\Omega)\times\mathrm{H}^{1}_{0|\Gamma}(\Omega)\). The proof of Theorem 2.6 will be split in Sections 3 and 4, after establishing all the auxiliaries, and can be summarized as: 1. Write an implicit Euler time semidiscretization for which we prove existence of a solution, called _semidiscrete approximation_. 2. Show maximum principle results. 3. Derive energy estimates that allow to find sequences of the semidiscrete approximation that are weakly compact in the timestep. 4. Show that the weak limit of the semidiscrete approximation solves system (2.23a)-(2.23c). ## 3. Time discretization In this Section we focus on the time discretization of the weak formualtion, on existence results of the system of PDEs for each time-step as well as on energy estimates for each PDE. We first start by discretizing in time equations (2.23a)-(2.23c) with a time step \(\tau>0\). Then, we continue by proving existence of a solution \((u^{k},c^{k},\phi^{k})_{k=0,\dots,K}\) for a time-discrete approximation of the weak formulation (2.23a)-(2.23c) in Theorem 3.1. The proof of Theorem 3.1 is broken into smaller parts. We first apply operator splitting to prove existence of each time-discrete equation separately, assuming the two other unknowns are given from the previous time instant. In Lemma 3.2 we prove the existence of a unique solution \(u^{k}\) to (3.1a), with \(c^{k-1}\) given, and in Lemma 3.3 we show that \(u^{k}\) is bounded in \(k\). Then, in Lemma 3.4 we prove the existence of a unique solution \(\phi^{k}\) to (3.1b) with given \(u^{k}\). In Lemma 3.6 we show that there is a unique solution \(c^{k}\) to (3.1c) with given \(u^{k},\phi^{k}\). We, finally, prove uniform bounds for \(u^{k}\) in \(\mathrm{L}_{\infty}([0,T];\mathrm{H}^{1}(\Omega))\cap\mathrm{L}_{\infty}([0,T] ;\mathrm{L}_{\infty}(\Omega))\cap\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega)) \cap\mathrm{H}^{1}([0,T];\mathrm{L}_{2}(\Omega))\) in Lemma 3.7 and in Lemma 3.9 we show \(\phi^{k}\) is uniformly bounded in \(k\) in \(\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\) and in Lemma 3.10 we show uniform energy estimates for \(c^{k}\) in \(\mathrm{L}_{\infty}([0,T];\mathrm{L}_{2}(\Omega))\cap\mathrm{L}_{2}([0,T]; \mathrm{H}^{1}(\Omega))\cap\mathrm{H}^{1}([0,T];\mathrm{H}^{1}(\Omega)^{\prime})\). 
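The sequential structure just described (one implicit Euler step of (3.1a), then (3.1b), then (3.1c)) can be sketched as a plain time-stepping loop; the three `solve_*` callables below are hypothetical placeholders standing in for the finite element solves of the three discrete problems.

```python
def rothe_splitting(u0, c0, tau, num_steps,
                    solve_phase_field, solve_poisson, solve_transport):
    """Operator-splitting time stepping for the discretized NPPAC system.

    At step k: (3.1a) gives u^k from (u^{k-1}, c^{k-1}); (3.1b) gives phi^k from
    u^k (and the difference quotient d_tau u^k); (3.1c) gives c^k from
    (u^k, phi^k, c^{k-1}).  The solver callables are placeholders, not actual code.
    """
    u, c = u0, c0
    history = [(u0, c0, None)]
    for _ in range(num_steps):
        u_new = solve_phase_field(u_prev=u, c_prev=c, tau=tau)        # (3.1a)
        phi = solve_poisson(u=u_new, u_prev=u, tau=tau)               # (3.1b)
        c_new = solve_transport(u=u_new, u_prev=u, c_prev=c,
                                phi=phi, tau=tau)                     # (3.1c)
        u, c = u_new, c_new
        history.append((u, c, phi))
    return history
```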
We use the the backward difference quotient, \(d_{\tau}w^{k}=(w^{k}-w^{k-1})/\tau\) with \(\tau\) being the timestep and we linearize the convection term in (2.23b), to obtain the following discretization of (2.23a)-(2.23c) \[\epsilon^{2}(d_{\tau}u^{k},v)+(\boldsymbol{A}(\nabla u^{k})\nabla u ^{k},\nabla v)+\gamma(g^{\prime}(u^{k}),v)=(m(c^{k-1})h^{\prime}(u^{k}),v), \tag{3.1a}\] \[(\sigma(u^{k})\nabla\phi^{k},\nabla\eta)=(-\nu_{1}d_{\tau}u^{k}+ \nu_{2}\sigma^{\prime}(u^{k})\partial_{x}u^{k},\eta),\] (3.1b) \[(d_{\tau}c^{k},\chi)+(D(u^{k})\nabla c^{k},\nabla\chi)=-\mu(d_{\tau}u^{k}, \chi)-(D_{1}(u^{k},c^{k-1})\nabla\phi^{k},\nabla\chi), \tag{3.1c}\] for all \((v,\eta)\in\mathrm{H}^{1}(\Omega)\times\mathrm{H}^{1}_{0|\Gamma}(\Omega)\) and for all \(k\geq 1,k\in\mathbb{N}\) with \(u^{0}=u_{0}(\boldsymbol{x})\), \(c^{0}=c_{0}(\boldsymbol{x})\). Note that the equations in system (3.1a)-(3.1c) are not simultaneous, but will be solved in sequential order, first the phase-field equation (3.1a) with given \(c^{k-1}\), then the Poisson equation (3.1b) with given \(u^{k}\) and finally the convection-diffusion equation (3.1c) with given \(u^{k}\) and \(\phi^{k}\). ### Theorem (Existence for the time-discrete problem) For every \(\tau>0\) with given \(u^{k-1}\in\mathrm{L}_{\infty}(\Omega)\), \(c^{k-1}\in\mathrm{H}^{1}(\Omega)\) and \(\phi^{k-1}\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\), there exists a unique triple \((u^{k},c^{k},\phi^{k})\in(\mathrm{H}^{1}(\Omega))^{2}\times\mathrm{H}^{1}_{0| \Gamma}(\Omega)\), with \(k=1,...,K=\lceil T/\tau\rceil\), satisfying (3.1a)-(3.1b). ### Lemma (Semidiscrete scheme for the Allen-Cahn equation) For every \(\tau>0\) and \(k=1,...,K,\)\(K=\lceil T/\tau\rceil\), with \(u^{0}=u_{0}\in\mathrm{H}^{1}(\Omega)\) there is a unique solution \(u^{k}\in\mathrm{H}^{1}(\Omega)\) that satisfies (3.1a) for every \(v\in\mathrm{H}^{1}(\Omega)\). Proof.: We rewrite (3.1a) as follows \[\epsilon^{2}(d_{\tau}u^{k},v)+I^{\prime}(u^{k})v=0 \tag{3.2}\] for every \(v\in\mathrm{H}^{1}(\Omega)\), where \(I^{\prime}:\mathrm{H}^{1}(\Omega)\to\mathrm{H}^{1}(\Omega)^{\prime}\) is given by \[I^{\prime}(w)v=\int_{\Omega}\boldsymbol{A}(\nabla w)\nabla w\cdot\nabla v+ \gamma\int_{\Omega}g^{\prime}(w)v-\int_{\Omega}m(c^{k-1})h^{\prime}(w)v \tag{3.3}\] and is the (Frechet) derivative of the energy functional \(I:H^{1}(\Omega)\to\mathbb{R}\) with \[I(w)=\int_{\Omega}\Big{(}\frac{a(\nabla w)^{2}}{2}|\nabla w|^{2}+\gamma g^{ \prime}(w)-m(c^{k-1})h^{\prime}(w)\Big{)}. \tag{3.4}\] We now seek a minimiser to the functional \(I_{k}:\mathrm{H}^{1}(\Omega)\to\mathbb{R}\) \[I_{k}(w)=\frac{\epsilon^{2}}{2\tau}\|w-u^{k-1}\|^{2}_{\mathrm{L}_{2}(\Omega)}+ I(w), \tag{3.5}\] which is the functional from which we derive the elliptic equation (3.2). From Theorem 3.30 in Dacorogna, for the existence of a minimiser of (3.5) it is enough to prove that \(F:\Omega\times\mathbb{R}\times\mathbb{R}^{d}\to\mathbb{R}\), with \[F(x,\xi,\boldsymbol{p})=\frac{a(\boldsymbol{p})^{2}}{2}|\boldsymbol{p}|^{2}+ \frac{\epsilon^{2}}{2\tau}(\xi-u^{k-1})^{2}+\gamma g^{\prime}(\xi)-m(c^{k-1}) h^{\prime}(\xi), \tag{3.6}\] is a Caratheodory function which is coercive and satisfies a sufficient growth condition. The function (3.6) is a Caratheodory function according to the definition [Bartels, Remark 2.4(i)]. We continue by showing the coercivity condition. 
From the assumption of Theorem 3.1, \(u^{k-1}\in\mathrm{L}_{\infty}(\Omega)\) and from (2.2), (2.3) and (2.15) we deduce that there are constants \(\gamma_{1},\gamma_{2}>0\) such that \[-\gamma_{1}\leq\gamma g^{\prime}(\xi)-m(c^{k-1})h^{\prime}(\xi)\leq\gamma_{2}. \tag{3.7}\] Then, by using Cauchy-Schwartz and Young's inequalities, the lower bounds in (2.28) and (3.7) and that \(u^{k-1}\in\mathrm{L}_{\infty}(\Omega)\) we establish the following lower bound for \(F\): \[F(x,\xi,\boldsymbol{p}) =\frac{a(\boldsymbol{p})^{2}}{2}|\boldsymbol{p}|^{2}+\frac{ \epsilon^{2}}{2\tau}(\xi-u^{k-1})^{2}+\gamma g^{\prime}(\xi)-m(c^{k-1})h^{ \prime}(\xi) \tag{3.8}\] \[\geq\frac{c_{A}}{2}|\boldsymbol{p}|^{2}+\frac{\epsilon^{2}}{4\tau }\xi^{2}-\frac{\epsilon^{2}}{2\tau}(u^{k-1})^{2}-\gamma_{1}. \tag{3.9}\] Using the upper bounds in (2.28) and (3.7) we see that \(F\) fulfills the following growth condition \[F(x,\xi,\boldsymbol{p}) =\frac{a(\boldsymbol{p})^{2}}{2}|\boldsymbol{p}|^{2}+\frac{ \epsilon^{2}}{2\tau}(\xi-u^{k-1})^{2}+\gamma g^{\prime}(\xi)-m(c^{k-1})h^{ \prime}(\xi) \tag{3.10}\] \[\leq\frac{C_{A}}{2}|\boldsymbol{p}|^{2}+\frac{\epsilon^{2}}{\tau }\xi^{2}+\frac{\epsilon^{2}}{\tau}(u^{k-1})^{2}+\gamma_{2}. \tag{3.11}\] It only remains to show the uniqueness of the minimizer. If \(w_{1}\) and \(w_{2}\) both minimize \(I_{k}\), then they also satisfy (3.2) and we may write \[\int_{\Omega}\frac{\epsilon^{2}}{\tau}(w_{1}-w_{2})v+\int_{\Omega}(\mathbf{A}( \nabla w_{1})\nabla w_{1}-\mathbf{A}(\nabla w_{2})\nabla w_{2})\cdot\nabla v\,+\, \gamma\int_{\Omega}\Big{(}g^{\prime}(w_{1})-g^{\prime}(w_{2})\Big{)}v \tag{3.12}\] \[-\int_{\Omega}m(c^{k-1})\Big{(}h^{\prime}(w_{1})-h^{\prime}(w_{2})\Big{)}v=0. \tag{3.13}\] Choosing \(v=w_{1}-w_{2}\) in (3.12) and using the Lipschitz continuity of \(g\), (2.2), and \(h\), (2.3), as well as the lower bound (2.28), we obtain \[\frac{\epsilon^{2}}{\tau}\|w_{1}-w_{2}\|_{\mathrm{L}_{2}(\Omega)}^{2}+c_{A}|w _{1}-w_{2}|_{\mathrm{H}^{1}(\Omega)}^{2}\leq C\|w_{1}-w_{2}\|_{\mathrm{L}_{2}( \Omega)}^{2}, \tag{3.14}\] where \(C\) is independent of \(\tau\). By taking \(\tau\leq\epsilon^{2}/C\) we conclude that \(w_{1}=w_{2}\) and thus uniqueness of the minimizer. ### Lemma (Maximum principle) Assume that the initial value of (3.1a) satisfies \[0\leq u^{0}\leq 1 \tag{3.15}\] and the time-step \(\tau\) of the time discretization of (2.23a) is sufficiently small. Then, the solution \(u^{k}\) satisfies \[0\leq u^{k}\leq 1, \tag{3.16}\] almost everywhere for all \(k=1,...,K\). Proof.: From (3.1a) we obtain \[\epsilon^{2}(u^{k},v)+\tau(\mathbf{A}(\nabla u^{k})\nabla u^{k},\nabla v )+\tau\gamma(g^{\prime}(u^{k}),v)-\tau(m(c^{k-1})h^{\prime}(u^{k}),v)\\ =\epsilon^{2}(u^{k-1},v). \tag{3.17}\] We prove (3.16) by induction. Assumption (3.15) is the base case. We suppose that (3.16) holds for \(u^{k-1}\) and then prove it holds for \(u^{k}\). Choose \(v=(u^{k})^{-}\), where \((\xi)^{-}:=\min(0,\xi)\leq 0\) is the signed negative part of \(\xi\in\mathbb{R}\), so \[\epsilon^{2}(u^{k},(u^{k})^{-})+\tau(\mathbf{A}(\nabla u^{k})\nabla u ^{k},\nabla(u^{k})^{-})+\tau\gamma(g^{\prime}(u^{k}),(u^{k})^{-})\\ -\tau(m(c^{k-1})h^{\prime}(u^{k}),(u^{k})^{-})=\epsilon^{2}(u^{k- 1},(u^{k})^{-}) \tag{3.18}\] then, \[\epsilon^{2}\|(u^{k})^{-}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\tau c_{ A}|(u^{k})^{-}|_{\mathrm{H}^{1}(\Omega)}^{2}\leq\epsilon^{2}(u^{k-1},(u^{k})^{ -})\\ -\tau\gamma(g^{\prime}(u^{k}),(u^{k})^{-})+\tau(m(c^{k-1})h^{ \prime}(u^{k}),(u^{k})^{-}). 
\tag{3.19}\] Since by assumption \(u^{k-1}\geq 0\) and noting \((u^{k})^{-}\leq 0\), it follows that the first term on the right hand side is zero or negative. Also, \(g^{\prime}\), (2.21), and \(h^{\prime}\), (2.22), are bounded. Therefore, for \[\tau\leq\frac{1}{|\gamma g^{\prime}(u^{k})|+|m(c^{k-1})h^{\prime}(u^{k})|} \tag{3.20}\] the right hand side remains zero or negative. So, we conclude that \[\|(u^{k})^{-}\|_{\mathrm{L}_{2}(\Omega)}^{2}\leq 0, \tag{3.21}\] which gives \[u^{k}\geq 0. \tag{3.22}\] Working similarly with \(v=(u^{k}-1)^{+}\) we obtain \[u^{k}\leq 1, \tag{3.23}\] which together with (3.22) proves the result. ### Lemma (Existence for the discretized Poisson equation) For every \(\tau>0\) and \(k=1,...,K=\lceil T/\tau\rceil\) there is a unique solution \(\phi^{k}\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\) that satisfies (3.1b) for each \(\eta\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\). Proof.: We introduce the form \(b_{1}:\mathrm{H}^{1}(\Omega)^{2}\to\mathbb{R}\) \[b_{1}(u^{k};v,w)=\int_{\Omega}\sigma(u^{k})\nabla v\cdot\nabla w \tag{3.24}\] for all \(w\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\), which is bilinear with respect to \(v\) and \(w\), and we apply the Lax-Milgram Theorem. We first show the coercivity for \(b_{1}\). Noting the equivalence of the \(\mathrm{H}^{1}\) semi-norm and the \(\mathrm{H}^{1}\) norm under a homogeneous Dirichlet boundary condition, (2.5) and Lemma 3.3, we have \[b_{1}(v,v)=\int_{\Omega}\sigma(u^{k})|\nabla v|^{2}\geq\sigma_{\min}|v|^{2}_{\mathrm{H}^{1}(\Omega)}\geq\frac{\sigma_{\min}}{C}\|v\|^{2}_{\mathrm{H}^{1}(\Omega)}, \tag{3.25}\] where \(\sigma_{\min}:=\min_{s\in\mathbb{R}}\sigma(s)\). Next we show that \(b_{1}\) is bounded. \[|b_{1}(v,w)|\leq\sigma_{\max}|v|_{\mathrm{H}^{1}(\Omega)}|w|_{\mathrm{H}^{1}(\Omega)}\leq\sigma_{\max}\|v\|_{\mathrm{H}^{1}(\Omega)}\|w\|_{\mathrm{H}^{1}(\Omega)}, \tag{3.26}\] for all \(w\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\), where \(\sigma_{\max}:=\max_{s\in\mathbb{R}}\sigma(s)\). From Lemma 3.7 we have that \(\nabla u^{k}\in(\mathrm{L}_{2}(\Omega))^{d}\) and \(d_{\tau}u^{k}\in\mathrm{L}_{2}(\Omega)\), which implies that the right hand side of (3.1b) is in \(\mathrm{L}_{2}(\Omega)\). Thus, by the Lax-Milgram Theorem, (3.1b) has a unique solution \(\phi^{k}\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\). ### Remark (Regularity of \(d_{\tau}u^{k}\)) We later show, in Lemma 3.7, that \(d_{\tau}u^{k}\in\mathrm{L}_{2}([0,T];\mathrm{L}_{2}(\Omega))\) and so, we note that in Lemma 3.4 when we claim that \(d_{\tau}u^{k}\in\mathrm{L}_{2}(\Omega)\), we mean it in the sense of a non-uniform bound in \(k\), i.e. there might be a \(j\in\mathbb{N}\) with \(0\leq j\leq K\) such that, \[\|d_{\tau}u^{j}\|^{2}_{\mathrm{L}_{2}(\Omega)}\leq\frac{C}{\tau}.\] ### Lemma (Existence for the discretized convection-diffusion equation) For every \(\tau>0\) and \(k=1,2,...,K,K=\lceil T/\tau\rceil\), with \(c^{0}=c_{0}\in\mathrm{H}^{1}(\Omega)\) there is a unique solution \(c^{k}\in\mathrm{H}^{1}(\Omega)\) that satisfies (3.1c) for each \(\chi\in\mathrm{H}^{1}(\Omega)\). Proof.: We introduce the form \(b_{2}:\mathrm{H}^{1}(\Omega)^{2}\to\mathbb{R}\) \[b_{2}(u^{k};w,v):=\int_{\Omega}D(u^{k})\nabla w\cdot\nabla v+\frac{1}{\tau}wv, \tag{3.27}\] which is bilinear in \(w\) and \(v\). Now we can rewrite (3.1c) as \[b_{2}(c^{k},\chi)=-\mu(d_{\tau}u^{k},\chi)-(D(u^{k})c^{k-1}\nabla\phi^{k},\nabla\chi)+\frac{1}{\tau}(c^{k-1},\chi) \tag{3.28}\] and we prove existence of a weak solution with the Lax-Milgram Theorem.
Coercivity comes from the fact that, by definition, \(D\) has a lower bound, see (2.4), so \[b_{2}(w,w)=\int_{\Omega}D(u^{k})|\nabla w|^{2}+\frac{1}{\tau}|w| ^{2}\geq D_{\min}|w|^{2}_{\mathrm{H}^{1}(\Omega)}+\frac{1}{\tau}\|w\|^{2}_{ \mathrm{L}_{2}(\Omega)}\\ \geq D_{\min}\|w\|^{2}_{\mathrm{H}^{1}(\Omega)}. \tag{3.29}\] Next we show the boundedness of (3.27), so \[|b_{2}(w,v)|=\left|\int_{\Omega}D(u^{k})\nabla w\cdot\nabla v+\frac{ 1}{\tau}wv\right|\leq\int_{\Omega}|D(u^{k})||\nabla w||\nabla v|+\frac{1}{\tau}| w||v|\\ \leq D_{\max}|w|_{\mathrm{H}^{1}(\Omega)}|v|_{\mathrm{H}^{1}( \Omega)}+\frac{1}{\tau}\|w\|_{\mathrm{L}_{2}(\Omega)}\|v\|_{\mathrm{L}_{2}( \Omega)}\leq\frac{1}{\tau}\|w\|_{\mathrm{H}^{1}(\Omega)}\|v\|_{\mathrm{H}^{1} (\Omega)}. \tag{3.30}\] Finally, the functional \[J[v]:=\int_{\Omega}\Big{(}d_{\tau}u^{k}v-D_{1}(u^{k},c^{k-1})\nabla\phi^{k} \cdot\nabla v+\frac{1}{\tau}c^{k-1}v\Big{)} \tag{3.31}\] is bounded and linear, since \((d_{\tau}u^{k}+c^{k-1})\in\mathrm{L}_{2}(\Omega)\) and \(D_{1}(u^{k},c^{k-1})\nabla\phi^{k}\in(\mathrm{L}_{2}(\Omega))^{2}\). So, by Lax-Milgram Theorem we have a unique solution to (3.1c). We now proceed with the energy estimates for \((u^{k},c^{k},\phi^{k})\). At this point we note that both \(g\) and \(h:\mathbb{R}\to\mathbb{R}\) have linear growth, since they are polynomials in \([0,1]\), and constant functions outside \([0,1]\), in particular \[\gamma\int_{\Omega}|g^{\prime}(\xi)||\xi|\lesssim\|\xi\|^{2}_{\mathrm{L}_{2}( \Omega)},\ \ \int_{\Omega}|m(c^{k-1})||h^{\prime}(\xi)||\xi|\lesssim\|\xi\|^{2}_{\mathrm{L}_{ 2}(\Omega)}, \tag{3.32}\] for all \(\xi\in\mathbb{R}\) up to a positive constant. ### Lemma (Energy estimates for the order parameter) Suppose that \(u_{0}\in\mathrm{H}^{1}(\Omega)\). Then, for \(\tau\leq 1/4C\) with some positive constant \(C\) that is dependent on \(T,\epsilon,\nu_{0}\) and \(u_{0}\), we have \[\max_{l=0,...,K}\|u^{l}\|^{2}_{\mathrm{H}^{1}(\Omega)}+\tau\sum_{k=1}^{K}\|u^ {k}\|^{2}_{\mathrm{H}^{1}(\Omega)}+\tau\sum_{k=1}^{K}\|d_{\tau}u^{k}\|^{2}_{ \mathrm{L}_{2}(\Omega)}\leq C. \tag{3.33}\] Proof.: We make use of the fact that \[(d_{\tau}u^{k},u^{k})=\frac{\tau}{2}\|d_{\tau}u^{k}\|^{2}_{\mathrm{L}_{2}( \Omega)}+\frac{1}{2}d_{\tau}\|u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}. \tag{3.34}\] Combining (3.34) with (3.2), (3.3), (3.32) and (2.28) yields \[\frac{\tau\epsilon^{2}}{2}\|d_{\tau}u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}+ \frac{1}{2}d_{\tau}\|u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}+c_{A}|u^{k}|^{2}_{ \mathrm{H}^{1}(\Omega)}\leq C\|u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}. \tag{3.35}\] Then, we multiply by \(\tau\) and sum for \(k=1,2,...,l\), where \(l\in\mathbb{N}\) is arbitrary with \(0\leq l\leq K\), \[\tau\sum_{k=1}^{l}\frac{\tau\epsilon^{2}}{2}\|d_{\tau}u^{k}\|^{2 }_{\mathrm{L}_{2}(\Omega)}+\frac{\tau}{2}\sum_{k=1}^{l}d_{\tau}\|u^{k}\|^{2}_{ \mathrm{L}_{2}(\Omega)}+\tau c_{A}\sum_{k=1}^{l}|u^{k}|^{2}_{\mathrm{H}^{1}( \Omega)}\\ \leq\tau C\sum_{k=1}^{l}\|u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}, \tag{3.36}\] hence, making use of the telescope effect for the second term and that \(\tau\leq 1/4C\) we have \[\frac{\epsilon^{2}}{4}\|u^{l}\|^{2}_{\mathrm{L}_{2}(\Omega)}+\tau c_{A}\sum_{k =1}^{l}|u^{k}|^{2}_{\mathrm{H}^{1}(\Omega)}\leq\frac{\epsilon^{2}}{2}\|u^{0} \|^{2}_{\mathrm{L}_{2}(\Omega)}+\tau C\sum_{k=1}^{l-1}\|u^{k}\|^{2}_{\mathrm{L }_{2}(\Omega)}. 
\tag{3.37}\] We can now use a generalized discretized Gronwall lemma, see Lemma 2.2 in Bartels, to show that \[\frac{\epsilon^{2}}{4}\max_{l=0,...,K}\|u^{l}\|^{2}_{\mathrm{L}_{2}(\Omega)}+ \tau c_{A}\sum_{k=1}^{K}|u^{k}|^{2}_{\mathrm{H}^{1}(\Omega)}\leq\frac{\epsilon^ {2}}{2}\|u^{0}\|^{2}_{\mathrm{L}_{2}(\Omega)}\,\mathrm{e}^{CT}, \tag{3.38}\] which implies \[\tau\sum_{k=1}^{K}\|u^{k}\|_{\mathrm{H}^{1}(\Omega)}^{2}\leq C, \tag{3.39}\] where \(C\) is dependent on the coefficients and the upper bound of (3.38). We now adopt the idea of Burman and Rappaz to prove an energy estimate for \(d_{\tau}u^{k}\) in \(\mathrm{L}_{2}([0,T],\mathrm{L}_{2}(\Omega))\). We test (3.2) with \(v=d_{\tau}u^{k}\), multiply by \(\tau\) and sum over \(k=1,2,...,K\), which leads to the following \[\tau\epsilon^{2}\sum_{k=1}^{K}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}( \Omega)}^{2}+\sum_{k=1}^{K}(\mathbf{A}(\nabla u^{k})\nabla u^{k},\nabla[u^{k}-u^{k- 1}])\\ +\tau\gamma\sum_{k=1}^{K}(g^{\prime}(u^{k}),d_{\tau}u^{k})-\tau \sum_{k=1}^{K}(m(c^{k-1})h^{\prime}(u^{k}),d_{\tau}u^{k})=0, \tag{3.40}\] thus, by using the linear growth of \(g^{\prime}\) and \(h^{\prime}\), (3.32), the boundedness of (2.15) on the interval \([0,1]\), inequality (2.29) and the generalized Young inequality we deduce that \[\tau\epsilon^{2}\sum_{k=1}^{K}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}( \Omega)}^{2}+J_{a}(u^{K})-J_{a}(u^{0})\leq\frac{\tau C}{4\epsilon^{2}}\sum_{k= 1}^{K}\|u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\frac{\tau\epsilon^{2}}{2}\sum_{k =1}^{K}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}. \tag{3.41}\] Therefore, \[\frac{\tau\epsilon^{2}}{2}\sum_{k=1}^{K}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}( \Omega)}^{2}+J_{a}(u^{K})\leq J_{a}(u^{0})+\frac{\tau C}{4\epsilon^{2}}\sum_{k =1}^{K}\|u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}. \tag{3.42}\] ### Remark (Regularity of the anisotropic diffusion term) The energy estimate for the time derivative and given that \(g^{\prime}(u)\) and \(h^{\prime}(u)\) are polynomials in \(u\) imply that \[\nabla\cdot(\mathbf{A}(\nabla u)\nabla u)\in\mathrm{L}_{2}([0,T];\mathrm{L}_{2}( \Omega)). \tag{3.43}\] ### Lemma (Energy estimates for the electric potential) Let \(\phi^{k}\) be the solution to equation (3.1b). Then, for some positive constant \(C\) that is dependent on \(T,\epsilon,a_{0},u_{0},\nu,\phi_{-}\), \(\sigma_{\max}^{\prime},\sigma_{\min}\) and the geometry of \(\Omega\), we have \[\tau\sum_{k=1}^{K}\|\phi^{k}\|_{\mathrm{H}^{1}(\Omega)}^{2}\leq C. \tag{3.44}\] Proof.: We take equation (3.1b) and we choose \(\eta=\phi^{k}\), to give \[\int_{\Omega}\sigma(u^{k})|\nabla\phi^{k}|^{2}=-\nu_{1}\int_{\Omega}d_{\tau}u ^{k}\phi^{k}+\nu_{2}\int_{\Omega}\sigma^{\prime}(u^{k})\partial_{x}u^{k}\phi^ {k}. \tag{3.45}\] Then, \[\sigma_{\min}|\phi^{k}|_{\mathrm{H}^{1}(\Omega)}^{2}\leq\nu_{1}\|d_{\tau}u^{k} \|_{\mathrm{L}_{2}(\Omega)}\|\phi^{k}\|_{\mathrm{L}_{2}(\Omega)}+\nu_{2} \sigma_{\max}^{\prime}\|\partial_{x}u^{k}\|_{\mathrm{L}_{2}(\Omega)}\|\phi^{k} \|_{\mathrm{L}_{2}(\Omega)}, \tag{3.46}\] where \(\sigma_{\max}^{\prime}:=\max_{s\in\mathbb{R}}\partial_{s}\sigma(s)\). 
We now use the Poincare-Friedrichs inequality for the left hand side and the generalized Young inequality on the right hand side for both terms, \[\frac{\sigma_{\min}}{2C_{PF}}\|\phi^{k}\|_{\mathrm{L}_{2}(\Omega)} ^{2}+\frac{\sigma_{\min}}{2}|\phi^{k}|_{\mathrm{H}^{1}(\Omega)}^{2}\leq\frac {\nu_{1}}{2\varepsilon_{1}}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}\\ +\frac{\varepsilon_{1}\nu_{1}}{2}\|\phi^{k}\|_{\mathrm{L}_{2}( \Omega)}^{2}+\frac{\nu_{2}\sigma_{\max}^{\prime}}{2\varepsilon_{2}}\|\partial_{ x}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\frac{\varepsilon_{2}\nu_{2}\sigma_{\max}^{ \prime}}{2}\|\phi^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}. \tag{3.47}\] We choose \[\varepsilon_{1}:=\frac{\sigma_{\min}}{4C_{PF}\nu_{1}},\ \ \varepsilon_{2}:=\frac{ \sigma_{\min}}{4C_{PF}\nu_{2}\sigma_{\max}^{\prime}}, \tag{3.48}\] so that \[\|\phi^{k}\|_{\mathrm{H}^{1}(\Omega)}^{2}\leq C^{*}\Big{(}\|d_{\tau}u^{k}\|_{ \mathrm{L}_{2}(\Omega)}^{2}+\|\partial_{x}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2} \Big{)} \tag{3.49}\] where \(C^{*}\) is a positive constant dependent on \(\nu_{1},\nu_{2},\sigma_{\max}^{\prime},\sigma_{\min}\) and the geometry of \(\Omega\). We now multiply with \(\tau\) and sum for \(k=1,2,...,K\), \[\tau\sum_{k=1}^{K}\|\phi^{k}\|_{\mathrm{H}^{1}(\Omega)}^{2}\leq C^{*}\Big{(} \tau\sum_{k=1}^{K}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\tau\sum_{k= 1}^{K}\|\partial_{x}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}\Big{)}. \tag{3.50}\] Lemma 3.7 gives us the necessary bounds on the right hand side to finish the proof. ### Lemma (Energy estimates for the concentration) Suppose \(c^{0}\in\mathrm{H}^{1}(\Omega)\). Then, for \(\tau\leq 1/2\mu\) and some positive constant C that is dependent on \(c_{0}\), \(D^{e},D^{s}\) and the dependencies as described in Lemma 3.9, we have \[\max_{l=0,...,K}\|c^{l}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\tau\sum_{k=1}^{K}\|c^ {k}\|_{\mathrm{H}^{1}(\Omega)}^{2}+\tau\sum_{k=1}^{K}\|d_{\tau}c^{k}\|_{ \mathrm{L}_{2}(\Omega)}^{2}\leq C. \tag{3.51}\] Proof.: We take equation (3.1c) and we test it with \(\chi=c^{k}\), so it becomes \[(d_{\tau}c^{k},c^{k})+(D(u^{k})\nabla c^{k},\nabla c^{k})=-\mu(d_{\tau}u^{k}, c^{k})-(D_{1}(u^{k},c^{k-1})\nabla\phi^{k},\nabla c^{k}). \tag{3.52}\] We use (3.34) to bound the first term from below, the fact that \(D(u^{k})\in\mathrm{L}_{\infty}(\Omega)\) and the properties of \(D_{1}\), (2.6), to obtain \[\frac{\tau}{2}\|d_{\tau}c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+ \frac{1}{2}d_{\tau}\|c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+D_{\min}|c^{k}|_{ \mathrm{H}^{1}(\Omega)}^{2}\\ \leq-\mu(d_{\tau}u^{k},c^{k})-(D_{1}(u^{k},c^{k-1})\nabla\phi^{k},\nabla c^{k})\\ \leq\mu\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}(\Omega)}\|c^{k}\|_{ \mathrm{L}_{2}(\Omega)}+D_{\min}|\phi^{k}|_{\mathrm{H}^{1}(\Omega)}|c^{k}|_{ \mathrm{H}^{1}(\Omega)}\\ \leq\frac{\mu}{2}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+ \frac{\mu}{2}\|c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\frac{D_{\min}}{2}|\phi^{k }|_{\mathrm{H}^{1}(\Omega)}^{2}+\frac{D_{\min}}{2}|c^{k}|_{\mathrm{H}^{1}( \Omega)}^{2}. \tag{3.53}\] Therefore, \[\frac{\tau}{2}\|d_{\tau}c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+ \frac{1}{2}d_{\tau}\|c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\frac{D_{\min}}{2}|c ^{k}|_{\mathrm{H}^{1}(\Omega)}^{2}\\ \leq\frac{\mu}{2}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+ \frac{\mu}{2}\|c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\frac{D_{\min}}{2}|\phi^{k }|_{\mathrm{H}^{1}(\Omega)}^{2}. 
\tag{3.54}\] Then, we multiply by \(\tau\) and sum for \(k=1,2,...,l\), where \(l\in\mathbb{N}\) is arbitrary with \(0\leq l\leq K\), and by using the telescope property the above inequality becomes \[\frac{\tau^{2}}{2}\sum_{k=1}^{l}\|d_{\tau}c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\frac{1}{2}\|c^{l}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\tau\frac{D_{\min}}{2}\sum_{k=1}^{l}|c^{k}|_{\mathrm{H}^{1}(\Omega)}^{2}\\ \leq\frac{1}{2}\|c^{0}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\tau\frac{\mu}{2}\sum_{k=1}^{l}\|d_{\tau}u^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\tau\frac{\mu}{2}\sum_{k=1}^{l}\|c^{k}\|_{\mathrm{L}_{2}(\Omega)}^{2}+\tau\frac{D_{\min}}{2}\sum_{k=1}^{l}|\phi^{k}|_{\mathrm{H}^{1}(\Omega)}^{2}. \tag{3.55}\] So, for \(\tau\leq 1/(2\mu)\) we get \[\frac{1}{4}\|c^{l}\|^{2}_{\mathrm{L}_{2}(\Omega)}+\tau\frac{D_{\min}}{2}\sum_{k=1}^{l}|c^{k}|^{2}_{\mathrm{H}^{1}(\Omega)}\leq\frac{1}{2}\|c^{0}\|^{2}_{\mathrm{L}_{2}(\Omega)}\\ +\tau\frac{\mu}{2}\sum_{k=1}^{l}\|d_{\tau}u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}+\tau\frac{D_{\min}}{2}\sum_{k=1}^{l}|\phi^{k}|^{2}_{\mathrm{H}^{1}(\Omega)}+\tau\frac{\mu}{2}\sum_{k=1}^{l-1}\|c^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}. \tag{3.56}\] From Lemmas 3.7 and 3.9 the second and third terms of the right hand side of the above inequality are uniformly bounded in \(k\). We can now use the discrete Gronwall Lemma as we did in Lemma 3.7 and, since \(l\) was chosen arbitrarily, we obtain \[\frac{1}{4}\max_{l=0,...,K}\|c^{l}\|^{2}_{\mathrm{L}_{2}(\Omega)}+\tau\frac{D_{\min}}{2}\sum_{k=1}^{K}|c^{k}|^{2}_{\mathrm{H}^{1}(\Omega)}\\ \leq\left(\frac{1}{2}\|c^{0}\|^{2}_{\mathrm{L}_{2}(\Omega)}+\tau\frac{\mu}{2}\sum_{k=1}^{l}\|d_{\tau}u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}+\tau\frac{D_{\min}}{2}\sum_{k=1}^{l}|\phi^{k}|^{2}_{\mathrm{H}^{1}(\Omega)}\right)\mathrm{e}^{\frac{\mu T}{2}}. \tag{3.57}\] For the last estimate we know that (3.1c) holds for all \(v\in\mathrm{H}^{1}(\Omega)\), so it is true that \[\|d_{\tau}c^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}=\|D(u^{k})\nabla c^{k}+D_{1}(u^{k},c^{k-1})\nabla\phi^{k}+d_{\tau}u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}\\ \leq D_{\max}|c^{k}|_{\mathrm{L}_{2}(\Omega)}+D_{\min}|\phi^{k}|^{2}_{\mathrm{L}_{2}(\Omega)}+\|d_{\tau}u^{k}\|^{2}_{\mathrm{L}_{2}(\Omega)}. \tag{3.58}\] By multiplying by \(\tau\) and summing for \(k=1,2,...,K\), we arrive at (3.51). ### Proof of Theorem 3.1 Theorem 3.1 is a direct consequence of Lemmas 3.2, 3.4 and 3.6. ## 4. Weak convergence of the limits We move on to the final step of the proof of Theorem 2.6, which is to pass to the weak limits. In Lemma 4.4 we use the uniform bounds of the time-discrete approximation, so that we pass to the limits as \(\tau\to 0\) of the subsequences of the linear terms of (3.1a)-(3.1c). In Lemma 4.5 we prove strong convergence of the nonlinear terms of (3.1a)-(3.1c) in \(\mathrm{L}_{2}([0,T];\mathrm{L}_{p}(\Omega))\), with \(p\) being dependent on the dimension of \(\Omega\). We continue with Lemma 4.6, in which we show that the forcing term in the Allen-Cahn equation and the concentration function of the convection term in the convection-diffusion equation are strongly convergent in \(\mathrm{L}_{2}([0,T];\mathrm{L}_{q}(\Omega))\), where \(q\) is again dependent on the dimension of \(\Omega\). We follow Bartels, Eyles et al. and define the following interpolation in time, so that we can identify the limits of the approximations.
### Definition of Interpolants in time Given a time step size \(\tau>0\) and a sequence \((u^{k})_{k=0,...,K}\subset\mathrm{L}_{2}(\Omega)\) for \(K=\lceil T/\tau\rceil\), we set \(t_{k}=k\tau\) for \(k=0,1,...,K\) and define the piecewise constant and piecewise affine interpolants \(u_{\tau}^{-},u_{\tau}^{+},\hat{u}_{\tau}:[0,T]\to\mathrm{L}_{2}(\Omega)\) for \(t\in(t_{k-1},t_{k})\) by \[u_{\tau}^{-}=u^{k-1},\,\,u_{\tau}^{+}=u^{k},\,\,\,\hat{u}_{\tau}=\frac{t-t_{k- 1}}{\tau}u^{k}+\frac{t_{k}-t}{\tau}u^{k-1} \tag{4.1}\] and similarly for \(c_{\tau}^{-},c_{\tau}^{+},\hat{c}_{\tau},\phi_{\tau}^{-},\phi_{\tau}^{+},\hat{ \phi}_{\tau}\). ### Remark (Regularity of the interpolants) From Lemma 3.7 we have that \(\hat{u}_{\tau}\in\mathrm{W}^{1}_{\infty}([0,T];\mathrm{L}_{2}(\Omega))\) with \(\hat{u}^{\prime}_{\tau}=d_{\tau}u^{k}\) on \((t_{k-1},t_{k})\) for \(k=1,2,...,K\). Moreover, we have that \(u^{-}_{\tau},u^{+}_{\tau}\in\mathrm{L}_{\infty}([0,T];\mathrm{L}_{2}(\Omega))\) and \[\|u^{+}_{\tau}\|^{2}_{\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))}\leq\tau \sum_{k=1}^{K}\|u^{k}\|^{2}_{\mathrm{H}^{1}(\Omega)} \tag{4.2}\] with equality if \(K\tau=T\). We have similar results for \(c^{-}_{\tau},c^{+}_{\tau},\hat{c}_{\tau},\phi^{-}_{\tau},\phi^{+}_{\tau},\hat{ \phi}_{\tau}\). ### Lemma (Continuous extension of the implicit Euler scheme) Rewriting (3.1a)-(3.1b) using the interpolants of the approximations \((u^{k})_{k=0,...,K}\), \((c^{k})_{k=0,...,K}\) and \((\phi^{k})_{k=0,...,K}\), we have \[\epsilon^{2}(\hat{u}^{\prime}_{\tau}(t),v)+(\mathbf{A}(\nabla u^{+}_{ \tau}(t))\nabla u^{+}_{\tau}(t),\nabla v)+\gamma(g^{\prime}(u^{+}_{\tau}(t)),v )\\ =(m(c^{-}_{\tau}(t))h^{\prime}(u^{+}_{\tau}(t)),v), \tag{4.3a}\] \[(\hat{c}^{\prime}_{\tau}(t),\chi)+(D(u^{+}_{\tau}(t))\nabla c^{+} _{\tau}(t),\nabla\chi)\\ =-\mu(\hat{u}^{\prime}_{\tau}(t),\chi)-(D_{1}(u^{+}_{\tau}(t),c^{ -}_{\tau}(t))\nabla\phi^{+}_{\tau}(t),\nabla\chi),\] (4.3b) \[(\sigma(u^{+}_{\tau}(t))\nabla\phi^{+}_{\tau}(t),\nabla\eta)=-(\nu_{1}\hat{u}^ {\prime}_{\tau}(t)+\nu_{2}\sigma^{\prime}(u^{+}_{\tau}(t))\partial_{x}u^{+}_{ \tau}(t),\eta), \tag{4.3c}\] for all \((v,\chi)\in\mathrm{H}^{1}(\Omega)^{2}\), \(\eta\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\) and for almost every \(t\in[0,T]\). Moreover, we have for every \(\tau>0\) \[\|u^{+}_{\tau}\|_{\mathrm{L}_{\infty}([0,T];\mathrm{L}_{2}(\Omega))}+\|u^{+}_ {\tau}\|_{\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))}+\|\hat{u}_{\tau}\|_{ \mathrm{H}^{1}([0,T];\mathrm{L}_{2}(\Omega))}\leq C, \tag{4.3d}\] \[\|c^{+}_{\tau}\|_{\mathrm{L}_{\infty}([0,T];\mathrm{L}_{2}(\Omega))}+\|c^{+}_ {\tau}\|_{\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))}+\|\hat{c}_{\tau}\|_{ \mathrm{H}^{1}([0,T];\mathrm{L}_{2}(\Omega))}\leq C, \tag{4.3e}\] \[\|\phi^{+}_{\tau}\|_{\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))}\leq C. \tag{4.3f}\] Proof.: The equations (4.3a)-(4.3c) follow directly from (3.1a)-(3.1b) for \(k=0,...,K\) and every \(v\in\mathrm{H}^{1}(\Omega)\), \(\eta\in\mathrm{H}^{1}_{0|\Gamma}(\Omega)\), with the interpolants as defined in Definition 4.1. From (4.2) we easily verify that the second term of (4.3d) is uniformly bounded. Since \(u^{+}_{\tau}\) is a piecewise constant function in time and, as a consequence, \(\nabla u^{+}_{\tau}\) is piecewise constant in time, we deduce from Lemma 3.7 \[\sup_{t\in[0,T]}\|u^{+}_{\tau}\|_{\mathrm{L}_{2}(\Omega)}=\max_{l=0,...,K}\|u^ {l}\|_{\mathrm{L}_{2}(\Omega)}\leq C \tag{4.4}\] and so, the first term of (4.3d) is uniformly bounded as well. 
For the last term of (4.3d) it is enough to show a uniform bound on the \(\mathrm{L}_{2}([0,T];\mathrm{L}_{2}(\Omega))\) norm of \(\hat{u}_{\tau}\), since from (4.1) we know that \(\hat{u}^{\prime}_{\tau}=d_{\tau}u^{k}\). Again from Lemma 3.7 we can easily verify that the \(\mathrm{L}_{2}([0,T];\mathrm{L}_{2}(\Omega))\) norm of \(\hat{u}^{\prime}_{\tau}\) is uniformly bounded, noting that \(\hat{u}_{\tau}(t)=u^{+}_{\tau}(t)-(t_{k}-t)\hat{u}^{\prime}_{\tau}(t)\) for a.e. \(t\in(t_{k-1},t_{k})\). The bounds in (4.3e) and (4.3f) follow using similar arguments and Lemmas 3.10 and 3.9, respectively. ### Lemma (Selection of the limits) There exist \((u,c,\phi)\) and \((\partial_{t}u,\partial_{t}c)\) as in Theorem 2.6 such that for a sequence \((\tau_{n})_{n\in\mathbb{N}}\) of positive numbers with \(\tau_{n}\to 0\) as \(n\to\infty\), we have the following, \[\hat{u}_{\tau_{n}},u^{\pm}_{\tau_{n}}\overset{*}{\rightharpoonup}u\ \text{in}\ \operatorname{L}_{\infty}([0,T];\operatorname{L}_{\infty}(\Omega)), \tag{4.5a}\] \[\hat{u}_{\tau_{n}},u^{\pm}_{\tau_{n}}\overset{*}{\rightharpoonup}u\ \text{in}\ \operatorname{L}_{\infty}([0,T];\operatorname{H}^{1}(\Omega)), \tag{4.5b}\] \[\hat{u}_{\tau_{n}},u^{\pm}_{\tau_{n}}\rightharpoonup u\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{H}^{1}(\Omega)), \tag{4.5c}\] \[\hat{u}^{\prime}_{\tau_{n}}\rightharpoonup\partial_{t}u\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{2}(\Omega)), \tag{4.5d}\] \[\hat{c}_{\tau_{n}},c^{\pm}_{\tau_{n}}\overset{*}{\rightharpoonup}c\ \text{in}\ \operatorname{L}_{\infty}([0,T];\operatorname{L}_{2}(\Omega)), \tag{4.5e}\] \[\hat{c}_{\tau_{n}},c^{+}_{\tau_{n}}\rightharpoonup c\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{H}^{1}(\Omega)), \tag{4.5f}\] \[\hat{c}^{\prime}_{\tau_{n}}\rightharpoonup\partial_{t}c\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{2}(\Omega)), \tag{4.5g}\] \[\phi^{\pm}_{\tau_{n}}\rightharpoonup\phi\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{H}^{1}(\Omega)). \tag{4.5h}\] Proof.: From Lemma 4.3 and the energy estimates of Lemmas 3.7, 3.9 and 3.10 we immediately get for (4.5b)-(4.5h) that there are weakly convergent subsequences that converge to appropriate limits that are also unique. For (4.5a) we use Lemma 3.3, to conclude the asserted weak-\(*\) convergence. ### Lemma (Strong convergence of the nonlinearities) For the functions \(g^{\prime},h^{\prime},m,\sigma\) and \(D\) as defined in (2.21), (2.22), (2.15), (2.4), (2.5) and (2.6), with \(g^{\prime}(u),h^{\prime}(u)\), \(\sigma(u),m(c),D(u)\in\operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega))\), where \(1\leq p<\infty\) if \(\dim\,\Omega=2\) and \(1\leq p<6\) if \(\dim\,\Omega=3\), we have the following as \(\tau_{n}\to 0\) \[g^{\prime}(u^{+}_{\tau_{n}})\to g^{\prime}(u)\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)), \tag{4.6a}\] \[h^{\prime}(u^{+}_{\tau_{n}})\to h^{\prime}(u)\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)), \tag{4.6b}\] \[D(u^{+}_{\tau_{n}})\to D(u)\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)), \tag{4.6c}\] \[\sigma(u^{+}_{\tau_{n}})\to\sigma(u)\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)), \tag{4.6d}\] \[m(c^{-}_{\tau_{n}})\to m(c)\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)). \tag{4.6e}\] Proof.: We will describe the proof for (4.6a) as the arguments are the same for the rest of the limits.
In Lemma 4.4 we proved that \[u^{\pm}_{\tau_{n}}\rightharpoonup u\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{H}^{1}(\Omega))\cap\operatorname{H}^{1}([0,T];\operatorname{L}_{2}(\Omega)). \tag{4.7}\] We know that the inclusion \(\operatorname{L}_{2}(\Omega)\subset\operatorname{H}^{1}(\Omega)^{\prime}\) is continuous and therefore \[\operatorname{L}_{2}([0,T];\operatorname{H}^{1}(\Omega))\cap\operatorname{H}^{1}([0,T];\operatorname{L}_{2}(\Omega))\subset\operatorname{L}_{2}([0,T];\operatorname{H}^{1}(\Omega))\cap\operatorname{H}^{1}([0,T];\operatorname{H}^{1}(\Omega)^{\prime}) \tag{4.8}\] is continuous too. From the Aubin-Lions Lemma [e.g. Roubicek] we have that \[\operatorname{L}_{2}([0,T];\operatorname{H}^{1}(\Omega))\cap\operatorname{H}^{1}([0,T];\operatorname{H}^{1}(\Omega)^{\prime})\overset{c}{\longrightarrow}\operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)), \tag{4.9}\] for \(1\leq p<\infty\) if \(\dim\,\Omega=2\) and \(1\leq p<6\) if \(\dim\,\Omega=3\), which implies that \[u^{\pm}_{\tau_{n}}\to u\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)). \tag{4.10}\] Hence, \(u^{\pm}_{\tau_{n}}\to u\) almost everywhere up to a subsequence, which we denote by the same index. From the continuity of \(g^{\prime}\) we immediately deduce that \(g^{\prime}(u^{\pm}_{\tau_{n}})\to g^{\prime}(u)\) almost everywhere. Since \(g^{\prime}\) is a polynomial and \(g^{\prime}:[0,1]\to\mathbb{R}\), there is a positive real number \(M\), such that \(|g^{\prime}(u^{\pm}_{\tau_{n}})|\leq M\). From the dominated convergence theorem we obtain \[g^{\prime}(u^{\pm}_{\tau_{n}})\to g^{\prime}(u)\ \text{in}\ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega)). \tag{4.11}\] Taking into consideration the definitions of \(h^{\prime},m,\sigma\) and \(D\) we can use the same arguments to prove (4.6b)-(4.6e). ### Lemma (Strong convergence of the products) For \(m(c)h^{\prime}(u),D_{1}(u,c)\in\operatorname{L}_{2}([0,T];\operatorname{L}_{q}(\Omega))\), where \(1\leq q<\infty\) if \(\dim\Omega=2\) and \(1\leq q<3\) if \(\dim\Omega=3\), we have the following as \(\tau_{n}\to 0\) \[m(c_{\tau_{n}}^{-})h^{\prime}(u_{\tau_{n}}^{+})\to m(c)h^{\prime}(u)\text{ in }\operatorname{L}_{2}([0,T];\operatorname{L}_{q}(\Omega)), \tag{4.12a}\] \[D_{1}(u_{\tau_{n}}^{+},c_{\tau_{n}}^{-})\to D_{1}(u,c)\text{ in }\operatorname{L}_{2}([0,T];\operatorname{L}_{q}(\Omega)). \tag{4.12b}\] Proof.: We use the definition of strong convergence in \(\operatorname{L}_{p}\) spaces.
By the triangle inequality and the generalized Holder inequality with \(1/q=1/p+1/p^{\prime}\), we obtain \[\|m(c_{\tau_{n}}^{-})h^{\prime}(u_{\tau_{n}}^{+})-m(c)h^{\prime}( u)\|_{\operatorname{L}_{2}([0,T];\operatorname{L}_{q}(\Omega))}\\ =\|m(c_{\tau_{n}}^{-})h^{\prime}(u_{\tau_{n}}^{+})-m(c)h^{\prime }(u_{\tau_{n}}^{+})+m(c)h^{\prime}(u_{\tau_{n}}^{+})-m(c)h^{\prime}(u)\|_{ \operatorname{L}_{2}([0,T];\operatorname{L}_{q}(\Omega))}\\ \leq\|\big{(}m(c_{\tau_{n}}^{-})-m(c)\big{)}h^{\prime}(u_{\tau_{n }}^{+})\|_{\operatorname{L}_{2}([0,T];\operatorname{L}_{q}(\Omega))}+\|m(c) \big{(}h^{\prime}(u_{\tau_{n}}^{+})-h^{\prime}(u)\big{)}\|_{\operatorname{L}_ {2}([0,T];\operatorname{L}_{q}(\Omega))}\\ \leq\|h^{\prime}(u_{\tau_{n}}^{+})\|_{\operatorname{L}_{2}([0,T]; \operatorname{L}_{p^{\prime}}(\Omega))}\|m(c_{\tau_{n}}^{-})-m(c)\|_{ \operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega))}\\ +\|m(c)\|_{\operatorname{L}_{2}([0,T];\operatorname{L}_{p^{\prime }}(\Omega))}\|h^{\prime}(u_{\tau_{n}}^{+})-h^{\prime}(u)\|_{\operatorname{L}_ {2}([0,T];\operatorname{L}_{p}(\Omega))}. \tag{4.13}\] From Lemma 4.5 we know that the sequences are strongly converging in \(\operatorname{L}_{2}([0,T];\operatorname{L}_{p}(\Omega))\) for all \(p\in[1,\infty)\) if \(\dim\Omega=2\) and for all \(p\in[1,6)\) if \(\dim\Omega=3\). We also know that \(\|m(c)\|_{\operatorname{L}_{2}([0,T];\operatorname{L}_{p^{\prime}}(\Omega))}\) and \(\|h^{\prime}(u_{\tau_{n}}^{+})\|_{\operatorname{L}_{2}([0,T];\operatorname{L }_{p^{\prime}}(\Omega))}\) are well defined and bounded for all \(q\in[1,\infty)\) if \(\dim\Omega=2\) and for all \(q\in[1,3)\) if \(\dim\Omega=3\). We use similar arguments to prove (4.12b). ### Lemma (Weak convergence of the products) For \(D(u)\nabla c\), \(D_{1}(u,c)\nabla\phi\), \(\sigma(u)\nabla\phi\), \(\boldsymbol{A}(\nabla u)\nabla u\in\operatorname{L}_{2}([0,T];(\operatorname{ L}_{2}(\Omega))^{2})\) we have the following as \(\tau_{n}\to 0\) \[D(u_{\tau_{n}}^{+})\nabla c_{\tau_{n}}^{+}\rightharpoonup D(u) \nabla c\text{ in }\operatorname{L}_{2}([0,T];(\operatorname{L}_{2}(\Omega))^{2}), \tag{4.14a}\] \[D_{1}(u_{\tau_{n}}^{+},c_{\tau_{n}}^{-})\nabla\phi_{\tau_{n}}^{+} \rightharpoonup D_{1}(u,c)\nabla\phi\text{ in }\operatorname{L}_{2}([0,T];( \operatorname{L}_{2}(\Omega))^{2}),\] (4.14b) \[\sigma(u_{\tau_{n}}^{+})\nabla\phi_{\tau_{n}}^{+} \rightharpoonup\sigma(u)\nabla\phi\text{ in }\operatorname{L}_{2}([0,T];( \operatorname{L}_{2}(\Omega))^{2}),\] (4.14c) \[\boldsymbol{A}(\nabla u_{\tau_{n}}^{+})\nabla u_{\tau_{n}}^{+} \rightharpoonup\boldsymbol{A}(\nabla u)\nabla u\text{ in }\operatorname{L}_{2}([0,T];( \operatorname{L}_{2}(\Omega))^{2}). \tag{4.14d}\] Proof.: For (4.14a) we will first show the existence of a limit. We have that \[\int_{0}^{T}\int_{\Omega}D(u_{\tau_{n}}^{+})\nabla c_{\tau_{n}}^ {+}\cdot\nabla c_{\tau_{n}}^{+}=\int_{0}^{T}\int_{\Omega}|D^{1/2}(u_{\tau_{n} }^{+})\nabla c_{\tau_{n}}^{+}|^{2}\\ \leq\|D^{1/2}(u_{\tau_{n}}^{+})\nabla c_{\tau_{n}}^{+}\|^{2}_{ \operatorname{L}_{2}([0,T];(\operatorname{L}_{2}(\Omega))^{2})}\leq D_{\max} \|c_{\tau_{n}}^{+}\|^{2}_{\operatorname{L}_{2}([0,T];\operatorname{H}^{1}( \Omega))}. \tag{4.15}\] The last norm is bounded from Lemma 4.3, so this implies that there is a \(\xi_{1}\in\operatorname{L}_{2}([0,T];(\operatorname{L}_{2}(\Omega))^{2})\) such that \[D(u_{\tau_{n}}^{+})\nabla c_{\tau_{n}}^{+}\rightharpoonup\xi_{1}\text{ in } \operatorname{L}_{2}([0,T];(\operatorname{L}_{2}(\Omega))^{2}). 
\tag{4.16}\] We will use the definition of weak convergence to prove that \(\xi_{1}=D(u)\nabla c\), i.e. we will show the following as \(\tau_{n}\to 0\), \[\int_{0}^{T}\int_{\Omega}\Big{(}D(u_{\tau_{n}}^{+})\nabla c_{\tau_{n}}^{+}-D(u)\nabla c\Big{)}\cdot\nabla\psi=0 \tag{4.17}\] for all \(\psi\in\mathrm{L}_{2}([0,T];\mathrm{C}^{\infty}(\Omega))\). We have \[\Big{|}\int_{0}^{T}\int_{\Omega}\Big{(}D(u_{\tau_{n}}^{+})\nabla c_{\tau_{n}}^{+}-D(u)\nabla c\Big{)}\cdot\nabla\psi\Big{|}\\ =\Big{|}\int_{0}^{T}\int_{\Omega}\Big{(}D(u_{\tau_{n}}^{+})\nabla c_{\tau_{n}}^{+}-D(u)\nabla c_{\tau_{n}}^{+}+D(u)\nabla c_{\tau_{n}}^{+}-D(u)\nabla c\Big{)}\cdot\nabla\psi\Big{|}\\ \leq\Big{|}\int_{0}^{T}\int_{\Omega}D(u)\Big{(}\nabla c_{\tau_{n}}^{+}-\nabla c\Big{)}\cdot\nabla\psi\Big{|}+\Big{|}\int_{0}^{T}\int_{\Omega}\Big{(}D(u_{\tau_{n}}^{+})-D(u)\Big{)}\nabla c_{\tau_{n}}^{+}\cdot\nabla\psi\Big{|}\\ \leq\Big{|}\int_{0}^{T}\int_{\Omega}D(u)\Big{(}\nabla c_{\tau_{n}}^{+}-\nabla c\Big{)}\cdot\nabla\psi\Big{|}\\ +\|D(u_{\tau_{n}}^{+})-D(u)\|_{\mathrm{L}_{2}([0,T];\mathrm{L}_{2}(\Omega))}\|\nabla c_{\tau_{n}}^{+}\cdot\nabla\psi\|_{\mathrm{L}_{2}([0,T];\mathrm{L}_{2}(\Omega))}. \tag{4.18}\] Since \(D(u)\in\mathrm{L}_{\infty}(\Omega)\), the first term vanishes as \(\tau_{n}\to 0\) because of the weak limit (4.5f). The second term vanishes in the limit as \(\tau_{n}\to 0\) because of (4.6c). Similarly, we obtain (4.14b) and (4.14c). The proof of (4.14d) has already been carried out in full detail in Burman and Rappaz. We describe here the main arguments. Since \(\boldsymbol{A}(\nabla u_{\tau_{n}}^{+})\nabla u_{\tau_{n}}^{+}\) is bounded in \(\mathrm{L}_{2}([0,T];\mathrm{L}_{2}(\Omega))^{2}\) we have \[\boldsymbol{A}(\nabla u_{\tau_{n}}^{+})\nabla u_{\tau_{n}}^{+}\rightharpoonup\xi_{2}\text{ in }\operatorname{L}_{2}([0,T];\mathrm{L}_{2}(\Omega))^{2}. \tag{4.19}\] Therefore, (3.1a) converges to \[\int_{0}^{T}(\partial_{t}u,v)+\int_{0}^{T}(\xi_{2},\nabla v)+\gamma\int_{0}^{T}(g^{\prime}(u),v)=\int_{0}^{T}(m(c)h^{\prime}(u),v) \tag{4.20}\] for all \(v\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\). We also define the sequence \[\chi_{\tau_{n}}^{+}(w):=\int_{0}^{T}(\boldsymbol{A}(\nabla u_{\tau_{n}}^{+})\nabla u_{\tau_{n}}^{+}-\boldsymbol{A}(\nabla w)\nabla w,\nabla(u_{\tau_{n}}^{+}-w)) \tag{4.21}\] for all \(w\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\). The proof follows by properly combining (4.5c), (4.5f), (4.6a), (4.6e) and the properties of the anisotropic tensor as described in Section 2.4, yielding \[\int_{0}^{T}(\xi_{2}-\boldsymbol{A}(\nabla u)\nabla u,\nabla w)\geq 0 \tag{4.22}\] for all \(w\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\) and accordingly \[\xi_{2}=\boldsymbol{A}(\nabla u)\nabla u. \tag{4.23}\] ### Proof of Theorem 2.6 Now we can prove our main result, Theorem 2.6. Proof.: From Theorem 3.1 we have established that there is a solution \((u^{k},c^{k},\phi^{k})\in(\mathrm{H}^{1}(\Omega))^{2}\times\mathrm{H}^{1}_{0|\Gamma}(\Omega)\) of the time-discretized system (3.1a)-(3.1c). Lemmas 4.4-4.7 give us convergence of every term that appears in these equations to unique weak limits, which form the system (2.23a)-(2.23c).
As a consequence \(u\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\cap\mathrm{H}^{1}([0,T];\mathrm{L}_{2}(\Omega))\cap\mathrm{L}_{\infty}([0,T];\mathrm{H}^{1}(\Omega))\cap\mathrm{L}_{\infty}([0,T];\mathrm{L}_{\infty}(\Omega))\), \(c\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\cap\mathrm{H}^{1}([0,T];\mathrm{L}_{2}(\Omega))\cap\mathrm{L}_{\infty}([0,T];\mathrm{L}_{2}(\Omega))\) and \(\phi\in\mathrm{L}_{2}([0,T];\mathrm{H}^{1}(\Omega))\) form a unique weak solution to the above system for almost every \(t\in[0,T]\) that satisfies \((u(0),c(0))=(u_{0},c_{0})\in(\mathrm{H}^{1}(\Omega))^{2}\). ## 5. Numerical Results In this section we present numerical results produced by software that we developed using the DUNE Python module, Dedner and Nolte, and the DUNE Alugrid module, Alkamper et al. Since (2.23a)-(2.23b) describe crystal growth inside Li-metal batteries, an engineering problem that demands practical solutions sooner rather than later, it is essential to present numerical simulations of the aforementioned system. For the numerical scheme we used a standard Galerkin adaptive finite element method on the fully discrete system \[\epsilon^{2}(d_{\tau}u_{h}^{k},v_{h})+(\boldsymbol{A}(\nabla u_{h}^{k-1})\nabla u_{h}^{k},\nabla v_{h})+\gamma(g^{\prime}(u_{h}^{k-1}),v_{h})=(m(c_{h}^{k-1})h^{\prime}(u_{h}^{k-1}),v_{h}), \tag{5.1a}\] \[(d_{\tau}c_{h}^{k},\chi_{h})+(D(u_{h}^{k})\nabla c_{h}^{k},\nabla\chi_{h})+(D_{1}(u_{h}^{k},c_{h}^{k})\nabla\phi_{h}^{k},\nabla\chi_{h})=-\mu(d_{\tau}u_{h}^{k},\chi_{h}), \tag{5.1b}\] \[(\sigma(u_{h}^{k})\nabla\phi_{h}^{k},\nabla\eta_{h})=(-\nu_{1}d_{\tau}u_{h}^{k}+\nu_{2}\sigma^{\prime}(u_{h}^{k})\partial_{x}u_{h}^{k},\eta_{h}), \tag{5.1c}\] for all \((v_{h},\chi_{h},\eta_{h})\in V_{h}^{1}\times V_{h}^{2}\times V_{h}^{3}\) with \(u_{h}^{0}=u_{0}(\boldsymbol{x})\), \(c_{h}^{0}=c_{0}(\boldsymbol{x})\), where \(V_{h}^{1}\) and \(V_{h}^{2}\) are finite dimensional subspaces of \(\mathrm{H}^{1}(\Omega)\) and \(V_{h}^{3}\) is a finite dimensional subspace of \(\mathrm{H}^{1}_{0|\Gamma}(\Omega)\). In our simulations we linearized the anisotropic tensor and the nonlinear terms of the phase-field equation. To reduce the computational cost we halved our domain, taking advantage of the symmetry properties of the dendritic growth of the crystal. In Figure 2 we display plots of the solution \((u,c,\phi)\) at three different times. These results show good agreement with Figure 4 in Chen et al. [b] and Figure 3a in Mu et al. [a]. The model we study has a unique solution under certain conditions. One main condition is that the anisotropy strength should always fulfill the inequality \(\delta<\delta_{0}=1/(\omega^{2}-1)\). The crystalline structure of lithium indicates that \(\omega=4\), which represents the mode of the anisotropy. So, \(\delta<1/15\approx 0.067\). In Figure 3, the numerical computations show the order parameter for different values of \(\delta\). We chose to present several cases for values of \(\delta\) that comply with the theoretical limitations for the existence of the weak solution. However, our numerical method treats the anisotropy tensor explicitly and we still observe numerical convergence for values that exceed \(\delta_{0}\); we therefore also present a computation for \(\delta=0.1\). Figure 2. The solution of equations (5.1a)–(5.1c). The order parameter \(u\) is in the top row, the concentration \(c\) in the middle row and the electric potential \(\phi\) in the bottom row.
The snapshots are taken at times \(t_{1}=0.061\), \(t_{2}=0.244\) and \(t_{3}=0.427\). In Figure 4, we compare how the shape is affected by the forcing term of the convection-diffusion equation. We have also added an image of a completely isotropic simulation, so that we can compare it with the rest of the results. Figure 3. Comparison of different values for the anisotropy strength \(\delta\), which is introduced in (2.24), at \(t=0.366\). We observe that image (A) corresponds to the isotropic case of the order parameter. However, the shape is not a sphere because it is affected by the convection–diffusion equation, whose forcing term is the time derivative of the order parameter. By increasing the anisotropy strength we see that a crystal shape is formed. Images (B) and (C) show this, but in image (D) we finally see a full crystal shape with only one branch across the x-axis, compared to the two branches growing across the x-axis in images (B) and (C). Image (E) represents an experiment with anisotropy strength very close to the theoretical convexity limit \(\delta_{0}=1/15\). Image (F) is an example of how the shape looks for a value of anisotropy strength that exceeds the theoretical bound for convexity of the anisotropic Dirichlet energy (2.25).
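To complement the description of the scheme, the following is a minimal one-dimensional finite-difference sketch of the linearized phase-field update (5.1a): treating the nonlinear terms explicitly reduces each time step to a single linear (tridiagonal) solve. This is an illustrative sketch only and not the DUNE implementation used for the figures; it assumes an isotropic tensor \(\boldsymbol{A}=\boldsymbol{I}\), homogeneous Neumann boundaries, a frozen concentration, and concrete illustrative choices for \(g^{\prime}\), \(h^{\prime}\) and \(m\).

```python
# Illustrative 1D finite-difference sketch of the linearized update (5.1a):
# one linear solve per time step. Not the authors' DUNE code. Assumptions:
# isotropic A = I, homogeneous Neumann boundaries, frozen concentration c,
# g'(u) = 2u(1-u)(1-2u), h'(u) = 6u(1-u), m(c) = c; parameter values arbitrary.
import numpy as np

N, L = 200, 1.0
dx = L / (N - 1)
eps, tau, gamma = 0.02, 1.0e-4, 1.0

x = np.linspace(0.0, L, N)
u = 0.5 * (1.0 - np.tanh((x - 0.3) / (2.0 * eps)))   # initial order parameter
c = np.full(N, 0.2)                                   # concentration, kept frozen here

# Discrete Neumann Laplacian (the stiffness part of (5.1a) for A = I).
K = np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
K[0, 0] = K[-1, -1] = 1.0
K /= dx**2

A = (eps**2 / tau) * np.eye(N) + K                    # lumped mass / tau + stiffness

def gprime(u):                                        # illustrative double-well derivative
    return 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u)

def hprime(u):                                        # illustrative coupling derivative
    return 6.0 * u * (1.0 - u)

for _ in range(50):                                   # a few time steps
    rhs = (eps**2 / tau) * u - gamma * gprime(u) + c * hprime(u)
    u = np.linalg.solve(A, rhs)                       # u^k from u^{k-1}: one linear solve
```

In the actual scheme the anisotropic tensor \(\boldsymbol{A}(\nabla u_{h}^{k-1})\) is assembled from the previous iterate in the same spirit, and the remaining equations of (5.1a)-(5.1c) are solved within the same time loop.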
2303.01646
Dynamic Competency Self-Assessment for Autonomous Agents
As autonomous robots are deployed in increasingly complex environments, platform degradation, environmental uncertainties, and deviations from validated operation conditions can make it difficult for human partners to understand robot capabilities and limitations. The ability for a robot to self-assess its competency in dynamic and uncertain environments will be a crucial next step in successful human-robot teaming. This work presents and evaluates an Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm for autonomous agents to dynamically assess task confidence during execution. The algorithm uses a fast online statistical test of the agent's observations and its model predictions to decide when competency assessment is needed. We provide experimental results using ET-GOA to generate competency reports during a simulated delivery task and suggest future research directions for self-assessing agents.
Nicholas Conlon, Nisar R. Ahmed, Daniel Szafir
2023-03-03T00:41:28Z
http://arxiv.org/abs/2303.01646v1
# Dynamic Competency Self-Assessment for Autonomous Agents ###### Abstract. As autonomous robots are deployed in increasingly complex environments, platform degradation, environmental uncertainties, and deviations from validated operation conditions can make it difficult for human partners to understand robot capabilities and limitations. The ability for a robot to self-assess its competency in dynamic and uncertain environments will be a crucial next step in successful human-robot teaming. This work presents and evaluates an Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm for autonomous agents to dynamically assess task confidence during execution. The algorithm uses a fast online statistical test of the agent's observations and its model predictions to decide when competency assessment is needed. We provide experimental results using ET-GOA to generate competency reports during a simulated delivery task and suggest future research directions for self-assessing agents. Human-robot teaming, robot self-assessment
dynamic competency changes require examples of both good (competent) and poor (incompetent) behavior, which may be difficult or impossible to acquire in many real-world applications. Another method of _in situ_ self-assessment involves monitoring features of the agent's current state. For example, Gautam et al. developed a method to monitor deviations from design assumptions (Gautam et al., 2017), while Ramesh et al. used the "vitals" of a robot to monitor its health during task execution (Ramesh et al., 2017). Both methods provide a valuable instantaneous snapshot of the agent's state at a given time which can indicate performance degradation online; however, neither predicts higher-level task competency (e.g., does the degradation actually impact the task outcome?). In contrast, we propose a method of _in situ_ self-assessment that offers a hybrid approach by monitoring the alignment between the agent's predictions and observations and triggering a (re)assessment of task confidence using an accurate _a priori_ method when there is a deviation in alignment. ## 3. Dynamic Self-Assessment of Task Confidence ### When to Assess Competency To prevent unnecessary assessments and save onboard computational resources, we take an event-triggered approach to _in situ_ self-assessment. Our algorithm assesses confidence only when there is evidence that the agent's task confidence has changed. One promising method for detecting such a change is the Surprise Index (SI). SI is defined as the sum of probabilities of more extreme (or less probable) events than an observed event given a probabilistic model (Srivastava et al., 2017). For a given event \(e\in E\), SI is computed by summing over the probabilities of more extreme events in the distribution \(p(E)\): \[SI(e,p(E))=\int_{p(E)<p(e)}p(E)dE \tag{1}\] SI can be thought of as how (in)compatible an observation \(e\) is given a set of possible events \(E\). This is similar to the more well-known entropy-based surprise (Gautam et al., 2017; Gautam et al., 2017); however, entropy-based surprise is unbounded, while the Surprise Index is bounded between zero (most surprising) and one (least surprising). SI also shares similarities with the tail probability or the p-value given the hypothesis that \(e\) is from the distribution \(p(E)\); a large p-value (large SI) indicates strong evidence that \(e\) is likely from \(p(E)\), while a small p-value indicates strong evidence to the contrary.
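To make Eqn. 1 concrete, the short sketch below evaluates the Surprise Index of a scalar observation against a Gaussian predictive marginal, both in closed form and with a sample-based estimate that works for arbitrary marginals. The Gaussian form, the parameter values, and the observed values are illustrative assumptions and not part of the method.

```python
# Sketch of the Surprise Index (Eqn. 1) for a scalar observation e against a
# Gaussian predictive marginal p(E) = N(mu, sigma^2). Illustrative assumptions:
# the Gaussian form, the means/variances, and the observed values are invented.
import numpy as np
from scipy.stats import norm

def surprise_index_gaussian(e, mu, sigma):
    # For a Gaussian, the events less probable than e are exactly the two tails
    # beyond |e - mu|, so SI(e) = P[|X - mu| > |e - mu|] with X ~ N(mu, sigma^2).
    return 2.0 * norm.sf(abs(e - mu) / sigma)

def surprise_index_mc(e, samples, density):
    # Sample-based estimate of Eqn. 1 for an arbitrary marginal: the fraction of
    # predicted samples whose density is lower than the density of the observation.
    return float(np.mean(density(samples) < density(e)))

mu_c, sigma_c = 3.0, 1.0          # predicted marginal, e.g. craters in the FOV
mu_z, sigma_z = 1.0, 0.8          # predicted marginal, e.g. dust zones in the FOV
obs_c, obs_z = 6.0, 1.2           # actual observations

si_min = min(surprise_index_gaussian(obs_c, mu_c, sigma_c),
             surprise_index_gaussian(obs_z, mu_z, sigma_z))
if si_min < 0.05:                 # a re-assessment trigger like the one used later
    print(f"si_min = {si_min:.4f}: surprising observation, re-assess confidence")

rng = np.random.default_rng(0)    # the same check from samples of the prediction
samples = rng.normal(mu_c, sigma_c, 10_000)
si_mc = surprise_index_mc(obs_c, samples, lambda v: norm.pdf(v, mu_c, sigma_c))
```

A small SI means the observation sits in a low-density region of the prediction, which is exactly the situation in which a re-assessment of task confidence is warranted.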
In this work we are interested in determining when the agent should re-assess its task confidence. We propose computing the SI of the agent's observed state \(s_{t}\) with respect to the agent's model prediction \(p(\hat{s}_{t})\), and triggering a re-assessment when the SI falls below a threshold \(\delta\). In essence, we are monitoring the quality of the agent's model given the task and triggering an assessment when that quality wanes. Because some aspects of \(s_{t}\) may be more relevant to competency than others, we compute SI over a subset of the state marginals. ### How to Assess Competency To assess competency we leverage the Generalized Outcome Assessment (GOA) (Gautier et al., 2017). Given a probabilistic world model \(M\), GOA simulates task execution by rolling out state predictions \(p_{M}(s_{t+1}|s_{t},a_{t})\). Note that \(M\) could take the form of a Monte Carlo based planner (Gautam et al., 2017), a black-box neural network (Gautier et al., 2017; Gautier et al., 2017), or similar. GOA then analyses the state predictions and computes the agent's margin of confidence in attaining an outcome better than some target outcome threshold \(Z\). Examples of target outcomes could include craters hit (which we prefer fewer of) or packages delivered (which we prefer more of). The confidence value can be reported as a raw value \(\in(0,1)\) or mapped to a semantic label indicating confidence such as _highly likely, likely, unlikely, highly unlikely_. For the experiments outlined later, we use the raw numerical values of confidence. ### Event-Triggered Generalized Outcome Assessment Algorithm We call our method for surprise-based dynamic self-assessment _Event-Triggered Generalized Outcome Assessment_ (ET-GOA). The algorithm is presented in Alg. 1 and can be broken up into two components: (1) before task execution and (2) during task execution. _Before task execution (lines 1-5)_: Line 1 takes as input a model \(\mathbf{M}\), a task specification \(\mathbf{T}\), a set of outcome thresholds \(\mathbf{Z}\) (one for each outcome), and a set of surprise thresholds \(\delta\) (one for each state marginal of interest). Next (line 2) the model \(\mathbf{M}\) is used to simulate execution of task \(\mathbf{T}\) given initial state \(s_{0}\). This results in a set of predicted state distributions \([p(\hat{s}_{t})]_{t=0:N}\), one for each time step \(t\). We further break the state distribution for a given time step into \(K\) marginal components. For example, if we were interested in using the \(x,y\), and \(z\) position in the SI trigger, \(K=3\) and \(p(\hat{s}_{t})\) would be broken into the set of marginal probability distributions \([p(\hat{s}_{t,x}),p(\hat{s}_{t,y}),p(\hat{s}_{t,z})]\). This additional step of marginalization is implicit in the algorithm, but important to note. The predicted marginals for each time step are then stored in an experience buffer (line 3), and then used to compute the initial Generalized Outcome Assessment (line 4), which can be reported to an operator (line 5).

```
 1  Algorithm ET-GOA(M, T, Z, δ):
 2      [p(ŝ_1), ..., p(ŝ_N)] ← simulate M(T, s_0)
 3      exp_buffer ← [p(ŝ_1), ..., p(ŝ_N)]
 4      goa ← GOA(exp_buffer, Z)
 5      report goa
 6      while task T is executing do
 7          observe state s_t
 8          [p(ŝ_{t,1}), ..., p(ŝ_{t,K})] ← exp_buffer[t]
 9          si_min ← min_{i=1..K} SI(s_{t,i}, p(ŝ_{t,i}))
10          if si_min < δ then
11              [p(ŝ_t), ..., p(ŝ_N)] ← simulate M(T, s_t)
12              exp_buffer ← [p(ŝ_t), ..., p(ŝ_N)]
13              goa ← GOA(exp_buffer, Z)
14              report goa
15          else
16              continue
```

**Algorithm 1** Event-Triggered Generalized Outcome Assessment

_During task execution (lines 6-16)_: The agent observes the state \(s_{t}\) at time \(t\) (line 7).
It then retrieves the state distributions (i.e., the predictions) for time \(t\) from the experience buffer (line 8). Next, the algorithm computes \(si_{min}\), the minimum of the SI of each of the \(K\) observed state marginals \(s_{t,i}\) given the predicted marginal distributions \(p(\hat{s}_{t,i})\) (line 9). If \(si_{min}\) is below \(\delta\), an anomalous or surprising state observation has been received and confidence should be reassessed (line 10). In this case, a new set of predicted state distributions is simulated from \(M\) (line 11) and saved in the experience buffer (line 12). A new self-assessment is then computed using the newly updated experience buffer (line 13) and reported to an operator (line 14). An \(si_{min}\) above \(\delta\) indicates that the agent's predictions align with its observations and no confidence update is needed at this time (line 16). This loop (line 6) continues for the duration of the task, comparing predicted state marginal distributions to real observations and (if necessary) recomputing and reporting updates to the agent's task confidence. ## 4. Experiments We evaluated ET-GOA in two simulation experiments. The first investigated ET-GOA's impact on task performance. The second investigated ET-GOA's ability to capture changes in task difficulty. ### Delivery Scenario Overview Our experimental scenario was based on the motivating SAR example from section 1: A single agent was tasked to safely deliver cargo to one of three goals. The environment contained two types of obstacles: craters and dust zones, which were difficult for the agent to avoid. Driving over craters damaged the agent, and if enough craters were hit during navigation the agent was considered broken and failed the delivery task. Dust zones degraded sensors and injected noise into the agent's state transition dynamics. Dust zones were generally found near craters, which increased the chance that the agent hit a crater if it found itself in dust. To simulate environmental changes that would occur in realistic deployments, new obstacles could spawn at random locations (except for the agent's location) during task execution. The environment was a custom OpenAI Gym environment (Brockman et al., 2017). The agent was modeled as a discrete state/action Markov Decision Process with state space \(s=(s_{x},s_{y},s_{c},s_{z})\) consisting of the agent's \((x,y)\) location and the counts of craters \((s_{c})\) and dust zones \((s_{z})\) within its sensor field of view (FOV). The sensor FOV was modeled as omnidirectional with a radius of 10 grid squares. The total size of the 2D environment was 50x50 grid squares. We trained one policy for each goal using Q-Learning (Kang et al., 2017). No obstacles were present during training to prevent the agent from learning how to overcome the difficulties of the environment. The world model \(M\) used for self-assessment was a copy of the environment that had an identical state transition function but only included known craters and dust zones. The agent chose the goal which had the maximum assessed confidence. If there was a tie in confidence, the agent chose the closest goal. An example environment can be seen in Fig. 1.

Figure 1. Example environment illustrating the agent's location and FOV (orange), the goal area (green), truth locations of dust zones (blue circle) and craters (white circles). The obstacles are highlighted with blue and red to improve visual contrast.

We evaluated two different environments, _static_ and _dynamic_. In the static environment, the locations of craters and dust zones were known by the agent _a priori_ and remained unchanged for the entire task execution. In the dynamic environment, the locations of craters and dust zones were initially known, but changed at a predetermined time to simulate a previously generated onboard navigation map suddenly becoming out-of-date.
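To connect the scenario to the assessment machinery, the sketch below shows one simple way a GOA-style confidence could be estimated from Monte Carlo rollouts of the world model \(M\): per goal, the fraction of simulated deliveries that hit fewer than \(Z\) craters, followed by the maximum-confidence goal selection with the closest-goal tie-break described above. This is a simplified stand-in for the Generalized Outcome Assessment of the cited work, and the Poisson "rollout", crater rates, and distances below are invented for illustration only.

```python
# Simplified stand-in for a GOA-style confidence: confidence(goal) = P[craters hit < Z],
# estimated from Monte Carlo rollouts. The Poisson rollout, the per-goal crater
# rates, and the distances are illustrative assumptions, not the delivery simulator.
import numpy as np

rng = np.random.default_rng(1)
CRATER_RATE = {"goal_A": 0.5, "goal_B": 2.0, "goal_C": 4.0}   # assumed hazard per goal
DIST = {"goal_A": 40, "goal_B": 25, "goal_C": 20}             # assumed distances (tie-break)

def rollout_craters_hit(goal):
    # Stand-in for simulating one delivery with the world model M.
    return rng.poisson(CRATER_RATE[goal])

def goa_confidence(goal, Z=3, n_rollouts=200):
    hits = np.array([rollout_craters_hit(goal) for _ in range(n_rollouts)])
    return float(np.mean(hits < Z))          # confidence of an outcome better than Z

conf = {g: goa_confidence(g) for g in CRATER_RATE}
best = max(conf.values())
# Maximum-confidence goal, with the closest goal breaking ties.
chosen = min((g for g in conf if conf[g] == best), key=DIST.get)
print(conf, "->", chosen)
```

In the experiments, the role of this toy rollout is played by forward simulation of the Gym environment copy described above, and the same rollouts supply the predicted state marginals stored in the experience buffer.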
### Hypotheses

We had three hypotheses: (1) In a static environment, ET-GOA and GOA will perform equally well and will both outperform a random goal choice; (2) In a dynamic environment, ET-GOA will outperform both GOA and random choice; (3) ET-GOA can capture both positive and negative changes in task difficulty. We analyzed agent performance (number of deliveries) for hypotheses 1 and 2, and we analyzed reported confidence relative to task difficulty changes for hypothesis 3.

### Improvements to Performance

Our first experiment was used to validate our first two hypotheses. At \(t=0\) we initialized the agent with the locations of all obstacles. For dynamic conditions we changed the locations of the obstacles without the agent's knowledge at \(t=10\). Three conditions were considered: _no assessment_, _GOA_, and _ET-GOA_. The _no assessment_ condition did not use any competency assessment. Rather, at \(t=0\) the agent chose the goal at random and navigated directly to it. The _GOA_ condition used the standard Generalized Outcome Assessment analysis discussed in (Brockman et al., 2017). At \(t=0\) the agent selected and navigated to the goal \(g\in G\) with the highest GOA confidence according to Eqn. 2.

\[g=\arg\max_{i\in G}GOA_{i} \tag{2}\]

The _ET-GOA_ condition used the ET-GOA algorithm discussed in Section 3.3. We used two state marginals as triggers: \(s_{c}\) (and \(\hat{s}_{c}\)), the actual (and predicted) count of craters visible in the agent's FOV; and \(s_{z}\) (and \(\hat{s}_{z}\)), the actual (and predicted) count of dust zones visible in the agent's FOV. This essentially computed the surprise between the expected obstacle locations (from the initial location information) and the "on the ground" obstacle locations observed while traversing the environment. We chose these specific marginals because they align with the sensor capability in modern robots. Additionally, looking at surprising observations in the agent's sensor FOV enabled the algorithm to trigger a re-assessment prior to the agent physically coming into contact with a possibly dangerous obstacle. The algorithm triggered a re-assessment if the minimum SI of either state marginal was less than \(\delta=0.05\):

\[\min\big(SI(s_{c},\hat{s}_{c}),\,SI(s_{z},\hat{s}_{z})\big)<0.05\]

The agent then selected the goal based on eqn. 2. The agent navigated directly to that goal until it either reached the goal or triggered a re-assessment and chose a new goal. For each condition the agent attempted 100 delivery tasks.

#### 4.3.1. Results

We found a significant main effect of the environment on number of deliveries (\(t(598)=4.65\), \(p<0.0001\)), indicating that the static environment was easier than the dynamic environment, which was expected. In the static environment, we found significant effects of reporting condition on deliveries (\(F(2,297)=43.5\), \(p<0.0001\)). Post-hoc analysis using Tukey's HSD revealed a significant increase in deliveries for ET-GOA compared to random (\(p=0.0001\)), and a significant increase in deliveries for GOA compared to random (\(p<0.0001\)). There was no difference between GOA and ET-GOA in the static environment, which was expected. In the dynamic environment, we found significant effects of reporting condition on deliveries (\(F(2,297)=44.1\), \(p<0.0001\)). Post-hoc analysis using Tukey's HSD revealed a significant increase in deliveries for ET-GOA compared to both random (\(p=0.0001\)) and GOA (\(p<0.0001\)). These results confirm our first and second hypotheses and can be seen in Fig. 2.
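For readers who want to reproduce this style of analysis, a minimal sketch of the reported procedure (one-way ANOVA across the three reporting conditions followed by Tukey's HSD post-hoc comparisons) is shown below. The delivery outcomes here are synthetic placeholders (100 Bernoulli draws per condition) purely to exercise the pipeline; they are not the study's data and will not reproduce the reported statistics.

```
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Synthetic stand-in outcomes: 1 = successful delivery, 0 = failure,
# 100 attempts per condition (placeholder success rates, not the paper's).
random_choice = rng.binomial(1, 0.45, size=100)
goa_only      = rng.binomial(1, 0.70, size=100)
et_goa        = rng.binomial(1, 0.90, size=100)

# One-way ANOVA over the three conditions; with 3 x 100 observations the
# degrees of freedom are F(2, 297), matching the form reported above.
F, p = stats.f_oneway(random_choice, goa_only, et_goa)
print(f"F(2,297) = {F:.1f}, p = {p:.2g}")

# Tukey's HSD post-hoc pairwise comparisons between conditions.
deliveries = np.concatenate([random_choice, goa_only, et_goa])
condition = np.repeat(["random", "GOA", "ET-GOA"], 100)
print(pairwise_tukeyhsd(deliveries, condition))
```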
### Detecting Changes in Difficulty

Our second experiment was used to validate our third hypothesis. Here we evaluated how well ET-GOA captured changes in the environment that impacted task difficulty. For this evaluation, we tasked the agent with navigating to a single static goal location under two conditions. The first, called \(easy\xrightarrow{}hard\xrightarrow{}easy\), started with no obstacles; obstacles were then randomly added at time step 10 and all obstacles were deleted at time step 30. The second, called \(hard\xrightarrow{}easy\xrightarrow{}hard\), started with randomized obstacles; all obstacles were then deleted at time step 10 and new obstacles were added at time step 30. Adding obstacles increases task difficulty for the agent and _vice versa_ for deleting obstacles. Obstacles at time step zero were known to the agent, while the obstacles added/deleted at time steps 10 and 30 had to be observed _in situ_. We ran 100 episodes for each condition and recorded the initial assessment and the ET-GOA assessment after each add/delete event.

#### 4.4.1. Results

We observed a significant difference in the agent's confidence between \(easy\xrightarrow{}hard\xrightarrow{}easy\) tasks and \(hard\xrightarrow{}easy\xrightarrow{}hard\) tasks at the initial assessment (\(t(99)=110.0\), \(p<0.0001\)), after the first environmental change (\(t(99)=27.7\), \(p<0.0001\)), and after the second environmental change (\(t(99)=37.5\), \(p<0.0001\)). This confirms our third hypothesis that ET-GOA can capture both positive and negative impacts on task difficulty. A plot of the results can be seen in Fig. 3.

## 5. Conclusions and Future Work

In this work we presented an algorithm for _Event-Triggered Generalized Outcome Assessment_, which computes an autonomous agent's _in situ_ task confidence in dynamic and uncertain environments. ET-GOA chooses when to assess task confidence based on the Surprise Index between an agent's predicted and actual state. We evaluated ET-GOA on a delivery task in both static and dynamic environments and found that it led to significant performance improvements over baseline methods. We also found that ET-GOA was able to capture changes in agent confidence indicating changes in task difficulty. That is, our method can determine when tasks become more or less difficult. Our next step is to validate ET-GOA both on live platforms and in a human subjects study. We hypothesize that the presence of ET-GOA will help operators make better decisions when it comes to relying on an autonomous robot, leading to improved performance and reductions in workload. ET-GOA can enable autonomous robots to provide critical information about their "on the ground" confidence in task success, when that confidence changes, and why. We believe that it can be invaluable to human-robot teams, particularly those working in high risk and uncertain environments where human operators need to make critical decisions with respect to task execution, level of autonomy, and/or control.

###### Acknowledgements.
This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0032. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA.

Figure 2. Plot of 100 delivery attempts per condition showing ET-GOA performed significantly better than random in the static environment (left) and significantly better than both random and GOA in the dynamic environment (right).

Figure 3. Plot showing ET-GOA captured task difficulty changes. The \(hard\xrightarrow{}easy\xrightarrow{}hard\) tasks are in orange, while \(easy\xrightarrow{}hard\xrightarrow{}easy\) tasks are in blue. Task-difficulty-changing events occurred at \(t=10\) and \(t=30\). The solid lines indicate task confidence mean and standard deviation.
2310.19131
Versatile spaceborne photonics with chalcogenide phase-change materials
Recent growth in space systems has seen increasing capabilities packed into smaller and lighter Earth observation and deep space mission spacecraft. Phase-change materials (PCMs) are nonvolatile, reconfigurable, fast-switching, and have recently shown a high degree of space radiation tolerance, thereby making them an attractive materials platform for spaceborne photonics applications. They promise robust, lightweight, and energy-efficient reconfigurable optical systems whose functions can be dynamically defined on-demand and on orbit to deliver enhanced science or mission support in harsh environments on lean power budgets. This comment aims to discuss the recent advances in rapidly growing PCM research and its potential to transition from conventional terrestrial optoelectronics materials platforms to versatile spaceborne photonic materials platforms for current and next-generation space and science missions. Materials International Space Station Experiment-14 (MISSE-14) mission-flown PCMs outside of the International Space Station (ISS) and key results and NASA examples are highlighted to provide strong evidence of the applicability of spaceborne photonics.
Hyun Jung Kim, Matthew Julian, Calum Williams, David Bombara, Juejun Hu, Tian Gu, Kiumars Aryana, Godfrey Sauti, William Humphreys
2023-10-29T19:40:20Z
http://arxiv.org/abs/2310.19131v1
# Versatile spaceborne photonics with chalcogenide phase-change materials ###### Abstract Recent growth in space systems has seen increasing capabilities packed into smaller and lighter Earth observation and deep space mission spacecraft. Phase-change materials (PCMs) are nonvolatile, reconfigurable, fast-switching, and have recently shown a high degree of space radiation tolerance, thereby making them an attractive materials platform for spaceborne photonics applications. They promise robust, lightweight, and energy-efficient reconfigurable optical systems whose functions can be dynamically defined on-demand and on orbit to deliver enhanced science or mission support in harsh environments on lean power budgets. This comment aims to discuss the recent advances in rapidly growing PCM research and its potential to transition from conventional terrestrial optoelectronics materials platforms to versatile spaceborne photonic materials platforms for current and next-generation space and science missions. Materials International Space Station Experiment-14 (MISSE-14) mission-flown PCMs outside of the International Space Station (ISS) and key results and NASA examples are highlighted to provide strong evidence of the applicability of spaceborne photonics. The space sector has witnessed tremendous growth within the past decade from not only government agencies but also entrants from the private sector. This is attributed to: (1) the increased availability and cost reduction of launch platforms, (2) expanded utilization of Low Earth Orbit (LEO) for crewed and uncrewed missions, and (3) the development of plans for lunar missions and ones further into the solar system. As a result, sector growth is expected to accelerate in the coming decades[1]. A host of complementary technological innovations and capabilities are underpinning this growth from reusable rockets to miniaturized satellites. These are enabling applications including planetary exploration missions, remote sensing studies and high-speed satellite broadband. Spaceborne sensing, imaging and communications capabilities are provided through electronic and photonics subsystems. Terrestrally, photonic components are rapidly displacing electronic counterparts to enable next-generation power efficient, high speed and highly versatile technologies, from reconfigurable photonic integrated circuits to tunable optical filters[2]. The technological demands of next-generation space systems are yet more demanding, and require operation in harsh environments while constrained by lean SWaP-C (Size, Weight, Power, and Cost) budgets. To realize subsystems that meet such performance demands requires novel photonic material platforms and innovative multifunctional design schemes. _Chalcogenide phase-change materials_ (PCMs) have been proposed for spaceborne solid-state memory modules because of their nonvolatile, reconfigurable, fast-switching and space radiation tolerant capabilities[3]. In recent years, developments in PCM-based reconfigurable and tunable photonic technology have given rise to the notion of PCMs as a material platform enabling versatile, compact spaceborne photonics[4, 5]. 
Such multifunctional photonic devices are expected to find LEO applications such as optical modulators in integrated photonic modules aboard miniaturized satellites (SmallSats / CubeSats), reconfigurable optical devices in remote sensing missions, multifunctional lab-on-a-chip astronaut health monitoring systems, and rapid environmental sensors on the NASA Orion crew transport vehicle[5]. Especially, reconfigurable photonics based on PCMs allow dynamic tuning of optical functionalities post-fabrication and thus have opened up exciting opportunities for unexpected new discoveries in space from agile manipulation of light[5]. The function and properties of traditional optical devices are set once fabricated[6]. This comment discusses the recent advances in rapidly growing PCM research and its potential to transition from a conventional terrestrial optoelectronics materials platform to a versatile spaceborne photonic materials platform (Fig.1a) for current and next-generation missions. Based on the underlying physical characteristics of PCMs, harsh space environment effect on PCMs through the Materials International Space Station Experiment-14 (MISSE-14) mission following experimental demonstrations in novel optical sub-systems, and associated space performance metrics, we argue for the technology's potential for widespread integration in space systems. Discussions on space material definitions are presented via NASA space science mission-driven examples. Due to remaining unknowns such as material long-term reliability and maximum cycling, we intend for the arguments presented herein to help direct future PCM research towards the expanding field of extraterrestrial applications, as well as to help inform and guide future funding of space technology research more optimally. **Phase-change materials: From terrestrial to space applications** PCMs are solid-state materials that can change between amorphous and crystalline phases as a result of thermal stimulus[7, 8, 9]. This is accompanied by a large shift in the material's electronic and optical properties (Fig.1b) and systematic trends in properties and performance of PCMs have recently been discussed[10]. The amorphous phase boasts higher electrical resistance, a lower refractive index, and less optical absorption (lower extinction coefficient), while the crystalline state shows the exact opposite. The magnitude of these changes can also be quite dramatic. Typical PCMs have a wavelength-dependent refractive index shift much greater than unity (typically \(\sim\)1-2)[11] accompanied by a small increase in extinction coefficient, and resistivity shifts of 3 orders of magnitude[12]. It is also possible to exploit a near-continuum of partially-crystalline states, with properties that lie more-or-less linearly between those of the amorphous and crystalline states[4]. Importantly, all of these phase changes are nonvolatile, meaning they do not require a constant supply of energy to maintain their phase change; i.e., '_set it and forget it_' if you will. The different compositions of germanium (Ge)-antimony(Sb)-telluride(Te) alloys (Fig.1c) present the most widely utilized PCMs[13]. In GeSbTe (GST), the crystallization temperature is a function of composition (Fig.1d), with different applications suiting different GST alloys[14]. These _phase changes_ occur on a very short timescale (between tens of milliseconds and a few picoseconds)[7, 8], depending on the material type or composition. 
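As a rough illustration of the tunability described above, the sketch below linearly interpolates the complex refractive index with crystalline fraction (the text notes that intermediate states lie more or less linearly between the amorphous and crystalline values; effective-medium mixing rules such as Lorentz-Lorenz are a common refinement) and estimates the resulting center-wavelength shift of an idealized Fabry-Perot cavity, \(\lambda_{m}=2nL/m\). The index values and spacer thickness are illustrative placeholders chosen only to echo the few-micron MWIR tuning discussed later, not measured GST data.

```
def effective_index(n_amorphous, n_crystalline, crystalline_fraction):
    """First-order linear mixing of the complex refractive index between the
    amorphous and crystalline end states (0 <= fraction <= 1)."""
    return (1.0 - crystalline_fraction) * n_amorphous + crystalline_fraction * n_crystalline

def fp_center_wavelength(n_spacer, thickness_um, order=1):
    """Idealized Fabry-Perot resonance lambda_m = 2 n L / m; mirror phase and
    material dispersion are neglected in this sketch."""
    return 2.0 * n_spacer.real * thickness_um / order

# Placeholder end-state indices and spacer thickness (illustrative only).
n_a = 4.2 + 0.05j      # amorphous: lower index, low loss
n_c = 5.3 + 0.30j      # crystalline: higher index, higher loss
L_um = 0.41            # cavity spacer thickness in micrometres

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    n_eff = effective_index(n_a, n_c, f)
    print(f"fraction {f:.2f}: n_eff = {n_eff.real:.2f} + {n_eff.imag:.2f}j, "
          f"CWL ~ {fp_center_wavelength(n_eff, L_um):.2f} um")
```

Sweeping the crystalline fraction in this simple model moves the passband continuously between the amorphous and fully crystalline center wavelengths, which is the behaviour exploited by the partially crystalline states mentioned above.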
The concept of applying different, non-chalcogenide PCMs to space applications is not entirely unheard of. Since the 1970s[15], gallium and paraffin materials (materials with a solid-liquid phase changes) have been proposed and tested for spacecraft thermal control in LEO or lunar orbit[16]. However, to-date, the application of all-solid-state, and particularly chalcogenide, PCMs for space applications has been largely non-existent. **Fig.1] Prototypical phase-change materials and their characteristics, application-driven interest of PCMs since 1960. (a) Application-driven interest of PCMs since 1960: from terrestrial to spaceborne applications. (b) Switching the crystalline phase into the amorphous phase involves heating above T\({}_{\rm melt}\) (melting point) and then rapidly quenching the PCM. Switching the amorphous phase into a crystalline phase involves heating above T\({}_{\rm crys}\) (crystallization temperature) for sufficient time to crystallize the PCM. PCM devices are programmable via the "SET" state (akin to writing a logic "1") and "RESET" state (writing a logic "0"). (c) Ge-Sb-Te ternary phase diagram showing popular GeSbTe (GST)-based PCMs include Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\), Ge\({}_{1}\)Sb\({}_{4}\)Te\({}_{7}\), and Ge\({}_{1}\)Sb\({}_{2}\)Te\({}_{4}\). (d) Crystallization temperature as a function of composition in the Ge-Sb-Te ternary phase diagram. The optimal compositional region for high-performance embedded phase change memories is highlighted in green. Adapted from Liang Sun _et al._ paper[14].** **MISSE-14 mission flow PCMs outside of the ISS and the results provide strong evidence of the applicability of spaceborne photonics using PCMs[17]** Key advances have also been made in material survivability qualification through the _Materials International Space Station Experiment-14 (MISSE-14)_ mission[18]. Although PCMs have previously been noted for their resilience to various forms of radiation[3], they had not been long-term exposed in a realistic space environment until 2021. Twenty-four samples of various PCM thin films, along with optical filters comprised of these PCMs[18], were delivered to the International Space Station (ISS) via a Northrup Grumman (NG) Cygnus spacecraft. These were then installed on the MISSE-Flight Facility (_MISSE-FF, in a LEO about 400km to 420km above the Earth's surface_) and the MISSE-14 Science Carriers (MSCs) opened to start exposure of the specimens to the space environment. The samples were flown outside of the ISS and then returned on a SpaceX CRS-24 capsule (also known as SpX-24) in January 2022 as shown in Figure 2a. The total open exposure time for the specimens was 133 days 5 hours 6 minutes in the Wake direction and 148 days 21 hours 11 minutes in the Zenith direction. The samples were returned to NASA Langley Research Center in March 2022 for post-flight characterization. Space materials and photonics on the exterior of a spacecraft will be subjected to many environmental effects and threats that can cause degradation. In LEO these threats include visible light photon radiation, ultraviolet (UV) radiation, vacuum ultraviolet (VUV) radiation, x-rays, solar wind particle radiation (electrons, protons), cosmic rays, temperature extremes, thermal cycling, impacts from micrometeoroids and orbital debris, on-orbit contamination, hard vacuum, and atomic oxygen (AO)[19]. 
Such harsh environmental exposures can result in optical property degradation of PCMs and filter performance (i.e., center wavelength, bandwidth, transmittance) and filter tunability (i.e., tuning range, speed, switching cycles) threatening spacecraft performance and durability. The PCMs were exposed in space along the two different orientations of Zenith (deep space view, away from earth and grazing AO and highest solar / most UV exposure) and Wake (general space exposure, away from the direction of ISS travel and moderate solar / UV exposure). In-space monitoring data included the ionizing radiation dose (Fig.2b), UV irradiance (Fig.2c), and temperature (Fig.2d). Surface damage/contamination photographs (Fig.2e) were included. Ionizing radiation dose and total AO fluence data were provided at the conclusion of the mission. During the mission, the MSCs were closed 35 times (Wake) and 39 times (Zenith) for possible contamination events (i.e., docking, undocking, thruster tests, relocation, rebboost, deboost, and prop purge). The estimated total surface contamination during the MISSE-14 flight was 7.96 A (Wake) and 0.37 A (Zenith) bonding thicknesses at \(+\)25 \({}^{\circ}\)C. AO, which is the most prevalent atomic species encountered in LEO, is highly reactive with plastics and some metals causing severe erosion. There is also extreme ultraviolet (UV) radiation due to the lack of an atmospheric filter. This radiation deteriorates and darkens many plastics and coatings. The vacuum in space also alters the physical properties of many materials. Impacts of meteoroids and orbiting man-made debris can damage all materials exposed in space. The combined effects of all of these environments on specimens and photonic devices can only be investigated in space. On Earth, a material can only be subjected to one environment at a time. Samples on the Zenith and Wake carriers experienced a total AO fluence of 3.07E\(+\)19 atoms/cm\({}^{2}\) and 3.96E\(+\)19 atoms/cm\({}^{2}\) exposed area, respectively. The AO fluence was determined[20] using a mass loss technique using Kapton witness samples with an exposed area of 2.742 cm\({}^{2}\). AO can induce PCM degradation and erosion by producing surface recession and mass loss which have the most severe impact on optic characteristics such as transmittance. An AO-resistant SiO\({}_{2}\) coating can be applied[21] to address this issue. The monthly measured ionizing dose from the sensor (Teledyne uDOS001-C radiation detector shielded by 0.3 - 0.5 inch thickness aluminum) located on the Zenith MSC was 0.112 rads (with the measurement started on June 30), 1.078 rads (July), 2.132 rads (August), 1.42 rads (September), 3.096 rads (October), 2.201 rads (November), and 1.956 rads (measurement ending on December 26). All of these values are within the recorded average[22]. Even with radiation hardness considered an issue for materials in LEO, no crystallization peaks were produced on the MISSE-14 flight PCMs (Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\), Ge\({}_{2}\)Sb\({}_{2}\)Se\({}_{4}\)Te\({}_{1}\), Sb\({}_{2}\)S\({}_{3}\)) based on x-ray diffraction measurements except for two peaks on the CaF\({}_{2}\) substrate (28.227\({}^{\circ}\) for 2\(\theta_{111}\) and 58.435\({}^{\circ}\) for 2\(\theta_{222}\)). Moreover, PCMs, in particular GST, are reported to be tolerant to ionizing radiation effects from the appreciable void volume present in the amorphous state as elucidated by ab initio molecular-dynamics simulations[3]. 
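For context, the atomic-oxygen fluence quoted above is obtained from Kapton witness-sample mass loss. The sketch below shows that standard bookkeeping (recession depth = mass loss / (density x area); fluence = depth / erosion yield), using the commonly cited Kapton density of about 1.42 g/cm^3 and erosion yield of about 3.0e-24 cm^3 per atom. The 2.742 cm^2 exposed area is the value given in the text; the mass-loss number is a placeholder chosen only so that the result lands near the quoted ~4e19 atoms/cm^2.

```
KAPTON_DENSITY_G_CM3 = 1.42        # commonly cited Kapton density
KAPTON_EROSION_YIELD = 3.0e-24     # cm^3 eroded per incident O atom (typical LEO value)

def ao_fluence(mass_loss_g, exposed_area_cm2,
               density=KAPTON_DENSITY_G_CM3, erosion_yield=KAPTON_EROSION_YIELD):
    """Atomic-oxygen fluence (atoms/cm^2) from witness-sample mass loss."""
    recession_depth_cm = mass_loss_g / (density * exposed_area_cm2)
    return recession_depth_cm / erosion_yield

# 2.742 cm^2 exposed area from the text; the 0.46 mg mass loss is a placeholder.
print(f"{ao_fluence(mass_loss_g=4.6e-4, exposed_area_cm2=2.742):.2e} atoms/cm^2")
```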
It is suspected that extreme temperature cycling of -120 \({}^{\circ}\)C to 120 \({}^{\circ}\)[23], with sixteen cycles per day for the ISS, imparts the most impact since this can lead to degradation of data retention periods related to the thermal stability of the amorphous state of PCM. However, the actual measured temperature (using a k-type thermocouple, Nannac A14B-1-24-K-48 model and attached to the underdeck of the deck on both the swing and mount sides of the MSC) was in a -20 \({}^{\circ}\)C to 50 \({}^{\circ}\)C range, not matching the extreme temperature ranges in the literature. The temperature is impacted by the ISS flight orientation and the Sun Beta Angle which provides insight into the temperature flux during the mission. These lower temperature fluctuations contributed to a high amorphous-phase stability of the PCM. The vacuum environment of LEO, with typical pressures less than 10-6 Torr, is similar to the base pressure of solid-state PCM film deposition and would not lead to outgassing following mass change of materials or system contamination. Along with the science data, high-resolution cameras scanned and captured photographs of the samples about once a month to detect changes as a function of time on-orbit. Figure 2f shows the pre-launch and post-flight characterized transmission spectra results of: (1) four Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\)-based Fabry-Perot (GST-FP) bandpass filters with different center wavelengths (CWLs, 3.46 \(\upmu\)m, 3.60 \(\upmu\)m, 4.26 \(\upmu\)m, and 4.7 \(\upmu\)m) across the Mid Wavelength Infrared (MWIR), and (2) broadband dielectric mirror and CaF\({}_{2}\) substrate references. The measurements indicate a slight reduction in performance of the bandpass filters at 3.46 \(\upmu\)m after 6 months of LEO exposure. The average peak transmission decreased 5-10% (highlighted in detail in Figure 2f, _right_), with minor broadening of the passband FWHM and a minor blue-shift of the CWL (both approximately several nanometers in scale). Space-environment induced degradation of multi-layer optical filters and optical coatings has been previously reported[24, 25, 26, 27, 28]. Using these studies, which include detailed analyses of failure mechanisms, the transmission decrease and passband broadening observed in PCM-filters may be associated with small reductions in reflectance between the multi-layer thin-film interfaces, due to (1) layer interdiffusion or mechanical stresses arising from extended thermal / vacuum cycling[24, 25, 27, 28], (2) contamination from water molecules and outgassing, which increases off-axis scattering[24, 25], or (3) reactive AO[28]. The small CWL wavelength shift in space-based filters may be linked to compaction of thin-films due to large thermal cycling[24]. The degradation associated with ionizing radiation and UV solar radiation--usually predominant at the shorter wavelengths (i.e. UV) and negligible at longer (i.e. MWIR) wavelengths[27]--is discounted as an explanation for the MWIR transmittance results. However, visual inspection of one of the bandpass filters (3.46 \(\upmu\)m CWL filter in Figure 2e) shows color variation across its surface, indicative of AO bombardment[28], but yields negligible transmittance variation in the MWIR. It can be concluded that the PCM-based filters have performance degradation in-keeping with similar 'hard coating' optical filters[24, 25, 26, 27, 28], that is, minimal changes in transmittance and passband FWHM, for LEO exposure. 
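As a brief aside, the quantities compared above (peak transmission, CWL, and passband FWHM) can be extracted from a measured transmission spectrum with a few lines of numpy, as sketched below on a synthetic Lorentzian passband near 3.46 um; the synthetic curve is only a stand-in for real pre-/post-flight spectrometer data.

```
import numpy as np

def passband_metrics(wavelength_um, transmission):
    """Peak transmission, center wavelength, and FWHM of a single passband
    (assumes exactly one passband within the supplied wavelength window)."""
    t_peak = float(transmission.max())
    cwl = float(wavelength_um[transmission.argmax()])
    above = np.flatnonzero(transmission >= t_peak / 2.0)
    fwhm = float(wavelength_um[above[-1]] - wavelength_um[above[0]])
    return t_peak, cwl, fwhm

# Synthetic stand-in spectrum: Lorentzian passband near 3.46 um.
wl = np.linspace(3.2, 3.7, 2001)
T = 0.72 / (1.0 + ((wl - 3.46) / 0.035) ** 2)

peak, cwl, fwhm = passband_metrics(wl, T)
print(f"peak T = {peak:.2f}, CWL = {cwl:.3f} um, FWHM = {fwhm * 1e3:.0f} nm")
```

Running the same extraction on pre- and post-flight spectra yields figures of the kind reported above, i.e., the few-percent transmission drop, minor FWHM broadening, and small CWL shift.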
Significant changes in performance due to PCM-integration has not been observed, and filter function--closely resembling pre-launch performance--remains across all filters. We further show'real-world' applicability by performing CO\({}_{2}\) gas sensing (Fig.2g) and multispectral thermal imaging (Fig.2h) with space exposed 3.46 \(\upmu\)m and 4.26 \(\upmu\)m CWL filters, respectively. For the thermal imaging demonstration, the GST-FP bandpass filter (3.46 \(\upmu\)m CWL) was annealed at 200 \({}^{\circ}\)C and 400 \({}^{\circ}\)C inside a vacuum chamber to induce GST crystallization and continuously tunable (i.e., a-GST with 3.46 \(\upmu\)m, p\({}_{1}\)-GST with 3.72 \(\upmu\)m, p\({}_{\text{final}}\)-GST with 4.38 \(\upmu\)m CWL) operation was realized. In addition, the filter, designed to have its initial CWL (amorphous GST sate) matched to a molecular (vibrational) absorption mode of CO\({}_{2}\), successfully demonstrates the gas sensing image of externally added CO\({}_{2}\) gas. Two separate filters (3.6 \(\upmu\)m and 4.26 \(\upmu\)m) have been used for demonstration purpose here, but clearly a single, switchable, GST-FP tunable bandpass filter of this originally design has the CO\({}_{2}\) gas detection ability even after six months in space. **Fig.2] Materials International Space Station Experiment-14 (MISSE-14) PCM and PCM-based Tunable MWIR Filters Experiment.** (a) Overview of MISSE-14 mission operation cycle with on-orbit images (Credit: NASA). MISSE-14 environment flight data, highlighting (b) ionizing radiation, (c) ultraviolet radiation (UV), (d) temperature, and (e) on-orbit photographic images at the 3.46 \(\upmu\)m center wavelength (CWL) filter from June to December 2021 on Zenith. (f) Characterized transmission spectra of Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\) (GST) Fabry-Perot (GST-FP) bandpass filters with different CWLs (3.46 \(\upmu\)m, 3.60 \(\upmu\)m, 4.26 \(\upmu\)m, and 4.7 \(\upmu\)m) before (pre-flight) and after (post-flight) the LEO exposure. A MWIR broadband dielectric mirror and CaF\({}_{2}\) substrate (optical window) is also included for reference. (g) LEO exposed GST-FP bandpass filters imaging: CO\({}_{2}\) gas sensing with GST-FP filters and (h) MWIR imaging results at a fixed 363 K blackbody spectral irradiance, as a function of varying GST-FP bandpass filter states from thermal annealing, with varying passband CWLs. **The emerging applications of PCM-based photonics for space science missions** The emerging PCM-enabled spaceborne applications is summarized in Figure 3a,b, whereby notable advancements in terrestrial applications can be exploited for the space sector. Broadly speaking, PCM-devices enable low-SWaP and high performance for a number of capabilities with relevance to space missions (listed alongside the capabilities in parenthesis), such as: (1) photonic integrated circuits (high speed communications and sensing), (2) LIDAR and imaging spectroscopy (spatial light modulators, beam steerers, tunable filters), (3) deep-space imaging (autofocus/real-time phase-corrective lenses, planar adaptive optics), and (4) satellite temperature management/thermal homeostasis (tunable/dynamic thermal emission control)\({}^{2}\). Mission requirements such as operating waveband and modulation speed restricts PCM selection. Figure 3c,d collates the wavelength dependent change in refractive index and extinction coefficient for different PCMs [9, 29, 30], and Figure 3e shows the modulation (switching) speed for a reduced selection [2, 5]. 
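The waveband-dependent trade-off between index contrast and optical loss collated in Fig. 3c,d is often condensed into a simple figure of merit; one common heuristic is the index contrast per unit change in extinction, \(\Delta n/\Delta k\) (larger is better for low-loss, phase-only modulation). The snippet below ranks a few candidate PCMs at a single design wavelength using placeholder optical constants; real dispersion data such as that underlying Fig. 3 should be substituted for any actual design.

```
# Placeholder optical constants (n_amorphous, k_amorphous, n_crystalline, k_crystalline)
# at a hypothetical near-infrared design wavelength; illustrative values only.
candidates = {
    "GST":   (3.9, 0.05, 6.2, 1.10),
    "GSST":  (3.3, 0.00, 5.0, 0.30),
    "Sb2S3": (2.7, 0.00, 3.3, 0.05),
}

def figure_of_merit(n_a, k_a, n_c, k_c):
    """Index contrast per unit extinction change (one common heuristic)."""
    return abs(n_c - n_a) / max(abs(k_c - k_a), 1e-9)

ranked = sorted(candidates.items(), key=lambda kv: -figure_of_merit(*kv[1]))
for name, (n_a, k_a, n_c, k_c) in ranked:
    print(f"{name:6s}  dn = {n_c - n_a:4.2f}  dk = {k_c - k_a:4.2f}  "
          f"FOM = {figure_of_merit(n_a, k_a, n_c, k_c):6.1f}")
```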
Operation at shorter wavelengths (VIS)--necessary for conventional optical imaging--mean most PCMs are unattractive due to their high extinction (loss), apart from niche applications such as color coatings or e-ink. In near-infrared (NIR) applications--such as astronaut health monitoring and telecommunications--SbS and GSST alloys become more attractive. Further into the IR lies regions for atmospheric gas monitoring and vehicle thermal imaging, where GST becomes more suitable. Performance parameters, including switching speed and contrast, endurance (lifetime), and power consumption can vary depending on the specific use cases [2, 5] and their 'importance' further dictates the suitability of specific PCMs (Fig.3f). The first viable technology for space applications may be solid-state tunable filters. The PCM-based tunable optical filter is an 'all encompassing' acronym--technically it includes any filter design, which incorporates a PCM as the tunable constituent to effectively tune the transmission passband. Such filters have recently been shown for a variety of device architectures [4, 31, 32], operating across the NIR to MWIR wavebands. Thermography and imaging spectroscopy (IS) are critical measurement techniques for a variety of NASA missions including vehicle ascent and vehicle Entry, Descent, and Landing (EDL) projects. The PCM-based tunable filter offers significant advantages over the current IS state-of-the-art via the ability to remove the traditional bulky motorized filter wheel used in current IS systems and replace it with a single, low-power-consumption filter, with no moving parts. Apart from the orders-of-magnitude reduction in total SWaP, it also offers significant advantages in terms of data collection, as PCM tuning speeds allow for much higher temporal resolution than rotating filter wheel mounts. Additionally, the PCM-based tunable filter technology can support human exploration and operational missions in space through astronaut health monitoring, including detection of specific blood-based or tissue-based molecular markers for in-situ discovery and monitoring of abnormal conditions (diseases), or ex-situ analysis[33]. Beyond filters, PCM-based reconfigurable optical wavefront lenses are attractive for exoplanet imaging. Exoplanet imaging in space requires real-time wavefront corrections to mitigate the effects of thermal gradients, optical imperfections, and diffraction issues[34]. Current methods of performing these corrections involve deformable mirrors, requiring many actuators to provide the necessary authority to the mirror. Recent demonstrations suggest that PCM optics could be used to simplify the correction system and introduce a transmissive correction element versus a reflective one[35, 36]. The PCM-based optical wavefront lens would be beneficial for the application since the wavefront correction system has to exhibit the control authority to compensate for very small aberrations at high speed[37]. The fast-tuning capability of PCM-based optics is by far the most important feature since exoplanet imaging requires control of high frequency spatial and temporal aberrations. Free-space optical (FSO) communication is another application area for PCM-based photonics, where high fidelity laser beam steering is critical. 
Recent demonstrations in reconfigurable PCM-based optics[38, 39, 40] suggest that such devices could be utilized at both the transmission and receiving ends of a FSO system to remove the effects of vibration and thermal gradients as replacements for deformable mirrors. This would permit higher bandwidth optical communications through space. Additionally, PCM beam steerers with broad angular ranges can be used to maintain optical links across satellite constellations without the need to maneuver satellites into a direct line-of-sight configuration, reducing fuel usage and coordination complexity across the constellation. Moreover, the small volume of the PCM-based photonics is important for operation in CubeSat platforms in space. **Application 1 - Tunable bandpass filter for remote sensing: Spaceborne LIDAR, IR imaging** Because of their rapid tuning speeds and broadband transparency in the Short Wavelength Infrared (SWIR), to Long Wavelength Infrared (LWIR), PCM-based optics are a strong contender to make an impact on space-based LIDAR and IR imaging telescopes. The reduced SWaP of PCM-based metasurfaces compared to traditional optical components like filter wheels and spectral splitters allows for a significantly reduced SWaP that is amenable to small satellite form factors. As an example, consider the Stratospheric Aerosol and Gas Experiment (SAGE) missions, a series of remote sensing instruments that measure key atmospheric constituents, with specific aerosols and molecules of interest[41]. Unlike previous SAGE instruments, which were mounted on free-flying spacecraft of different sizes, SAGE IV is 6U CubeSat platform concept for observing key atmospheric constituents at the same quality as previous SAGE instruments that furnishes both benefits and challenges. The CubeSat platform (Fig 4a) allows high vertical resolution measurements with semi-global coverage of key trace gas constituents in the stratosphere and upper troposphere. However, the 6U form factor requires a significant size, weight, power, and cost (SWaP) reduction, on the order of \(\sim\)1/10\({}^{\text{th}}\). SAGE uses a filter wheel (Fig 4c) to switch between different chemical absorption channels such as H\({}_{2}\)O, N\({}_{2}\)O, CO\({}_{2}\) and CH\({}_{4}\)[41]. This requires tuning to \(\sim\)6 discrete wavelengths from 386 nm to 1020 nm. PCM-based filters may facilitate a SAGE IV scenario with significant SWaP reduction in 6U CubeSat architectures by empowering a dynamically tunable, all-solid-state solution without moving parts [32, 33, 4] (Fig.4f). For the SAGE IV scenario, arguably one of the strictest science requirements to which they can be applied - mainly, (1) materials science challenges of identifying a PCM that can operate efficiently across visible and near infrared wavelengths and (2) good cyclability for electronic PCMs levels--they must meet or exceed these specifications. Careful design of the PCM metasurface filter--for example, via deep neural networks [31]--along with proper choice of the PCM material can likely satisfy the requirements for tuning range and transmission passband width. The switching requirement seems likely to be trivial for PCM devices, as they are currently used in PRAM electronics. However, switching stability and failure analysis for photonic PCMs has yet to be fully characterized, and is a key research element required to transition PCMs from the lab and into scientific instruments. 
SAGE switches between filters at a rate that translates to \(\sim\)9.5x10\({}^{6}\) switching cycles per year; the science observation events are a maximum length of 6 minutes, and the filter (specification of high-speed motorized filter wheels, \(\sim\)1 ms per filter) will be switching five times per second during the event. LEOs are approximately 90 minutes long, and one science observation event will take place per orbit. Approximately 10\({}^{7}\) cycles are performed per year on orbit based on this estimation. There is potential interest in using the SAGE IV chassis for future missions, optimizing its IR (1\(\sim\)5 \(\upmu\)m) measurement capability. This requires tuning to \(\sim\)8 discrete wavelengths from 1.4 - 4.1 \(\upmu\)m for enhanced science measurements (i.e., \(\times\)10 better H\({}_{2}\)O measures) and for extended species detection (i.e., CH\({}_{4}\), CO\({}_{2}\), N\({}_{2}\)O, and CO) [42]. Moreover, the reduced SWaP benefits from using PCM-based filters as a replacement for filter wheels reduce volume requirements, thereby allowing the addition of a spectrometer. The SAGE IV IR is applicable for Mars (CO\({}_{2}\), N\({}_{2}\), H\({}_{2}\)O, NO) and / or Venus (CO\({}_{2}\), N\({}_{2}\), H\({}_{2}\)O, CO) atmospheric monitoring. **Application 2 - Reconfigurable planar optics for wavefront correction and beam steering: Spaceborne LIDAR, Free-Space Optical Communications** A second area of interest is within applications that require adaptive optics for wavefront correction or beam steering. The use of MEMS-based micromirror arrays and / or fine-steering mirrors (FSMs) is common in various imaging LIDAR and FSO communications systems (Fig.4b) in order to capture high-spatial-resolution data within a given field of regard, correct for wavefront aberrations caused by scatter/turbulence, or maintain optical links--both between satellites and ground links (Fig.4d)--without having to maneuver satellite constellations to maintain direct line-of-sight. Recent work in the development of FSO communications has concentrated on adapting this technology for use in SmallSat and CubeSat platforms since FSO communication systems can potentially increase signal to noise ratios significantly within the communication channel [43]. However, the actuator response times and pointing requirements for these applications can be strict. PCM-based tunable metasurfaces such as beam steerers and tunable metalenses have recently been demonstrated as proof-of-concept devices, and have significant potential to address both of these applications [35, 44]. Due to their large refractive index contrast, PCM-based beam steerers and highly pixelated "meta-correctors" can achieve a broad range of phase coverage, translating to large angular scanning and phase-correction ranges (Fig.4e). Although these devices have only recently been demonstrated in the lab, targeted research into high-resolution control of intermediate PCM phases and highly pixelated metasurfaces could soon play a transformative role in space-based adaptive optics such as the aforementioned applications. **Fig.4] Spaceborne application examples: PCM-integrated active devices for technology subsystems in CubeSats.** (a) A constellation of CubeSats providing time-resolved spectroscopic imaging. Image credit: MIT/LL, NASA[45]. (b) CubeSat Laser Infrared CrosslinK (CLICK) payloads for space-to-ground and crosslink communications. 
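The switching-cycle estimate above follows from simple bookkeeping, reproduced here as a sanity check (5 switches per second during observation events of at most 6 minutes, one event per roughly 90-minute orbit):

```
switches_per_second = 5                 # filter switches during a science event
event_minutes = 6                       # maximum science-event duration
orbit_minutes = 90                      # approximate LEO orbital period
orbits_per_year = 365 * 24 * 60 / orbit_minutes

switches_per_event = switches_per_second * event_minutes * 60
cycles_per_year = switches_per_event * orbits_per_year
print(f"{switches_per_event} switches per event, "
      f"~{cycles_per_year:.2e} switching cycles per year")
# -> 1800 switches per event and roughly 1.1e7 cycles per year, i.e. the
#    "approximately 10^7" figure above; the slightly lower ~9.5e6 estimate
#    presumably reflects events shorter than the 6-minute maximum.
```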
The fine steering mirror (FSM) steers the beams (976 nm Beacon laser / 1550 nm transmitter laser) to dichroic mirrors in the respective optical systems[46]. (c) SAGE-IV (Stratospheric Aerosol and Gas Experiment-IV) pathfinder multispectral LIDAR instrument, with filter wheel consisting of eight bandpass filters and one opaque element. Each designed to observe a target species such as aerosols and ozone[41]. (d) NASA's LCOT (Low-Cost Optical Terminal) FSOS (Free-Space Optical Subsystem), a ground terminal for space optical communications[47]. This terminal consists of two sets of filter wheels and a FSM. (e) Reconfigurable planar optics for free-space optical communications: PCM-based beam steerers and highly pixelated "meta-correctors" can achieve a broad range of optical phase coverage, translating to large angular scanning and phase-correction ranges[38] (e, _left_). Packaged device consisting of the reconfigurable PCM-metasurface (_middle, right_) which deflects an incoming IR beam with output angle dependent on index of the PCM. (f) Tunable bandpass filter for spaceborne LIDAR: (f, _left_) The SAGE filter wheel may be replaced with a single element PCM tunable filter used to switch between different CWLs (i.e., different chemical absorption channels). Through an optical or electrical stimuli on the filter active area, PCM crystallinity is modified, and the resultant transmission response (\(\lambda_{\text{N}}\)) is spectrally shifted as demonstrated (_right_) by CO\({}_{2}\) gas sensing with GST-Fabry Perot tunable bandpass filters[48]. **Outlook: challenges and opportunities toward space applications** The definition and qualification of space materials is challenging. Understanding the performance of materials under extreme conditions sets bounds to where and how they can be used and informs the range of system engineering that may be needed to enable their use in applications. Similarly, space missions come with extremely challenging and wholly inflexible performance metrics to ensure mission success, and understanding the capabilities of devices comprised of such materials with respect to inflexible mission metrics is equally important. The most mature PCM demonstrations have been carried out as proof-of-concept or low-fidelity breadboard prototypes in a controlled lab environment, placing PCM devices at TRL 3-4 in the highest cases. In order to cross the so-called "Valley of Death" to TRL 9 and truly make an impact on photonic applications in space, PCMs must address a number of hurdles, spanning fundamental science to (sub-)system implementation. * **TRL 4-5:** Photonic components deployed in space require additional performance evaluations for their ability to withstand the harsh space environment as a complete system. PCMs are a materials platform for a sub-system / technology. This can enable a particular part and / or small 'active' part of a photonic application like photonic integrated circuits for beam steering applications in CubeSat platforms. Pragmatically, this means that it is predicted that PCMs will find use in space qualified materials, optics, housing, and platforms that can withstand both the launch and space environments. * **TRL 5-6:** The cycle lifetime or endurance of PCMs in photonic devices remains to be validated or improved[49]. In electronic memory, PCM endurance and failure mechanisms have been extensively characterized and have indicated an endurance value of over 2\(\times\)10\({}^{12}\) cycles. 
However, these studies have been solely focused on electronic PCMs (and related properties). There is still significant effort required to fully validate optical PCM longevity (e.g., GeSbTe and GeSbSeTe), the applications for which can require similar endurance values as electronic applications depending on the duration of the space mission. This limitation is primarily a material issue, as the fundamental physics related to optical PCM switching longevity (and how they are affected by the space environment) are not yet fully understood. Reproducible PCM optical device properties are important throughout the mission period and need to be evaluated as part of the endurance. Understanding of failure modes specific to photonic devices, the fundamental thermal transport properties of PCMs[50] (to control the repeatability of the PCM switching), and implementation of the presently-used design rules to boost device lifetime[51] will be imperative to facilitating adoption of PCMs in practical applications, and is arguably the greatest challenge currently faced by the technology. Despite these material issues, system-level considerations like tight control of the electronic pulse scheme and thermal design of the PCM structure as a whole are also paramount. Minimizing the switching voltage and current of PCMs is another important direction for space applications in lean power budgets. * incurred when the PCM is heated to switch states - does not interfere with the desired signal. This is a not only a systems consideration, but also plays into the device architecture to ensure rapid cooling/quenching of the PCM, ideally on timescales faster than the desired data acquisition rate of the full system (e.g., the camera frame rate). * **TRL 7-9:** To successfully move from prototype to true subsystem, proper electrical integration of the PCM device must be achieved. While breadboard demonstrations may use hand-soldered contacts and bulk leads and power supplies, full integration requires contacting the device in a package similar to traditional microelectronics, complete with a power supply consistent with the volume and power constraints of an intended application. In layman's terms, PCM devices need to go from bulky lab demonstrations to looking like something you might find inside of your cell phone or laptop. For certain applications, such as aberration correction and beam steering, this will also require significant pixelation of the device on the order of tens-of-thousands of pixels or more. While the present state of that particular PCM implementation sits at a much lower TRL (\(\sim\) TRL 2), it must be considered nonetheless, and will likely require the involvement of proper foundry partners to achieve - as well as a better understanding of PCM thermal transport to achieve thermal isolation and minimize pixel crosstalk. * Finally, although not required to reach higher TRLs, another key materials science challenge is identifying a PCM that can operate efficiently across visible and near infrared wavelengths. This is important for a number of spectroscopic applications, particularly aerosol remote sensing. Optical loss (non-negligible extinction) is the bane of optical PCMs. Hence, another material challenge is development of a new class of PCMs where the phase transition only triggers refractive index (real-part) modulation with a minimal loss penalty related through the Kramers-Kronig relations[9]. This would be a game changer for optical engineers that would open up numerous applications. 
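Tying the SAGE-style switching-rate estimate to the endurance figures quoted in the TRL 5-6 item above, a back-of-the-envelope budget (with an assumed, purely illustrative 5-year mission) suggests the required cycle count is far below what electronic PCM memories have demonstrated; whether optical PCM devices reach comparable endurance is exactly the open question raised above.

```
cycles_per_year = 1.0e7                    # order of magnitude from the SAGE example
mission_years = 5                          # assumed mission duration (illustrative)
electronic_endurance_cycles = 2.0e12       # demonstrated for electronic PCM (quoted above)

required_cycles = cycles_per_year * mission_years
margin = electronic_endurance_cycles / required_cycles
print(f"required ~{required_cycles:.0e} cycles over {mission_years} years; "
      f"~{margin:.0e}x below the demonstrated electronic endurance")
```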
The space sector has witnessed tremendous growth within the past decade--not only from government agencies but also entrants from the private sector. This industry, along with societal interest in space exploration, is expected to grow even faster during this decade. The introduction of PCM technology and associated optical devices will help to accelerate the adoption of new architectures for reduced SWaP-C platforms in space. Even though there are currently no clear answers for _"What is a space material?"_ and "_Who has the responsibility for a space evaluation?"_, we foresee that the arguments put forth in this comment, along with the MISSE-14 sample evaluation and data sharing within the science community, will significantly expedite PCM integration into spaceborne platforms and open emerging applications. Webpage ([https://spaceborne-pcms.github.io](https://spaceborne-pcms.github.io)) is generated as Supporting Information for the npj Microgravity readers. Scan the QR code for "The Maps", the locations of space material evaluation facilities (academic research laboratories, space agencies, and private companies) including simulated space environments across the world. ## Acknowledgments The authors are grateful to Mr. Stephen Borg at NASA Langley Research Center / LaRC for his helpful technical discussions. ## Author contributions All authors contributed to writing the paper. ## Competing financial interests The authors declare no competing financial interests.
2304.04799
A Practical Box Spline Compendium
Box splines provide smooth spline spaces as shifts of a single generating function on a lattice and so generalize tensor-product splines. Their elegant theory is laid out in classical papers and a summarizing book. This compendium aims to succinctly but exhaustively survey symmetric low-degree box splines with special focus on two and three variables. Tables contrast the lattices, supports, analytic and reconstruction properties, and list available implementations and code.
Minho Kim, Jörg Peters
2023-04-10T18:09:55Z
http://arxiv.org/abs/2304.04799v1
# A Practical Box Spline Compendium ###### Abstract Box splines provide smooth spline spaces as shifts of a single generating function on a lattice and so generalize tensor-product splines. Their elegant theory is laid out in classical papers and a summarizing book. This compendium aims to succinctly but exhaustively survey symmetric low-degree box splines with special focus on two and three variables. Tables contrast the lattices, supports, analytic and reconstruction properties, and list available implementations and code. ## 1 Introduction As a generalization of uniform polynomial tensor-product splines, and with the beautiful interpretation as a projection of a higher-dimentional box partition [57, 14, 54, 38], Figure 1: Box splines as a projection of \(n\)-dimensional boxes [38]. see Fig. 1, box splines have repeatedly commanded the attention of researchers seeking an elegant foundation for differentiable function spaces on low-dimensional lattices. Notably, box splines provide the regular prototypes for generalized uniform polynomial subdivision algorithms [5, 18, 50] and have been advocated for reconstructing signals on non-Cartesian lattices, see Section 9. This compendium summarizes the latest findings for box spline spaces with emphasis on \(d=2\) and \(d=3\) variables and _symmetric_ box splines, i.e. box splines that have at least the symmetry of their domain lattice. The aim is to provide a succinct overview, via tables and illustrations, of the properties, literature and computational tools and code, and to characterize each box spline's efficiency in terms of smoothness, polynomial reproduction, support size and polynomial degree. ## 2 Lattices and box splines We refer to Conway and Sloane [9] for a general treatment of lattices and their symmetry groups, beyond the needs of the compendium. Lattices and Direction SetsGiven the integer grid \(\mathbb{Z}^{d}\), any non-singular \(d\times d\)_generator matrix_\(\mathbf{G}\) defines a lattice \(\mathbb{Z}_{\mathbf{G}}:=\mathbf{G}\mathbb{Z}^{d}\). The _symmetry group_\(\mathcal{SG}\left(\mathbb{Z}_{\mathbf{G}}\right)\) of \(\mathbb{Z}_{\mathbf{G}}\), represented as an orthogonal matrix group, consists of all orthogonal transformations that leave \(\mathbb{Z}_{\mathbf{G}}\) invariant: \[\mathcal{SG}\left(\mathbb{Z}_{\mathbf{G}}\right):=\left\{\mathbf{L}\in \mathbb{R}^{d\times d}:\mathbf{L}^{T}\mathbf{L}=\mathbf{I}_{d}\text{ and }\forall\boldsymbol{j}\in\mathbb{Z}_{\mathbf{G}}\ \mathbf{L} \boldsymbol{j}\in\mathbb{Z}_{\mathbf{G}}\right\}.\] where \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. In the plane (2D) and 3-space (3D), five lattices are known for their high symmetries. They are listed in Table 1. To enumerate box splines, we collect the lattice direction vectors \(\boldsymbol{j}\in\mathbb{Z}_{\mathbf{G}}\) into _direction sets_\(\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\) consisting of one vector and its images \begin{table} \begin{tabular}{c c c c c} \hline \hline dim. 
& name & symbol & generator matrix & \#\(\mathcal{SG}\left(\ast\right)\) \\ \hline \multirow{2}{*}{2} & Cartesian & \(\mathbb{Z}^{2}\) & \(\mathbf{I}_{2}\) & 8 \\ \cline{2-5} & hexagonal & \(\mathbb{Z}_{\mathrm{h}}\) & \(\mathbf{G}_{\mathrm{h}}:=\frac{1}{2}\big{[}\begin{smallmatrix}1\\ -\sqrt{3}\end{smallmatrix}\frac{1}{\sqrt{3}}\big{]}\) & 12 \\ \hline \multirow{5}{*}{3} & Cartesian & \(\mathbb{Z}^{3}\) & \(\mathbf{I}_{3}\) & 48 \\ \cline{2-5} & FCC (face-centered cubic) & \(\mathbb{Z}_{\mathrm{fcc}}\) & \(\mathbf{G}_{\mathrm{fcc}}:=\left[\begin{smallmatrix}0&1&1\\ 1&0&1\\ 1&1&0\end{smallmatrix}\right]\) & 48 \\ \cline{2-5} & BCC (body-centered cubic) & \(\mathbb{Z}_{\mathrm{bcc}}\) & \(\mathbf{G}_{\mathrm{bcc}}:=\left[\begin{smallmatrix}-1&1&1\\ 1&-1&1\\ 1&1&-1\end{smallmatrix}\right]\) & 48 \\ \hline \hline \end{tabular} * #\(\mathcal{S}\) is the cardinality of the set \(\mathcal{S}\), \(\mathbf{I}_{d}\) the \(d\times d\) identity matrix. \end{table} Table 1: Five domain lattices for \(d=2,3\). under the symmetry group of the lattice. The index \(k\) is assigned by non-decreasing vector length, see Fig. 2 and Fig. 3, which is unique for \(k\leq 3\), the cases of interest. (For \(k>3\), multiple direction sets can lie in the same spherical shell [9], e.g. \((5,0)\) and \((4,3)\) in \(\mathbb{Z}^{2}\).) Since \(-\boldsymbol{j}=\mathbf{G}(-\boldsymbol{i})\) and \(-\boldsymbol{i}\in\mathbb{Z}^{d}\) if \(\boldsymbol{i}\in\mathbb{Z}^{d}\), for each \(\boldsymbol{j}=\mathbf{G}\boldsymbol{i}\in\mathbb{Z}_{\mathbf{G}}\) also \(-\boldsymbol{j}\in\mathbb{Z}_{\mathbf{G}}\), we list only one of \(\boldsymbol{j}\) and \(-\boldsymbol{j}\) in \(\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\). Box SplinesGiven a domain lattice \(\mathbb{Z}_{\mathbf{G}}\), direction vectors \(\boldsymbol{\xi}\in\mathbb{Z}_{\mathbf{G}}\) can be collected into a \(d\times m\)_direction matrix_\(\boldsymbol{\Xi}\) to define the centered box spline \(M_{\boldsymbol{\Xi}}\) recursively, starting with the characteristic function \(\chi_{\boldsymbol{\Xi}\boldsymbol{\Theta}^{d}}\) on the (half-open) parallelepiped \(\boldsymbol{\Xi}\,\boldsymbol{\Theta}^{d}\), \(\boldsymbol{\Theta}:=\left[-\frac{1}{2},\frac{1}{2}\right)\) Figure 3: Stratifying 3D direction vectors corresponding to direction sets \(\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\), \(k=1\), \(k=2\), \(k=3\). Table 2 lists coordinates. see [6, 17] and Fig. 4: \[M_{\boldsymbol{\Xi}}:=\begin{cases}\int_{-\frac{1}{2}}^{\frac{1}{2}}M_{ \boldsymbol{\Xi}\setminus\boldsymbol{\xi}}\left(\cdot-t\boldsymbol{\xi} \right)\mathrm{d}t&\text{if }d<m,\ \boldsymbol{\xi}\in\boldsymbol{\Xi},\\ \frac{|\det\mathbf{G}|}{|\det\boldsymbol{\Xi}|}\chi_{\boldsymbol{\Xi}\, \boldsymbol{\Theta}^{d}}&\text{if }d=m\text{ and }\det\boldsymbol{\Xi}\neq 0.\end{cases} \tag{1}\] The centered box spline is invariant under exchange of columns or multiplication of a column by -1: \(M_{\boldsymbol{\Xi}_{1}}=M_{\boldsymbol{\Xi}_{2}}\) if and only if there exists a'signed permutation' matrix \(\mathbf{P}\) that can permute and/or change sign of a coordinate, such that \(\boldsymbol{\Xi}_{1}=\boldsymbol{\Xi}_{2}\mathbf{P}\). Moreover, since for any linear map \(\mathbf{L}\), see [17, page 11], \[M_{\boldsymbol{\Xi}}=|\det\mathbf{L}|M_{\boldsymbol{\mathrm{L}}\boldsymbol{ \Xi}}(\mathbf{L}\cdot), \tag{2}\] many properties for centered box splines on the Cartesian lattice \(\mathbb{Z}^{d}\) transfer directly to \(\mathbb{Z}_{\mathbf{G}}\) by a linear change of variables \(\mathbf{G}\). 
Let \(\boldsymbol{\Xi}\in\mathbf{G}\mathbb{Z}^{d\times m}\) with \(\operatorname{rank}\boldsymbol{\Xi}=d\), \(M_{\boldsymbol{\Xi}}\) the corresponding box spline, and \(S_{\boldsymbol{\Xi}}:=\operatorname{span}(M_{\boldsymbol{\Xi}}(\cdot- \boldsymbol{j}))\) the space of its shifts over the lattice. Then \(M_{\boldsymbol{\Xi}}\) and \(S_{\boldsymbol{\Xi}}\) have the following properties: 1. \(M_{\boldsymbol{\Xi}}\) is non-negative and its shifts over \(\mathbb{Z}_{\mathbf{G}}\) sum to 1: due to the factor \(|\det\mathbf{G}|\) in (1) \[\sum_{\boldsymbol{j}\in\mathbb{Z}_{\mathbf{G}}}M_{\boldsymbol{\Xi}}(\cdot- \boldsymbol{j})=1.\] 2. The support of \(M_{\boldsymbol{\Xi}}\) is \(\boldsymbol{\Xi}\,\boldsymbol{\Theta}^{d}\), i.e. the set sum of the vectors in \(\boldsymbol{\Xi}\). 3. \(M_{\boldsymbol{\Xi}}\) is piecewise polynomial of total degree \(m-d\). 4. \(M_{\boldsymbol{\Xi}}\in C^{r-2}\). i.e. \(r-2\) times continuously differentiable, where \(r\) is the minimal number of columns that need to be removed from \(\boldsymbol{\Xi}\) to obtain a matrix whose columns do not span \(\mathbb{R}^{d}\). Figure 4: Box splines via convolution in the directions (columns) of \(\boldsymbol{\Xi}\) on \(\mathbb{Z}^{2}\). 5. \(S_{\Xi}\) reproduces all polynomials of degree \(r-1\). 6. The \(L^{p}\) approximation order of \(S_{\Xi}\) is \(r\)[17, page 61], i.e. for all sufficiently smooth \(f\) there exists a sequence \(c:\mathbb{Z}_{\mathbf{G}}\mapsto\mathbb{R}\) such that : \[\left\|f-\sum_{\boldsymbol{j}\in\mathbb{Z}_{\mathbf{G}}}c(\boldsymbol{j})M_{ \Xi}((\cdot-\boldsymbol{j})/h)\right\|_{p}=O(h^{r}),\quad h<1.\] (3) 7. \(S_{\Xi}\) forms a basis (the shifts are linearly independent) if and only if all square nonsingular submatrices of \(\Xi\) are unimodular, i.e., \(|\det\mathbf{Z}|=1\) for all \(\mathbf{Z}\subset\Xi\) where \(\mathbf{Z}\in\mathbb{R}^{d\times d}\)[17, page 41]. 8. With \(\operatorname{vol}\left(\Xi\,\raisebox{-0.5pt}{\hbox{\rule{0.4pt}{6.5pt}\rule {6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}^{\!d}\right)\) denoting the volume of the support of \(M_{\Xi}\), the number of coefficients on \(\mathbb{Z}_{\mathbf{G}}\) required to evaluate a spline value is \(\operatorname{vol}\left(\Xi\,\raisebox{-0.5pt}{\hbox{\rule{0.4pt}{6.5pt}\rule {6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}^{\!d}\right)/|\det\mathbf{G}|\), [17, page 36]. The _symmetry group_ of \(M_{\Xi}\) is defined analogous to the symmetry group of a lattice: \[\mathcal{SG}\left(M_{\Xi}\right):=\left\{\mathbf{L}\in\mathbb{R}^{d\times d}: \mathbf{L}^{T}\mathbf{L}=\mathbf{I}_{d}\text{ and }M_{\Xi}=M_{\Xi}(\mathbf{L}\cdot)\right\}.\] A centered box spline \(M_{\Xi}\) on the domain lattice \(\mathbb{Z}_{\mathbf{G}}\) is _symmetric_ if it has the same or more symmetries than \(\mathbb{Z}_{\mathbf{G}}\): \(\mathcal{SG}\left(\mathbb{Z}_{\mathbf{G}}\right)\subset\mathcal{SG}\left(M_{ \Xi}\right)\). (The centered box spline defined by \(\Xi:=\left[\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}\right]\) is not symmetric: its symmetry group is \(\{\mathbf{I}_{2},-\mathbf{I}_{2}\}\), but the symmetry group of \(\mathbb{Z}^{2}\) has the cardinality \(8\) of the signed permutation group.) If \(\boldsymbol{\xi}\in\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\) is a column of \(\Xi\) then all directions of \(\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\) must be columns in \(\Xi\) to make \(M_{\Xi}\) symmetric. This can be seen as follows. 
For any \(\boldsymbol{\xi}\in\mathbb{Z}_{\mathbf{G}}\), let \(\Xi:=\left\{\mathbf{L}\boldsymbol{\xi}:\mathbf{L}\in\mathcal{SG}\left(\mathbb{ Z}_{\mathbf{G}}\right)\right\}.\) Then for any \(\mathbf{L}\in\mathcal{SG}\left(\mathbb{Z}_{\mathbf{G}}\right)\), the set of directions \(\Xi\) equals the set \(\mathbf{L}\Xi\) and \(|\det\mathbf{L}|=1\) so that by (2) \(M_{\Xi}=|\det\mathbf{L}|M_{\mathbf{L}\Xi}(\mathbf{L}\cdot)=M_{\Xi}(\mathbf{L}\cdot).\) That is, \(M_{\Xi}\) is symmetric. It suffices to include either \(\boldsymbol{\xi}\) or \(-\boldsymbol{\xi}\) into \(\Xi\) since for any \(\boldsymbol{\xi}\in\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\) \[\int_{-1/2}^{1/2}f(\cdot-t\boldsymbol{\xi})dt=\int_{-1/2}^{1/2}f(\cdot-t(- \boldsymbol{\xi}))dt=\int_{0}^{1/2}f(\cdot-t\boldsymbol{\xi})dt+\int_{0}^{1/2} f(\cdot-t(-\boldsymbol{\xi}))dt.\] ## 3 Choice of direction vectors The algebraic and differential geometric properties of Section 2 imply that the efficiency of a box spline space is closely related to the choice of direction vectors in the construction of the box spline and favors the vectors to be * snapped to a grid: this guarantees that the approximation order can be maximal. (In the extreme case, the shifts of \(M_{[1/2]}\) on \(\mathbb{Z}\) do not sum to \(1\). The shifts of \(M_{[1,1/2]}\) on \(\mathbb{Z}\) form a partition of \(1\), but a spline in \(S_{[1,1/2]}\) has intervals where the spline is constant and cannot match linear functions.) * short: since longer vectors result in larger support and more vectors are required to achieve symmetry, increasing the degree. * uniformly distributed: for the same degree, uniformity increases the continuity and approximation order. (For example, see Table 3, the bi-linear B-spline \(M_{\mathrm{c20}}\) and the ZP element \(M_{\mathrm{c11}}\) have degree 2, but both the continuity and the approximation order of \(M_{\mathrm{c11}}\) is higher by one than those of \(M_{\mathrm{c20}}\).) * in \(\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},1\right)\): for the five lattices, direction sets with \(k>1\) yield \(\mathbf{\Xi}\) that are not unimodular, and so the box spline shifts are not linearly independent [17]. Uniform distribution on a lattice is in competition with shortness since equi-distribution of directions requires inclusion of farther lattice points. Table 2 lists the direction sets for the bivariate and trivariate domain lattices of Table 1 in terms of the matrices (see Fig. 2 and 3): \[d=2:\quad\mathbf{\Xi}_{\mathrm{cc2}}:=\mathbf{I}_{2},\qquad \mathbf{\Xi}_{\mathrm{qc}}:=\begin{bmatrix}1&-1\\ 1&1\end{bmatrix},\qquad\mathbf{\Xi}_{3}:=\begin{bmatrix}1&0&-1\\ 0&1&-1\end{bmatrix},\] \[d=3:\quad\mathbf{\Xi}_{\mathrm{cc3}}:=\mathbf{I}_{3},\ \mathbf{\Xi}_{\mathrm{ fcc}}:=\begin{bmatrix}1&-1&1&1&0&0\\ 1&1&0&0&1&-1\\ 0&0&1&-1&1&1\end{bmatrix},\quad\mathbf{\Xi}_{\mathrm{bcc}}:=\begin{bmatrix}-1& 1&1&-1\\ 1&-1&1&-1\\ 1&1&-1&-1\end{bmatrix},\] where the subscripts are to remind of Cartesian (cc2, cc3) quincunx (qc), 3 directions, FCC, and BCC directions, respectively. 
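Property 7 of Section 2 turns linear independence of the shifts into a purely combinatorial test on \(\boldsymbol{\Xi}\). A minimal Python check (restricted to \(\mathbb{Z}^{2}\), where the condition reads \(|\det\mathbf{Z}|=1\); the helper name is ours) applied to the matrices just introduced confirms the final bullet point above: \(\mathbf{\Xi}_{\mathrm{cc2}}\) and the three-direction matrix \(\mathbf{\Xi}_{3}\) pass, while the quincunx matrix \(\mathbf{\Xi}_{\mathrm{qc}}\), built from the full second direction set, fails because its determinant is 2.

```python
import numpy as np
from itertools import combinations

def shifts_form_basis(Xi, tol=1e-9):
    """Shifts of M_Xi over Z^d are linearly independent iff every nonsingular
    d x d submatrix Z of Xi is unimodular, |det Z| = 1 (Section 2, property 7)."""
    d, m = Xi.shape
    for cols in combinations(range(m), d):
        det = np.linalg.det(Xi[:, list(cols)])
        if abs(det) > tol and not np.isclose(abs(det), 1.0):
            return False
    return True

Xi_cc2 = np.array([[1., 0.], [0., 1.]])            # first Cartesian directions
Xi_qc  = np.array([[1., -1.], [1., 1.]])           # quincunx directions, det = 2
Xi_3   = np.array([[1., 0., -1.], [0., 1., -1.]])  # three-direction matrix

for name, Xi in [("cc2", Xi_cc2), ("qc", Xi_qc), ("3-dir", Xi_3)]:
    print(name, shifts_form_basis(Xi))
# cc2 -> True, qc -> False, 3-dir -> True
```

This is consistent with the remark in Section 4 that the three-direction box spline, which uses only one of the two second-shell directions, is linearly independent but not symmetric.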
## 4 Bivariate box splines Since the third direction set in Table 2 of \(\mathbb{Z}^{2}\) and \(\mathbb{Z}_{\mathrm{h}}\) already repeat the first, we restrict the list of bivariate box splines in Table 3 to \(\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\) for \(k<3\), as illustrated in \begin{table} \begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{lattice} & \multirow{2}{*}{\(k=1\)} & \multirow{2}{*}{\(k=2\)} & \multirow{2}{*}{\(\mathcal{DS}\left(\mathbb{Z}_{\mathbf{G}},k\right)\)} & \multirow{2}{*}{\(k=3\)} & \multirow{2}{*}{\(k=4\)} & \multirow{2}{*}{\(\mathcal{DS}\left(\mathbb{Z}_{\mathbf Fig. 5. We could skip \(k=3\) and consider the box spline defined by \(\cup_{k=1,2,4}\mathcal{DS}\left(\mathbb{Z}^{2},k\right)\) with \(2+2+0+4=8\) directions, but the corresponding box spline has a large support Figure 5: Directions (arrows) and supports (polygons with black edges) of select bivariate box splines with polynomial pieces delineated by knot lines (gray lines). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline lattice & \multicolumn{2}{c}{direction sets} & \multicolumn{2}{c}{degree} & differentiability & stencil & \multirow{2}{*}{reference} \\ & 1 & 2 & & \(r{-}2=\) & & size & \\ \hline & \(n\) & 0 & \(2n{-}2\) & \(n{-}2\) & \(n^{2}\) & [11] \\ & 1 & 1 & 2 & 1 & 7 & [59, 53, 49] \\ \(\mathbb{Z}^{2}\) & 2 & 1 & 4 & 2 & 14 & [7, 51, 42] \\ & 3 & 1 & 6 & 3 & 23 & \\ & 2 & 2 & 6 & 4 & 28 & \\ \hline \multirow{2}{*}{\(\mathbb{Z}_{\text{h}}\)} & \(n\) & 0 & \(3n{-}2\) & \(2n{-}2\) & \(3n^{2}\) & [25, 26, 43, 42] \\ & 1 & 1 & 4 & 3 & 24 & \\ \hline \hline \end{tabular} \end{table} Table 3: Bivariate symmetric box splines up to degree 6. \(M_{\text{c}n0}\) is the tensor-product B-spline, \(M_{\text{c}11}\) is the Zwart-Powell element, \(M_{\text{c}21}\) is the extended 6-direction ZP element, and \(M_{\text{h}10}\) the hat function. The continuity is \(C^{r-2}\) with \(r\) defined by 4. of Section 2. and degree \(8-2=6\), while the resulting \(C^{5}\) continuity is unlikely to match any generic application needs. Similarly, the box spline defined by \(\mathcal{DS}\left(\mathbb{Z}^{2},4\right)\) yields a box spline of degree \(2\) but of support size \(24\), whereas the ZP spline \(M_{c11}\) has the same smoothness for support size \(7\). Denoting by \(n_{k}\) the number of repetitions of the \(k\)th direction set, the box spline \[\text{on }\mathbb{Z}^{2}\text{ are named }M_{\text{c}n_{1}n_{2}}\text{ and those on }\mathbb{Z}_{\text{h}}\text{ are named }M_{\text{h}n_{1}n_{2}}.\] Table 3 leaves out direction sets of the form \((0,n)\) and \((1,n)\) for \(\mathbb{Z}^{2}\), since their properties do not improve on \((n,0)\) and \((n,1)\), respectively and result in a larger support. Analogously, \((0,n)\) is omitted for \(\mathbb{Z}_{\text{h}}\). We note that the options for \(C^{1}\) continuity are \(M_{c30}\) (9), \(M_{c11}\) (7), with the stencil sizes listed in parentheses. For \(C^{2}\) continuity the options Figure 6: Directions and supports of select trivariate box splines. are \(M_{c40}\) (16), \(M_{c21}\) (14), and \(M_{h20}\) (12). The only linearly independent symmetric box splines are \(M_{c{\rm n}0}\), i.e. the B-splines on \({\mathbb{Z}}^{2}\), and \(M_{{\rm h}n0}\) on \({\mathbb{Z}}_{\rm h}\). (Other linearly independent box splines, such as the three-direction box spline on \({\mathbb{Z}}^{2}\)[16], are not symmetric. ) The stencil size explains why several box splines have not been investigated in detail. 
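The degree, differentiability and stencil columns of Table 3 follow mechanically from properties 3, 4 and 8 of Section 2. The sketch below (helper name ours) recomputes them from a direction matrix, using the standard zonotope identity that the support volume \(\operatorname{vol}\left(\boldsymbol{\Xi}\,[-\tfrac{1}{2},\tfrac{1}{2})^{d}\right)\) equals the sum of \(|\det\mathbf{Z}|\) over all \(d\times d\) submatrices \(\mathbf{Z}\); the direction matrices for \(M_{\mathrm{c}11}\) and \(M_{\mathrm{c}21}\) are spelled out explicitly.

```python
import numpy as np
from itertools import combinations

def box_spline_numbers(Xi, det_G=1.0):
    """Degree (property 3), smoothness C^{r-2} (property 4) and stencil size
    (property 8) of M_Xi.  The support volume is the sum of |det Z| over all
    d x d submatrices Z of Xi (zonotope volume identity)."""
    d, m = Xi.shape
    degree = m - d
    # r = m - (largest number of columns that still fails to span R^d)
    largest_defective = 0
    for k in range(m - 1, 0, -1):
        if any(np.linalg.matrix_rank(Xi[:, list(c)]) < d
               for c in combinations(range(m), k)):
            largest_defective = k
            break
    r = m - largest_defective
    vol = sum(abs(np.linalg.det(Xi[:, list(c)])) for c in combinations(range(m), d))
    return degree, r - 2, vol / abs(det_G)

Xi_c11 = np.array([[1., 0., 1., -1.],            # Zwart-Powell element
                   [0., 1., 1.,  1.]])
Xi_c21 = np.array([[1., 0., 1., 0., 1., -1.],    # extended 6-direction ZP element
                   [0., 1., 0., 1., 1.,  1.]])

print("c11:", box_spline_numbers(Xi_c11))   # (2, 1, 7.0)  -> degree 2, C^1, stencil 7
print("c21:", box_spline_numbers(Xi_c21))   # (4, 2, 14.0) -> degree 4, C^2, stencil 14
```

Both outputs agree with the corresponding rows of Table 3; the same helper applies to the trivariate families of Table 4 once `det_G` is set to the determinant of the lattice generator.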
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{lattice} & \multicolumn{2}{c}{direction sets} & \multirow{2}{*}{degree} & \multicolumn{2}{c}{differentiability} & \multirow{2}{*}{stencil} & \multicolumn{1}{c}{note} \\ & 1 & 2 & 3 & & \(r{-}2=\) & & size & / reference \\ \hline \multirow{9}{*}{\({\mathbb{Z}}^{3}\)} & \(n\) & 0 & 0 & 3\(n{-}3\) & \(n{-}2\) & \(n^{3}\) & B-splines [15] \\ & 1 & 1 & 0 & 6 & 3 & 87 & [19] \\ & 2 & 1 & 0 & 9 & 4 & 172 & \\ & 1 & 0 & 1 & 4 & 2 & 53 & [47, 52, 20, 33] \\ & 1 & 0 & 2 & 8 & 4 & 249 & \\ & 2 & 0 & 1 & 7 & 4 & 120 & \\ & 0 & \(n\) & 0 & 6\(n{-}3\) & 3\(n{-}2\) & 32\(n^{3}\) & [34] \\ & 0 & 1 & 1 & 7 & 5 & 216 & \\ & 0 & 0 & \(n\) & 4\(n{-}3\) & 2\(n{-}2\) & 16\(n^{3}\) & \\ \hline \multirow{9}{*}{\({\mathbb{Z}}_{\rm fcc}\)} & \(n\) & 0 & 0 & 6\(n{-}3\) & 3\(n{-}2\) & 16\(n^{3}\) & [40, 29] \\ & 1 & 1 & 0 & 6 & 3 & 86 & [19]\({}^{\dagger}\) \\ \cline{1-1} & 1 & 2 & 0 & 9 & 4 & 228 & \\ \cline{1-1} & 0 & \(n\) & 0 & 3\(n{-}3\) & \(n{-}2\) & 4\(n^{3}\) & B-splines \\ \cline{1-1} & 0 & 0 & 1 & 9 & 7 & 784 & \\ \hline \multirow{9}{*}{\({\mathbb{Z}}_{\rm bcc}\)} & \(n\) & 0 & 0 & 4\(n{-}3\) & 2\(n{-}2\) & 4\(n^{3}\) & [21, 38] \\ & 2 & 1 & 0 & 8 & 4 & 106 & \\ \cline{1-1} & 1 & 1 & 0 & 4 & 2 & 30 & [30] \\ \cline{1-1} & 1 & 2 & 0 & 7 & 4 & 92 & \\ \cline{1-1} & 1 & 0 & 1 & 7 & 5 & 200 & \\ \cline{1-1} & 0 & \(n\) & 0 & 3\(n{-}3\) & \(n{-}2\) & 2\(n^{3}\) & B-splines [10] \\ \cline{1-1} & 0 & 1 & 1 & 6 & 3 & 174 & \\ \cline{1-1} & 0 & 2 & 1 & 9 & 4 & 344 & \\ \cline{1-1} & 0 & 0 & \(n\) & 6\(n{-}3\) & 3\(n{-}2\) & 64\(n^{3}\) & \\ \hline \hline \end{tabular} * The box spline proposed in [19] is a sibling of \(M_{{\rm f}110}\) built from the direction matrix \(\left[\begin{matrix}\mathbf{\Xi}_{\rm fcc}&\mathbf{\Xi}_{\rm cc3}\end{matrix}\right]\). Since \(\mathbf{\Xi}_{\rm cc3}\) do not snap to \({\mathbb{Z}}_{\rm fcc}\), the resulting approximation order is lower than \(M_{{\rm f}110}\). \end{table} Table 4: Trivariate symmetric box splines up to degree 9. Trivariate box splines Analogous to the bivariate case, denoting by \(n_{k}\) the number of repetitions of the \(k\)th direction set, the box splines on \(\mathbb{Z}^{3}\), \(\mathbb{Z}_{\rm fcc}\), and \(\mathbb{Z}_{\rm bcc}\) are named \[M_{{\rm cn}_{1}n_{2}n_{3}},M_{{\rm fn}_{1}n_{2}n_{3}},\ {\rm and}\ M_{{\rm bn}_{1}n_{2}n_{3}}\] in Table 4. Fourth direction vectors are not used since, e.g. for \(M_{\rm bs}\), they are typically too long and too many. The only symmetric linearly-independent box splines are \(M_{{\rm cn}00}\), the B-splines on \(\mathbb{Z}^{3}\), \(M_{{\rm fn}00}\) on \(\mathbb{Z}_{\rm fcc}\), and \(M_{{\rm bn}00}\) on \(\mathbb{Z}_{\rm bcc}\). (There are additional linearly independent asymmetric box splines like four-direction box splines on \(\mathbb{Z}^{3}\)). That is \(M_{*n00}\) are the only box splines that form a basis. Listing the support sizes in parentheses, the \(C^{1}\) box splines are \(M_{c300}\) (27), \(M_{c010}\) (32), \(M_{f100}\) (16), \(M_{f030}\) (108), \(M_{b030}\) (54), \(M_{b001}\) (64) and the \(C^{2}\) box splines are \(M_{c400}\) (64), \(M_{c101}\) (53), \(M_{c002}\) (128), \(M_{f040}\) (256), \(M_{b200}\) (32), \(M_{b110}\) (30), \(M_{b040}\) (128). Due to their small support and the degree listed in square brackets, \(M_{c300}\)[6], \(M_{c010}\)[3], \(M_{f100}\)[3] (see Fig. 4) stand out as efficient for \(C^{1}\) and \(M_{b200}\)[5], \(M_{b110}\)[4] for \(C^{2}\). 
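The trivariate stencil counts additionally involve the normalization by \(|\det\mathbf{G}|\) of property 8, which the Cartesian examples above do not exercise. A compact check for the first-shell BCC box spline \(M_{\mathrm{b}100}\) (the \(n=1\) member of the \(M_{\mathrm{b}n00}\) family), with the generator and direction matrices copied from Table 1 and Section 3, reads:

```python
import numpy as np
from itertools import combinations

G_bcc = np.array([[-1.,  1.,  1.],          # BCC generator matrix (Table 1)
                  [ 1., -1.,  1.],
                  [ 1.,  1., -1.]])
Xi_b100 = np.array([[-1.,  1.,  1., -1.],   # first-shell BCC directions (Section 3)
                    [ 1., -1.,  1., -1.],
                    [ 1.,  1., -1., -1.]])

# stencil size = zonotope volume / |det G|  (property 8 of Section 2)
vol = sum(abs(np.linalg.det(Xi_b100[:, list(c)])) for c in combinations(range(4), 3))
print(vol / abs(np.linalg.det(G_bcc)))      # -> 4.0
```

Degree and smoothness follow as before, \(m-d=1\) and \(C^{0}\), matching the \(4n-3\), \(2n-2\) and \(4n^{3}\) entries of Table 4 for \(n=1\).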
## 6 Multi-variate box splines The five lattices in two and three variables are instances of \(d\)-dimensional lattices, \(d>3\) whose detailed definition can be found in [38, 39]. The generator matrices of the four lattices other than \(\mathbb{Z}^{d}\)[9] are as follows. \[\mathbf{A}_{d}:=\begin{bmatrix}-1&&&&\\ 1&-1&&\\ &1&\ddots&&\\ &&\ddots&-1&\\ &&&1&-1\\ &&&&1\end{bmatrix},\mathbf{A}_{d}^{*}:=\frac{1}{d+1}\begin{bmatrix}d&-1&\cdots&-1&-1 \\ -1&d&\cdots&-1&-1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ -1&-1&\cdots&d&-1\\ -1&-1&\cdots&-1&d\\ -1&-1&\cdots&-1&-1\end{bmatrix},\] \[\mathbf{D}_{d}:=\begin{bmatrix}-1&1&&&&\\ -1&-1&1&&\\ &&\ddots&\ddots&&\\ &&&-1&1&\\ &&&&-1\end{bmatrix},\text{ and }\mathbf{D}_{d}^{*}:=\begin{bmatrix}1&&&1/2\\ &1&&1/2\\ &&\ddots&&\vdots\\ &&&1&1/2\\ &&&1/2\end{bmatrix}.\] Note that \(\mathbf{A}_{d}\) and \(\mathbf{A}_{d}^{*}\) are \((d+1)\times d\) and the corresponding lattices are generated in the hyperplane of the equation \(x_{1}+\cdots+x_{d+1}=0\). Table 5 lists the first and second direction sets of the five lattices. As in the bi- and the trivariate cases, various symmetric box splines can be constructed from these directions. We observe that for \(\mathbb{D}_{d}^{*}\), \(d>4\), there is a rich set of first directions, all corresponding to B-splines, to build smooth symmetric splines. Table 6 lists some important classes of box splines whose shifts live on these high-dimensional lattices, see e.g. [39]. Note that for some dimensions, two different direction sets share the same distance: for \(\mathbb{D}_{4}^{*}\) there are \(16/2+8/2=12\) first directions of the patterns \((\pm 1,\pm 1,\ldots,\pm 1)\) and \(\pi(\pm 2,0,0,0)\) and either or both groups yields a symmetric box spline. ## 7 Conversion to piecewise polynomial form It is useful to express the box spline pieces as polynomials, and in particular in the Bernstein-Bezier (BB-) form, see e.g. [12]. The partition into pieces follows from the convolution directions. The BB-coefficients are obtained from the differentiability constraints across boundaries and by normalizing the map, see [42]. Fig. 7, Fig. 8 and Fig. 9 show examples of the re-representation in BB-form. For trivariate box splines, using the the constraints can be error-prone. An easier approach is to sample the spline at sufficiently many interior points, using one of [13, 41], and solve for the BB-coefficients, keeping in mind that the coefficients are integers after scaling by a known multiple (see [37]); or, and this is faster and yields polynomial pieces in partially factored form, to apply a Green's function decomposition and inverse Fourier transform [27]. Figure 8: From [7]. (a) The polynomial pieces in the support of \(M_{\mathrm{c21}}\). Pieces of the same color have the same BB-net after appropriate rigid transformation and the BB-nets (multiplied by 192) of the pieces labeled b,\(\ldots\),h are shown in (b)–(h). Figure 7: The polynomial pieces in the support of \(M_{\mathrm{c11}}\) and the BB-net (scaled by 8). ## 8 Efficient evaluation By reversing the convolution, the algorithms of [13, 41] evaluate box splines recursively. This process is stable except near the boundaries between the polynomial pieces (knot lines in 2D, knot planes in 3D). Near boundaries [13] applies random perturbation and [41] propose careful bookkeeping. Converting the box spline pieces to BB-form yields much faster and stable evaluation [37], also of derviatives. 
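Independently of the optimized evaluators discussed in this section, the defining convolution (1) can be discretized directly; this gives a slow but simple reference against which a fast implementation can be sanity-checked. A sketch for the ZP element \(M_{\mathrm{c}11}\) on \(\mathbb{Z}^{2}\) follows (grid spacing, extent and the midpoint-rule sample count are arbitrary choices of ours):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

h = 0.02                                     # grid spacing
ax = np.arange(-2.0, 2.0 + h / 2, h)
X, Y = np.meshgrid(ax, ax, indexing="ij")

# base case of (1) for Xi = I_2 on Z^2: indicator of the unit square [-1/2,1/2)^2
M = ((np.abs(X) < 0.5) & (np.abs(Y) < 0.5)).astype(float)

def convolve_direction(M, xi, n=81):
    """Midpoint-rule approximation of  int_{-1/2}^{1/2} M(. - t xi) dt  on the
    grid, using fractional-pixel shifts with linear interpolation."""
    ts = (np.arange(n) + 0.5) / n - 0.5
    acc = np.zeros_like(M)
    for t in ts:
        acc += nd_shift(M, (t * xi[0] / h, t * xi[1] / h), order=1, mode="constant")
    return acc / n

# append the two second-shell directions -> Zwart-Powell element M_c11
for xi in ([1.0, 1.0], [-1.0, 1.0]):
    M = convolve_direction(M, xi)

print("integral of M_c11 ~", M.sum() * h * h)
```

The printed integral is close to 1, as expected since the shifts over \(\mathbb{Z}^{2}\) sum to 1 (property 1) and \(|\det\mathbf{G}|=1\); the same construction extends to three variables, at correspondingly higher cost.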
A general technique to accelerate evaluation is to leverage symmetry [31, 28] with a general implementation available at [28] that automates steps and generates GPU kernels. Table 7 lists box splines with an available optimized evaluation code, some implemented on the GPU for high parallelism. Subdivision offers a stable and fast alternative when rendering an approximation, say a triangulation of a bivariate box spline graph. An alternative approximate evaluation is based on Fast Fourier transform [45]. \begin{table} \begin{tabular}{c c c} \hline \hline box spline & algorithm & code \\ \hline \(M_{\mathrm{c400}}\) & [56, 32] & [56] \\ \(M_{\mathrm{c010}}\) & [34] & [34] \\ \(M_{\mathrm{c101}}\) & [33] & [33] \\ \(M_{\mathrm{f100}}\) & [40, 29] & [29] \\ \(M_{\mathrm{b200}}\) & [4, 24] & [24] \\ \(M_{\mathrm{b110}}\) & [30, 36, 31] & [35] \\ \(M_{\mathrm{b040}}\) & [10] & \\ \hline \hline \end{tabular} \end{table} Table 7: Some fast 3D box spline evaluation implementations. See also [28]. Figure 9: From [7]. (a) The polynomial pieces in the support of \(M_{\mathrm{h20}}\). Pieces of the same color have the same BB-net after appropriate rigid transformation and the BB-nets (multiplied by 24) of the pieces labeled b,c,d are shown in (b),(c),(d). ## 9 Use for reconstruction or approximation A promising application of box splines is the approximation and reconstruction of a function \(f\) from samples \(\{f(\mathbf{j}):\mathbf{j}\in\mathbb{Z}_{\mathbf{G}}\}\) on a lattice \(\mathbb{Z}_{\mathbf{G}}\). To attain the maximal approximation order of the box spline space, i.e., to obtain \(c\) in Eq. (3), the samples are convolved with a discrete _quasi-interpolant_ to form the control points \[c(\mathbf{j}):=q_{0}f(\mathbf{j})+q_{1}\sum_{\mathbf{k}\in\mathcal{DS}(\mathbb{Z}_{ \mathbf{G}},1)}\left(f(\mathbf{j}+\mathbf{k})+f(\mathbf{j}-\mathbf{k})\right),\quad\forall\mathbf{ j}\in\mathbb{Z}_{\mathbf{G}}\] of the optimally approximating spline \(\sum_{\mathbf{j}\in\mathbb{Z}_{\mathbf{G}}}c(\mathbf{j})M(\cdot-\mathbf{j}).\) Several techniques exist to derive quasi-interpolants for box splines. [15, 17, 2, 8]. Table 8 lists quasi-interpolants, defined by \(q_{0}\) and \(q_{1}\), for the box splines of approximation order 3 or 4 of Table 3 and Table 4. Level sets of quasi-interpolating functions in three variables are used to display Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data. A standard test function is the Marschner-Lobb signal [44], a combination of Dirac pulses and a circularly symmetric, disc-shaped component, see Fig. 10(h). Fig. 10 compares how convolution directions enhance or prevent reproduction of the circular features. \begin{table} \begin{tabular}{c c c c c c} \hline lattice & a.o. 
& box spline & \(24q_{0}\) & \(-12q_{1}\) & references \\ \hline \multirow{2}{*}{\(\mathbb{Z}^{2}\)} & 3 & \(M_{\mathrm{c30}}\), \(M_{\mathrm{c11}}\) & 18 & 3 & [39] \\ \cline{2-6} & 4 & \(M_{\mathrm{c20}}\), \(M_{\mathrm{c21}}\) & 40 & 4 & \\ \hline \(\mathbb{Z}_{\mathrm{h}}\) & 4 & \(M_{\mathrm{h20}}\) & 13 & 2 & [38] \\ \hline \multirow{4}{*}{\(\mathbb{Z}^{3}\)} & 3 & \(M_{\mathrm{c300}}\) & 21 & 3 & \\ & & \(M_{\mathrm{c010}}\) & 24 & 4 & [34] \\ \cline{2-6} & & \(M_{\mathrm{c400}}\) & 24 & 4 & \\ \cline{2-6} & 4 & \(M_{\mathrm{c101}}\) & 27 & 5 & [39] \\ \cline{2-6} & & \(M_{\mathrm{c002}}\) & 36 & 8 & \\ \hline \multirow{2}{*}{\(\mathbb{Z}_{\mathrm{fcc}}\)} & 3 & \(M_{\mathrm{f100}}\) & 18 & 1 & [39] \\ & & \(M_{\mathrm{f030}}\) & 30 & 3 & \\ \cline{2-6} & 4 & \(M_{\mathrm{f040}}\) & 36 & 4 & \\ \hline \multirow{4}{*}{\(\mathbb{Z}_{\mathrm{bcc}}\)} & 3 & \(M_{\mathrm{b030}}\) & 24 & 3 & \\ & & \(M_{\mathrm{b001}}\) & 28 & 4 & \\ \cline{1-1} \cline{2-6} & 4 & \(M_{\mathrm{b200}}\), \(M_{\mathrm{b110}}\) & 20 & 2 & [22, 38, 30] \\ \cline{1-1} & & \(M_{\mathrm{b040}}\) & 28 & 4 & \\ \hline \end{tabular} \end{table} Table 8: Quasi-interpolants of select box splines of approximation order (a.o.) 3 or 4. Note that \(q_{0}\) and \(q_{1}\) are scaled for clearer presentation. ## 10 Splines from pieces and unions of boxes One can consider the characteristic function of a piece of the box or of a union of boxes, and then convolve these characteristic functions. Convolving the characteristic function of half of a box in 2D, i.e. of a triangle, yields _half-box spline_ spaces with properties akin to box splines [55, 23, 3, 54, 1]. Alternatively, one can juxtapose non-centered boxes to form the Voronoi cell of a lattice, i.e. the region nearest to a lattice point. The convolution of the characteristic function of the Voronoi cell then yields _Voronoi splines_[58, 46]. Voronoi splines provide an example of how asymmetric splines can be linearly combined to form symmetric splines. Note though that such splines typically do not yield nested spaces [48]. ## 11 Conclusion Symmetric box splines provide a mature and powerful framework for shift-invariant smooth functions on a lattice. For bi- and tri-variate splines, a number of efficient box splines are now well-documented and come with optimized implementations. Figure 10: Ray-intersection rendering (ray-casting) of a level set of the Marschner-Lobb signal (h) with identical sampling density on their domain lattices. **Acknowledgements** This work was supported by a 2022 sabbatical year research grant of the University of Seoul hosted by the University of Florida. We thank Carl de Boor for feedback on an early draft.
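As a small complement to Section 9 above: the quasi-interpolation step is a single discrete convolution, sketched below for a Cartesian grid (the function name is ours, periodic boundaries are used only to keep the example short, and the weights are to be recovered from Table 8 by undoing the listed \(24q_{0}\) and \(-12q_{1}\) scalings).

```python
import numpy as np

def quasi_interpolate_cartesian(f, q0, q1):
    """Control points of the optimally approximating spline on Z^d:
        c(j) = q0 f(j) + q1 * sum_{k in DS(Z^d,1)} ( f(j+k) + f(j-k) ),
    where f is a d-dimensional array of samples on the Cartesian lattice."""
    f = np.asarray(f, dtype=float)
    c = q0 * f
    for axis in range(f.ndim):
        c += q1 * (np.roll(f, +1, axis=axis) + np.roll(f, -1, axis=axis))
    return c
```

On one of the non-Cartesian lattices the only change is that the neighbour offsets run over the first direction set of Table 2, with the samples indexed by the integer coordinates \(\boldsymbol{i}\) of \(\boldsymbol{j}=\mathbf{G}\boldsymbol{i}\), instead of over the coordinate axes.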
2306.12782
Fantastic Fits with fantasy of Active Galactic Nuclei Spectra -- Exploring the Fe II emission near the H$\alpha$ line
In this study, a refined approach for multicomponent fitting of active galactic nuclei (AGN) spectra is presented utilizing the newly developed Python code $fantasy$ (fully automated python tool for AGN spectra analysis). AGN spectra are modeled by simultaneously considering the underlying broken power-law continuum, predefined emission line lists, and an Fe II model, which is here extended to cover the wavelength range 3700 - 11000 A. The Fe II model, founded solely on atomic data, effectively describes the extensive emission of the complex iron ion in the vicinity of the H$\gamma$ and H$\beta$ lines, as well as near the H$\alpha$ line, which was previously rarely studied. The proposed spectral fitting approach is tested on a sample of high-quality AGN spectra from the Sloan Digital Sky Survey (SDSS) Data Release 17. The results indicate that when Fe II emission is present near H$\beta$, it is also detected redward from H$\alpha$, potentially contaminating the broad H$\alpha$ line wings and thus affecting the measurements of its flux and width. The production of Fe II emission is found to be strongly correlated with Eddington luminosity and appears to be controlled by the similar mechanism as the hydrogen Balmer lines. The study highlights the benefits of fitting AGN type 1 spectra with the $fantasy$ code, pointing that it may be used as a robust tool for analyzing a large number of AGN spectra in the coming spectral surveys.
Dragana Ilic, Nemanja Rakic, Luka C. Popovic
2023-06-22T10:27:18Z
http://arxiv.org/abs/2306.12782v1
# Fantastic Fits with fantasy of Active Galactic Nuclei Spectra ###### Abstract In this study, a refined approach for multicomponent fitting of active galactic nuclei (AGN) spectra is presented utilizing the newly developed Python code fantasy (fully automated python tool for AGN spectra analysis). AGN spectra are modeled by simultaneously considering the underlying broken power-law continuum, predefined emission line lists, and an Fe II model, which is here extended to cover the wavelength range 3700 - 11000 A. The Fe II model, founded solely on atomic data, effectively describes the extensive emission of the complex iron ion in the vicinity of the H\(\gamma\) and H\(\beta\) lines, as well as near the H\(\alpha\) line, which was previously rarely studied. The proposed spectral fitting approach is tested on a sample of high-quality AGN spectra from the Sloan Digital Sky Survey (SDSS) Data Release 17. The results indicate that when Fe II emission is present near H\(\beta\), it is also detected redward from H\(\alpha\), potentially contaminating the broad H\(\alpha\) line wings and thus affecting the measurements of its flux and width. The production of Fe II emission is found to be strongly correlated with Eddington luminosity and appears to be controlled by the similar mechanism as the hydrogen Balmer lines. The study highlights the benefits of fitting AGN type 1 spectra with the fantasy code, pointing that it may be used as a robust tool for analyzing a large number of AGN spectra in the coming spectral surveys. Active galactic nuclei(6) -- Quasars(1319) -- Atomic data(2216) -- Spectral line lists(2082) ## 1 Introduction Active galactic nuclei (AGN) spectra can be very complex, with underlying emission from the stellar component of the host galaxy, continuum emission predominantly from the accretion disk, and strong broad and narrow emission lines originating from regions at a wide range of distances from the central supermassive black hole (SMBH, see e.g. Netzer, 2013). Disentangling the complex optical spectra of AGN is an important part of AGN research to understand the physical processes behind continuum and emission line production (Padovani et al., 2017). In addition, robust and reliable extraction of spectral parameters in type 1 AGN1, such as the width and flux of the broad emission lines or the underlying continuum flux, is of importance for their application in estimating physical parameters such as the mass of the SMBH or the Eddington ratio (Shen et al., 2011; Liu et al., 2019). Both are needed to understand AGN and their role in galaxy evolution (e.g. Kormendy & Ho, 2013). The two best known broad emission lines are the H\(\beta\) and H\(\alpha\) lines, which are well studied and used for a large number of type 1 AGN (e.g., Greene & Ho, 2005; Xiao et al., 2011; Shen et al., 2011; Liu et al., 2019; Rakshit et al., 2020). We are still far from fully understanding the physical processes of the plasma in the broad line region (BLR), such as what the densities and temperatures are (Popovic, 2003; Marziani et al., 2020), and how to predict the observed emission line ratios (e.g., Ilic et al., 2012; Netzer, 2020), and understand the production of diffuse BLR continuum emission (Chelouche et al., 2019), and the presence and role of dust (Baron et al., 2016; Czerny et al., 2022), or determine the location and origin of Fe II emission (e.g., Baldwin et al., 2004; Gaskell et al., 2022). 
For sure, the promising approach for these investigations is to exploit large spectral data sets and provide catalogues of their spectral properties, as has been done for more than half a million of quasars from the Sloan Digital Sky Survey (SDSS, Rakshit et al., 2020). Some of the challenges in extracting pure broad H\(\beta\) and H\(\alpha\) line profiles and measuring their spectral parameters, such as the full width at half maximum (FWHM) and line fluxes, lie in subtracting the contribution of the host galaxy stellar emission, estimating the underlying AGN continuum emission, and identifying and subtracting narrow and other satellite lines. This can be particularly difficult for type 1 AGNs with strong Fe II emitters, such as the narrow-line Seyfert 1 (NLSy1, see e.g., Rakshit et al., 2017; Paul et al., 2022). Contamination by Fe II is probably the most difficult to deal with. It is known that the complex Fe II ion generates thousands of line transitions (Wills et al., 1985; Sigut & Pradhan, 1998, 2003; Sarkar et al., 2021). Therefore, these lines are typically blended and difficult to identify in AGN spectra, so there are several Fe II templates that can be used by the community (for recent review on different Fe II templates, see Park et al., 2022, and references therein). However, most templates do not focus on the region near the H\(\alpha\) line, which can be contaminated by iron emission, especially in the line wings (Veron-Cetty et al., 2004). Moreover, recent observations of the specific transient phenomena of stellar disruption in galactic nuclei, so-called tidal disruption events (TDEs, for a review see Gezari, 2021) show that some of these events are strong Fe II emitters (Petrushevska et al., 2023). As pointed out in Frederick et al. (2021), misleading identification of emission line features can cause errors in the classification of these events. TDEs are becoming increasingly important because they provide a special opportunity to detect and study intermediate-mass black holes (Gezari, 2022). With ongoing and upcoming large sky surveys that aim to explore the transient sky, such as Zwicky Transient Factory (ZTF Bellm et al., 2019) or Vera C. Rubin Legacy Survey in Space and Time (LSST Ivezic et al., 2019), the number of detected transients will increase rapidly, as will their spectroscopic-optical follow-up with either single campaigns or dedicated surveys, such as a very successful Public ESO Spectroscopic Survey for Transient Objects (PESSTO Smartt et al., 2015), the already accepted extragalactic community surveys2 as a part of the 4-metre Multi-Object Spectrograph Telescope (4MOST de Jong et al., 2012), or the forthcoming Manuakea Spectroscopic Explore (MSE The MSE Science Team et al., 2019). Footnote 2: [https://www.4most.eu/cms/science/extragalactic-community-surveys/](https://www.4most.eu/cms/science/extragalactic-community-surveys/) Hence, there is a need for software packages that can perform modelling, fitting, and analysis of AGN spectra in an automated manner. 
There are already a few publicly available codes, namely: i) Quasar Spectral Fitting package (QSFIT Calderone et al., 2017), an IDL based code for fitting all AGN emitting components simultaneously; ii) Python QSO fitting code (PyQSOFit Guo et al., 2018, 2019), designed for fitting quasar spectrum and additionally performs Mono-Carlo iterations using flux randomization to estimate uncertainties; iii) Sculptor, an interactive graphical user interface written in Python for general astronomical spectral analysis, with a special extension for quasar spectra (Schindler, 2022). Nevertheless, other open-source packages are needed that are tailored to model optical AGN spectra and are easy to use. Here we use the recently developed Python code fantasy (Fully automated python tool for AGN spectra analysis Ilic et al., 2020; Rakic, 2022), which is an updated approach to multicomponent fitting of AGN spectra. The main advantages of the code are: i) the ability to fit a wide range of wavelengths simultaneously (e.g., from H\(\delta\) to H\(\alpha\)), ii) the selection of lines from predefined line lists with the option to easily insert user-defined line lists, and iii) the flexibility to model a wide variety of type 1 AGN spectra. A special feature of the code is the use of the extended model of iron emission in the wavelength range 3700 - 11000 A for which initial concepts were presented in Popovic et al. (2004) and further developed in Kovacevic et al. (2010). Recently, during a transient event in an NLSy1 galaxy, the wavelength region redward from the H\(\alpha\) line was heavily contaminated by iron emission (for details see Petrushevska et al., 2023). Most available iron templates focus on the H\(\beta\) region, since this line is most commonly observed in distant AGN and is widely used for SMBH mass estimates from single-epoch observations (for a review see Popovic, 2020). With new instruments focusing on the NIR spectrum, such as the James Web Space Telescope, observation of the H\(\alpha\) line in distant quasars is becoming possible, and the need for more Fe II templates and models covering this wavelength band is evident. This has motivated us to investigate iron emission, which is known to exist in the vicinity of the H\(\alpha\) line but has not yet been studied in detail except in Veron-Cetty et al. (2004). In addition, we will demonstrate the importance of simultaneously fitting the spectra of type 1 AGNs in a broader wavelength range, including emission line components along with the underlying continuum and Fe II emission model. We will investigate the physical properties of regions emitting broad lines, with particular attention to the Fe II emission near the H\(\alpha\) line. For this purpose, we use a selection of optical spectra of type 1 AGN obtained from the public database of the SDSS latest Data Release 173, as well as as the publicly available optical spectra of I Zw 1, a well-known NLSy1, typically used to demonstrate the suitability of iron templates in AGN. Footnote 3: [https://www.sdss4.org](https://www.sdss4.org) The paper is organized as follows. In Sect. 2, we describe the data set used. In Sect. 3 we present our extended semi-empirical model of Fe II emission and describe the main functionalities of fantasy code, whereas in Sect. 4 we present the results and provide relevant discussion. Finally, in Sect. 6 we list our conclusions. 
We assume a cosmology with H\({}_{0}=67\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.32\) and \(\Omega_{\Lambda}=0.68\) to calculate the luminosity distance to studied objects. ## 2 Data Sample We select a sample of type 1 AGN from the latest SDSS Data Release 17 (DR17, Abdurro'uf et al., 2022), where we specify as selection criteria the high signal-to-noise ratio of the optical spectra (S/N \(>35\)) and the redshift \(z<0.4\). The first criterion excludes the presence of noisy spectra that may significantly bias the modelling of complex AGN spectra, especially iron emission. The second criterion ensures that both H\(\beta\) and H\(\alpha\) are included. The DR17 is a fourth data release from the fourth phase of the survey (SDSS-IV), which contains the complete data set of optical single-fibre spectroscopy of the SDSS4 through January 2021 (Smee et al., 2013). The selection process and the characteristics of the selected sample are the same as in Rakic (2022), who used DR16 data (Ahumada et al., 2020). The query yielded 676 objects, of which 21 were discarded either because H\(\alpha\) emission was absent or distorted, or objects were misclassified as type 1 AGN while they are rather type 2 as no broad component is seen. The remaining 655 SDSS AGN type 1 objects were further investigated (referred to as the SDSS sample). Footnote 4: www.sdss.org/dr17/spectro/ One way to represent the distribution of type 1 AGN is to use a quasar main sequence diagram, defined with FWHM H\(\beta\) - \(R_{\rm FeII}\)5 space (Sulentic et al., 2000; Shen & Ho, 2014; Du et al., 2016; Marziani et al., 2022). These studies have shown that \(R_{\rm FeII}\) emission is a proxy for accretion strength in AGN, roughly also indicated by the Eddington ratio (see also Dong et al., 2011). The quasar main sequence is mainly occupied by two large groups of objects, populations A and B, defined according to the full width at half maximum (FWHM) of H\(\beta\) line (e.g., Sulentic et al., 2000, 2002; Marziani et al., 2022). These two populations have different physical properties and possibly different orientation (Marziani et al., 2018, 2022). Population B objects have broader emission lines (FWHM \(\geq\) 4000 km s\({}^{-1}\)), higher inclination angle, low Eddington ratio, and weak Fe II emission (\(R_{\rm FeII}\)\(\leq 0.5\)), whereas population A, occupying the other side of the diagram, have narrower broad emission lines, lower inclination, higher Eddington ratio, and strong Fe II (see a recent review Marziani et al., 2022). The extreme end of the objects of population A, which show very strong iron emission (\(\rm R_{FeII}\)\(>1\)), are also of great interest (Marziani et al., 2022). Footnote 5: \(R_{\rm FeII}\) is a measure of the Fe II optical emission, defined as the ratio of the equivalent widths of Fe II emission in the wavelength range 4435–4685 Å and H\(\beta\) broad line. Following the above, our SDSS sample of 655 objects was divided into 362 objects from population A (referred to as the "pop A" sample in the remainder of this text) and 293 objects from population B ("pop B" sample). A subset of 105 objects from the pop A sample belongs to the so-called extreme population A (\(\rm R_{FeII}\)\(>1\)). This sample is treated separately and we refer to it as the "xA" sample. Fig. 1 presents the distributions of the selected SDSS sample with the cosmological redshift \(z\) (upper panel) and continuum luminosity at 5100 A (lower panel), shown as stacked bars of pop A (red) and pop B (green) sub-samples. 
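For reference, the population split described above reduces to two thresholds; the minimal Python sketch below merely restates those criteria (the function name is ours) and is not the actual selection procedure.

```python
def classify_quasar(fwhm_hbeta_kms, r_feii):
    """Quasar main-sequence split used for the SDSS sample:
    pop B if FWHM(Hbeta) >= 4000 km/s, pop A otherwise;
    pop A objects with R_FeII > 1 form the extreme 'xA' subset."""
    pop = "B" if fwhm_hbeta_kms >= 4000.0 else "A"
    is_xa = (pop == "A") and (r_feii > 1.0)
    return pop, is_xa
```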
All samples (total, pop A, pop B) are almost uniformly distributed across the redshift, showing a slight decrease towards higher redshift (seen also in Liu et al., 2019, who studied a sample of type 1 AGN with redshift \(<0.35\)). The continuum luminosity distribution is asymmetric toward higher-luminosity, which is typical for type 1 AGN from SDSS (Liu et al., 2019), with most objects (95%) in the range \(\rm log(\textit{L}_{5100})\) = [42.5-45.3] erg s\({}^{-1}\), with the median of 44.3. To demonstrate the importance of simultaneous multi-component fitting of AGN spectra, in particular to extract Fe II emission, we also test spectral fitting on a publicly available optical spectrum of the well-known NLSy1 1 Zw1 (taken from Tsuzuki et al., 2006)6, which is widely used to construct and test templates and models of Fe II emission in AGN. ## 3 Methods and Analysis In this section we present the approach to modeling the complex spectra of type 1 AGN using the code fantasy and the description of the extended Fe II emission model. ### Extension to the atomic model of Fe II emission For interpreting the emission within the type 1 AGN spectra, one important requirement is to carefully reconstruct the Fe II emission. In addition, understanding the origin of the strong emission of a complex ion of one-time ionized iron in AGN is a continuing quest (e.g., Netzer, 1980; Penston, 1987; Verner et al., 1999; Sigut et al., 2004; Panda et al., 2019; Sarkar et al., 2021; Gaskell et al., 2022). Fe II emission is present in a broad spectral range and typically contaminates other broad lines from the UV (e.g., Mg II line Vestergaard & Wilkes, 2001; Popovic et al., 2019), through optical (e.g., H\(\beta\) and H\(\alpha\) lines Veron-Cetty et al., 2004; Park et al., 2022) to near-infrared bands (e.g., Pa\(\gamma\) line Rudy et al., 2000; Landt et al., 2008; Garcia-Rissmann et al., 2012; Marinello et al., 2016). Their investigations are important for understanding the physics of the broad line region. Figure 1: Distribution of SDSS sample of 655 objects with the cosmological redshift (upper panel) and continuum luminosity at 5100 Å (bottom panel) presented with stacked bars of pop A (red) and pop B (blue) sub-samples. Figure 2: Partial Grotrian diagram for Fe II, showing the transitions considered in the presented model of Fe II emission in the wavelength range 3700-11000 Å. The upper levels \(\rm u^{4}\)(D,F,P) are populated by \(\rm Ly\alpha\) photons, from which the cascades to lower levels are shown: red lines show the NIR transitions (bumps at 9200 Å and 1\(\mu\)m) and dark cyan lines represent the transitions in the optical band (centered at 4570 Å and 5270Å which are the two strongest bumps around H\(\beta\)). The gray lines represent the transitions responsible for populating the upper levels, mainly through UV emission, to illustrate the path. Usually, iron emission is modeled with empirical templates (Boroson and Green, 1992; Vestergaard and Wilkes, 2001; Veron-Cetty et al., 2004; Tsuzuki et al., 2006; Park et al., 2022). These templates consider that the relative intensities of identified Fe II lines are proportional to empirical ratios measured in AGN spectra whose broad lines are narrow enough, so that the Fe II features are more easily separated from other emission lines, i.e. NLSy1. Some Fe II templates are governed by the identification of emission lines within the same important multiplets (Marinello et al., 2016, 2020). 
The semi-empirical approach to modelling the Fe II emission was presented in Kovacevic et al. (2010) and updated in Shapovalova et al. (2012). Their semi-empirical model covers the wavelength range 4000 - 5500 A, and is mainly based on the atomic parameters of the strongest iron transitions. In short, the model consists of Fe II line-sets grouped according to the same lower energy level in the transition with line intensities connected by the line transition oscillatory strengths, for a given excitation temperature, usually assumed to be \(10^{4}\)K. For few line groups, the relative line intensities were measured from I Zw 1 (Kovacevic et al., 2010), which made this model a semi-empirical one. For a detailed presentation of most relevant Fe II templates, their properties, applications, and comparisons, we refer to the recent review presented in Park et al. (2022). It is worth noting that given the complex energy level structure of the Fe II ion and a huge number of transitions producing rich spectrum in the UV, optical and NIR regions, it remains challenging to use in practice fully theoretical Fe II templates (such as Sigut and Pradhan, 2003; Bruhweiler and Verner, 2008). We emphasize again that most Fe II templates focus on the H\(\beta\) region, which is the most studied broad emission line in AGN. Figure 4: Different Fe II templates compared to the I Zw 1 spectrum from which the continuum emission was subtracted. The Fe II model presented here is the result of fitting the observations and also includes [Fe II] lines that are strong in this object (see Section 3.2.1 for details). Figure 3: Comparison of the semi-empirical Fe II model presented in Kovacevic et al. (2010) and Shapovalova et al. (2012) with the model of Fe II emission presented in this work, which relies solely on the atomic data. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Wavelength (air) & log(gf) & transition & E(low) & E(up) & Relative & Ref. \\ A & & lower – upper & eV & eV & intensity & \\ \hline 4593.83 & -4.923 & \(a^{6}S_{5/2}-z^{4}F_{5/2}\) & 2.891 & 5.589 & 0.000 & (3) \\ 4601.38 & -4.428 & \(a^{6}S_{5/2}-z^{4}D_{3/2}\) & 2.891 & 5.585 & 0.001 & (3) \\ 4656.98 & -3.630 & \(a^{6}S_{5/2}-z^{4}D_{5/2}\) & 2.891 & 5.553 & 0.007 & (3) \\ 4663.71 & -3.820 & \(a^{6}S_{5/2}-z^{4}F_{7/2}\) & 2.891 & 5.549 & 0.004 & (1) \\ 4731.45 & -2.921 & \(a^{6}S_{5/2}-z^{4}D_{7/2}\) & 2.891 & 5.511 & 0.035 & (2) \\ 4923.93 & -1.561 & \(a^{6}S_{5/2}-z^{6}F_{7/2}\) & 2.891 & 5.408 & 0.810 & (2) \\ 5018.44 & -1.400 & \(a^{6}S_{5/2}-z^{2}F_{8/2}\) & 2.891 & 5.361 & 1.170 & (2) \\ 5169.03 & -1.466 & \(a^{6}S_{5/2}-z^{2}F_{7/2}\) & 2.891 & 5.289 & 1.000 & (2) \\ 5256.94 & -4.250 & \(a^{6}S_{5/2}-z^{2}F_{8/2}\) & 2.891 & 5.249 & 0.002 & (3) \\ 5284.11 & -3.121 & \(a^{6}S_{5/2}-z^{6}F_{7/2}\) & 2.891 & 5.237 & 0.022 & (2) \\ 6369.46 & -4.253 & \(a^{6}S_{5/2}-z^{2}D_{3/2}\) & 2.891 & 4.837 & 0.001 & (3) \\ 6432.68 & -3.708 & \(a^{6}S_{5/2}-z^{2}D_{7/2}\) & 2.891 & 4.818 & 0.005 & (3) \\ 6516.08 & -3.450 & \(a^{6}S_{5/2}-z^{2}D_{7/2}\) & 2.891 & 4.793 & 0.009 & (3) \\ \hline \end{tabular} Note. – Columns give line transition wavelength in air (in Å), oscillatory strength, the configuration of lower and upper energy level, corresponding energies (in eV), and a relative intensity with respect to the reference line, where intensity is set to 1.000. 
References for the used oscillatory strength: 1 - (Fuhri et al., 1981), 2 - (Kovacevic et al., 2010), 3 - atomic database ([https://lweb.cfa.harvard.edu/amp/ampdata/kurucz23/sekur.html](https://lweb.cfa.harvard.edu/amp/ampdata/kurucz23/sekur.html)). Table 1 is published in its entirety in the machine-readable format. A portion is shown here for guidance regarding its form and content. \end{table} Table 1: Example of Fe ii transitions used in the model for the lower level \(a^{6}S\) (energy 2.891 eV) group with the reference line set to a relative intensity of 1.000. Figure 5: Fe II model in the wavelength range 3700 – 11000 Å, with atomic groups indicated in the bottom panel. The position of the hydrogen Balmer and Paschen series are also indicated with the dashed vertical lines, whereas the shaded areas indicate the position of iron bands used later in the analysis: Fe II blue (4340–4680 Å), Fe II green (5100–5600 Å), and Fe II red (6100–6650 Å). In this work, our main motivation was to develop the Fe II model that is extended to include the H\(\alpha\) line region, which is also strongly populated by Fe II lines in NLSy1 objects (e.g., Tsuzuki et al., 2006; Dong et al., 2011; Park et al., 2022), and could be important in transient event, such as TDEs (see Petrushevska et al., 2023, and reference therein). Therefore, based on the same approach and assumptions presented in Kovacevic et al. (2010), here we construct the Fe II model in the 3700 - 11000 A wavelength range. In addition, here we revise the group of so-called high-excitation lines, for which the line ratios have been measured from I Zw 1 spectrum in Kovacevic et al. (2010). It follows that the proposed model of Fe II is based solely on atomic data and the assumed excitation temperature. The individual line profiles in this model of Fe II are assumed to be Gaussians with the same width and shift. The lines are grouped to have the same lower level of transition, and all lines in a single group have a fixed intensity (calculated from the atomic data) relative to the strongest line in the atomic group. This leaves only the intensities of the strongest lines as free parameters. It has been long known that various Fe II optical multiplets do not have the same relative intensities in different objects (see e.g., van Groningen, 1993; Veron-Cetty et al., 2004; Park et al., 2022). This multi-component model of Fe II emission has been shown to provide more flexibility for precise and careful fitting of diverse AGN spectra than monolithic empirical templates with a single overall intensity (see e.g., the observations and analysis in Kovacevic et al., 2010; Shapovalova et al., 2012; Kovacevic-Dojcinovic and Popovic, 2015; Popovic et al., 2019), where only the line width and shift are varied. In the following, we describe how the most important atomic transitions were selected for each atomic group. We consider the possible path of electron transitions within the Fe II ion. In addition to the collisional excitation as one of the important mechanisms for the production of strong optical Fe II emission (Garcia-Rissmann et al., 2012; Marinello et al., 2016), it has been discussed that the Ly\(\alpha\) fluorescence is another relevant process for populating upper levels of Fe II (Penston, 1987; Sigut and Pradhan, 1998, 2003; Sarkar et al., 2021), and thus responsible for NIR Fe II emission, and later contributing to the optical Fe II to a level of at least 20% (as shown in Garcia-Rissmann et al., 2012; Marinello et al., 2016). 
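To make the fixed within-group intensities concrete: for lines sharing the same lower level, the relative intensities follow from the wavelengths, \(gf\) values and upper-level energies of Table 1 through the LTE-type ratio used in Kovacevic et al. (2010), \(I_{i}/I_{\rm ref}=(gf_{i}/gf_{\rm ref})\,(\lambda_{\rm ref}/\lambda_{i})^{3}\,e^{-(E_{i}-E_{\rm ref})/kT}\). The short Python sketch below is an illustration rather than the fantasy implementation; with \(T=10^{4}\) K it reproduces the tabulated values for the strongest a\({}^{6}\)S lines.

```python
import numpy as np

K_B_EV = 8.617333262e-5          # Boltzmann constant in eV / K

def relative_intensities(wav_A, log_gf, e_up_eV, ref, t_exc=1.0e4):
    """Line intensities within one Fe II group (same lower level), normalized
    to the reference line:  I ~ gf * lambda**-3 * exp(-E_up / kT)."""
    gf = 10.0 ** np.asarray(log_gf, dtype=float)
    wav = np.asarray(wav_A, dtype=float)
    e_up = np.asarray(e_up_eV, dtype=float)
    intensity = gf * wav**-3.0 * np.exp(-e_up / (K_B_EV * t_exc))
    return intensity / intensity[ref]

# the three strongest a^6S lines from Table 1 (Fe II 5169.03 is the reference)
wav   = [4923.93, 5018.44, 5169.03]
loggf = [-1.561,  -1.400,  -1.466]
e_up  = [5.408,   5.361,   5.289]
print(relative_intensities(wav, loggf, e_up, ref=2))
# -> approximately [0.81, 1.17, 1.00], the relative intensities listed in Table 1
```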
We start from the upper levels u\({}^{4}\)(D,F,P), which may be populated with Ly\(\alpha\) photons (Figure 2). From these, photons cascade down through transitions in the NIR (see Table 1), as also discussed and illustrated in Sigut and Pradhan (2003); Marinello et al. (2016, 2020). These two groups of upper levels are responsible for populating the upper levels in the energy range of 4-7 eV through UV transitions (the Fe II emission in UV will be studied elsewhere). These are the known energy levels previously identified by Kovacevic et al. (2010); Shapovalova et al. (2012); Veron-Cetty et al. (2004) which give rise to two main optical Fe II bumps around H\(\beta\) line centered at 4570 A and 5270 A (indicated with dark cyan lines in Fig 2). We group the lines based on the same lower level of the transition, so that the line intensities are constrained only by the transition oscillatory strength \(f\), which are listed \(gf\) in Table 1, where \(g\) is the level statistical weight. As an example, Table 1 lists the transitions identified within the a\({}^{6}\)S group, and the complete list with all transitions is available in a machine-readable format in the online Journal. In selecting transition groups in the wavelength range near the H\(\alpha\) line and beyond, we were guided by the Fe II transitions previously identified as strong, relying primarily on the work of Veron-Cetty et al. (2004); Park et al. Figure 6: Comparison of the Fe II model (black solid line) in the wavelength range 7000 - 11000 Å, with the Fe II model provided for the wavelength range 8200 - 11400 Å in García-Rissmann et al. (2012) (red dashed line, see text for details). The position of Fe II line identified in Landt et al. (2008); Marinello et al. (2020) are marked with vertical lines. (2022) for the optical Fe II and Rudy et al. (2000); Garcia-Rissmann et al. (2012); Marinello et al. (2020) for the NIR Fe II. Only the lines with oscillatory strength (\(\log(gf)>-5\)) were selected. The oscillatory strengths were adopted from Kovacevic et al. (2010) and Shapovalova et al. (2012), whereas for the new transition groups they were taken from the atomic spectral line database from R. L. Kurucz7. In several cases we have updated \(\log(gf)\) from the values given in Fuhr et al. (1981) as they have been shown to give line ratios that better describe the observations. The reference from which oscillatory strengths were taken is also given in Table 1. For the calculation of the line ratios, we used the excitation temperature of \(10^{4}\) K, which has been shown to represent well the region where these lines arise (Sulentic et al., 2000; Ilic et al., 2012). The line ratios would not change significantly for small variations of excitation temperature (Kovacevic et al., 2010). Footnote 7: [https://lweb.cfa.harvard.edu/amp/ampdata/kurucz23/sekur.html](https://lweb.cfa.harvard.edu/amp/ampdata/kurucz23/sekur.html) We outline two important updates with respect to the semi-empirical Fe II model presented in Kovacevic et al. (2010), and further extended in Shapovalova et al. (2012): 1. The wavelength range was extended to cover 3700 - 11000 A, whereas originally it was focused on 4200 - 5600 A because this region around the H\(\beta\) line was the most studied. 2. 
Instead of using so-called high-excitation lines (see discussion in Kovacevic et al., 2010, and their Table 3), whose ratios were previously measured from I Zw 1 spectrum, we have revised the list and assembled them into four atomic groups: y\({}^{4}\)G, b\({}^{4}\)G, x\({}^{4}\)D, y\({}^{4}\)P, as shown in Fig. 2. These groups are coming from the high-energy levels that are populated through the same paths as other levels (Fig. 2). The strongest lines in these groups were also identified by Kovacevic et al. (2010) (see their Table 3) and Veron-Cetty et al. (2004). This makes it a full model of Fe II emission that relies only on the atomic data. It contains a total of 283 transitions, divided into 17 atomic groups, in the wavelength range of 3700 - 11000 A (line transitions are listed in Table 1 and shown in Fig. 2 and 5, bottom panel). In comparison, the previous semi-empirical model had 57 line transitions in the 4200 - 5600 A, now the same range contains 125 transitions. However, the number of free parameters used in the fits has remained the same. Most of these lines are much weaker (see Table 1), but these are all included since in iron-rich objects their contribution may not be negligible, and they do not burden the computation (see Section 3.2 for details on modeling). Nevertheless, the extended model of Fe II emission differs only slightly from the original semi-empirical of Kovacevic et al. (2010) model as shown in Fig. 3. Figure 4 presents some widely used Fe II templates taken from the literature that are compared to the I Zw 1 spectrum, from which the continuum emission has been subtracted. The Fe II model presented in this paper is a result of the multi-component fitting (described in Section 3.2) and includes also [Fe II] lines which are strong in this and other similar iron-rich AGN. Full Fe II model in the wavelength range 3700 - 11000 A is presented in Fig. 5 with 17 atomic groups indicated on the bottom panel with different color. All lines are set to have a width of 1300 km s\({}^{-1}\), with intensities of reference lines arbitrary selected. In Figure 6 we zoom in the NIR part of the Fe II model in the wavelength range 7000 - 11000 A. There is not much work dedicated to building the Fe II templates in NIR, mostly due to the observational limitation to obtain NIR spectra for distant AGN. In selecting the most dominant line transitions, we were governed by so-called well-known "1\(\mu\)m" Fe II features at \(\lambda\)9997, \(\lambda\)10501, and \(\lambda\)10863 (Rudy et al., 2000). We compare the proposed model with the Fe II model provided for the wavelength range 8200 - 11000 A by Garcia-Rissmann et al. (2012) (Fig. 6, red dashed line), compiled from their best model for I Zw 1 (see their Tables 2 and 3). Fe II line identified in Landt et al. (2008); Marinello et al. (2020) are also marked with vertical lines. In this work, we focus on the spectra in 4000-7000A wavelength range due to the availability of SDSS spectra in this domain. The properties of 221 NIR iron emission in AGN spectra and further testing of the proposed model will be investigated in detail in a forthcoming publication. ### AGN spectral fitting For the modeling of AGN spectra, here we rely on the open-source code fantasy (Fully Automated pythonN Tool for AGN Spectra analYsis)8. 
This is a python-based code for multi-component spectral fitting, optimized for type 1 AGN spectra in the wavelength range 3700-11000 A, already successfully used in several studies (Ilic et al., 2020; Rakic, 2022; Petrushevska et al., 2023). The AGN spectra are modeled simultaneously with the underlying broken power-law continuum, the predefined emission line lists, and the Fe II model. The code is flexible in the selection of different groups of lines, either already predefined lists (e.g., standard narrow lines, Hydrogen and Helium lines, Fe II model, etc), but gives full flexibility to the user to merge predefined line lists or create customized line list. Fitting is based on Levenberg-Marquardt algorithm implemented through sherpa9 python package (Burke et al., 2022). We describe below the most important features of the fantasy code used in this analysis: Footnote 9: [https://pypi.org/project/sherpa/](https://pypi.org/project/sherpa/) 1. Several pre-processing steps to prepare the AGN spectra for multi-component fitting are available, such as the Galactic extinction and cosmological redshift correction. Based on either data provided in the header or manually inserted, the spectra are corrected for Galactic extinction using dust map data from Schlegel et al. (1998), and for the cosmological redshift. 2. To estimate and subtract the contribution of the host-galaxy starlight, we have decided for the approach already tested and used for SDSS spectra (Rakshit et al., 2020), which shows that most AGN spectra can be reconstructed as a linear combination of galaxy and quasar eigenspectra. Using the Principle Component Analysis, Yip et al. (2004a,b) constructed from 170,000 galaxy SDSS spectra and 16,707 quasar SDSS spectra, a set of galaxy and quasar eigenspectra. Vanden Berk et al. (2006) showed that the majority of AGN spectra can be reconstructed Figure 7: Multi-component fitting with the fantasy code of the I Zw 1 observed spectrum (gray line) in the \(\lambda\lambda\)4000-6800 Å wavelength region. The model (red line) consists of: broken power-law (black dashed line), narrow lines (green solid line), broad (blue solid line) and intermediate (blue dashed line) components of Balmer lines: H\(\alpha\), H\(\beta\), H\(\delta\), He I lines (yellow line), intermediate components of [O III] (blue dashed line), and Fe II model (dark-red line) and [Fe II] lines (light-red line). Bottom panel shows the zoomed-in observed (gray line), model (red line), and residual spectrum (black line). using the linear combination of 10 quasar eigenspectra from Yip et al. (2004b) and 5 galaxy eigenspectra from Yip et al. (2004a) as \[F(\lambda)=\sum_{0}^{10}a_{i}q_{i}(\lambda)+\sum_{0}^{5}b_{i}h_{i}(\lambda) \tag{1}\] where \(F(\lambda)\) is observed spectra, \(a_{i}\), and \(b_{i}\) are linear coefficients and \(q_{i}\) and \(h_{i}\) are quasar and galaxy eigenspectra, respectively. Note that prior to the fitting, the observed and eigenspectra are binned to the same spectral resolution and wavelength range. Optionally, fantasy may use all available eigenvectors (10 eignevectors for galaxy (stellar) and 15 for quasar components, Yip et al. 2004a,b). In this case the code will test for different number of components until reaching the best result based on the \(\chi^{2}\) parameter. In both cases, the weighted fit is used in order to avoid accounting for strong emission lines, with an option to mask strong narrow emission lines. 
By subtracting the reconstructed host galaxy contribution from the observed spectrum, one can obtain the pure AGN spectrum. The proposed technique allows for the recovery of the host galaxy spectrum, which is not resolved otherwise, which enables some studies of host galaxy properties, such as type, luminosity, colors, stellar mass, star-formation rates, etc. (see Vanden Berk et al. 2006). 3. One challenging step in bulk fitting of AGN spectra with different spectral features is the identification of emission lines and features present. fantasy approaches this by creating predefined standard lists of AGN emission lines within the specified wavelength range, such as: Hydrogen lines (Balmer and Paschen series), Helium lines (both He I and He II), most present narrow emission lines ([O III], [N II]), other AGN narrow lines (e.g., [S II], [O I]), other AGN broad lines (Ca I, O I), coronal lines (e.g. [Fe X], [Ar V]), Fe II model (described in Section 3.1), etc. In case of very strong Fe II emitters, such as NLSy1 galaxies, sometimes it is necessary to include additional transitions from forbidden Fe II (see e.g., Veron-Cetty et al. 2004). Users can modify available line lists, as well as create completely new ones. All identified line lists are available (Table 3 in the Appendix) for completeness, as we found that not much on this aspect is provided in the literature. 4. Many works use a standard approach and fit the optical continuum in AGN with a single power-law with adjustable spectral index (e.g. Rakshit et al. 2020). However, in order to be able to simultaneously fit the continuum and emission line features in a wider range of wavelengths (i.e., to cover both H\(\alpha\) and H\(\beta\) lines), Figure 8: An example of the AGN component (blue line) reconstructed after subtracting the contribution of the stellar component of the host galaxy (green line) from an observed spectrum (gray line) of SDSS J033013.26-053236.0. The host-galaxy emission contributes on the level of 76% to the observed continuum flux around 6000 Å Figure 9: The same as in Figure 7 but for three cases of host-galaxy corrected spectra from SDSS sample with diverse spectral properties to illustrate the fantasy multi-component fittings: SDSS J010226.31-003904.5 (upper), SDSS J094620.86+334746.9 (middle) and SDSS SDSS J093641.05+101415.9 (bottom). All components used in the fitting model (broken power-law continuum and emission line features) are indicated with different colors (see text for details). we have decided to go for more flexibility and use broken power-law (Dong et al., 2008). Vanden Berk et al. (2001) detected in a composite SDSS spectrum an abrupt change in the continuum slope redward from H\(\beta\) line (see their Figure 5), and discussed that the stellar light from the host galaxies may cause the steepening of the spectral index beyond 5000 A, but also pointed that it could be a real change in the quasar continuum, caused by the tail-end of thermal emission from hot dust. Liu et al. (2019) demonstrated that for SDSS type 1 AGN, the broken power-law with a break wavelength of \(\sim\)5650 A, is well suited for a simultaneous fit of continuum and emission lines. The break wavelength of 5650 A is adopted because it can ideally avoid the wavelength regions of the prominent emission lines. Therefore, the fantasy code uses a more generalized approach with the broken power-law for representing the AGN continuum, with the option to define the break wavelength, depending on the wavelength range of interest. 5. 
5. The final step is the spectral model construction and fitting. A basic model for AGN spectra should contain an underlying continuum (generalized here to the form of a broken power-law) and narrow and broad emission lines. Depending on the wavelength range, spectral quality (S/N ratio, spectral resolution), and object type, the fitting model can be customized to contain many and complex emission features. A special feature of fantasy is the possibility to create a "fixed model", which calls all lines from the indicated line list(s) and sets them to have the same width and shift. The option to create a "tied model" includes all lines in the indicated lists with width and shift tied to a reference line (typically the strong narrow [O III] \(\lambda\)5007 line). Creating a "feii model" calls for the Fe II model, in which all iron lines have the same width and shift, and the line intensity ratios are calculated as described in Section 3.1. All emission lines are modeled with a Gaussian function, defined by its shift, width, and intensity.

6. The uncertainties in the spectra, and consequently in the measured spectral quantities, were estimated using a Monte Carlo approach (see e.g. the description in Rakshit et al., 2020). We created 50 mock spectra for each object in the sample, by adding Gaussian random noise to the original spectrum at each pixel. The same fitting model was applied to all mock spectra as was done for the original one. The spectral quantities of interest (flux, line widths and shifts) were estimated from the original and mock spectra, giving us the distribution of each spectral quantity. For the uncertainty we then take the semi-amplitude of the range enclosing the 16th and 84th percentiles of the distribution.

#### 3.2.1 Case of I Zw 1

To illustrate the potential of the fantasy code, the multi-component fitting was done on the I Zw 1 observed spectrum in the \(\lambda\lambda\)4000-6800 Å wavelength region (Fig. 7), which was corrected for Galactic extinction. We note that the I Zw 1 spectrum is rather flat in the blue part, indicating that there might be significant intrinsic extinction by the host galaxy (see Richards et al., 2006; Park et al., 2022). We have tested the host-galaxy subtraction provided in fantasy (see Section 3.2) and seen no noticeable difference in the corrected spectrum in the region of interest (near the H\(\alpha\) and H\(\beta\) lines), and consequently in the Fe II fittings, so we continued the analysis on the I Zw 1 spectrum not corrected for the host-galaxy contribution. The model (red line) consists of: broken power-law (black dashed line), narrow lines (green solid line), broad (blue solid line) and intermediate (blue dashed line) components of the Balmer lines H\(\alpha\), H\(\beta\), H\(\delta\), He I lines (yellow line), intermediate components of [O III] (blue dashed line), and the Fe II model (dark-red line) and [Fe II] lines (light-red line). All lines within each modelling component are set to have the same width and shift. We note here that since I Zw 1 and other NLSy1 are known to have strong forbidden iron emission, the modeling also includes these lines, set to have the same width and shift as the Fe II model. The list of [Fe II] lines is compiled from Veron-Cetty et al. (2004) (Table 3 in the Appendix). The model describes the observed spectrum remarkably well, as seen through the residual spectrum (bottom panel, Fig. 7).
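To make the host-galaxy decomposition of step 2 concrete, the following is a minimal numpy sketch of the linear fit in Eq. (1); the function name, array layout, and the pixel-weighting scheme are illustrative assumptions rather than the actual fantasy implementation.

```python
import numpy as np

def decompose_host_agn(flux, quasar_eig, galaxy_eig, weights=None):
    """Weighted least-squares estimate of the coefficients a_i, b_i in Eq. (1).

    flux       : observed spectrum, shape (n_pix,), rebinned to the same
                 wavelength grid and resolution as the eigenspectra
    quasar_eig : quasar eigenspectra q_i, shape (n_q, n_pix)
    galaxy_eig : galaxy eigenspectra h_i, shape (n_g, n_pix)
    weights    : optional per-pixel weights, e.g. zero on strong narrow lines
    """
    basis = np.vstack([quasar_eig, galaxy_eig])            # (n_q + n_g, n_pix)
    w = np.sqrt(np.ones_like(flux) if weights is None else weights)
    coeffs, *_ = np.linalg.lstsq((basis * w).T, flux * w, rcond=None)
    a = coeffs[:quasar_eig.shape[0]]                        # quasar coefficients
    b = coeffs[quasar_eig.shape[0]:]                        # galaxy coefficients
    host = b @ galaxy_eig                                   # reconstructed host
    return a, b, host, flux - host                          # AGN = data - host
```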
#### 3.2.2 Sample of SDSS AGN spectra

We prepared the 655 spectra from the SDSS sample (through the procedures listed in step 1) and subtracted the reconstructed host-galaxy contribution (obtained through step 2) from the observed spectrum (see an example in Fig. 8). In the shown example, the host galaxy contributes on the level of 76% to the observed continuum flux around 6000 Å. In a few cases when the stellar contribution from the host galaxy was estimated to be below zero, it was not subtracted from the observed spectrum. We then performed simultaneous multi-component spectral fitting with the fantasy code, aiming to measure the fluxes and widths of the pure broad components of the Fe II, H\(\gamma\), H\(\beta\) and H\(\alpha\) lines. We fitted the bulk of the spectra in the rest wavelength range \(\sim\)4000-7000 Å using a single model10 consisting of: i) a broken power-law continuum, to allow for the simultaneous fitting of the H\(\alpha\) and H\(\beta\) wavelength ranges, which could have different continuum slopes (Vanden Berk et al., 2001); the breaking point was set to be in the range 5350-5650 Å, which is free from strong emission lines; ii) broad hydrogen (H\(\alpha\), H\(\beta\), H\(\gamma\), H\(\delta\)) and helium (He I 5877 Å, He II 4686 Å) lines; iii) a very broad component for the strong hydrogen lines (H\(\alpha\), H\(\beta\), H\(\gamma\)); iv) standard strong narrow emission lines, all fixed to have the same shifts and widths as [O III] 5007 Å: [O III] 4363 Å, [O III] \(\lambda\lambda\)4959, 5007 Å, [N II] \(\lambda\lambda\)6548, 6583 Å, [S II] \(\lambda\lambda\)6716, 6731 Å, [O I] \(\lambda\lambda\)6300, 6364 Å; the ratios of the [O III] and [N II] doublets were fixed to 3 (Dimitrijevic et al., 2007; Kovacevic-Dojcinovic et al., 2022; Dojcinovic et al., 2023); v) the broad component of the [O III] doublet, whose line ratio is also fixed to 3 and whose lines have the same width and shift (Kovacevic-Dojcinovic et al., 2022); vi) the optical Fe II model, described in Section 3.1, in which all lines have the same width and shift. The \(\chi^{2}\) was used to test the goodness of the fitting results; however, all results were also visually inspected.

Footnote 10: Fittings were run on the SUPERAST computer cluster of the Department of Astronomy, University of Belgrade - Faculty of Mathematics (Kovacevic et al., 2022).

## 4 Results and Discussion

Figure 10: Luminosity (in erg s\({}^{-1}\)) of the broad H\(\alpha\) (left), H\(\beta\) (middle), and H\(\gamma\) (right) line as a function of the continuum luminosity \(\lambda L_{\rm cnt}\) at 5100 Å (upper panels) and for broad H\(\alpha\) vs. H\(\beta\) (left), H\(\alpha\) vs. H\(\gamma\) (middle), and H\(\beta\) vs. H\(\gamma\) (right) for the whole SDSS sample (bottom panels). Pearson correlation coefficient together with the corresponding p-value is indicated on each plot.

The sample of 655 SDSS spectra was fitted with the single spectral model defined in the previous section (a few examples are given in Fig. 9). After visual inspection, 34 objects (\(\sim\)5%) were excluded from further analysis, mostly due to lower S/N or very broad or double-peaked profiles, which could not be addressed with the single model described here. We have noticed that in a few cases, the underlying stellar continuum was well reproduced (based on the galaxy absorption features) and subtracted, but the pure AGN spectra still showed a slight increase towards larger wavelengths (as illustrated in the bottom panel of Figure 9). This continuum reddening could be interpreted either as
a result of poor removal of the host-galaxy stellar continuum using the method of spectral decomposition with galaxy templates, or as traces of a contribution coming from the tail-end of thermal emission from the hot dust present within the AGN.

Figure 12: Balmer decrement H\(\alpha\)/H\(\beta\) (left) and H\(\beta\)/H\(\gamma\) (right) for pop A (blue) and pop B (red) sub-samples vs. Eddington ratio (\(L/L_{\rm Edd}\)). Pearson correlation coefficients point to no correlations: for pop A objects r\(=0.05\) (\(p_{0}=0.3\)) for H\(\alpha\)/H\(\beta\) and r\(=-0.22\) (\(p_{0}<\)0.01) for H\(\beta\)/H\(\gamma\), and for pop B objects r\(\sim-0.24\) (\(p_{0}<\)0.01) for H\(\alpha\)/H\(\beta\) and r\(\sim-0.16\) (\(p_{0}=\)0.01) for H\(\beta\)/H\(\gamma\).

After decomposing the spectra into individual components, we measured the following spectral parameters from the best-fit model: the broad H\(\alpha\), H\(\beta\), and H\(\gamma\) fluxes, the continuum luminosities (\(\lambda L_{\lambda}\)) at 5100 Å and 6200 Å (median integral in 5090-5110 Å and 6190-6210 Å respectively, from the reconstructed pure AGN spectra, Figure 8), and the Fe II fluxes in three different windows: Fe II blue (4340-4680 Å), Fe II green (5100-5600 Å), and Fe II red (6100-6650 Å). Fluxes were measured from the modeled broad line profiles and continuum, and then converted to luminosities based on the luminosity distance calculated from the cosmological redshift and the adopted cosmological parameters (see Section 1). The Fe II blue band was used to get the \(R_{\rm FeII}\) parameter. The FWHM of the lines is also measured; in the case of the broad Hydrogen lines, which were fitted with two Gaussians, the width of the total broad line was calculated. From the measured continuum luminosity at 5100 Å and the FWHM of the H\(\beta\) line, we get \(M_{\rm BH}\) through the standard single-epoch method for SMBH mass estimates (see e.g., Popovic 2020; Dalla Bonta et al. 2020). Once we have \(M_{\rm BH}\), the Eddington luminosity is simply \(L_{\rm Edd}=1.26\times 10^{38}(M_{\rm BH}/M_{\odot})\) erg s\({}^{-1}\). For the bolometric luminosity \(L_{\rm bol}=k_{\rm bol}\lambda L_{\lambda}\) we used the mean quasar bolometric correction \(k_{\rm bol}\approx 10\) (e.g., Richards et al. 2006) and the continuum luminosity at 5100 Å. This then gives the Eddington ratio \(L_{\rm bol}/L_{\rm Edd}\). The uncertainties of the measured quantities are calculated as described in Section 3.2 (step 6), and then further propagated to derived quantities like luminosity or Eddington ratio. Some measured quantities, especially those of strong broad emission lines (e.g., fluxes of the H\(\alpha\) and H\(\beta\) lines or Fe II lines in the sample of xA objects), have low uncertainties, which is a result of the spectra being selected to have high S/N\(>\)35. For cases when we could not estimate the uncertainties, we used the mean value [in %] to get the uncertainty of the measured quantity.
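As a worked example of the derived quantities described above, the sketch below converts a measured L5100 and FWHM(Hβ) into M_BH, L_Edd, L_bol and the Eddington ratio. The radius-luminosity coefficients and the virial factor are common literature values adopted purely for illustration; the text only refers to the "standard single-epoch method", so the exact calibration used in the paper may differ.

```python
import numpy as np

G_CGS = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33       # solar mass, g
LT_DAY = 2.59e15       # one light-day, cm

def single_epoch_quantities(l5100, fwhm_hbeta_kms, k_bol=10.0, f_vir=1.0):
    """Single-epoch estimates from L5100 (erg/s) and FWHM(Hbeta) (km/s)."""
    # Radius-luminosity relation (Bentz et al. 2013-like coefficients, assumed):
    r_blr = LT_DAY * 10 ** (1.527 + 0.533 * np.log10(l5100 / 1e44))
    m_bh = f_vir * r_blr * (fwhm_hbeta_kms * 1e5) ** 2 / G_CGS      # grams
    l_edd = 1.26e38 * (m_bh / M_SUN)    # Eddington luminosity, erg/s (as in text)
    l_bol = k_bol * l5100               # mean bolometric correction k_bol ~ 10
    return m_bh / M_SUN, l_edd, l_bol, l_bol / l_edd

# Example: a typical object with L5100 = 1e44 erg/s and FWHM(Hbeta) = 4000 km/s
print(single_epoch_quantities(1e44, 4000.0))
```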
Table 2 lists the measured spectral parameters with uncertainties, that is: the SDSS object ID, redshift, the broken power-law indices of the fitted underlying AGN continuum (\(\alpha_{1}\), \(\alpha_{2}\), where \(\alpha_{1}\) describes the part of the spectrum with wavelengths larger than the break wavelength, and \(\alpha_{1}+\alpha_{2}\) the shorter ones), the continuum luminosities L\({}_{5100}\) and L\({}_{6200}\), the H\(\alpha\) and H\(\beta\) broad line luminosities, the luminosities of Fe II blue, green, and red, the full width at half maximum of the broad H\(\alpha\), broad H\(\beta\), and Fe II lines, and the Eddington ratio \(L/L_{\rm Edd}\). The last row gives the mean quantities for the total SDSS sample. Table 2 is available in its entirety in machine-readable format in the online Journal.

### Hydrogen Balmer lines

First we present our results for the hydrogen Balmer lines in the total sample. Figure 10 (upper panels) shows the strong correlation of the luminosity of the broad Balmer lines (H\(\alpha\), H\(\beta\), H\(\gamma\)) as a function of the continuum luminosity \(L_{5100}\), for the whole SDSS sample. High Pearson correlation coefficients close to unity support this. Strong correlations between line and continuum luminosities have been observed before by many studies, both for single objects (e.g., Ilic et al., 2017; Dalla Bonta et al., 2020) and for larger samples (e.g., Liu et al., 2019; Rakshit et al., 2020), supporting that the results obtained through the fantasy fittings are in agreement with previous findings. The observed line-continuum correlations are expected if photoionization by the central continuum emission is the main heating source of the BLR and thus responsible for the broad line emission (Osterbrock and Ferland, 2006; Netzer, 2013). In the case of the H\(\gamma\) line (Fig. 10, right panels), there is some scatter at lower line luminosities, probably due to difficulties in identifying and subtracting satellite lines such as [O III] \(\lambda\)4363. It is important to note that the intrinsic extinction due to the presence of dust within the AGN makes it unclear what fraction of the luminosity is actually being measured (Kaspi et al., 2000). This AGN intrinsic reddening is still one of the critical points in the studies of the AGN phenomenon (Gaskell, 2017). Dust is present not just in the host galaxy, but is also associated with the central regions of the AGN, and not just in the equatorial part typically explained by the unified model of AGN, but also in parsec-scale polar areas (Honig et al., 2012; Stalevski et al., 2017). In this analysis we have performed only the correction for the host-galaxy contribution, with which we assumed that the extinction within the galaxy (if present) has been removed. However, the effects of extinction within the AGN itself have not been assessed (nor have they been in other studies, e.g. Calderone et al., 2017; Liu et al., 2019; Rakshit et al., 2020). Therefore, some contamination of the continuum luminosity may still exist due to intrinsic AGN extinction. The obtained strong luminosity correlations may point to the internal extinction not being strong in most type 1 AGN studied here, as pointed out before by some authors (e.g., Calderone et al., 2017). Strong correlations between the different broad Balmer line luminosities (H\(\alpha\) vs. H\(\beta\), H\(\alpha\) vs. H\(\gamma\), and H\(\beta\) vs. H\(\gamma\)) are also present for the whole SDSS sample (Figure 10, bottom panels), especially for H\(\alpha\) vs.
H\(\beta\) (r=0.99 for the total SDSS sample), implying that these emission lines have the same physical origin. This is supported by the same kinematics of these lines, as they have the same FWHM, as also shown in Rakic (2022). No difference is seen in the above correlations when considering the different populations, i.e., the pop A, pop B and xA sub-samples.

The ratio of the Balmer lines can tell us about the physical processes in the region where they originate (e.g., La Mura et al., 2007; Ilic et al., 2012). The Balmer decrement H\(\alpha\)/H\(\beta\) vs. H\(\beta\)/H\(\gamma\) is given in Figure 11, in which pop A and pop B objects occupy slightly different areas. Pop B objects show a higher Balmer decrement (average H\(\alpha\)/H\(\beta\)=2.42, H\(\beta\)/H\(\gamma\)=2.76) than pop A objects (average H\(\alpha\)/H\(\beta\)=2.07, H\(\beta\)/H\(\gamma\)=1.97). These values are below those (H\(\alpha\)/H\(\beta\)\(\approx\) 3) suggested by pure recombination theory. One possibility could be that collisional deexcitation is decreasing the H\(\alpha\) line, and thus giving a lower H\(\alpha\)/H\(\beta\) ratio. It is noteworthy that in our results the H\(\alpha\) flux may be systematically lower, since a part of the emission is allocated to the Fe II lines. The relationship between the Balmer decrement and the ratio of the continuum measured at 5100 Å and 6200 Å can be seen in Fig. 11. A significant low-level anti-correlation is somewhat stronger for pop A objects (r= \(-\)0.51 (\(p_{0}\)\(\ll\)0.01) for H\(\alpha\)/H\(\beta\), r= \(-\)0.40 (\(p_{0}\)\(\ll\)0.01) for H\(\beta\)/H\(\gamma\)) than for pop B objects (r\(\sim-\)0.39 (\(p_{0}\)\(\ll\)0.01) for H\(\alpha\)/H\(\beta\), r= \(-\)0.25 (\(p_{0}\)\(\ll\)0.01) for H\(\beta\)/H\(\gamma\)). As the continuum 5100/6200 ratio decreases, meaning the object becomes redder, the Balmer decrement increases. This suggests that the increase of the broad H\(\alpha\)/H\(\beta\) ratio may be due to a low level of reddening, at least in some fraction of the objects. On the other hand, the values of the Balmer decrement of the broad lines H\(\alpha\)/H\(\beta\) measured here are below the theoretical predictions in most objects (Calderone et al., 2017; Lu et al., 2019), making it hardly possible to assess the extinction using the H\(\alpha\)/H\(\beta\) ratio. The possibility that this may nevertheless point to some dust presence is supported by previous studies showing that objects classified as "pop B" are more affected by dust due to their larger inclination angles, while "pop A" objects are seen more face-on and have lower inclination angles (Marziani et al., 2022), making them less affected by dust. We also demonstrate that the Balmer decrement does not depend on the Eddington ratio (\(L/L_{\rm Edd}\)), as shown in Figure 12 and supported by the absence of correlation (correlation coefficients \(r\sim 0\)). This is in agreement with the previous findings of Lu et al. (2019).

When plotted on the main sequence diagram (FWHM(H\(\beta\)) vs. R\({}_{\rm FeII}\), Fig. 13, upper panel), the SDSS sample occupies the expected parameter space (Marziani et al., 2018). Moreover, the Eddington ratio (\(L/L_{\rm Edd}\)) gradually rises from pop B to pop A sources (colorbar in Fig. 13), as expected (Du et al., 2016). The average values of \(L/L_{\rm Edd}\) are 0.07, 0.24 and 0.23 for the pop B, pop A, and xA samples, respectively. The dependence of the R\({}_{\rm FeII}\) sequence on the Eddington ratio has also been described through modeling with photoionization codes (Panda et al., 2019).
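A minimal sketch of the population classification implied by the division lines of Fig. 13 (upper panel) is given below; treating xA as the subset of pop A with R_FeII > 1 is our reading of the diagram, so the thresholds are assumptions of this sketch rather than code from the paper.

```python
def classify_population(fwhm_hbeta_kms, r_feii):
    """Main-sequence classes from FWHM(Hbeta) in km/s and R_FeII:
    pop B above 4000 km/s, pop A below, and extreme pop A (xA)
    when additionally R_FeII > 1."""
    if fwhm_hbeta_kms > 4000.0:
        return "pop B"
    return "xA" if r_feii > 1.0 else "pop A"

# Example: a narrow-line, strong Fe II object
print(classify_population(2200.0, 1.3))   # -> 'xA'
```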
Measuring the Fe II emission near the H\(\alpha\) line allowed us to construct, for the first time, a main sequence diagram using the H\(\alpha\) line and the Fe II red (6100-6650 Å) flux, presented in Fig. 13 (bottom panel). The same trend, with the Eddington ratio increasing from pop B toward pop A objects, is also detected. The different populations identified through the standard main sequence diagram (Fig. 13, upper panel) occupy similar areas of H\(\alpha\) widths and iron strength, only that the Fe II (red) emission is much weaker than the H\(\alpha\) line. We note that the main sequence diagram for the H\(\alpha\) line suggests that the division line between pop A and pop B objects may be lower (FWHM(H\(\alpha\)) \(\sim\) 3500 km s\({}^{-1}\)), as indicated with a solid horizontal line in Fig. 13, bottom panel.

Figure 13: _Upper_: SDSS sample plotted on the main sequence diagram (FWHM(H\(\beta\)) vs. R\({}_{\rm FeII}\)). Horizontal (FWHM(H\(\beta\))=4000 km s\({}^{-1}\)) and vertical (R\({}_{\rm FeII}\)=1) lines divide the population B (pop B), population A (pop A), and extreme population A (xA). The Eddington ratio (\(L/L_{\rm Edd}\)) is shown on the colorbar. _Bottom_: Same diagram produced using the H\(\alpha\) FWHM and Fe II red (6100–6650 Å) emission. Pop A and B identified through the upper main sequence are marked with different symbols. Dashed and solid lines mark the position of FWHM(H\(\alpha\))=4000 km s\({}^{-1}\) and FWHM(H\(\alpha\))=3500 km s\({}^{-1}\), respectively.

Figure 14: Upper panels show the luminosity of Fe II in three bands (Fe II blue (left), Fe II green (middle), and Fe II red (right)) as a function of the continuum luminosity \(L_{5100}\) (given in erg s\({}^{-1}\)) for the pop A (full circles) and pop B (open circles) samples. Bottom panels show only the sub-sample of xA objects. Pearson correlation coefficient together with the corresponding p-value is indicated on each plot.

Figure 15: Total Fe II emission in the pop A (full blue circles) and pop B (open red circles) samples with respect to the H\(\alpha\) (left) and H\(\beta\) (right) luminosities, in units of erg s\({}^{-1}\). Pearson correlation coefficient together with the corresponding p-value is indicated on each plot.

### Fe II emission near the H\(\alpha\) and H\(\beta\) wavelength bands

More compelling is to explore the behaviour of the Fe II emission with respect to other spectral features. Here we study the total Fe II emission, and that in three selected wavelength bands (Fe II blue 4340-4680 Å, Fe II green 5100-5600 Å, and Fe II red 6100-6650 Å), in three populations of type 1 AGN, i.e., in the pop A, pop B and xA sub-samples. In Figure 14 (upper panels), we plot the luminosity of Fe II in the three bands, Fe II blue (left), Fe II green (middle), and Fe II red (right), as a function of the continuum luminosity \(L_{5100}\) for the pop A (full circles) and pop B (open circles) samples. A significant correlation is seen for all three studied iron bands, pointing to the importance of the central continuum emission for the Fe II line production. The scatter seems slightly larger for pop B objects; however, a significant correlation is present for both sub-samples (correlation coefficients indicated in Figure 14, upper panels). Similar trends have been observed before by Dong et al. (2011), but they report a lower level of correlation and a larger scatter, probably due to the much lower quality of the studied sample.
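The band luminosities shown in Fig. 14 are based on the Fe II model flux measured in the three windows defined above; a minimal sketch of such a band integration is given below (the array names and the use of a simple trapezoidal integral are illustrative assumptions).

```python
import numpy as np

FEII_BANDS = {"blue": (4340.0, 4680.0),
              "green": (5100.0, 5600.0),
              "red": (6100.0, 6650.0)}    # rest-frame windows, Angstrom

def feii_band_fluxes(wave, feii_model_flux):
    """Integrate the best-fit Fe II model component over the three bands."""
    fluxes = {}
    for name, (lo, hi) in FEII_BANDS.items():
        sel = (wave >= lo) & (wave <= hi)
        fluxes[name] = np.trapz(feii_model_flux[sel], wave[sel])
    return fluxes
```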
We note that in the case of objects with weaker Fe II emission (or spectra of low S/N), it is more difficult to detect the Fe II lines as they blend with the continuum. This is probably the reason behind the scatter seen in the Fe II (red) emission of pop B objects (see right panels in Fig. 14). In the case of the xA sample, the correlation with the continuum luminosity is even stronger for all three iron bands (Fig. 14, bottom panels).

Detailed studies of the Fe II emission, including modeling and observations, point to complex excitation mechanisms involved in its production. In their theoretical calculations of the Fe II emission, Sigut & Pradhan (2003) considered as excitation mechanisms continuum and line (Ly\(\alpha\), Ly\(\beta\)) fluorescence, collisional excitation, and self-fluorescence among the Fe II transitions. Some other studies show that collisional excitation is an important driver of the Fe II optical emission, with Ly\(\alpha\) fluorescence contributing on the level of \(\sim\)20% (Baldwin et al., 2004; Garcia-Rissmann et al., 2012; Marinello et al., 2016). Some evidence for photoionization by the central source being responsible for the Fe II emission comes from variability studies. For example, Barth et al. (2013) showed that the Fe II emission in two Seyfert 1 galaxies, NGC 4593 and Mrk 1511, does reverberate on short timescales in response to continuum variations, pointing to the origin of the Fe II emission in photoionized gas in the BLR. Shapovalova et al. (2012) found that in the NLSy1 Ark 564, there is a slightly better correlation of the optical Fe II with the continuum at 5100 Å than for the hydrogen Balmer lines, whereas in the case of another NLSy1 galaxy, NGC 4051, the variability of the optical Fe II emission also follows the continuum variability (Wang et al., 2005). However, in two other cases, NGC 5548 (Vestergaard & Peterson, 2005) and Ark 120 (Kuehn et al., 2008), a weak correlation is seen, which might be due to the poor cadence of the monitoring data, despite the large length of the monitoring campaigns. Our findings may support the assumption that the central continuum emission is governing the production of the Fe II lines (Gaskell et al., 2022). Based on the strong correlations between the Balmer lines and continuum luminosities, and assuming that the continuum luminosity at 5100 Å is a good tracer of the ionization continuum, it is reasonable to assume that the Ly\(\alpha\) line could correlate with the continuum luminosity at 5100 Å. Therefore, we cannot rule out the possibility that continuum and Ly\(\alpha\) pumping may be responsible for the excitation of the Fe II upper levels (considering that this line is broad enough), which then populate the upper levels of the transitions leading to optical Fe II. As already stated, some previous work emphasized the importance of Ly\(\alpha\) fluorescence in the Fe II production in AGN (e.g., Penston, 1987; Sigut & Pradhan, 1998, 2003; Sarkar et al., 2021). Some authors claimed that in NLSy1 other mechanisms, such as collisional excitation, could be contributing as well to the Fe II production (e.g., Collin & Joly, 2000). However, here we see no difference in the pop A and xA samples (which could be considered as representatives of NLSy1 objects) but an even stronger dependence on the continuum luminosity.

Figure 16: Total Fe II emission in the pop A and pop B samples (left), and for the xA sample (right), with respect to the Eddington ratio \(L/L_{\rm Edd}\).
This is supported by the strong correlations of the total Fe II emission with the H\(\alpha\) and H\(\beta\) luminosities in both populations (Fig. 15). Photoionization by the accretion disc continuum is also indicated by the correlation of the Fe II emission with the accretion rate. Figure 16 shows the total Fe II emission in the pop A and pop B samples (left), and for the xA sample (right), with respect to the Eddington ratio \(L/L_{\rm Edd}\). A clear division between pop A and pop B objects is seen, where pop A objects have higher Eddington ratios. A strong correlation of the Fe II emission in the optical band with \(L/L_{\rm Edd}\) is seen, which has been reported before (Dong et al., 2011). Since the heating of the BLR plasma is dominantly through photoionization (Osterbrock & Ferland, 2006), this implies that the rate of collisions could be directly proportional to the input continuum ionizations. Therefore, with our findings we cannot rule out the effects of photoionization on the Fe II emission, acting through all excitation mechanisms, i.e., collisional excitation, continuum and Ly\(\alpha\) fluorescence. We believe that simultaneous observations of the Fe II emission in AGN from the UV to the NIR are needed to understand the connection between these different physical processes and the different Fe II emission.

The iron emission measured in the three different wavelength bands, i.e. Fe II blue, Fe II green, and Fe II red, is present in all objects, with Fe II (red) being somewhat weaker (Fig. 14, right panels). The pop B sample also contains objects with the strongest Fe II emission (red circles, Fig. 17, upper panels). This is not typically assumed, as strong Fe II emission is usually attributed to pop A (and NLSy1 as their subset). This could be the result of the fact that in these objects the Fe II lines are broader and blended with the underlying continuum or hidden in the broad H\(\alpha\) wings, and thus difficult to extract (see examples in Fig. 9). This is why simultaneous multi-component fitting of AGN spectra, which is the approach implemented in the fantasy code, may be important in studying the Fe II emission. The most relevant finding of this analysis is the strong correlation between the luminosity of Fe II in the three bands (blue, green, red) for the pop A and pop B samples (Fig. 17, upper panels). The strongest correlation is seen between the blue and green Fe II bands (r=0.94 in pop A and pop B), which is expected since these bands are populated with most of the Fe II transitions (Figs. 2 and 5). These two bands surrounding the H\(\beta\) line are the ones typically measured, and most Fe II templates address the Fe II emission in these bands (Boroson and Green, 1992; Veron-Cetty et al., 2004; Tsuzuki et al., 2006; Park et al., 2022). However, the Fe II emission in the red band, redward from the H\(\alpha\) line, is also present and correlates with the iron emission present in the vicinity of the H\(\beta\) line (Fig. 17, left and right panels). When only the xA sample is considered (Fig. 17, bottom panels), the correlations between the Fe II bands are even stronger.

Figure 17: Correlations between the luminosity of Fe II in three bands: Fe II blue (4340–4680 Å), Fe II green (5100–5600 Å), and Fe II red (6100–6650 Å) for the pop A (blue circles) and pop B (red circles) samples (upper panels), and only for the population of extreme A objects (bottom panels). Pearson correlation coefficient together with the corresponding p-value is indicated on each plot.
Therefore, the red Fe II emission is contaminating the blue wing of H\(\alpha\), whereas the red wing is significantly less affected (Fig. 9). These iron blends are somewhat weaker and may be hidden in the underlying continuum emission and broad H\(\alpha\) wings in pop B objects (with broader emission lines), especially in low-quality spectra (i.e., poor S/N and spectral resolution). Some authors have attributed this emission to the broad line wings of the H\(\alpha\) line, for which an additional very broad line component had to be introduced (as e.g., in Calderone et al., 2017). This may significantly influence the measurements of the H\(\alpha\) flux and width, and their subsequent application. For example, the presence of Fe II emission could be responsible for the scatter of \(M_{\rm BH}\) when measured from the H\(\alpha\) line (Greene and Ho, 2005; Dalla Bonta et al., 2020).

Finally, we check the kinematics of the Fe II lines with respect to the broad H\(\alpha\) and H\(\beta\) lines through the analysis of the line FWHM. Figure 18 presents the width of the Fe II emission versus the H\(\alpha\) (middle) and H\(\beta\) (right) broad line widths for the total SDSS sample. The average width of the Fe II lines in the total sample is \(\sim\)3300 km s\({}^{-1}\), whereas the H\(\alpha\) FWHM is \(\sim\)3780 km s\({}^{-1}\) and the H\(\beta\) FWHM is \(\sim\)4120 km s\({}^{-1}\). The two broad components used to fit the H\(\alpha\) line have widths of 6640 km s\({}^{-1}\) (6300 km s\({}^{-1}\) for H\(\beta\)) and 2470 km s\({}^{-1}\) (2450 km s\({}^{-1}\) for H\(\beta\)). This is in agreement with the previous assumption that the Fe II emission originates in the so-called intermediate line region (Kovacevic et al., 2010; Dong et al., 2011).

### Fittings of type 1 AGN spectra with fantasy code

Using the fantasy code, we successfully modeled the AGN optical spectra (\(\lambda\lambda\) 4000-7000 Å) by fitting all emission components simultaneously: the underlying broken power-law continuum, broad and narrow emission lines, and the Fe II model (Fig. 9). The observed strong correlations between the line and continuum luminosities, and between the broad line luminosities, indicate good performance of the automatic spectral fitting with the fantasy code. This is also confirmed by the strong correlation between the FWHMs of the H\(\alpha\) and H\(\beta\) lines (Fig. 18, left panel). We note that in a few cases (\(<3\)%) the width of the Fe II lines was pegged to the upper limit set by the code (Fig. 18, middle and right panels). These objects are pop B with very strong Fe II emission in all three bands. We visually inspected these fits and found that the broad H\(\alpha\) and H\(\beta\), and the Fe II bands, were well fitted, from which we concluded that the line fluxes have been correctly extracted. These objects could generally be removed from the analysis, or fitted with models that have weaker constraints on the limits, but, for consistency, we decided to include them, as they do not influence the presented results.

Figure 18: Width of the broad H\(\beta\) plotted against the width of the broad H\(\alpha\) line (left) for the total SDSS sample, as well as the width of the Fe II emission versus the H\(\alpha\) (middle) and H\(\beta\) (right) broad line width. Line widths are in units of km s\({}^{-1}\). The linear best-fitting line is displayed in the H\(\beta\) vs. H\(\alpha\) plot, whereas on the other two plots dashed lines denote the average values for the total sample.
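Since the broad H\(\alpha\) and H\(\beta\) lines are fitted with two Gaussians while a single total FWHM is quoted, a minimal sketch of how the width of the composite profile can be measured numerically is given below; assuming both components share the same centre is a simplification of this sketch, not necessarily what fantasy does.

```python
import numpy as np

def fwhm_two_gaussians(amp1, fwhm1, amp2, fwhm2, n=200001):
    """Numerical FWHM (km/s) of a broad line modeled as the sum of two
    Gaussians centred at zero velocity."""
    sig1, sig2 = fwhm1 / 2.3548, fwhm2 / 2.3548     # FWHM -> sigma
    vmax = 5.0 * max(fwhm1, fwhm2)
    v = np.linspace(-vmax, vmax, n)
    profile = (amp1 * np.exp(-0.5 * (v / sig1) ** 2)
               + amp2 * np.exp(-0.5 * (v / sig2) ** 2))
    above = v[profile >= 0.5 * profile.max()]
    return above.max() - above.min()

# Example with the average component widths quoted for Halpha:
print(fwhm_two_gaussians(1.0, 6640.0, 1.0, 2470.0))
```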
In addition, we have shown that, especially when investigating the H\(\alpha\) line, the use of the fantasy code can be important to detect the Fe II emission hidden in the H\(\alpha\) wings and to carefully measure its spectral parameters. Overall, our findings show that the fantasy code is well suited for modeling SDSS type 1 AGN spectra. Finally, we have demonstrated that even in the case of strong iron emitters, such as I Zw 1, the fittings using all emission components reproduced the observed spectrum well, especially in the vicinity of the H\(\alpha\) line (Fig. 7). In addition to other specialized program packages for AGN spectral fittings, namely pyQSOFit (Guo et al., 2018, 2019) and QSfit (Calderone et al., 2017), which are widely used in the analysis of SDSS data (see e.g., Shen et al., 2011, 2019; Rakshit et al., 2020), the fantasy code also appears to be well suited to decompose the continuum and the various emission features in AGN spectra. Its useful features are that it is user-friendly, that it contains the necessary procedures for preprocessing and preparing the spectra for spectral decomposition, and its approach of simultaneously fitting the emission lines and the underlying continuum. In addition, the predefined lists of possible emission lines in AGN (e.g., strong narrow lines, see Table 3 in the Appendix), the models with emission lines with fixed parameters (width, shift, line ratios), and the model of the Fe II emission make fantasy a unique tool. However, apart from a numerical estimate of the uncertainties of the fittings using the Monte Carlo approach, fantasy lacks a more serious treatment of uncertainties, which should be addressed in the future. Nevertheless, the features listed above (e.g., simultaneous fitting of different components, predefined line lists, flexibility to model a wide variety of spectra) make this code well suited for modeling optical AGN spectra. Of particular interest is the modeling of optical spectra of transient events, such as the strong iron TDEs, which was shown to be successful by Petrushevska et al. (2023).

## 5 Summary

We present here the study of the physical properties of the broad line emitting regions in type 1 AGN, with special attention to the Fe II emission in the wavelength range of the H\(\alpha\) line. We use a sample of 655 objects from the current SDSS DR17, selected to cover the wavelength range 4000-7000 Å (i.e., containing both H\(\alpha\) and H\(\beta\) lines). Our goal is to analyze only high-quality spectra (S/N ratio \(>\)35) so that we can reliably measure the Fe II emission around H\(\beta\) but also near the H\(\alpha\) line, where it is typically blended with the underlying continuum and broad H\(\alpha\) wings, particularly in low S/N spectra. We present an updated approach to multicomponent fitting of AGN spectra using the Python code fantasy. We present an extended model of the Fe II emission based only on the atomic data, covering the wavelength range near the H\(\alpha\) line, which has been poorly studied in the past. We perform spectral fitting using the code fantasy, which allowed us to measure the spectral parameters of the broad H\(\gamma\), H\(\beta\) and H\(\alpha\) lines from the pure AGN spectra, as well as the iron emission in three bands: Fe II blue (4340-4680 Å), Fe II green (5100-5600 Å), and Fe II red (6100-6650 Å). Our main conclusions can be summarized as follows:

1. The Fe II emission, if present in the vicinity of the H\(\beta\) line, is also detected redward from the H\(\alpha\) line, at a comparable strength.
This red Fe II emission contaminates the broad H\(\alpha\) line red-wing, which can have an impact on the measured H\(\alpha\) flux and width. This can be particularly important for pop B objects (with broader emission lines), since the iron blends near the H\(\alpha\) line are hidden in the underlying continuum and broad H\(\alpha\) wings;

2. The production of the Fe II emission is strongly correlated with the Eddington ratio, and appears to be controlled by the same mechanism as the hydrogen Balmer lines, as shown by the strong correlations with the continuum and line luminosities. This implies that photoionization is governing the Fe II production; however, the exact mechanism (e.g., collisional excitation, continuum or Ly\(\alpha\) fluorescence) is not constrained through this analysis;

3. Simultaneous multicomponent fitting of complex AGN spectra is a necessary approach for broad line parameter extraction, especially for reliable measurement of the H\(\alpha\) spectral parameters (width and flux). The open-source code fantasy tested in this work appears to be well suited for modeling the spectrum of type 1 AGN and may prove useful in future studies of a large number of AGN spectra.

To date, most Fe II templates have focused on the H\(\beta\) line, which is one of the best studied AGN emission lines. However, with current and future high-precision instruments that will focus more on the NIR spectrum, observations of the H\(\alpha\) line in distant quasars will become more common, and more Fe II templates and models in this wavelength range will be needed.

The authors would like to thank the anonymous referee whose comments and suggestions helped to improve this manuscript. D.I. and L.C.P. acknowledge funding provided by the University of Belgrade - Faculty of Mathematics (the contract N451-03-47/2023-01/2000104) and Astronomical Observatory Belgrade (the contract N451-03-47/2023-01/200002) through the grants by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia. D.I. acknowledges the support of the Alexander von Humboldt Foundation. This research uses data from the SDSS Data Release 17 (Abdurro'uf et al., 2022). Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

_Facilities:_ SDSS (Abdurro'uf et al., 2022), NED, NIST (Kramida et al., 2022), SUPERAST (Kovacevic et al., 2022).

_Software:_ astropy (Astropy Collaboration et al., 2013, 2018), sherpa (Burke et al., 2022), fantasy (Ilic et al., 2020; Rakic, 2022), sfdmap (Schlegel et al., 1998), PyAstronomy (Czesla et al., 2019), spectres (Carnall, 2017).

## Appendix A Fantasy predefined line lists

We provide all predefined lists of standard AGN emission lines present in the fantasy code. These are: Hydrogen lines (Balmer and Paschen series), Helium lines (both He I and He II), the strongest narrow emission lines ([O III], [N II]), other AGN narrow lines (e.g., [S II], [O I]), other AGN broad lines (Ca I, O I), coronal lines (e.g. [Fe X], [Ar V]), and forbidden Fe II lines (Table 3).
The presented list is very extensive, and most of the lines are not detected in AGN (see e.g., the mean quasar spectrum from Vanden Berk et al., 2006). However, we believe that such a comprehensive single list of possible emission lines is useful. When constructing these line lists (Table 3) we acknowledge the usage of a collection of detected emission lines in galaxies compiled by S. Drew Chojnowski11, the NIST database (Kramida et al., 2022), the narrow lines listed in Veron-Cetty et al. (2004) and Park et al. (2022), some of the solar coronal lines (Del Zanna and DeLuca, 2018), as well as other papers cited in Section 3.

Footnote 11: [http://astronomy.nmsu.edu/drewski/tableofemissionlines.html](http://astronomy.nmsu.edu/drewski/tableofemissionlines.html)

For all listed lines, we provide the air wavelengths. Full data sets are also available in machine-readable format.

Table 3: Predefined lists of emission lines used in the fantasy code.

| Line | Wavelength (air) [Å] | Type | Group |
| --- | --- | --- | --- |
| H\(\epsilon\) | 3970.08 | narrow, broad | hydrogen |
| H\(\delta\) | 4101.74 | narrow, broad | hydrogen |
| H\(\gamma\) | 4340.47 | narrow, broad | hydrogen |
| H\(\beta\) | 4861.33 | narrow, broad | hydrogen |
| H\(\alpha\) | 6562.82 | narrow, broad | hydrogen |
| Pa14 | 8598.39 | narrow, broad | hydrogen |
| Pa13 | 8665.02 | narrow, broad | hydrogen |
| Pa12 | 8750.47 | narrow, broad | hydrogen |
| Pa11 | 8862.78 | narrow, broad | hydrogen |
| Pa10 | 9014.91 | narrow, broad | hydrogen |
| Pa9 | 9229.01 | narrow, broad | hydrogen |
| Pa\(\epsilon\) | 9545.97 | narrow, broad | hydrogen |
| Pa\(\delta\) | 10049.37 | narrow, broad | hydrogen |
| Pa\(\gamma\) | 10938.09 | narrow, broad | hydrogen |

Note. – Columns give the line wavelength in air (in Å), the type of line in terms of the line width expected in AGN spectra, and the group assigned within a predefined list in the fantasy code. Table 3 is published in its entirety in machine-readable format. A portion is shown here for guidance regarding its form and content.
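For users who want to assemble their own lists, a minimal sketch of how a custom line list might be represented and filtered is shown below; the dictionary layout is an illustrative assumption and not the internal fantasy format, while the air wavelengths are taken from Table 3.

```python
# Balmer-series air wavelengths (Angstrom) copied from Table 3.
balmer_lines = {
    "Hepsilon": 3970.08,
    "Hdelta": 4101.74,
    "Hgamma": 4340.47,
    "Hbeta": 4861.33,
    "Halpha": 6562.82,
}

def lines_in_range(line_dict, wave_min, wave_max):
    """Keep only the lines that fall inside the analysed rest-frame range."""
    return {name: wl for name, wl in line_dict.items()
            if wave_min <= wl <= wave_max}

# Example: lines available in the 4000-7000 A range used for the SDSS sample
print(lines_in_range(balmer_lines, 4000.0, 7000.0))
```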
2303.17024
Symmetry-breaking singular controller design for Bogdanov-Takens bifurcations with an application to Chua system
We provide a complete symmetry-breaking bifurcation control for equivariant smooth differential systems with Bogdanov-Takens singularities. Controller coefficient space is partitioned by critical controller sets into different connected regions. The connected regions provide a classification for all qualitatively different dynamics of the controlled system. Hence, a state feedback controller design with four small controller coefficients is proposed for an efficient and full singular symmetry-breaking control. Our approach works well for nonlinear control systems with both controllable and uncontrollable linearizations. Origin is a primary equilibrium for the uncontrolled system. This gives rise to two secondary local equilibria for the controlled system. These equilibria further experience tertiary fold and hysteresis type bifurcations. The secondary and primary equilibria experience Hopf and Bautin bifurcations leading to the appearance of one limit cycle from primary equilibrium, and either one or two from each secondary equilibria. The collisions of limit cycles with equilibria lead to either a heteroclinic cycle or four different homoclinic cycles. Each pair of limit cycles may respectively merge together and disappear. This is a saddle-node bifurcation of limit cycles. Different combinations of these give rise to a rich list of bifurcation scenarios. Finite determinacy of each of these bifurcations has been thoroughly investigated. This greatly influences their stabilization potential in applications. We consider Chua system with a quadratic state-feedback controller. Controlled Chua system experiences a pitchfork bifurcation, three Hopf bifurcations and two homoclinic bifurcations. There exist two different regions of controller coefficient choices for feedback regularization and two nearby regions for supercritical Hopf stabilization approach.
Majid Gazor, Nasrin Sadri
2023-03-29T21:08:59Z
http://arxiv.org/abs/2303.17024v1
# Symmetry-breaking singular controller design for Bogdanov-Takens bifurcations with an application to Chua system

###### Abstract

We provide a complete symmetry-breaking bifurcation control for \(\mathbb{Z}_{2}\)-equivariant smooth differential systems with Bogdanov-Takens singularities. The controller coefficient space is partitioned by _critical controller sets_ into different _connected regions_. The connected regions provide a classification for all qualitatively different dynamics of the controlled system. Hence, a state feedback controller design with four small controller coefficients is proposed for an efficient and full singular symmetry-breaking control. We show that our approach works well for nonlinear control systems with both controllable and uncontrollable linearizations. Asymmetric bifurcations are all associated with the controlled system and they start with a primary controlled pitchfork bifurcation from the origin. The origin is the primary equilibrium for the uncontrolled system. This gives rise to two secondary local equilibria \(E_{\pm}\) for the controlled system. These equilibria further experience tertiary fold and hysteresis type bifurcations. The secondary and primary equilibria experience Hopf and Bautin bifurcations leading to the appearance of one limit cycle \(\mathscr{C}_{0}\) from the primary equilibrium, and either one or two from each secondary equilibrium; namely, \(\mathscr{C}_{\pm}^{1}\) and \(\mathscr{C}_{\pm}^{2}\). The collisions of limit cycles (\(\mathscr{C}_{0}\) and \(\mathscr{C}_{\pm}^{1}\)) with equilibria lead to either a heteroclinic cycle \(\Lambda\) or four different homoclinic cycles (\(\Lambda_{\pm}\) and \(\Gamma_{\pm}\)). Each pair of limit cycles (\(\mathscr{C}_{\pm}^{2},\mathscr{C}_{\pm}^{1}\)) may respectively merge together and disappear. This is a _saddle-node bifurcation of limit cycles_. Different combinations of these give rise to a rich list of bifurcation scenarios. Finite determinacy of each of these bifurcations has been thoroughly investigated. Subcritical and supercritical types of bifurcations can be switched via small changes in the controller coefficients. This greatly influences their stabilization potential in applications. Our symbolic estimation of critical controller sets provides a computationally feasible approach for bifurcation control of such systems. We derive novel estimates for the heteroclinic, homoclinic and limit cycles to facilitate the amplitude size management and frequency control of the nearby oscillating dynamics. To illustrate our approach, we consider the Chua system with a quadratic state-feedback controller. Our approach provides estimated controller sets in terms of the original controller coefficients and constants of the controlled Chua system. The controlled Chua system experiences a pitchfork bifurcation, three Hopf bifurcations and two homoclinic bifurcations. We show that there exist two different regions of controller coefficient choices for feedback regularization and two nearby regions for a supercritical Hopf stabilization approach.

_Keywords:_ Singular control; Critical controller sets; Uncontrollable linearization; Subcritical and supercritical switching.

_2010 Mathematics Subject Classification_: Primary: 34H20, 34K18, 34C20; Secondary: 58E25.

## 1 Introduction

Differential systems with symmetry (equivariant systems) frequently occur in many real-life and engineering problems, while qualitative changes are intrinsic elements of their evolutions.
A system is called _singular_ when it experiences a _qualitative change_, and each qualitative change is called a _bifurcation_. Hence, bifurcation control of equivariant singular systems is a natural contribution for the management of their qualitative evolutions. Due to the singularity around any qualitative change, an uncontrolled system may potentially experience varieties of desired and undesired dynamics. We distinguish the different bifurcation scenarios for the potential controlled dynamics, where they can be realised and/or switched as desired through a state-feedback singular bifurcation controller with small controller coefficients. These bifurcations can be quantitatively controlled, prevented, delayed or accelerated. Hence, the controlled system can experience any desired dynamics chosen from a rich list of bifurcation scenarios through tuning small controller coefficients. We refer to this as the _bifurcation control problem_. Singularity provides an asset and an important potential for engineering applications with high manoeuvring capability. Manoeuvrability here implies frequent quantitative and qualitative dynamics changes with minimal controller costs. Our proposed approach makes full use of the internal singular dynamics of the uncontrolled system to enforce the desired dynamics. This is an alternative to many existing techniques in nonlinear control theory such as the back-stepping method, input-state feedback linearization, Lyapunov functions, etc.; _e.g.,_ see [39]. Most of these techniques are oblivious of the uncontrolled singular dynamics: the designed controllers (fully or partially) eliminate the internal (uncontrolled) dynamics and replace it with an already-known desired and non-singular dynamics. Thus, they mainly fail to exploit the benefits of singularities; this includes the manoeuvring capabilities. The main obstacle originates from the underlying complexity (highly rich dynamics) of singular systems. Bifurcation control stands to efficiently make use of the intrinsic singular dynamics of the uncontrolled system. This justifies calling our proposed controller approach _singular control_. Therefore, bifurcation control leads to an effective approach with low-cost controllers and high manoeuvrability. The three main claimed contributions of this paper are as follows: (1) A complete symmetry-breaking classification for the highly rich bifurcation scenarios associated with the \(\mathbb{Z}_{2}\)-equivariant Bogdanov-Takens singularity. (2) Novel symbolic estimates of critical controller sets, which are sufficiently accurate for many of their potential applications. (3) The introduction of a practically feasible approach for singular control of linearly uncontrollable systems with arbitrary state dimension and two zero eigenvalues of the (non-hyperbolic) linearization. There is a rich list of bifurcation scenarios, all of which can be realised through our proposed approach. These are only useful when the controller design is adaptable based on the physics of the problem. This is, of course, one of our main claimed contributions. Furthermore, bifurcations may switch from subcritical to supercritical type (or vice versa) when small changes are applied to the controller coefficients; see Theorem 3.7 and Remark 4.3. These signify the importance of studying symmetry-breaking bifurcations due to their influence on the stabilization approach in applications.
Small modeling imperfections for singular systems lead to bifurcations and thus, they can be a dominant factor for determining the dynamics. For \(\mathbb{Z}_{2}\)-equivariant systems, bifurcations include the loss of symmetry; this is technically called a _symmetry breaking_ bifurcation. Hence, equivariant bifurcation control is not sufficient for systems whose noises and imperfections have the potential for symmetry breaking. Thus, symmetry-breaking bifurcation analysis and control for singular systems is necessary in these cases. Uncontrolled smooth differential systems whose linearization at a non-hyperbolic equilibrium has a pair of zero eigenvalues (non-semisimple and \(\mathbb{Z}_{2}\)-equivariant mode cases) can be reduced to
\[\tfrac{dx}{dt}=f(x,y),\ \tfrac{dy}{dt}=a_{0}x+g(x,y),\ f(-x,-y)=-f(x,y),\ g(-x,-y)=-g(x,y).\tag{1.1}\]
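To make the abstract family (1.1) concrete, the following sketch integrates one admissible instance numerically; the particular odd choices of f and g below are illustrative and are not the normal-form coefficients analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

a0, a1, b0 = -1.0, 1.0, -1.0   # illustrative constants

def rhs(t, state):
    """One admissible instance of system (1.1): f and g are odd in (x, y)."""
    x, y = state
    f = y                           # f(-x,-y) = -f(x,y)
    g = a1 * x**3 + b0 * x**2 * y   # g(-x,-y) = -g(x,y)
    return [f, a0 * x + g]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.0], rtol=1e-8, atol=1e-10)
print("state at t = 50:", sol.y[:, -1])
```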
They presented a state-feedback controller for their stabilization and the quadratic invariants characterizing the generic Hopf bifurcation control. Hamzi [25] considered the singular control families with double zero uncontrollable modes. He obtained the quadratic invariants of the family. The quadratic invariants are used for synthesizing a quadratic stabilizing controller and for characterizing the generic normal forms. Our results generalize his results to a degenerate normal form family and symmetry-breaking cases, where we address the full singular bifurcation control problem. We studied bifurcation control for families of Bogdanov-Takens singular systems, including a \(\mathbb{Z}_{2}\)-equivariant family, in [18]. In this paper we skip the existing dynamics that are implied from our equivariant dynamical analysis in [18]; see Remark 3.12. Further, we have investigated the bifurcation control for a singular family on a three-dimensional central manifold with two imaginary uncontrollable modes in [17]. Gazor and Shoghi [19, 20] considered applications of bifurcation control for sound intensities in music; also see [6, 7, 21, 32, 38]. An efficient nonlinear time transformation method has recently been developed in [35] and applied to obtain highly accurate estimates for global bifurcations of homoclinic and heteroclinic varieties of codimension two singularities. We employ a novel generalisation of their approach to deal with our codimension four controlled system; also see [1, 2, 3, 6, 36, 38] for closely related results and techniques.

This draft is organized as follows. Jet sufficiency and finite determinacy of equilibrium bifurcations are discussed in Section 2. In Section 3, we study local symmetry-breaking bifurcations for the controlled system (1.3). We derive and present a rich list of controller manifolds in terms of controller coefficients in this section. These give rise to an effective tool not only to cause or prevent bifurcations but also to satisfy control objectives such as feedback regularization and stabilization techniques via supercritical Hopf and homoclinic/heteroclinic bifurcations. Further, we derive leading estimates for homoclinic and heteroclinic cycles as well as the amplitudes and angular frequencies of bifurcated limit cycles in terms of controller coefficients. These provide efficient criteria for the amplitude size control and frequency management of the nearby oscillating dynamics. Bifurcation control of \(\mathbb{Z}_{2}\)-equivariant systems is considered in Section 4 using a single-input quadratic controller. We show how a system with an uncontrollable linearization can be treated through our proposed symmetry-breaking bifurcation control. We show that the controlled system admits two saddle-node controller manifolds, two Hopf controller sets (one supercritical and the other subcritical) and two homoclinic controller sets for the linearly uncontrollable case; see Theorem 4.2 and its proof. Section 5 is dedicated to illustrating our results on the Chua differential system. This system, in the vicinity of its Bogdanov-Takens singularity, may undergo a pitchfork singularity, three Hopf bifurcations and three homoclinic bifurcations. They can all be realised through an input state-feedback controller design. We explore feedback regularization of the origin and state feedback stabilization via supercritical Hopf bifurcations.
We prove that there exist two regions of controller coefficient choices for feedback regularization, while the feedback stabilization approach admits controller coefficient choices from two other regions. Freedoms of choices for controller coefficients within these regions facilitate the amplitude size and frequency managements of the oscillating dynamics. We show how a very small single-input controller readily enforces all these control objectives; _e.g.,_ see figures 9(b), 9(c) and 9(d). Finally, conclusions are drawn in section 3.2.

## 2 Finite determinacy for bifurcations of equilibria

The infinite Taylor expansion of normal forms is an obstacle both in the theoretical analysis in bifurcation theory and in the practical computations using computer algebra systems; _e.g.,_ see [10, 11, 12, 17]. One usually truncates the infinite normal forms up to a certain degree \(k\); this is called a _\(k\)-jet_ of the differential system. In other words, one ignores the higher order terms in the normal form expansion. Higher order terms are never derived due to the complexity of the formulas and the impractical computations using computers. Hence, the qualitative equivalence of the truncated normal forms and the original normal forms is an important question and needs to be thoroughly investigated. The bifurcation analysis of the truncated normal forms may be inaccurate, misleading and/or essentially wrong when the question of _jet sufficiency_ is not investigated. Consider an equivalence relation and recall that a property is defined as a _qualitative property_ when it is invariant under the equivalence relation. When a truncated normal form reflects the qualitative dynamics of the original system, we refer to the system as a _finitely determined system_. The jet sufficiency refers to the degree at which the truncated system is sufficient to fully represent the _qualitative dynamics_ of the original system. Different equivalence relations are necessary depending on the intended properties in our analysis. We first consider the contact equivalence relation and appeal to singularity theory. Contact equivalence is the finest equivalence relation for equilibrium bifurcations of a given vector field. The results from singularity theory are compatible with normal forms of various types, i.e., normal forms, orbital normal forms, parametric normal forms, and also concepts such as universal asymptotic unfolding, etc.; see [17]. We follow [13, 24] and consider one of the parameters as a distinguished parameter and denote it by \(\lambda\). Two mappings \(f(x,\lambda)\) and \(g(x,\lambda)\) are contact equivalent if there exist a local diffeomorphism germ \(X(x,\lambda)\) with \(X(0,0)=0\), a locally invertible map \(\Lambda(\lambda)\), \(\Lambda(0)=0\), and a locally nonsingular \(n\times n\) matrix \(S(x,\lambda)\) such that \(f(x,\lambda)=S(x,\lambda)g(X(x,\lambda),\Lambda(\lambda))\); see [24, page 166] and [13].

**Remark 2.1**.: Any \(\mathbb{Z}_{2}\)-equivariant differential system (1.1) can be transformed into a \(\mathbb{Z}_{2}\)-equivariant normal form system using permissible \(\mathbb{Z}_{2}\)-invariant polynomial transformations. Then, the remaining terms in the normal form system will be the \(\mathbb{Z}_{2}\)-equivariant terms among those remaining in the normal form system in [18, Theorem 2.9].
As similar to [18, Theorem 3.3], any multiple parametric perturbation (1.2) (including \(\mathbb{Z}_{2}\)-breaking perturbation terms) of the \(\mathbb{Z}_{2}\)-equivariant differential system (1.1) can be transformed into (1.3). Since we are dealing with local bifurcations of vector fields, we define germs of vector fields at the origin. Two vector fields are defined as germ equivalent when there is a neighborhood where they are equal on that. Then, each equivalent class of vector fields from germ equivalent relation is called a germ vector field. The steady-state bifurcation problem associated with the normal form system of the generalized cusp case of Bogdanov-takens singularity is given by \(G(x,y,\lambda):=(G_{1},G_{2})=(0,0)\) where \(G_{1}(0,0,0)=G_{2}(0,0,0)=0\), \[(G_{1},G_{2}):=\left(a_{1}y^{3}+b_{0}xy^{2}+\lambda+\sum_{l=1}^{\lfloor\frac{N -1}{4}\rfloor}b_{l}xy^{4l},a_{0}x+b_{0}y^{3}+\sum_{l=1}^{\lfloor\frac{N-1}{4} \rfloor}b_{l}y^{4l+1}\right)+h.o.t., \tag{2.1}\] \(\lambda:=\mu_{0}\), \(a_{0}a_{1}b_{0}\neq 0\); see [18] for more details. **Theorem 2.2**.: _The germ \(G\) is contact equivalent with \(G+p\) for all \(p\in\overrightarrow{\mathcal{M}}^{4}\)._ Proof.: Normal form results and the approach in singularity theory are compatible. The argument relies on the fact that smooth changes of coordinates, time rescaling and reparametrization all are compatible with contact equivalence relation and transform a germ to its contact equivalent germ. Furthermore, formal normal forms can be extended into smooth cases using Borel lemma modulo flat parts. We follow [24, Definition 7.1, Proposition 1.4, Theorem 7.2 and Theorem 7.4] and instead prove that \(\overrightarrow{\mathcal{M}}^{4}\subseteq\mathcal{K}(G)\), where \(\overrightarrow{\mathcal{M}}\) is the generated module \[\overrightarrow{\mathcal{M}}:=\left\langle\binom{x}{0},\binom{0}{x},\binom{y}{0 },\binom{0}{y},\binom{\lambda}{0},\binom{0}{\lambda},\binom{0}{\lambda}\right\rangle \tag{2.2}\] over \(\mathscr{E}_{x,y,\lambda}.\) Thus, \(\overrightarrow{\mathcal{M}}^{4}\) is the space of all smooth vector field germs whose Taylor expansions do not have monomial vector field terms of degree less than \(4\). Here, \(\mathscr{E}_{x,y,\lambda}\) is the local ring of all smooth germs in \((x,y,\lambda)\)-variables. Define its unique maximal ideal by \(\mathcal{M}:=<x,y,\lambda>\). Note that the flat vector fields live in \(\overrightarrow{\mathcal{M}}^{3}\). Define an \(\mathscr{E}_{x,y,\lambda}\)-module \(\mathcal{K}(G)\) that is generated by \[\mathcal{M}^{2}\binom{G_{1x}}{G_{2x}},\mathcal{M}^{2}\binom{G_{1 \rho}}{G_{2\rho}},\mathcal{M}\binom{G_{1}}{0},\mathcal{M}\binom{G_{2}}{0}, \mathcal{M}\binom{0}{G_{1}},\mathcal{M}\binom{0}{G_{2}}. \tag{2.3}\] When \(\overrightarrow{\mathcal{M}}^{k+1}\subseteq\mathcal{K}(f)\), by [24, Theorem 7.2], \(G\) is \(k\)-sufficient with respect to contact equivalence relation. Now we choose \(a_{1}:=1\) to simplify the formulas. Then, define \[J:=\left\langle\binom{0}{xy}\right\rangle+\overrightarrow{\mathcal{M}}^{4}.\] We first recall Nakayama Lemma. 
For any \(\mathscr{E}_{x,y,\lambda}\)-modules \(J\) and \(\mathcal{K}\), the Nakayama Lemma implies that \(J\subseteq\mathcal{K}\) if and only if \(J\subseteq\mathcal{K}+\mathcal{M}J.\) Thereby, we denote \(\simeq\) for equalities modulo terms in \(\mathcal{M}J\): \[\lambda^{i}x^{j}y^{k}\binom{G_{1}}{0}\simeq\binom{\lambda^{i+1}x^{j}y^{k}}{0}, \hskip 14.226378pt\lambda^{i}x^{j}y^{k}\binom{0}{G_{1}}\simeq\binom{0}{ \lambda^{i+1}x^{j}y^{k}},\] where \(i+j+k=3\), \(i,j,k\geq 0.\) These conclude membership of \(\binom{\lambda^{4}}{0}\), \(\binom{\lambda x^{3}}{0}\), \(\binom{\lambda x^{2}y}{0}\), \(\binom{\lambda xy^{2}}{0}\), \(\binom{\lambda y^{3}}{0}\), \(\binom{0}{\lambda^{4}}\), \(\binom{0}{\lambda x^{3}}\), \(\binom{0}{\lambda x^{2}y}\), \(\binom{0}{\lambda xy^{2}}\), and \(\binom{0}{\lambda y^{3}}\in\mathcal{K}_{s}+\mathcal{M}J.\) Next, we consider \[x^{3}\binom{G_{2}}{0}\simeq\binom{-x^{4}}{0},\hskip 14.226378pty^{3}\binom{G_{2 }}{0}\simeq\binom{-xy^{3}}{0},\hskip 14.226378ptx^{3}\binom{0}{G_{2}}\simeq \binom{0}{-x^{4}},\hskip 14.226378pty^{3}\binom{0}{G_{2}}\simeq\binom{0}{-x ^{3}}.\] Similarly, we can show that \(\binom{x^{i+1}y^{j}}{0},\binom{0}{x^{i+1}y^{j}}\in\mathcal{K}_{s}+\mathcal{M}J\) for \(i+j=3\) and \(i,j\geq 0\). In the one hand, we have \[x^{i}y^{j}\binom{G_{2}}{0}\simeq\binom{x^{i+1}y^{j}}{0},\ \ \ \ x^{i}y^{j}\binom{0}{G_{2}}\simeq \binom{0}{x^{i+1}y^{j}}\] Since \(\binom{xy^{3}}{0}\in\mathcal{K}+\mathcal{M}J\) and \[xy\binom{G_{1x}}{G_{2x}}\simeq\binom{b_{0}xy^{3}}{-xy},\ \ \ \binom{0}{xy}\in \mathcal{K}+\mathcal{M}J.\] On the other hand, \[y\binom{0}{G_{2}}=\binom{0}{-xy+b_{0}y^{4}}\ \ \ \text{implies that }\ \ \ \binom{0}{y^{4}}\in\mathcal{K}+\mathcal{M}J.\] Finally, \(y^{2}\binom{G_{1y}}{G_{2y}}\simeq\binom{y^{4}}{0}\) completes the proof. ## 3 Symmetry breaking and critical controller sets Bifurcation control is facilitated by introducing _critical controller sets_ or _controller manifolds_. Critical controller sets are typically codimension-one bifurcation manifolds within the controller coefficient space. This is a necessary condition of critical controller manifolds to provide a partition for the controller coefficient space into a finite number of connected regions. When controller coefficients from critical controller sets are subjected to small perturbations, the qualitative dynamics of the controlled system changes. The neighbourhood validity of controller sets is greatly influenced by the relative geometry of these manifolds in four-dimension. For example, a limit cycle \(\mathscr{C}_{0}\) may collide with equilibrium \(E_{+}\) and disappear at a homoclinic controller set \(T_{HmC+}.\) It can alternatively collide with another equilibrium \(E_{-}\) and disappear at different homoclinic controller set \(T_{HmC-}\). Therefore, controller manifold \(T_{HmC+}\) is no longer valid if \(\mathscr{C}_{0}\) has already been disappeared through \(T_{HmC-}\), or vice versa. We provide local criteria for our derived critical controller manifolds. These criteria are helpful and necessary for both deriving critical controller sets and the distinction of the neighbourhood validity within the controller coefficient space. Further, we apply alternative normalized systems and truncation degrees in our formulation. We derive the bifurcation controller sets in terms of symbolic constants and unknown controller coefficients. This is important for many control engineering applications. 
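For readers who wish to experiment with such controller coefficient choices numerically, the short Python sketch below integrates the cubic truncation of the controlled system (1.3), read off from the cubic part of (3.21) with normal form constants \(a_{1},b_{0}\). It is an illustrative sketch only, not the code used for our figures, and the sample coefficients are arbitrary.

```python
# Sketch: integrate the cubic truncation of the controlled normal form (1.3)
# for a given choice of controller coefficients (mu0, mu1, mu2, mu3).
# The vector field is read off from the cubic part of (3.21); a1, b0 are the
# fixed normal form constants.
import numpy as np
from scipy.integrate import solve_ivp

def controlled_bt(t, z, mu0, mu1, mu2, mu3, a1, b0):
    x, y = z
    dx = mu0 + mu1*y + mu2*x + mu3*x*y + a1*y**3 + b0*x*y**2
    dy = -x + mu2*y + mu3*y**2 + b0*y**3
    return [dx, dy]

pars = (0.0, 0.5, 0.02, 0.1, 1.0, -1.0)   # sample (mu0, mu1, mu2, mu3, a1, b0)
sol = solve_ivp(controlled_bt, (0.0, 400.0), [0.05, 0.0],
                args=pars, max_step=0.05)
print(sol.y[:, -3:])   # a bounded oscillating tail suggests a nearby
                       # attracting limit cycle for this coefficient choice
```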
Equation (1.3) is \(\mathbb{Z}_{2}\)-equivariant for \(\mu_{0}=\mu_{3}=0\); see [18] for \(\mathbb{Z}_{2}\)-equivariant bifurcation control. The symmetry-breaking occurs when either of \(\mu_{3}=0\) and \(\mu_{0}=0\) or both fail. Due to the geometric complexity of bifurcation controller sets in four dimensional space and errors of estimates, bifurcation controller sets are generally valid within certain neighborhoods. We provide these sets under two categories: symmetry breaking analysis using \(\mu_{3}\) (while \(|\mu_{0}|\ll 1\)) and symmetry-breaking analysis through \(\mu_{0}\) and \(\mu_{3}\) for \(\mu_{0}\mu_{3}\neq 0\). Here, we use the notation of the little \(\rho\) for parameters, where they are generally polynomial functions of the original control parameters. **Proposition 3.1**.: _For \(|\mu_{0}|=\sigma(||(\mu_{1},\mu_{2},\mu_{3})||^{4}),\) estimated primary pitchfork and Hopf bifurcation controller sets are_ \[T_{P}:=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{1}+{\mu_{2}}^{2}=0 \right\}\ \ \text{and}\ \ T_{H}:=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{2}=0,\mu_{1}>0 \right\}. \tag{3.1}\] * _[_18_, Theorem 5.1]__: We have a super-critical Hopf bifurcation when_ \(b_{0}<0.\) _Bifurcated limit cycle_ \(\mathscr{C}_{0}\) _appears when_ \(\mu_{2}>0\) _and is asymptotically stable. For_ \(b_{0}>0,\) _the system undergoes a sub-critical Hopf bifurcation and the limit cycle_ \(\mathscr{C}_{0}\) _exists when_ \(\mu_{2}<0\) _and is unstable. There is a local bifurcation of secondary equilibria_ \(E_{\pm}\) _from the origin (or an equilibrium slightly deviated from the origin) in nearby of_ \(T_{p}.\)__ * _Estimated radius and angular frequency for the bifurcated limit cycles_ \(\mathscr{C}_{0}\) _are given by_ \(\sqrt{-\frac{2\mu_{2}}{b_{0}}}\) _and_ \(\sqrt{-\mu_{1}}.\) _These are useful for magnitude and frequency management of the oscillating dynamics. There are a bifurcation controller set_ \(\mathscr{B}\) _and an estimated hysteresis controller set_ \(\mathscr{H}\) _given by_ \[\mathscr{B}:=\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{0}=0\}\text{ and }\mathscr{H}:=\big{\{}(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{0}=\frac{8\mu_{ 3}(-\mu_{1})^{\frac{3}{2}}}{27(a_{1}+2b_{0}\sqrt{-\mu_{1}})^{2}}\big{\}}.\] (3.2) _Critical controller sets_ \(\mathscr{B}\) _and_ \(\mathscr{H}\) _contribute into the symmetry-breaking and bifurcation diagram classification of the pitchfork singularity for controlled system (_1.3_); see Figure_ 1_._ * _A symbolic approximation for the secondary equilibria of system (_1.3_) follows_ \[E_{\pm}:\quad(x_{\pm},y_{\pm}):=\bigg{(}\mu_{2}y_{\pm}+\mu_{3}y_{\pm}{}^{2}+b_ {0}y_{\pm}{}^{3},\frac{-\mu_{2}\mu_{3}\pm\sqrt{-a_{1}\mu_{1}-a_{1}\mu_{2}{}^{2 }-2b_{0}\mu_{1}\mu_{2}-2b_{0}\mu_{2}{}^{3}-\mu_{1}\mu_{3}{}^{2}}}{a_{1}+2b_{0} \mu_{2}+\mu_{3}{}^{2}}\bigg{)}.\] (3.3) Proof.: The eigenvalues of the linearised system at the origin are \(\mu_{2}\pm\sqrt{-\mu_{1}},\) where we treat \(\mu_{0}\) as a small perturbation. Hence, equation (3.1) is a Hopf bifurcation manifold and \(\sqrt{-\mu_{1}}\) stands for the leading term in the angular velocity of the oscillating dynamics. Consider \(|\mu_{2}|,|\mu_{3}|=\sigma(|\mu_{1}|).\) Note that parameters \(\mu_{2}\) and Then, a normalized amplitude equation in polar coordinates \((\rho,\theta)\) is \(\dot{\rho}=\mu_{2}\rho+\frac{b_{0}}{2}\rho^{3}+\sigma(\rho^{5},|\mu_{1}|^{2}).\) An estimated radius for the bifurcated limit cycle from this system is \(\sqrt{-\frac{2\mu_{2}}{b_{0}}};\) see proof of [18, Theorem 5.1]. 
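This leading-order estimate can be checked directly from the truncated amplitude equation; the sketch below (with the higher order terms dropped) recovers \(\sqrt{-2\mu_{2}/b_{0}}\) as its nontrivial equilibrium.

```python
# Sketch: the nontrivial equilibria of the truncated amplitude equation
# rho' = mu2*rho + (b0/2)*rho^3 are rho = +/- sqrt(-2*mu2/b0); the positive
# one is the estimated radius of C_0 in the supercritical case b0 < 0, mu2 > 0.
import sympy as sp

rho, mu2, b0 = sp.symbols('rho mu_2 b_0', real=True)
roots = sp.solve(sp.Eq(mu2*rho + sp.Rational(1, 2)*b0*rho**3, 0), rho)
print(roots)
print([sp.N(r.subs({mu2: 0.05, b0: -1})) for r in roots])   # 0 and +/- 0.316...
```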
Thus, the radius of the bifurcated limit cycle grows when \(\frac{\mu_{0}}{b_{0}}\) decreases. Further, we have a single zero singularity when \(\lambda_{1}=0\) for \(\lambda_{0}:=\mu_{2}-\sqrt{-\mu_{1}};\)_i.e.,_ \(\mu_{1}\leq 0.\) Then, a truncated and re-scaled differential equation on the center manifold (also see [27]) is given by \[\dot{x}=\mu_{0}+\lambda_{0}\sqrt{-\mu_{1}}x+\frac{\sqrt{-\mu_{1}}\mu_{3}}{2}x^ {2}+\left(\frac{b_{0}\sqrt{-\mu_{1}}}{4}+\frac{a_{1}}{8}\right)x^{3}+\sigma( ||(\mu,x)||^{4}). \tag{3.4}\] Now we appeal to singularity theory developed in [24]. By [24, Proposition 4.4], this equation is a universal unfolding for the pitchfork singularity. Critical set \(\mathscr{B}\) and hysteresis \(\mathscr{H}\) follow [24, Pages 140]. Treat \(\lambda_{0}\) as the distinguished bifurcation parameter and \(G(x,\lambda_{0},\mu_{0},\mu_{1},\mu_{3}):=\mu_{0}+\lambda_{0}\sqrt{-\mu_{1}}x +\frac{\sqrt{-\mu_{1}}\mu_{3}}{2}x^{2}+\left(\frac{b_{0}\sqrt{-\mu_{1}}}{4}+ \frac{a_{1}}{8}\right)x^{3}\). Hence, the bifurcation set \(\mathscr{B}\) is obtained from \(G=\frac{\partial G}{\partial x}=\frac{\partial G}{\partial\lambda_{0}}=0\) while the hysteresis controller set \(\mathscr{H}\) is given by \(G=\frac{\partial G}{\partial x}=\frac{\partial^{2}G}{\partial x^{2}}=0.\) Here, \[\frac{\partial G}{\partial x}=\lambda_{0}\sqrt{-\mu_{1}}+\mu_{3}\sqrt{-\mu_{1} }x+3\left(\frac{b_{0}}{4}\sqrt{-\mu_{1}}+\frac{a_{1}}{8}\right)x^{2},\] \(\frac{\partial G}{\partial\lambda_{0}}=x\sqrt{-\mu_{1}},\) and \(\frac{\partial^{2}G}{\partial x^{2}}=\mu_{3}\sqrt{-\mu_{1}}+3(\frac{b_{0}}{2} \sqrt{-\mu_{1}}+\frac{a_{1}}{4})x.\) Omitting \(x\) and \(\lambda_{0}\) from these equations gives rise to the governing equations for \(\mathscr{B}\) and \(\mathscr{H}\). To estimate the secondary equilibria \(E_{\pm}\), we derive \(x\) from the second steady-state equation corresponding with (1.3). Then, we substitute \(x\) into the first steady-state equation to obtain \((2b_{0}\mu_{2}+{\mu_{3}}^{2}+a_{1})y^{3}+2\mu_{2}\mu_{3}y^{2}+(\mu_{1}+{\mu_{2}} ^{2})y=0\) modulo \(\mu_{0}\) and terms of degree four and higher in \(y\). Roots of this cubic polynomial are \(y=0\) and \(y_{\pm}\) in (3.3). Hence, the second steady-state equation for (1.3) concludes the formula (3.3). **Theorem 3.2** (Saddle-node controller sets).: _There are two saddle-node bifurcations at critical controller sets estimated by \(T_{SN}^{\pm}=\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\xi_{\pm}=0\}\) where_ \[\xi_{\pm}:=\mu_{0}-\tfrac{\mp 2\left(6b_{0}\mu_{1}\mu_{2}+3a_{1} \mu_{1}+3\mu_{3}{}^{2}\mu_{1}+3a_{1}\mu_{2}{}^{2}-\mu_{2}{}^{2}\mu_{3}{}^{2}+ 6b_{0}\mu_{3}{}^{3}\right)\sqrt{\mu_{2}{}^{2}\mu_{3}{}^{2}-6b_{0}\mu_{2}{}^{3 }-3\mu_{3}{}^{2}\mu_{1}-3a_{1}\mu_{2}{}^{2}-6b_{0}\mu_{1}\mu_{2}-3a_{1}\mu_{1 }}}}{27\left(a_{1}+{\mu_{3}}^{2}+2b_{0}\mu_{2}\right)^{2}}\] \[+\tfrac{2\left({\mu_{3}}^{2}{\mu_{3}}^{3}+18b_{0}\mu_{2}{}^{4}{ \mu_{3}}+9\mu_{1}\mu_{2}{}^{3}+18b_{0}\mu_{1}\mu_{2}{}^{2}{\mu_{3}}+9a_{1}\mu _{1}\mu_{2}{}\mu_{3}+9a_{1}\mu_{2}{}^{3}{\mu_{3}}\right)}}{27\left(a_{1}+{\mu_ {3}}^{2}+2b_{0}\mu_{2}\right)^{2}}. \tag{3.5}\] Proof.: The characteristic polynomial coefficients of \(\lambda^{2}+d_{1}\lambda+d_{2}\) for Jacobian matrix of (1.3) follows \(d_{1}\!:=\!4b_{0}y^{2}+3\mu_{3}y+2\mu_{2}\), and \[d_{2}\!:=\!5b_{0}{}^{2}y^{4}+8b_{0}\mu_{3}y^{3}+3\left({\mu_{3}}^{2}+2b_{0}\mu _{2}+a_{1}\right)y^{2}+4\mu_{2}\mu_{3}y+{\mu_{2}}^{2}+\mu_{1}. \tag{3.6}\] The scalars \(d_{1}\) and \(d_{2}\) are arrays of the first column of Routh table (_e.g.,_ see [41]). 
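These entries can be verified symbolically; the sketch below assumes the cubic truncation of (1.3) read off from (3.21) and checks that, on the equilibrium branch obtained from the second steady-state equation, the trace and determinant of the Jacobian reproduce \(d_{1}\) and \(d_{2}\) in (3.6) (up to the sign convention in the characteristic polynomial).

```python
# Sketch: symbolic check of (3.6) for the cubic truncation of (1.3).
import sympy as sp

x, y, mu0, mu1, mu2, mu3, a1, b0 = sp.symbols('x y mu0 mu1 mu2 mu3 a1 b0')
f1 = mu0 + mu1*y + mu2*x + mu3*x*y + a1*y**3 + b0*x*y**2
f2 = -x + mu2*y + mu3*y**2 + b0*y**3
J = sp.Matrix([f1, f2]).jacobian([x, y])
x_eq = mu2*y + mu3*y**2 + b0*y**3                 # from f2 = 0

d1 = 4*b0*y**2 + 3*mu3*y + 2*mu2
d2 = (5*b0**2*y**4 + 8*b0*mu3*y**3 + 3*(mu3**2 + 2*b0*mu2 + a1)*y**2
      + 4*mu2*mu3*y + mu2**2 + mu1)
print(sp.simplify(J.trace() - d1))                # 0
print(sp.simplify(J.det().subs(x, x_eq) - d2))    # 0
```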
We claim that there are two local equilibria, where they undergo saddle-node bifurcations. Let \((x,y)=(x_{0},y_{0})\) be one of these two. Then, \((x_{0},y_{0})\) must satisfy \(d_{2}(y_{0})=0\) and the steady-state equations of (1.3). These provide a precise implicit formulation for the saddle-node singularity. Shift of coordinates \(Y=y-y_{0}\) and \(X=x-x_{0}\) give rise to \[\dot{X}\!=\!\big{(} \mu_{3}y_{0}+b_{0}{y_{0}}^{2}+\mu_{2}\big{)}X\!+(b_{0}y_{0}\mu_{ 2}+3a_{1}y_{0}+b_{0}\mu_{3}{y_{0}}^{2}+{y_{0}}^{3}{b_{0}}^{2}+b_{0}X)Y^{2}+(\mu _{3}+2b_{0}y_{0})XY\] \[+(\mu_{1}+\mu_{3}y_{0}\mu_{2}+2b_{0}{y_{0}}^{2}\mu_{2}+3a_{1}{y_ {0}}^{2}+3\mu_{3}{y_{0}}^{3}b_{0}+2{y_{0}}^{4}{b_{0}}^{2}+{\mu_{3}}^{2}{y_{0}} ^{2})Y+a_{1}Y^{3}+\xi_{\pm},\] \[\dot{Y}=(\mu_{2}+2\mu_{3}y_{0}+3b_{0}{y_{0}}^{2})Y-X+(\mu_{3}+3b_{ 0}y_{0})Y^{2}+b_{0}Y^{3}. \tag{3.7}\] The eigenvalues \(\lambda_{\pm}\) of Jacobian are then given by \[\mu_{2}+\tfrac{3}{2}\mu_{3}y_{0}+2b_{0}{y_{0}}^{2}\pm\tfrac{{\rm i}\sqrt{4y_{ 0}\mu_{2}\mu_{3}+8b_{0}{y_{0}}^{2}\mu_{2}+3{\mu_{3}}^{2}{y_{0}}^{2}+8{y_{0}}^{3 }b_{0}\mu_{3}+4{y_{0}}^{4}{b_{0}}^{2}+12a_{1}{y_{0}}^{2}+4\mu_{1}}}{2}.\] When \(\text{sign}(\mu_{2}+\tfrac{3}{2}\mu_{3}y_{0}+2b_{0}{y_{0}}^{2}\pm\tfrac{1}{2})=\pm 1\), there is a polynomial \(h_{\pm}(z,\xi_{\pm}):=\gamma_{0}\xi_{\pm}+\gamma_{1}z^{2}+\gamma_{2}\xi_{\pm} ^{2}+\gamma_{2}s\xi_{\pm}\), \(z\in\{X,Y\}\), for a quadratic approximation of the center manifold. We here assume that \(|\xi_{\pm}|\ll|y_{0}|.\) We apply the center manifold reduction procedure to obtain \(\gamma_{0}=-\tfrac{1}{4b_{0}{y_{0}}^{2}}\), \(\gamma_{1}=\tfrac{2b_{0}{y_{0}}^{2}-3a_{1}}{64b_{0}{y_{0}}^{3}{y_{0}}^{5}}\), \(\gamma_{2}=0\), and \(\gamma_{3}=\tfrac{4b_{0}{}^{2}{y_{0}}^{2}-3a_{1}}{64b_{0}{}^{4}{y_{0}}^{7}}.\) By a time-rescaling, the governing differential equation on the center manifold follows \[\dot{Y}=\big{(}\tfrac{5}{y_{0}}^{2}+\tfrac{3a_{1}}{16b_{0}}\big{)}Y^{2}+\xi_{ \pm}\big{(}\tfrac{3}{16b_{0}}+\tfrac{3a_{1}}{32b_{0}{}^{3}{y_{0}}^{2}}\big{)}Y+ \xi_{\pm}\big{(}{y_{0}}^{3}+\tfrac{3a_{1}\xi_{\pm}}{256b_{0}{}^{4}{y_{0}}^{4}}+ \tfrac{\xi_{\pm}}{128b_{0}{}^{2}{y_{0}}^{2}}\big{)}. \tag{3.8}\] The discriminant is given by \(\tfrac{\xi_{\pm}\big{(}\xi_{\pm}-48a_{1}{y_{0}}^{3}-160b_{0}{y_{0}}^{5}\big{)}} {64b_{0}{y_{0}}^{2}{y_{0}}^{6}}.\) This is a saddle-node bifurcation, where we have two new local equilibria for \(\text{sign}(3a_{1}y_{0}+10b_{0}{}^{2}{y_{0}}^{3})\xi_{\pm}<0\). We have no new equilibrium when \(sign(3a_{1}y_{0}+10b_{0}{}^{2}{y_{0}}^{3})\xi_{\pm}>0\) and \(|\xi_{\pm}|\) is sufficiently small. To obtain the symbolic estimated critical controller sets (3.5), we consider cubic truncations of \(d_{2}(y_{0})=0\) with respect to \(y_{0}.\) Then, \(y_{0}\) follows equation (3.3). This confirms our claim for two local equilibria with a saddle-node singularity. Substituting them into the truncated equation for \(d_{2}(y_{0})=0\), we derive the estimated critical controller sets (3.5). **Theorem 3.3** (Supercritical and subcritical Hopf bifurcations from \(E_{\pm}\)).: _We assume that \(|\mu_{0}|=\mathscr{O}(||(\mu_{1},\mu_{2},\mu_{3})||^{4}),\)\(|\mu_{3}|=\mathscr{O}(||\mu_{1},\mu_{2}^{2}||),\) and \(a_{1}>0\). 
Then, approximated Hopf controller sets for degenerate Hopf singularities associated with \(E_{+}\) and \(E_{-}\) are defined by \(T_{H\pm}\):_ \[\left\{\mu|\,2a_{1}{}^{\frac{7}{2}}\mu_{2}-3a_{1}{}^{\frac{5}{2}}\mu_{2}\mu_{3 }{}^{2}-4b_{0}a_{1}{}^{\frac{3}{2}}(a_{1}-2b_{0}\mu_{2}-\mu_{3}{}^{2})(\mu_{1} +\mu_{2}{}^{2})\pm\frac{a_{1}{}^{2}(6a_{1}-22b_{0}\mu_{2}-3\mu_{3}{}^{2})\mu_{ 3}\sqrt{-\mu_{1}-\mu_{2}{}^{2}}}{2}=0\right\}, \tag{3.9}\] _where \(\mu=(\mu_{0},\mu_{1},\mu_{2},\mu_{3}).\) Due to the restrictions on control coefficients, a full Bautin bifurcation does not occur here; see Theorem 3.7. Yet, one tertiary limit cycle \(\mathscr{C}_{\pm}^{1}\) bifurcates from either of the equilibria \(E_{\pm}\) when controller coefficients cross critical controller sets \(T_{H\pm}\) and \(b_{0}\eta_{\pm}>0\), respectively. Here, \(\eta_{\pm}:=\mu_{2}(-\mu_{1}-\mu_{2}{}^{2})\pm\frac{3}{2}\frac{\mu_{3}}{\sqrt {a_{1}}}+\frac{2b_{0}\sqrt{-\mu_{1}-\mu_{2}{}^{2}}}{a_{1}}.\) The bifurcation is supercritical when \(\eta_{\pm}>0\) and subcritical for \(\eta_{\pm}<0.\) The leading terms for the radius and angular velocity of the bifurcated limit cycles are \(\frac{8\sqrt{2}\sqrt{7a_{1}b_{0}{}^{-1}\mu_{2}-46(\mu_{1}+\mu_{2}{}^{2})}}{7a_ {1}\text{sign}(b_{0})}-\frac{64\sqrt{-\mu_{1}-\mu_{2}{}^{2}}}{7a_{1}}\) and \(\sqrt{2}(-\mu_{1}-\mu_{2}{}^{2}),\) respectively._ Proof.: To facilitate Hopf bifurcation analysis, we transform differential system (1.3) into an alternative normal form system \(\dot{\dot{x}}=\mu_{0}+\mu_{1}\ddot{y}+2\mu_{2}\ddot{x}+\mu_{2}{}^{2}\ddot{y}+ 3\mu_{3}\ddot{x}\ddot{y}+a_{1}\ddot{y}^{3}+4b_{0}\ddot{x}\ddot{y}^{2}+2b_{0}\mu _{2}\ddot{y}^{3}+b_{0}{}^{2}\ddot{y}^{5},\)\(\dot{\dot{y}}=-\ddot{x}.\) The estimated secondary equilibria \(E_{\pm}\) in the new coordinate system turns out to be \((\ddot{x},\ddot{y})=(0,y_{\pm}),\) where \(y_{\pm}\) is given in (3.3). Consider changes of controller coefficients \(\mu_{1}:=-\nu_{2}{}^{2}-\mu_{2}{}^{2}\) and \(\mu_{2}:=\nu_{3}\nu_{2}\). Then, four-degree truncated traces (multiplied with \(\nu_{2}{}^{-1}\)) of Jacobian matrices at the equilibria \((\ddot{x},\ddot{y})=(0,y_{\pm})\) follow \[8b_{0}a_{1}{}^{\frac{5}{2}}\nu_{2}\pm 6a_{1}{}^{3}\mu_{3}+4a_{1}{}^{ \frac{7}{2}}\nu_{3}\mp 3a_{1}{}^{2}\mu_{3}{}^{3}-16a_{1}{}^{\frac{3}{2}}b_{0}{}^{2} \nu_{3}\nu_{2}{}^{2}\] \[\mp 22b_{0}a_{1}{}^{2}\mu_{3}\nu_{2}\nu_{3}-8\mu_{3}{}^{2}b_{0}a_{1 }{}^{\frac{3}{2}}\nu_{2}-6a_{1}{}^{\frac{5}{2}}\mu_{3}{}^{2}\nu_{3}=0.\] This gives rise to Hopf controller manifolds for \(T_{H\pm}\) in (3.9). 
For these Hopf singularities, we consider the quadratic truncated traces of Jacobian matrices and obtain the variable \(\nu_{3}.\) This gives rise to the introduction of \(\eta_{\pm}:=\nu_{3}\pm\frac{3}{2}\frac{\nu_{4}}{\sqrt{a_{1}}}+\frac{2b_{0}\nu_ {2}}{a_{1}}.\) This is the same \(\eta_{\pm}\) as in the above in terms of \(\mu_{i}.\) Now we shift the equilibria to the origin via \(\tilde{x}=\tilde{x},\ \tilde{y}=\tilde{y}-y_{\pm}.\) Next, we apply linear transformations \(X=\sqrt{2}\sqrt{-\mu_{1}-\mu_{2}{}^{2}}\tilde{y},\)\(Y=\tilde{x},\) primary shift of coordinates and a time rescaling \(\tau=\nu_{2}t\) to obtain \(\dot{X}=-\sqrt{2}\nu_{2}{}^{2}Y\) and \(\dot{Y},\) where \(\dot{Y}\) is given by \[\left(\sqrt{2}\nu_{2}{}^{2}\mp\frac{9\sqrt{2}b_{0}\nu_{3}{}^{2} \mu_{3}}{2a_{1}\sqrt{a_{1}}}-\frac{7\sqrt{2}b_{0}{}^{2}\nu_{2}{}^{2}}{2a_{1}{} ^{2}}-\frac{3\sqrt{2}\nu_{2}{}^{2}\mu_{3}{}^{2}}{2a_{1}}\pm\frac{2\sqrt{2} \nu_{2}{}^{2}\mu_{3}}{\sqrt{a_{1}}}\pm\frac{8b_{0}\nu_{2}{}^{2}Y}{\sqrt{a_{1} }}\pm\frac{3\sqrt{2}\eta b_{0}\nu_{2}{}^{3}}{a_{1}}+3\mu_{3}\nu_{2}Y\right)X\] \[+\left(4b_{0}\nu_{2}Y\mp\frac{\sqrt{2}b_{0}{}^{2}\nu_{3}}{a_{1} \sqrt{a_{1}}}\pm\frac{3\sqrt{2}\eta b_{0}\nu_{2}{}^{2}}{\sqrt{a_{1}}}+\sqrt{2} \eta\nu_{2}\mu_{3}-\frac{\sqrt{2}b_{0}\nu_{2}{}^{2}\mu_{3}}{2a_{1}}\pm\frac{3 \sqrt{2a_{1}}\nu_{2}}{2}\right)X^{2}+h.o.t..\] By a Maple programming (_e.g.,_[22]), an estimated parametric normalized amplitude equation is given by \[\dot{\rho}=\rho(A_{1}+A_{2}R+A_{3}R^{2})+\mathscr{O}(\rho^{6}),\ \ A_{1}=\eta_{\pm}\sqrt{-\mu_{1}-\mu_{2}{}^{2}},\ \ A_{2}=-b_{0}\sqrt{-\mu_{1}-\mu_{2}{}^{2}}\pm\frac{9}{16}\mu_{3}\sqrt{a_{1}},\] and \(A_{3}=-\frac{7a_{1}b_{0}}{128}.\) Here \(R:=\rho^{2}.\) For sufficiently small values of \(\eta,\) the discriminant of \(A_{1}+A_{2}R+A_{3}R^{2}\) is always positive. Since \(|\mu_{3}|=\mathscr{O}(||\mu_{1},\mu_{2}^{2}||)\) and \(\frac{A_{2}}{A_{3}}=\frac{128\sqrt{-\nu_{3}{}^{2}-\nu_{2}}}{-7a_{1}}+\frac{72 \mu_{3}}{7b_{0}\sqrt{a_{1}}},\) the sum of its roots is always negative while \(\frac{A_{1}}{A_{3}}=-\frac{128\eta_{\pm}\sqrt{-\nu_{3}{}^{2}-\nu_{2}}}{7a_{1}b_{0}}\) is negative iff \(b_{0}\eta_{\pm}>0.\) Indeed, we have a positive root only when \(b_{0}\eta_{\pm}>0\) and otherwise, there is no positive root. These determine when the system admits a local limit cycle. Despite the degeneracy of Hopf singularity, controller restrictions limit the bifurcations to at most one limit cycle from either of \(E_{\pm}.\) Figures 2 demonstrate the estimated critical controller sets associated with system (3) when \(\mu_{3}:=\pm 0.1\) for \(\mu_{1}=\sigma(||\mu_{2},\mu_{3}||^{2})\) and \(|\mu_{0}|\ll 1.\) These figures include a pitchfork controller set \(T_{P}\), where two equilibria \(E_{\pm}\) collide with the origin. Hopf controller sets for the origin and \(E_{\pm}\) are denoted by \(T_{H}\), \(T_{H+}\) and \(T_{H-}\), according to Proposition 3.1 and Theorem 3.3. Each of the bifurcated limit cycles disappears when controller coefficients pass through the homoclinic controller sets \(T_{HmC}\) and \(T_{HmC\pm}\). More precisely, the limit cycles \(\mathscr{C}_{\pm}^{1}\) (bifurcated from \(E_{\pm}\)) grow in size and collide with the origin. These construct homoclinic cycles \(\Gamma_{\pm}\); see Figure 3(c). The next theorem deals with deriving the corresponding estimated controller sets \(T_{HmC\pm}\) and homoclinic orbits. 
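The sketch below indicates how a Figure 2-style diagram can be redrawn from these estimates: it traces, in the \((\mu_{2},\mu_{1})\)-plane with \(\mu_{0}=0\) and fixed \(\mu_{3}\), the sets \(T_{P}\) and \(T_{H}\) from (3.1), the saddle-connection estimate \(T_{SC}\) of Remark 3.4 and the leading terms of \(T_{HmC\pm}\) from (3.10) below. The values \(a_{1}=1\), \(b_{0}=-1\), \(\mu_{3}=0.1\) are illustrative choices and need not match those used for the published figure.

```python
# Sketch: leading-order controller set curves in the (mu2, mu1)-plane.
import numpy as np
import matplotlib.pyplot as plt

a1, b0, mu3 = 1.0, -1.0, 0.1                    # illustrative constants
mu2 = np.linspace(-0.3, 0.3, 400)
mu1n = np.linspace(-0.25, 0.0, 400)             # mu1 < 0 branch

plt.plot(mu2, -mu2**2, label=r'$T_P$')                          # (3.1)
plt.plot(np.zeros(200), np.linspace(0.0, 0.25, 200), label=r'$T_H$')
c = 9*np.sqrt(2)*np.pi/32
for s, lab in ((+1, r'$T_{HmC+}$'), (-1, r'$T_{HmC-}$')):       # (3.10), mu0=0
    plt.plot(8*b0/(5*a1)*mu1n + s*c*mu3*np.sqrt(-mu1n), mu1n, label=lab)
plt.plot(8*b0/(5*a1)*mu1n, mu1n, '--', label=r'$T_{SC}$')       # Remark 3.4
plt.xlabel(r'$\mu_2$'); plt.ylabel(r'$\mu_1$'); plt.legend(); plt.show()
```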
**Remark 3.4**.: Simultaneous collisions of \(\mathscr{C}_{+}^{1}\) and \(\mathscr{C}_{-}^{1}\) with the origin give rise to a saddle-connection (double homoclinic). This is the only possible dynamics for the equivariant cases; see [18]. When \(\mu_{0}=\sigma(|\mu_{1}|^{2})\),

\[T_{SC}:=\Big{\{}(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{2}=\tfrac{8b_{0}}{5a_{1}}\mu_{1}+\sqrt{-\mu_{1}}\sigma(|\mu_{1}|,|\tfrac{\mu_{0}{}^{2}}{\mu_{1}{}^{3}}|)\Big{\}}\]

is the estimated saddle-connection variety. This is a tuning controller coefficient manifold lying between (in the middle of) \(T_{HmC+}\) and \(T_{HmC-}\); see [18, Lemma 5.6] and equations (20). The relative geometry of these manifolds determines their validity. For example, when the limit cycles \(\mathscr{C}_{+}^{1}\) and \(\mathscr{C}_{-}^{1}\) have already disappeared through homoclinic bifurcations, the saddle-connection variety is no longer valid.

An efficient nonlinear time transformation method has been recently developed and applied to global homoclinic and heteroclinic bifurcation varieties of codimension two singularities [1, 2, 35]. This is an efficient alternative to the classical use of Melnikov functions; _e.g.,_ see [29, 32]. Both approaches have usually been applied with one small scaling variable. Since all parameters are scaled using one parameter, the approach typically leads to a one-dimensional transition variety and fits well within a codimension-two singularity. Transition varieties must have dimension three in order to partition the four-dimensional parameter space. Although the scaling constants play a role in accommodating the higher dimensional transition sets (_e.g.,_ see [36]), we include three scaling parameters \(\epsilon_{1},\epsilon_{2},\epsilon_{3}.\) We derive estimates of the controller sets for homoclinic and heteroclinic bifurcations. Our symbolic estimations are accurate enough for many control engineering applications. Higher order approximations than our derived formulas are also feasible, but they are beyond the scope of this paper; _e.g.,_ see [1, 2, 35, 36] for highly accurate one- and two-dimensional transition varieties. Symbolic estimations for these bifurcations are useful for an efficient management of the nearby oscillating dynamics.

Figure 1: Controller varieties \(\mathscr{B}\) and \(\mathscr{H}\) in equations (3.2) and the numerical steady-state bifurcation diagrams associated with equation (3.4) and controller coefficient regions I-IV.

Figure 2: Estimated critical controller sets for system (3) when \(\mu_{0}:=0\).

**Theorem 3.5** (Homoclinic cycles \(\Gamma_{\pm}\)).: _When \(a_{1}>0\) and \(\mu_{0}=\mathscr{o}(|\mu_{1}|^{2}),\) the bifurcated limit cycles disappear via two distinct quaternary homoclinic controller sets estimated by_

\[T_{HmC\pm}:=\left\{\mu\big{|}\,\mu_{2}=\frac{8b_{0}}{5a_{1}}\mu_{1}\pm\frac{9\sqrt{2}\pi}{32}\mu_{3}\sqrt{-\mu_{1}}\mp\frac{9\sqrt{2}\pi}{32}\frac{\mu_{0}}{\sqrt{-\mu_{1}}}+\sqrt{-\mu_{1}}\,\mathscr{o}\left(|\mu_{1}|,\left|\frac{\mu_{0}{}^{2}}{\mu_{1}{}^{3}}\right|\right)\right\}, \tag{3.10}\]

_for \(\mu=(\mu_{0},\mu_{1},\mu_{2},\mu_{3}).\) The leading estimated terms for the homoclinic cycles \(\Gamma_{\pm}\) give rise to an effective criterion for the magnitude control of the nearby oscillating dynamics. 
These are given by_ \[(x(\varphi),y(\varphi))=\left(-\frac{\sin^{2}(\varphi)\cos(\varphi)\sqrt{3- cos(2\varphi)}}{\sqrt{a_{1}}}\mu_{1},\pm\frac{\sqrt{2}}{2}\sqrt{-\mu_{1}}( \cos(2\varphi)-1)\right)+(\mathscr{o}(|\mu_{1}|^{\frac{3}{2}}),\mathscr{o}(| \mu_{1}|)),\] _for \(\varphi\in[0,\pi].\)_ Proof.: We apply a nonlinear time transformation method and include multiple scaling parameters \(\epsilon_{i}\) for \(i=1,2,3;\) see [2, 35, 36]. Namely, we use the rescaling transformations \(x=\epsilon_{1}{}^{2}\tilde{x},y=\epsilon_{1}\tilde{y},\) \[t=\epsilon_{1}{}^{-1}\tilde{t},\mu_{0}=\epsilon_{1}{}^{3}\left( \gamma_{1}+\gamma_{01}\epsilon_{1}+\gamma_{02}\epsilon_{2}\right),\mu_{1}= \epsilon_{1}{}^{2}\left(\gamma_{2}+\epsilon_{1}\gamma_{11}+\epsilon_{2} \gamma_{12}+\epsilon_{3}\gamma_{13}\right),\] \[\mu_{2}=\epsilon_{1}{}^{2}\gamma_{21}+\epsilon_{1}\epsilon_{2} \gamma_{22}+\epsilon_{1}\epsilon_{3}\gamma_{23}+\epsilon_{1}\mathscr{o}(|( \epsilon_{1},\epsilon_{2},\epsilon_{3})|^{2}),\mu_{3}=\epsilon_{1}\gamma_{31 }+\epsilon_{2}\gamma_{32}+\epsilon_{3}\gamma_{33}+\epsilon_{1}\epsilon_{2} \gamma_{34}. \tag{3.11}\] These transform the differential system (1.3) into \[\dot{\tilde{x}} = \gamma_{1}\left(\gamma_{2}\epsilon_{1}+\gamma_{02}\epsilon_{2} \right)+\gamma_{2}\left(1+\epsilon_{1}\gamma_{11}+\epsilon_{2}\gamma_{12}+ \epsilon_{3}\gamma_{13}\right)\tilde{y}+\left(\epsilon_{1}\gamma_{21}+ \epsilon_{2}\gamma_{22}+\epsilon_{3}\gamma_{23}\right)\tilde{x}+a_{1}\tilde{y }^{3} \tag{3.12}\] \[+\left(\epsilon_{1}\gamma_{31}+\epsilon_{2}\gamma_{32}+\epsilon_{ 3}\gamma_{33}+\epsilon_{1}\epsilon_{2}\gamma_{34}\right)\tilde{x}\tilde{y}+ \epsilon_{1}b_{0}\tilde{x}\tilde{y}^{2},\] \[\dot{\tilde{y}} = -\tilde{x}+\left(\epsilon_{1}\gamma_{21}+\epsilon_{2}\gamma_{22}+ \epsilon_{3}\gamma_{23}\right)\tilde{y}+\left(\epsilon_{1}\gamma_{31}+ \epsilon_{2}\gamma_{32}+\epsilon_{3}\gamma_{33}+\epsilon_{1}\epsilon_{2} \gamma_{34}\right)\tilde{y}^{2}+\epsilon_{1}b_{0}\tilde{y}^{3}.\] The unperturbed system, _i.e.,_ when \(\epsilon=(\epsilon_{1},\epsilon_{2},\epsilon_{3})=\mathbf{0},\) is a Hamiltonian system with Hamiltonian \(H=\gamma_{1}\tilde{y}+\frac{1}{2}\tilde{x}^{2}+\frac{1}{2}\gamma_{2}\tilde{y }^{2}+\frac{1}{4}a_{1}\tilde{y}^{4}.\) We further Taylor-expand the new state variables and a time-rescaling transformation in terms of the scaling parameters \(\epsilon_{i}\) for \(i=1,2,3\) as \[\tilde{x}(\varphi):=\tilde{x}_{0}(\varphi)+\sum\epsilon_{j}{}^{i} \tilde{x}_{ij}(\varphi),\quad\tilde{y}(\varphi):=\tilde{y}_{0}(\varphi)+\sum \epsilon_{j}{}^{i}\left(p_{ij}\cos(2\varphi)+q_{ij}\right),\] \[\tilde{t}=\Phi\tau,\quad\Phi:=\phi_{0}+\sum\epsilon_{j}{}^{i} \phi_{ij}, \tag{3.13}\] where the sum \(\sum\) without indices stands for the double sum \(\sum_{i=1}^{\infty}\sum_{j=1}^{3}\) and \(\varphi\in[0,\pi].\) Let \(\gamma_{1}:=0,\)\(\gamma_{01}:=0,\)\(\gamma_{02}:=0,\)\(\gamma_{2}:=-1,\)\(\gamma_{34}:=1.\) Then, Hamiltonian of the unperturbed system holds a homoclinic cycle that connects the stable and unstable manifolds of the origin, _i.e.,_ the homoclinic orbit follows \(H(\tilde{x},\tilde{y})=0\). When the rescaling variables \(\epsilon_{i}\) for \(i=1,2,3\) becomes non-zero, the homoclinic cycle still holds for a homoclinic variety of codimension-one in the parameter space. 
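The Hamiltonian structure claimed here is easy to verify symbolically; the sketch below (with \(\gamma_{1}=0\), \(\gamma_{2}=-1\) and \(a_{1}>0\), as chosen in this proof) checks that \(H\) is a first integral of the unperturbed field and that the zero-order orbit computed in the next step lies on the level set \(\{H=0\}\).

```python
# Sketch: the unperturbed field x' = -y + a1*y^3, y' = -x conserves
# H = x^2/2 - y^2/2 + a1*y^4/4, and the zero-order homoclinic approximation
# (x0, y0) derived below satisfies H = 0 along the whole orbit.
import sympy as sp

x, y, phi = sp.symbols('x y phi', real=True)
a1 = sp.symbols('a1', positive=True)
H = x**2/2 - y**2/2 + a1*y**4/4
dx, dy = -y + a1*y**3, -x
print(sp.simplify(sp.diff(H, x)*dx + sp.diff(H, y)*dy))       # 0: conserved

y0 = sp.sqrt(2)/(2*sp.sqrt(a1))*(sp.cos(2*phi) - 1)
x0 = sp.sin(phi)**2*sp.cos(phi)*sp.sqrt(3 - sp.cos(2*phi))/sp.sqrt(a1)
level = H.subs({x: x0, y: y0})
print([sp.N(level.subs({a1: 2, phi: t})) for t in (0.3, 1.1, 2.5)])  # ~0
```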
The idea of the nonlinear time transformation method is to iteratively calculate the homoclinic cycle and homoclinic variety in terms of powers of \(\epsilon_{i}.\) We here only deal with zero and first order approximations, _i.e.,_\((p_{0},q_{0},x_{0},\phi_{0})\) and \((p_{1j},q_{1j},x_{1j},\phi_{1,j})\) for \(j=1,2,3\). We remark that there is only a homoclinic cycle for system (3.12). However, this will turn out to be two homoclinic cycles \(\Gamma_{\pm}\) for (1.3), depending on the sign of \(\epsilon_{1}\) in (3.18). The zero order approximation is given by \((\tilde{x}_{0}(\varphi),\tilde{y}_{0}(\varphi))\), where we assume that \[\tilde{y}_{0}:=p_{0}\cos(2\varphi)+q_{0}\quad\text{ and }\quad\tilde{x}_{0}(0)= \tilde{x}_{0}(\tfrac{\pi}{2})=0. \tag{3.14}\] Hence, \((\tilde{y}_{0}(0),\tilde{y}_{0}(\tfrac{\pi}{2}))=(p_{0}+q_{0},q_{0}-p_{0}).\) Since Hamiltonian is constant over the homoclinic cycle, we have \(H(\tilde{x}_{0}(\tfrac{\pi}{2}),\tilde{y}_{0}(\tfrac{\pi}{2}))=H(0,p_{0}+q_{0})\). Furthermore, \(\frac{\partial H}{\partial\tilde{y}}(0,p_{0}+q_{0})=0\) due to the fact that \((\tilde{x}_{0}(0),\tilde{y}_{0}(0))\) is an equilibrium for the unperturbed Hamiltonian system. These equations give rise to \[p_{0}=\tfrac{\sqrt{2}}{2\sqrt{a_{1}}},\quad q_{0}=-\tfrac{\sqrt{ 2}}{2\sqrt{a_{1}}},\qquad\tilde{y}_{0}=\tfrac{\sqrt{2}}{2\sqrt{a_{1}}}\cos(2 \varphi)-\tfrac{\sqrt{2}}{2\sqrt{a_{1}}},\] \[\tilde{x}_{0}=\pm\tfrac{\sin^{2}(\varphi)\cos(\varphi)\sqrt{3- cos(2\varphi)}}{\sqrt{a_{1}}},\quad\text{ and }\quad\phi_{0}(\varphi):=-\tfrac{\tilde{x}_{0}}{\tilde{y}_{0}^{ \prime}}.\] Let \(q_{1j}:=0\) for \(j=1,2,3.\) Then, \(\tilde{y}_{11}=p_{11}\cos(2\varphi)\), \(\tilde{y}_{12}=p_{12}\cos(2\varphi)\), \(\tilde{y}_{13}=p_{13}\cos(2\varphi)\) and the first-order approximation follows \[\tilde{x}=\tilde{x}_{0}+\epsilon_{1}\tilde{x}_{11}+\epsilon_{2} \tilde{x}_{12}+\epsilon_{3}\tilde{x}_{13},\;\tilde{y}=\tilde{y}_{0}+\epsilon _{1}p_{11}\cos(2\varphi)+\epsilon_{2}p_{12}\cos(2\varphi)+\epsilon_{3}p_{13} \cos(2\varphi). \tag{3.15}\] Next, the terms of the first-order in terms of \(\epsilon_{i}\) for \(i=1,2,3\) in \(\phi\dot{x}\) give rise to \[\phi_{0}x_{11}^{\prime}+p_{11}\cos\left(2\varphi\right)-x_{0} \gamma_{21}-x_{0}y_{0}\gamma_{31}-y_{0}\gamma_{11}-\gamma_{01}-3a_{1}{y_{0}}^ {2}p_{11}\cos\left(2\varphi\right)+\phi_{11}x_{0}^{\prime}-b_{0}x_{0}{y_{0}}^ {2}=0,\] \[\phi_{0}x_{1i}^{\prime}+p_{1i}\cos\left(2\varphi\right)-x_{0} \gamma_{3(i+1)}-x_{0}y_{0}\gamma_{4(i+1)}-y_{0}\gamma_{2(i+1)}\] \[-\gamma_{1(i+1)}-3a_{1}{y_{0}}^{2}p_{1i}\cos\left(2\varphi \right)+\phi_{1i}x_{0}^{\prime}=0, \tag{3.16}\] see also [35, Equation 2.22a and 2.22b]. Now consider the first-order \(\epsilon_{i}\)-terms in \(\phi\dot{y}\) along with equations (3.16). 
By eliminating \(\phi_{1j}\)-terms from these equations, an integrating factor and an integration, similar to the proof of [35, Equation 2.30], we derive \[\int_{0}^{\varphi}y_{0}^{\prime}\big{(}y_{0}\gamma_{11}-p_{12} \cos\left(2\varphi\right)+x_{0}\gamma_{21}+y_{0}x_{0}\gamma_{31}+b_{0}x_{0}{y _{0}}^{2}+\gamma_{01}+3a_{1}{y_{0}}^{2}p_{12}\cos\left(2\varphi\right)\big{)}d\varphi\] \[+x_{0}x_{11}+y_{11}g(y_{0})+\int_{0}^{\varphi}x_{0}^{\prime} \left({y_{0}}^{2}\gamma_{31}+\gamma_{21}y_{0}+b_{0}{y_{0}}^{3}\right)d\varphi=0, \tag{3.17}\] \[\int_{0}^{\varphi}y_{0}^{\prime}\big{(}y_{0}\gamma_{2(i+1)}-p_{1i} \cos\left(2\varphi\right)+x_{0}\gamma_{3(i+1)}+y_{0}x_{0}\gamma_{4(i+1)}+ \gamma_{1(i+1)}+3a_{1}{y_{0}}^{2}p_{1i}\cos(2\varphi)\big{)}d\varphi\] \[+x_{0}x_{1i}+y_{1i}g(y_{0})+\int_{0}^{\varphi}x_{0}^{\prime} \left({y_{0}}^{2}\gamma_{4(i+1)}+\gamma_{3(i+1)}y_{0}\right)d\varphi=0,\text{ for }i=2,3.\] Evaluating equation (3.16) at \(\varphi=\pi\) and (3.17) at \(\varphi=\pi,\pi/2\), we obtain nine number of linear equations. These give rise to the scaling parameters \[\gamma_{21}=-\tfrac{8b_{0}}{5a_{1}}+\tfrac{9\sqrt{2}\pi}{32}\gamma_{31},\quad \gamma_{22}=\tfrac{9\sqrt{2}\pi}{32}\gamma_{32},\quad\gamma_{23}=\tfrac{9 \sqrt{2}\pi}{32}\gamma_{33}, \tag{3.18}\] \[\gamma_{11}=0,\quad\gamma_{12}=0,\quad\gamma_{13}=0,\quad\epsilon_{1}:=\pm \sqrt{-\mu_{1}}.\] Finally, we substitute these into the equation for \(\mu_{2}\) in (3.11) and derive transition sets \(T_{HmC\pm}\) given in equation (3.10). Theorem 3.3 implies that control coefficients \((\mu_{0},\mu_{1},\mu_{2},\mu_{3})\) for \(|\mu_{0}|=\mathscr{o}(||(\mu_{1},\mu_{2},\mu_{3})||^{4})\), \(|\mu_{3}|=\mathscr{o}(||\mu_{1},\mu_{2}^{2}||)\) are not enough for fully unfolding a Bautin bifurcation around \(E_{\pm}\). Now we show that system (1.3) undergoes full Bautin bifurcation scenarios (in particular, bifurcations of two limit cycles and saddle-node bifurcation of limit cycles) when these restrictions on control coefficients are removed. For the distinction of different bifurcation scenarios, we will instead denote the equilibria with \(E_{\pm}^{*}\) in Lemma 3.6, Theorem 3.7 and Remark 3.8. **Lemma 3.6** (Critical controller sets and normalized amplitude equation for generalised Hopf singularity).: _Let \(9{\mu_{3}}^{2}\geq 32b_{0}\mu_{2}\), \(\delta:=\sqrt{9{\mu_{3}}^{2}-32b_{0}\mu_{2}},\) and \(\zeta_{\pm}\) be given by_ \[\zeta_{\pm} =\mu_{0}-\tfrac{\mu_{3}(7{\mu_{2}}^{2}+12{\mu_{1}})}{32b_{0}}+ \tfrac{9{\mu_{2}}{\mu_{3}}(3{\mu_{3}}^{2}+16a_{1})}{256{b_{0}}^{2}}-\tfrac{27{ \mu_{3}}^{3}({\mu_{2}}^{2}+16a_{1})}{2048{b_{0}}^{3}} \tag{3.19}\] \[\pm\tfrac{\delta\left(9{{\mu_{3}}^{4}}-56b_{0}{\mu_{2}}{\mu_{3}} ^{2}+64{b_{0}}^{2}{\mu_{2}}^{2}+144a_{1}{\mu_{3}}^{2}-128a_{1}b_{0}{\mu_{2}}+2 56{b_{0}}^{2}{\mu_{1}}\right)}{2048{b_{0}}^{3}}.\] _Further, let \(\kappa:=432a_{1}{\mu_{3}}^{2}-3{\mu_{3}}\delta\left(48a_{1}+16b_{0}{\mu_{2}}- 3{\mu_{3}}^{2}\right)-768a_{1}b_{0}{\mu_{2}}+512{b_{0}}^{2}{\mu_{1}}+192b_{0} {\mu_{2}}{\mu_{3}}^{2}-384{b_{0}}^{2}{\mu_{2}}^{2}-27{\mu_{3}}^{4}\) and \(\zeta=\mathscr{o}(\kappa)\). Then,_ 1. _There are two generalized Hopf singularities for_ \(E_{\pm}^{*}\) _at_ \(\zeta_{\pm}=0\)_._ 2. _These generalized Hopf singularities are determined by the cubic jet of the system (_1.3_)._ 3. 
_The normalized amplitude equation of the cubic truncated system is given by_ \[\dot{\rho}_{\pm}=\pm 256\,\delta\zeta_{\pm}\rho+\tfrac{\kappa}{2b_{0}}\rho^{3}-6 4a_{1}b_{0}\rho^{5}+\mathscr{o}(\rho^{7}).\] (3.20) Proof.: Since we are dealing with a Bautin bifurcation, we initially consider the truncated normal form system (1.3) up to degree five, that is, \[\dot{x}=\mu_{0}+\mu_{1}y+\mu_{2}x+\mu_{3}xy+a_{0}y^{3}+b_{0}xy^{2}+b_{1}xy^{4},\,\dot{y}=-x+\mu_{2}y+\mu_{3}y^{2}+b_{0}y^{3}+b_{1}y^{5}. \tag{3.21}\] We can show that the corresponding second Lyapunov coefficients is estimated by \[\tfrac{3}{256}b_{0}b_{1}\kappa^{2}-64a_{0}b_{0}-15a_{0}b_{0}b_{1}\delta^{2}+ \tfrac{53}{16}b_{0}\delta^{2}.\] For all values of \(b_{1}\) and sufficiently small choices for the parameters, we can ensure that the second Lyapunov coefficient is always non-zero. In other words, Bautin bifurcation here is completely determined by the cubic terms of the differential system (1.3). We can thus truncate the normal form system (1.3) at degree three, _i.e.,_ let \(b_{1}:=0\). Now recall \(d_{1}\) and \(d_{2}\) from the first column of Routh table in equations (3.6). We solve \(\dot{y}=0\) for \(x\) and substitute it into \(\dot{x}=0\) to obtain \[\mu_{0}+\mu_{1}y+{\mu_{2}}^{2}y+2\mu_{2}\mu_{3}y^{2}+2b_{0}\mu_{2}y^{3}+{\mu_ {3}}^{2}y^{3}+2b_{0}\mu_{3}y^{4}+a_{1}y^{3}+{b_{0}}^{2}y^{5}=0. \tag{3.22}\] Hopf bifurcation occurs when \(d_{1}=0\) and \(d_{2}>0\). Thus, there are two local Hopf singularities at \[E_{\pm}^{*}:=\left(x_{\pm}^{*},y_{\pm}^{*}\right),\quad\text{ for }\ y_{\pm}^{*}:=\tfrac{-3\mu_{3}\pm\delta}{8b_{0}}\text{ and }\ x_{\pm}^{*}:=\mu_{2}y_{\pm}^{*}+\mu_{3}{y_{\pm}^{*}}^{2}+{b_{0}}y_{\pm}^{* }{}^{3}.\] Their associated controller sets follow (3.19). Using shift of coordinates \((x,y)-(x_{\pm}^{*},y_{\pm}^{*})\) on equation (1.3) and transforming the linear part into Jordan canonical form, we obtain \(\dot{x}=-\tfrac{1}{32}\sqrt{\kappa}y\pm\tfrac{3}{8}\delta x^{2}-\tfrac{1}{8} \mu_{3}x^{2}+b_{0}x^{3}\) and \[\dot{y}=\tfrac{1}{32}\sqrt{\kappa}x+\tfrac{64a_{1}b_{0}x^{3}-64b_{0}\eta^{2}+9 {\mu_{3}}^{3}x^{2}\pm 8b_{0}\delta{\mu_{2}}x^{2}\mp 3\delta{\mu_{3}}^{2}x^{2}-72a_{1}{ \mu_{3}}x^{2}-40x^{2}b_{0}{\mu_{2}}{\mu_{3}}-24a_{1}\delta x^{2}}{2b_{0}\sqrt{ \kappa}}\] \[+\tfrac{1}{4}xy\left(4b_{0}x+\mu_{3}\pm\delta\right).\] A computer programming (_e.g.,_[16]) gives rise to the normalized equation (3.20). **Theorem 3.7** (Bautin bifurcations from \(E_{\pm}^{*}\)).: _Let \(9{\mu_{3}}^{2}\geq 32b_{0}\mu_{2}\) and \(\zeta_{\pm}=\varrho(\kappa).\) Then for \(a_{1}<0,\) there are one supercritical and one subcritical Hopf controller sets estimated by_ \[T_{H\pm}^{Sup}:=\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})\,|\,\zeta_{+}= 0,b_{0}<0\}\ \ \text{and}\] \[T_{H\pm}^{Sub}:=\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})\,|\,\zeta_{- }=0,b_{0}>0\}\,. \tag{3.23}\] _When controller coefficients cross \(T_{H+}^{Sup_{1}}\) given by (3.19) and \(\zeta_{+}=\varrho(\kappa),\) one stable limit cycle \(\mathscr{C}_{+}^{1}\) bifurcates from \(E_{+}^{*}.\) As for \(T_{H-}^{Sub_{1}}\) when \(\zeta_{-}=\varrho(\kappa),\) the bifurcation causes an unstable local limit cycle \(\mathscr{C}_{-}^{1}\) encircling \(E_{-}^{*}.\) Both of these limit cycles are considered as tertiary limit cycles. Two simultaneous limit cycles surrounding \(E_{+}^{*}\) (or \(E_{-}^{*}\)) do not appear when \(a_{1}<0\) and naturally, saddle-nodes of limit cycles does not occur in this case. 
For \(a_{1}>0\) and \(\zeta_{\pm}=\varrho(\kappa),\) two subcritical and supercritical Hopf controller sets occur through the estimated manifolds_ \[T_{H\pm}^{Sub_{1}}=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3}) \right|26214a_{1}\delta\zeta_{\pm}-b_{0}\kappa^{2}=0,\;\pm b_{0}\zeta_{\pm}>0,b_{0}<0,\text{ and }\;a_{1}>0\right\}, \tag{3.24}\] \[T_{H\pm}^{Sup_{1}}=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3}) \right|26214a_{1}\delta\zeta_{\pm}-b_{0}\kappa^{2}=0,\;\pm b_{0}\zeta_{\pm}>0,b_{0}>0,\text{ and }\;a_{1}>0\right\},\] _and_ \[T_{H\pm}^{Sup_{2}}=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3}) \right|26214a_{1}\delta\zeta_{\pm}-b_{0}\kappa^{2}>0,\;\zeta_{-}=0,b_{0}<0, \text{ and }\;a_{1}>0\right\}, \tag{3.25}\] \[T_{H\pm}^{Sub_{2}}=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3}) \right|26214a_{1}\delta\zeta_{\pm}-b_{0}\kappa^{2}>0,\;\zeta_{+}=0,b_{0}>0, \text{ and }\;a_{1}>0\right\}.\] _When controller coefficients are close to \(T_{H+}^{Sub_{1}}\) (\(T_{H-}^{Sup_{1}}\)) and \(a_{1}b_{0}\zeta_{+}>0\) (\(a_{1}b_{0}\zeta_{-}<0\)), we have only one tertiary limit cycle \(\mathscr{C}_{+}^{1}\) (\(\mathscr{C}_{-}^{1}\)) encircling \(E_{+}^{*}\) (\(E_{-}^{*}\), respectively). However, a second small limit cycle \(\mathscr{C}_{+}^{2}\) (\(\mathscr{C}_{-}^{2}\)) bifurcates from \(E_{+}^{*}\) (\(E_{-}^{*}\)) as soon as control coefficients cross \(T_{H+}^{Sub_{2}}\) (\(T_{H-}^{Sup_{2}}\)) and \(a_{1}b_{0}\zeta_{+}\) (\(a_{1}b_{0}\zeta_{-}\)) becomes negative (positive). Here, we have two pairs of limit cycles (\(\mathscr{C}_{\pm}^{1}\), \(\mathscr{C}_{\pm}^{2}\)) surrounding \(E_{\pm}^{*}\), where \(\mathscr{C}_{\pm}^{2}\) lives inside \(\mathscr{C}_{\pm}^{1}\). Two more estimated critical controller sets follow_ \[T_{SNL}^{\pm}:=\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})\big{|}\,26214a_{1}\delta \zeta_{\pm}-b_{0}\kappa^{2}=0,\pm b_{0}\zeta_{\pm}<0,a_{1}>0\}, \tag{3.26}\] _where the two limit cycles (\(\mathscr{C}_{\pm}^{1}\), \(\mathscr{C}_{\pm}^{2}\)) coalesce and disappear as a saddle-node bifurcation of limit cycles._ Proof.: Recall the 3-jet normal form amplitude equation (3.20). Let \[p_{\pm}(\rho):=A\rho^{2}+B\rho+C_{\pm},\ \ \text{where}\ \ A:=-64a_{1}b_{0},\qquad B:= \tfrac{1}{2}b_{0}\kappa,\qquad C_{\pm}:=\pm 256\delta\zeta_{\pm}.\] Positive roots of \(p_{\pm}\) correspond with the limit cycles bifurcated from \(E_{\pm}^{*}.\) We first remark that \(\delta\) is always non-negative. Since \(\kappa>0,\)\(\frac{-B}{A}=\frac{\kappa}{128a_{1}}\) is always negative for \(a_{1}<0\) and at most one limit cycle can bifurcate. In the case of \(a_{1}<0,\) we have one limit cycle for \(\frac{C_{\pm}}{A}=\mp\frac{4\delta\zeta_{\pm}}{a_{1}b_{0}}<0,\) and no positive root for \(\text{sign}(\frac{C_{\pm}}{A})=\text{sign}(\pm b_{0}\zeta_{\pm})>0.\) When the limit cycle exists, _i.e.,_\(\pm b_{0}\zeta_{\pm}<0,\) the limit cycle is asymptotically stable for \(\pm\zeta_{\pm}>0.\) Therefore, \(\zeta_{\pm}=0\) is a critical controller set for the appearance of a limit cycle and it is supercritical when \(b_{0}<0.\) This critical controller set is subcritical for positive values of \(b_{0}\). These arguments justify \(T_{H\pm}^{Sup}\) and \(T_{H\pm}^{Sub}\) in (3.23). 
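The root counting that drives this argument, and the \(a_{1}>0\) case treated next, can be made concrete numerically: limit cycles around \(E_{\pm}^{*}\) correspond to positive roots \(R=\rho^{2}\) of the quadratic \(AR^{2}+BR+C_{\pm}\) read off from (3.20). The small helper below counts such roots for user-supplied coefficient values; the sample numbers are illustrative and are not computed from (3.19).

```python
# Sketch: count the limit cycles around E_+^* or E_-^* as the number of
# positive roots R = rho^2 of A*R^2 + B*R + C (coefficients from (3.20)).
import numpy as np

def limit_cycle_count(A, B, C, tol=1e-12):
    if B*B - 4*A*C < -tol:
        return 0                                   # complex roots: no cycle
    roots = np.roots([A, B, C])
    return int(np.sum(np.isreal(roots) & (roots.real > tol)))

# Illustrative values with A < 0, B > 0, C < 0 and positive discriminant:
print(limit_cycle_count(-1.0, 3.0, -1.0))          # 2: a pair of nested cycles
print(limit_cycle_count(-1.0, 3.0, 1.0))           # 1: a single cycle
```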
Let \(a_{1}>0.\) Hence, \(\frac{-B}{A}>0.\) Assume that \(B^{2}-4AC_{\pm}=\pm\frac{b_{0}\left(102\times 257a_{1}\delta\zeta_{\pm}-b_{0} \kappa^{2}\right)}{4}>0.\) When \(\frac{C_{\pm}}{A}<0,\) the polynomial \(p_{\pm}\) has exactly one positive root while for \(\frac{C_{\pm}}{A}>0,\) we have two positive roots for \(p_{\pm}.\) Therefore, \(\mbox{\it sign}(\frac{C_{\pm}}{A})=\mbox{\it sign}(\mp b_{0}\zeta_{\pm})<0\) and \(B^{2}-4AC_{\pm}=0\) is a critical controller manifold where one limit cycle bifurcates from \(E_{\pm}^{*}.\) The bifurcated limit cycle is asymptotically stable when \(C_{\pm}>0;\)_i.e.,_\(b_{0}>0.\) Thus, \(\pm b_{0}\zeta_{\pm}>0\) and \(B^{2}-4AC_{\pm}=0\) gives rise to a supercritical Hopf controller set \(T_{H\pm}^{Sup_{1}}\) in (3.24), where a small limit cycle bifurcates from \(E_{\pm}^{*}.\) Furthermore, \(\frac{C_{\pm}}{A}=0\) and \(B^{2}-4AC_{\pm}>0\) is another critical controller set where one small limit cycles is born in the interior of an already existed larger limit cycle. The new small bifurcated limit cycle is asymptotically stable when \(\mp b_{0}\zeta_{\pm}>0\) and \(\pm\zeta_{\pm}>0.\) This is equivalent with \(b_{0}<0\). These conditions determine the supercritical controller set \(T_{H\pm}^{Sup_{2}}\) in equation (3.25). Thus, the argument for subcritical controller manifold \(T_{H\pm}^{Sub_{2}}\) for \(b_{0}>0\) is similar. For \(\frac{C_{\pm}}{A}>0\), we have two different cases: 1. We have two positive roots for \(B^{2}-4AC_{\pm}>0.\) 2. There is no limit cycle for \(B^{2}-4AC_{\pm}<0.\) Hence, when \(\pm b_{0}\zeta_{\pm}<0\) and \(a_{1}>0\) hold, \(B^{2}-4AC_{\pm}=0\) is a saddle-node controller set \(T_{SNL}^{\pm}\) of limit cycles in (3.26), where two limit cycles coalesce and disappear. **Remark 3.8** (Basin of attraction for stabilization approach via bifurcated stable limit cycles).: Bautin bifurcation described in Theorem 3.7 includes supercritical and subcritical bifurcations of tertiary limit cycles \(\mathscr{C}_{\pm}^{1}\) and \(\mathscr{C}_{\pm}^{2}\). They can be used to stabilize the system when the limit cycle is stable. However, it is important to notice about their basin of attractions. When \(\mathscr{C}_{\pm}^{2}\) is stable, the basin of attraction is the interior of \(\mathscr{C}_{\pm}^{1}\setminus\{E_{\pm}^{*}\}\). For example, Figure 4(b) illustrates the stable limit cycle \(\mathscr{C}_{+}^{2}\). Its basin of attraction is the interior of \(\mathscr{C}_{+}^{1}\) (except \(E_{+}^{*}\)). The basin of attraction for the stable cases of \(\mathscr{C}_{\pm}^{1}\) includes the region encircled by \(\mathscr{C}_{\pm}^{1}\) and \(\mathscr{C}_{\pm}^{2}\). However, the complete description for basin of attraction for the exterior of \(\mathscr{C}_{\pm}^{1}\) depends on the dynamics of the system. For instance, Figure 6(d) demonstrates stable limit cycle \(\mathscr{C}_{+}^{1}\) living inside unstable limit cycle \(\mathscr{C}_{0}\). The basin of attraction for \(\mathscr{C}_{+}^{1}\) consists of the region between the stable manifolds (red and blue curves in Figure 6(d)) of the origin (excluding the equilibria). This region includes the interior of \(\mathscr{C}_{+}^{1}\). The limit cycle \(\mathscr{C}_{0}\) grows in size to collide with either of the secondary equilibria as controller coefficients go away from the corresponding Hopf controller set (3.1), _i.e.,_\(\frac{\mu_{2}}{b_{0}}\) decreases. 
The limit cycle collides with either \(E_{+}\) or \(E_{-}\) giving rise to homoclinic cycles \(\Lambda_{\pm}\) or simultaneously collides with both of them. The latter leads to a heteroclinic cycle \(\Lambda\); see Figure 3(d). The following two theorems deal with heteroclinic and homoclinic bifurcations. The limit cycle \(\mathscr{C}_{0}\) may alternatively collide with the origin and disappear through a homoclinic bifurcation; _e.g.,_ see Figure 6(e) where limit cycle \(\mathscr{C}_{0}\) collides with the saddle and disappear as in Figure 6(f). Estimated controller set is then given by \(T_{HmC}:=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{2}=\frac{8b_{0}}{5a_ {1}}\mu_{1}\right\}.\) **Theorem 3.9** (Heteroclinic cycle \(\Lambda\) when \(a_{1}<0\)).: _Let \(|\mu_{0}|=\sigma(|\mu_{1}|^{2})\) and \(a_{1}<0.\) Then, there is a heteroclinic cycle \(\Lambda.\) This connects the equilibrium \(E_{+}\) with the saddle \(E_{-}\). The corresponding heteroclinic bifurcation occurs at the heteroclinic controller manifold approximated by_ \[T_{HtC}:=\left\{(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{2}=\frac{2b_{0}}{5a_ {1}}\mu_{1}+\frac{9}{16}\mu_{3}\sqrt{\mu_{1}}-\frac{9}{16}\frac{\mu_{0}}{\sqrt {\mu_{1}}}+\sigma(||(\mu_{1},\mu_{3})||^{\frac{3}{2}})\right\}. \tag{3.27}\] _The most leading estimated terms for \(\Lambda\) are_ \[(x,y)=\left(\mu_{1}\frac{\sqrt{2}}{2}\sin(2\varphi)\sqrt{a_{1}\cos^{2}(2 \varphi)+2+a_{1}},\frac{\sqrt{\mu_{1}}}{\sqrt{-a_{1}}}\cos(2\varphi)\right)+( \sigma(|\mu_{1}|^{\frac{3}{2}}),\sigma(|\mu_{1}|)).\] Proof.: Here, we use the rescaling transformations (3.11) and (3.13) when \(\gamma_{1}:=0\), \(\gamma_{02}:=0\), \(\gamma_{2}:=1\), \(\gamma_{11}:=\gamma_{12}:=\gamma_{13}:=0\), and \(\gamma_{34}:=1.\) The unperturbed heteroclinic orbit connects the two saddles \(\left(0,\pm\frac{1}{\sqrt{-a_{1}}}\right)\). We assume that equations (3.14) hold. Similar to [35, Equation 2.11], we have \[y_{0}(0)=-\frac{1}{\sqrt{-a_{1}}}=p_{0}+q_{0},y_{0}(\frac{\pi}{2 })=\frac{1}{\sqrt{-a_{1}}}=q_{0}-p_{0},\text{ and thus,}\] \[\tilde{y}_{0}=p_{0}\cos(2\varphi)+q_{0}=\frac{1}{\sqrt{-a_{1}}} \cos(2\varphi),q_{0}=0.\] We compute the unperturbed heteroclinic orbit via \(H(\tilde{x}_{0},\tilde{y}_{0})=H(0,\pm(-a_{1})^{\frac{-1}{2}})\) where \(H(\tilde{x},\tilde{y})=\frac{1}{2}\tilde{x}^{2}-\frac{1}{2}\tilde{y}^{2}+ \frac{1}{4}a_{1}\tilde{y}^{4}.\) Therefore, \(\tilde{x}_{0}=\frac{\sqrt{2}}{2}\sin(2\varphi)\sqrt{a_{1}\cos^{2}(2\varphi)+2 +a_{1}}.\) The first-order terms for \(i=1,2,3\) in \(\phi\dot{x}\) follow equations (3.16) and equations (3.17) hold. 
We need the first-order terms in \(\phi\dot{y}\) given by (see [35, Equations 2.22a and 2.22b])

\[\phi_{0}y_{11}^{\prime}-\gamma_{21}y_{0}-{y_{0}}^{2}\gamma_{31}-b_{0}{y_{0}}^{3}+\phi_{11}y_{0}^{\prime}=0,\qquad\phi_{0}y_{1i}^{\prime}-\gamma_{3(i+1)}y_{0}-{y_{0}}^{2}\gamma_{4(i+1)}+\phi_{1i}y_{0}^{\prime}=0, \tag{3.28}\]

for \(i=2,3.\) We evaluate equations (3.16) and (3.28) at \(\varphi=0,\frac{\pi}{2},\) while equations (3.17) are computed at \(\varphi=\pi/2.\) These lead to fifteen linear equations and \(\gamma_{21}=\frac{2b_{0}}{5a_{1}}+\frac{9}{16}\gamma_{31},\) \(\gamma_{32}=\frac{16}{9}\gamma_{22},\) and \(\gamma_{33}=\frac{16}{9}\gamma_{23}.\) Thus, the transition varieties (3.27) are derived by substituting these values into the rescaling transformation for \(\mu_{2}.\)

**Theorem 3.10** (Homoclinic controller manifolds for \(\Lambda_{\pm}\)).: _Let \(a_{1}<0.\) There are two homoclinic cycles \(\Lambda_{\pm}\) connecting the stable and unstable manifolds of \(E_{\pm}\) at homoclinic controller manifolds estimated by_

\[T^{\pm}_{HmC}:=\Big{\{}(\mu_{0},\mu_{1},\mu_{2},\mu_{3})|\,\mu_{1}=\frac{10^{\frac{2}{3}}}{(-a_{1})^{\frac{2}{2}}}\mu_{0}{}^{\frac{2}{3}}-\frac{49.19204541}{(-a_{1})^{\frac{2}{2}}}b_{0}\mu_{0}-\frac{8.203865604}{10^{\frac{2}{3}}(-a_{1})^{\frac{1}{6}}}\mu_{2}\mu_{0}{}^{\frac{1}{3}}+\frac{4.355675048}{10^{\frac{2}{3}}(-a_{1})^{3}}\mu_{3}\mu_{0}{}^{\frac{2}{3}}\Big{\}}. \tag{3.29}\]

_The homoclinic cycle \(\Lambda_{+}\) occurs when \(\mu_{0}>0\) while \(\Lambda_{-}\) corresponds with negative values of \(\mu_{0}\). The leading estimated terms for the \((x,y)\)-coordinates of the homoclinic cycles \(\Lambda_{\pm}\) are_

\[\mp 0.7157063998\cos(2\varphi)\mp 0.2299428741\qquad\qquad\text{and}\]
\[\mp 0.3622053022\sqrt{2}\sin(\varphi)\sin(2\varphi)\sqrt{\cos(2\varphi)+2.28512403471548},\quad\text{ for }\varphi\in[0,\pi],\]

_respectively. This is useful for the management of its nearby oscillating dynamics._

Figure 4: Numerical phase portraits 4(a)-4(g) are associated with regions \(a\)-\(h\) in Figure 3(a) and system (1.3).

Proof.: We first use the transformations \(x=(-a_{1})^{\frac{3}{2}}\,\hat{x},\) \(y=(-a_{1})^{\frac{1}{2}}\,\hat{y},\) and the time rescaling \(t=-\frac{1}{a_{1}}\tau\) to change the coefficient \(a_{1}\) to \(-1\). Then, we have

\[\dot{\hat{x}}=(-a_{1})^{\frac{-5}{2}}\,\mu_{0}+(-a_{1})^{-2}\,\mu_{1}\hat{y}+(-a_{1})^{-1}\,\mu_{2}\hat{x}-\hat{y}^{3}+(-a_{1})^{\frac{-1}{2}}\,\mu_{3}\hat{x}\hat{y}+b_{0}\hat{x}\hat{y}^{2}, \tag{3.30}\]
\[\dot{\hat{y}}=-\hat{x}+(-a_{1})^{-1}\,\mu_{2}\hat{y}+(-a_{1})^{\frac{-1}{2}}\,\mu_{3}\hat{y}^{2}+b_{0}\hat{y}^{3}.\]

Let \(\mu_{0}^{*}:=(-a_{1})^{\frac{-5}{2}}\,\mu_{0},\,\mu_{1}^{*}:=(-a_{1})^{-2}\,\mu_{1},\,\mu_{2}^{*}:=(-a_{1})^{-1}\,\mu_{2},\,\mu_{3}^{*}:=(-a_{1})^{\frac{-1}{2}}\,\mu_{3}.\) Next, we replace \(\mu_{i}^{*}\) with \(\mu_{i}\) and \((\hat{x},\hat{y})\) with \((x,y)\) for simplicity. Now apply the rescaling transformations (3.11) and the expansion (3.13) when

\[\gamma_{1}=\pm 0.1,\gamma_{01}=\gamma_{02}=0,\gamma_{2}=-1,\gamma_{11}=\gamma_{13}=0,\gamma_{12}=1,\gamma_{21}=\gamma_{22}=0,\gamma_{34}=0. \tag{3.31}\]

Recall that the unperturbed system is Hamiltonian and admits a homoclinic cycle \(\Lambda_{+}\) for \(\gamma_{1}>0\). This connects the stable and unstable manifolds of the saddle \(E_{+}.\) The homoclinic cycle \(\Lambda_{-}\) occurs for \(\gamma_{1}<0\) and corresponds with \(E_{-}\). 
Following the proof of Theorem 3.5, we apply equations (3.14), \((\tilde{y}_{0}(0),\tilde{y}_{0}(\frac{\pi}{2}))=(p_{0}+q_{0},q_{0}-p_{0})\), \(H(\tilde{x}_{0}(\frac{\pi}{2}),\tilde{y}_{0}(\frac{\pi}{2}))=H(0,p_{0}+q_{0})\), and \(\frac{\partial H}{\partial\tilde{y}}(0,p_{0}+q_{0})=0\) to obtain

\[\tilde{y}_{0}=\mp 0.7157063998\cos(2\varphi)\mp 0.2299428741,\ p_{0}=\mp 0.7157063998,\ q_{0}=\mp 0.2299428741,\]

where \(\tilde{x}_{0}\) is obtained from \(H(\tilde{x}_{0},\tilde{y}_{0})=\frac{1}{2}\tilde{x_{0}}^{2}-\frac{1}{2}\tilde{y_{0}}^{2}+\frac{1}{4}a_{1}\tilde{y_{0}}^{4}\pm\frac{1}{10}\tilde{y}_{0}=H(0,p_{0}+q_{0}).\) Here, equations (3.16) and (3.17) hold and we evaluate them at \(\varphi=\pi\) and \(\varphi=\pi,\pi/2\), respectively. We obtain \(\gamma_{23}=\pm 0.2464356892\gamma_{33}\) and \(\gamma_{31}:=\pm 1.129378222b_{0}.\) A substitution into the rescaling transformations completes the proof.

**Remark 3.11** (Equilibria \(E_{\pm}\) are initially encircled by limit cycles \(\mathscr{C}_{\pm}^{1}\) and \(\mathscr{C}_{\pm}^{2}\), respectively).: When the limit cycles \(\mathscr{C}_{\pm}^{1}\) are bifurcated, \(\mathscr{C}_{+}^{1}\) and \(\mathscr{C}_{+}^{2}\) encircle the secondary equilibrium \(E_{+}\) while \(\mathscr{C}_{-}^{1}\) and \(\mathscr{C}_{-}^{2}\) surround \(E_{-}\). However, steady-state bifurcations may lead to disappearances and appearances of \(E_{\pm}\) inside these limit cycles. For instance, Figure 4(h) illustrates the unstable limit cycle \(\mathscr{C}_{+}^{1}\) encircling the spiral sink \(E_{-}.\) This is because \(\mathscr{C}_{+}^{1}\) and \(\mathscr{C}_{+}^{2}\) are initially bifurcated through a saddle-node bifurcation of limit cycles and surround \(E_{+}\) for controller coefficients from region \(b\) in Figure 3(a). The limit cycle \(\mathscr{C}_{+}^{2}\) disappears through a subcritical Hopf bifurcation in Figure 4(c) and next, two equilibria (\(E_{-}\) and a saddle) are born through a saddle-node bifurcation at \(T_{SN}^{-}\). Then, \(E_{+}\) and the saddle point disappear at the controller manifold \(T_{SN}^{+}.\) This leaves \(E_{-}\) as the only equilibrium living inside the limit cycle \(\mathscr{C}_{+}^{1}.\) Now we briefly discuss the differences between symmetry breaking bifurcations and symmetry preserving bifurcations in the following remark.

**Remark 3.12** (Symmetry preserving bifurcations versus symmetry breaking bifurcations).: There are some essential differences between the \(\mathbb{Z}_{2}\)-equivariant bifurcations of the Bogdanov-Takens singularity and its symmetry-breaking bifurcations. At the start of the analysis, we observe the asymmetric bifurcations through the appearances of the secondary equilibria. Next, each of these may undergo an asymmetric Hopf singularity where two tertiary limit cycles \(\mathscr{C}_{\pm}^{1}\) come into existence; \(\mathscr{C}_{+}^{1}\) surrounds \(E_{+}\) while \(\mathscr{C}_{-}^{1}\) encircles \(E_{-}\). The bifurcated limit cycles \(\mathscr{C}_{\pm}^{1}\) simultaneously collide with the origin and construct a double homoclinic cycle for the equivariant cases. The symmetry breaking, instead, causes these to construct two different _quaternary homoclinic cycles_ \(\Gamma_{\pm}\). We have, of course, naturally looked for these bifurcations. However, the symmetry-breaking parameters unexpectedly allow the system to experience two different Bautin bifurcations from \(E_{\pm}.\) These are the alternatives to Hopf bifurcations from \(E_{\pm}\) for the symmetric bifurcations. 
For Bautin bifurcations, there is an ordered pair of subcritical and supercritical bifurcations of limit cycles, leading to two simultaneous limit cycles \(\mathscr{C}_{+}^{1}\) and \(\mathscr{C}_{+}^{2}\). The limit cycle \(\mathscr{C}_{+}^{2}\) has a smaller amplitude than \(\mathscr{C}_{+}^{1}\), so that the smaller limit cycle \(\mathscr{C}_{+}^{2}\) lives in the interior of \(\mathscr{C}_{+}^{1}\). These limit cycles first merge into a single _bistable limit cycle_ and then disappear when controller coefficients cross the _saddle-node controller manifolds_ of _limit cycles_. This is different from the saddle-node bifurcation of limit cycles in [18, Lemma 5.7]. A limit cycle \(\mathscr{C}_{0}\) bifurcates from the primary equilibrium. For the equivariant bifurcation cases, its symmetric feature allows only a heteroclinic cycle \(\Lambda\), _i.e.,_ it simultaneously collides with the secondary equilibria \(E_{+}\) and \(E_{-}.\) For symmetry-breaking bifurcations, the bifurcated limit cycle \(\mathscr{C}_{0}\) collides with either \(E_{+}\) or \(E_{-}.\) Thus, we have two different homoclinic cycles \(\Lambda_{\pm}\). The heteroclinic cycle \(\Lambda\) is still possible for symmetry-breaking cases by appropriately tuning the controller coefficients; see Theorem 3.9.

## 4 Nonlinear controllers for linearly uncontrollable cases

In this section we consider the bifurcation control problem for a differential system of type (1.1) with a single input controller whose linearization at the origin is either linearly controllable or linearly uncontrollable. Since local symmetry-breaking bifurcations of \(\mathbb{Z}_{2}\)-equivariant control systems of type (1.2) are determined by their cubic Taylor expansion, we consider the cubic truncation of the controlled differential system (1.2), _i.e.,_

\[\dot{x}:=c_{1}x^{3}+c_{2}y^{3}+c_{3}xy^{2}+c_{4}yx^{2}+u_{1},\ \dot{y}:=-x+c_{5}x^{3}+c_{6}y^{3}+c_{7}xy^{2}+c_{8}yx^{2}+u_{2}. \tag{4.1}\]

Here, we only treat single state-feedback input controller systems; _i.e.,_ we choose either \(u_{1}:=0\) or \(u_{2}:=0.\) When \(u_{1}\neq 0,\) the linearization of system (4.1) at the origin is linearly controllable; _i.e.,_ it satisfies the Kalman controllability rank condition. For the case \(u_{2}\neq 0,\) the linearized system has an uncontrollable mode. Many nonlinear techniques from nonlinear control theory fail for systems with uncontrollable modes. In this section, we show that the available controller coefficients can be exploited to enforce a rich list of bifurcation scenarios for both the controllable and uncontrollable cases of system (4.1); see Figures 6 and 8.

**Theorem 4.1** (When linearization satisfies Kalman condition).: _Let \(u_{2}:=0,\) \(u_{1}:=\upsilon_{1}x+\upsilon_{2}y+\upsilon_{3}xy,\) and let the \(\upsilon_{i}\) stand for controller coefficients. 
_Then, controlled system (4.1) admits a Hopf controller manifold, a pitchfork and a heteroclinic controller manifold given by \(T_{H}=\{(\upsilon_{1},\upsilon_{2},\upsilon_{3})|\,\upsilon_{1}=0\},\) \(T_{P}=\{(\upsilon_{1},\upsilon_{2},\upsilon_{3})|\,\upsilon_{2}=0\},\) and_ \[T_{H\circ C}=\Big{\{}(\upsilon_{1},\upsilon_{2},\upsilon_{3})|\,\frac{\upsilon_{1}}{2}-\frac{\upsilon_{3}+3\alpha_{6}}{10c_{2}}\upsilon_{2}+\frac{120c_{7}+120c_{4}-90c_{8}-270c_{1}}{720}\upsilon_{1}\upsilon_{2}+\frac{(c_{3}+3c_{6})(9c_{1}+3c_{8}-4c_{4}-4c_{7})}{60c_{2}}\upsilon_{2}{}^{2}=0\Big{\}}.\] _Estimated homoclinic controller manifolds \(T_{HmC}\) and \(T_{HmC_{\pm}}\) in the controller coefficient space \((\upsilon_{1},\upsilon_{2},\upsilon_{3})\) are_ \[\tfrac{\upsilon_{1}}{2}+\tfrac{77c_{2}{}^{2}+213c_{2}{}^{2}+80c_{3}c_{7}+222c_{3}c_{6}}{320c_{2}(c_{3}+3c_{6})\upsilon_{1}{}^{-2}}+\tfrac{c_{3}{}^{3}-7c_{0}c_{2}{}^{2}-21c_{6}{}^{2}c_{6}+8c_{3}c_{2}c_{4}+8c_{3}c_{2}c_{7}}{30c_{2}{}^{3}\upsilon_{2}{}^{-2}}+\tfrac{135c_{0}{}^{3}+360c_{1}c_{2}{}^{2}+120c_{2}c_{6}-120c_{3}c_{4}c_{6}-408c_{3}c_{6}c_{7}}{240c_{2}{}^{2}(c_{3}+3c_{6})\upsilon_{1}{}^{-1}\upsilon_{2}{}^{-1}}\] \[-\tfrac{2\upsilon_{2}}{5c_{2}}(c_{3}+3c_{6})+\tfrac{27c_{0}{}^{3}-72c_{1}c_{2}{}^{2}-24c_{2}{}^{2}c_{6}+24c_{2}c_{3}c_{4}+24c_{3}c_{6}}{30c_{2}{}^{3}}\upsilon_{2}{}^{2}+\tfrac{87c_{0}{}^{2}-59c_{0}{}^{3}-163c_{0}c_{2}{}^{2}-40c_{2}c_{3}c_{4}}{240c_{2}{}^{2}(c_{3}+3c_{6})}\upsilon_{1}\upsilon_{2}=0,\] \[\tfrac{\upsilon_{1}}{2}-\tfrac{2\upsilon_{2}}{5c_{2}}\left(c_{3}+3c_{6}\right)\mp\left(\tfrac{3\sqrt{2}\pi}{32}+\tfrac{\sqrt{2}\pi(21c_{6}+13c_{3})}{480c_{2}}\upsilon_{1}\right)\upsilon_{3}\sqrt{-\upsilon_{2}}\mp\tfrac{(450c_{3}c_{6}+891c_{6}{}^{2}-189c_{3}{}^{2}-360c_{2}c_{7})\sqrt{2}\pi}{7680c_{2}{}^{2}}\upsilon_{3}\upsilon_{2}\sqrt{-\upsilon_{2}}=0.\]

Proof.: Using near-identity changes of coordinates and a Maple program, the system (4.1) for \(u_{2}:=0\) and \(u_{1}:=\upsilon_{0}+\upsilon_{1}x+\upsilon_{2}y+\upsilon_{3}xy\) can be transformed into (1.3) where \(\mu_{1}=\upsilon_{2}+\sigma(||\upsilon||^{2}),\mu_{2}=\frac{\upsilon_{1}}{2}+\sigma(||\upsilon||^{2}),\) \[\mu_{0}=\upsilon_{0}+\tfrac{288c_{2}{}^{2}c_{6}+864c_{1}c_{2}{}^{2}+77c_{3}{}^{2}c_{6}+231c_{3}c_{6}{}^{2}-288c_{2}c_{4}c_{6}-96c_{2}c_{3}c_{4}-288c_{2}c_{4}c_{7}-96c_{2}c_{3}c_{7}-11c_{3}{}^{3}-297c_{6}{}^{3}}\upsilon_{0}\upsilon_{2}\] \[+\tfrac{10c_{3}c_{6}+48c_{2}c_{7}+23c_{3}{}^{2}-33c_{6}{}^{2}}{32c_{2}(c_{3}+3c_{6})}\upsilon_{0}\upsilon_{1},\] \[\mu_{3}=\tfrac{1442c_{1}c_{2}{}^{2}+212c_{3}c_{6}{}^{2}+72c_{3}{}^{2}c_{6}+482c_{2}c_{6}-482c_{2}c_{6}c_{7}-162c_{2}c_{3}c_{7}-482c_{2}c_{4}c_{6}-162c_{2}c_{3}c_{4}-272c_{3}{}^{3}-c_{3}}\upsilon_{0}+\tfrac{\upsilon_{3}}{3}+\sigma(||\upsilon||^{2}). \tag{4.2}\] This implies that the controlled differential system (4.1) is fully unfolded. Here, we let \(\upsilon_{0}:=0\) for simplicity. The controller curves \(T_{H_{\pm}}\) are derived by substitution in (3.9). The controller sets for (4.1) when \(\upsilon_{0}=0\) are derived from equations (3.1), (3.10), and (3.27) as claimed.

In order to illustrate the numerical \(\mathbb{Z}_{2}\)-breaking controller bifurcation varieties, we choose \(c_{i}:=1,\,i=1\ldots 8,\) the controller input \(\upsilon_{3}:=\pm 0.3,\) and obtain Figures 7(a)-7(b). These numerical controller manifolds are highly accurate over the plotted intervals.

Figure 5: Estimated bifurcation controller sets of system (4.1) for the uncontrollable linearization case.
Next, input pairs \((\upsilon_{1},\upsilon_{2})\) for values \((0.005,0.005),\)\((-0.005,0.005),\)\((-0.025,-0.0014),\)\((-0.025,-0.0027),\)\((-0.025,-0.005),\)\((-0.025,-0.01),\)\((-0.025,-0.016),\) and \((-0.025,-0.02)\) are chosen from each connected region labeled (a)-(h) in Figure 7(a). We depict the numerical controlled phase portraits in Figures 6(a)-6(h), respectively.

**Theorem 4.2** (Linearly uncontrollable cases).: _Let \(u_{1}:=0\) and \(u_{2}:=\nu_{0}+\nu_{1}y+\nu_{2}y^{2}.\) There are two saddle-node controller manifolds \(T_{SN\pm}\), two Hopf controller sets \(T_{H\pm}\) and two homoclinic controller manifolds \(T_{HmC\pm}\). For simplicity, let \(c_{1}=-1,\)\(c_{2}=1,\)\(c_{3}=1,\)\(c_{4}=-1,\)\(c_{5}=-1,\)\(c_{6}=1,\)\(c_{7}=1,\) and \(c_{8}=-1.\) Then, these controller manifolds follow_ \[T_{SN\pm}=\left\{(\nu_{0},\nu_{1},\nu_{2})|\,4\nu_{0}\nu_{2}=1\pm 2\nu_{1}+{\nu_{1}}^{2}\right\},\qquad T_{H\pm}=\left\{(\nu_{0},\nu_{1},\nu_{2})|\,\nu_{1}=\pm 1\mp\sqrt{1+4\nu_{0}\nu_{2}}\right\},\] \[\text{and}\qquad T_{HmC\pm}=\left\{(\nu_{0},\nu_{1},\nu_{2})|\,\tfrac{1}{2}\nu_{1}+\tfrac{16}{45}{\nu_{0}}^{2}+\tfrac{37}{80}{\nu_{1}}^{2}-\tfrac{1}{3}\nu_{0}\nu_{2}\mp\tfrac{\sqrt{2}\pi\left(16{\nu_{0}}^{2}+3{\nu_{1}}^{2}\right)(4\nu_{0}-3\nu_{2})}{32\sqrt{48{\nu_{0}}^{2}+9{\nu_{1}}^{2}}}=0\right\}.\]

Proof.: Here we deal with the bifurcation control of system (4.1) where equations (4.2) are now replaced by \(\mu_{0}=0,\)\(\mu_{2}=\tfrac{3c_{2}c_{4}-c_{2}^{2}}{3c_{2}}\nu_{0}^{2},\)\(\mu_{3}=\tfrac{2(3c_{2}c_{4}+3c_{7}c_{2}-c_{2}^{2}-3c_{3}c_{6})}{9c_{2}}\nu_{0}+\tfrac{2}{3}\nu_{2},\) and \(\mu_{1}\) estimated by \[\mu_{1}=\tfrac{32(3c_{6}+c_{3})(c_{3}{}^{3}+3c_{6}c_{3}{}^{2}-6c_{2}c_{3}c_{4}-6c_{2}c_{3}c_{7}+27c_{1}c_{2}{}^{2}+9c_{2}c_{3}c_{8})\nu_{0}{}^{2}+9c_{2}(64c_{2}c_{4}+80c_{2}c_{7}-23c_{3}-58c_{3}c_{6}+81c_{6}{}^{2}){\nu_{1}}^{2}}{576c_{2}{}^{2}(3c_{6}+c_{3})}+\tfrac{288c_{2}\nu_{1}-192c_{3}\nu_{2}}{576c_{2}}.\] The claims then follow from Theorems 3.2, 3.3, and 3.5.

Substituting these values of the \(c_{i}\), system (4.1) has four equilibria given by \[(x_{1}^{\pm},y_{1}^{\pm})=\left(\tfrac{1+\nu_{1}\pm\sqrt{1+2\nu_{1}+\nu_{1}^{2}-4\nu_{0}\nu_{2}}}{2\nu_{2}},\,-\tfrac{1+\nu_{1}\pm\sqrt{1+2\nu_{1}+\nu_{1}^{2}-4\nu_{0}\nu_{2}}}{2\nu_{2}}\right),\] \[(x_{2}^{\pm},y_{2}^{\pm})=\left(\tfrac{1-\nu_{1}\pm\sqrt{1-2\nu_{1}+\nu_{1}^{2}-4\nu_{0}\nu_{2}}}{2\nu_{2}},\,\tfrac{1-\nu_{1}\pm\sqrt{1-2\nu_{1}+\nu_{1}^{2}-4\nu_{0}\nu_{2}}}{2\nu_{2}}\right).\] We take \(\nu_{1}=0.1\) and depict the critical controller manifolds in terms of the controller coefficients \((\nu_{0},\nu_{2})\) in Figure 5. We choose \((-0.7,-0.7)\), \((-0.4,-0.6)\), \((0.1,-0.2)\), \((0.1,-0.5)\), \((0.1,-0.66)\), and \((0.3,-0.1)\) for \((\nu_{0},\nu_{2})\) from regions 1-6 in the first panel of Figure 5. Then, the numerical phase portraits are illustrated in Figures 8(a)-8(f), respectively. System (4.1) has no equilibrium for controller coefficient choices from region 1. Two small equilibria are bifurcated via a fold bifurcation at \(T_{SN+}\); see Figure 8(b). Another saddle-node bifurcation occurs at \(T_{SN-}\) and two new equilibria are born. Thus, there are four equilibria for controller coefficient choices from region 3. As for controller coefficients of region 4, a subcritical Hopf bifurcation gives rise to an unstable limit cycle; see Figure 8(d). The limit cycle disappears at the homoclinic controller set \(T_{HmC+}\) and, therefore, there is no limit cycle for coefficients from region 5. Control choices associated with region 6 lead to a stable limit cycle.
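The region-by-region behavior described above can be explored numerically in a few lines. The following minimal Python sketch (our illustration, not the authors' code) integrates the cubic system (4.1) with \(u_{1}=0\) and \(u_{2}=\nu_{0}+\nu_{1}y+\nu_{2}y^{2}\), using the coefficient values of Theorem 4.2, \(\nu_{1}=0.1\), and the region-4 pair \((\nu_{0},\nu_{2})=(0.1,-0.5)\); the initial conditions and plotting choices are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Cubic coefficients of (4.1) as fixed in Theorem 4.2
c1, c2, c3, c4, c5, c6, c7, c8 = -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, 1.0, -1.0
# Linearly uncontrollable input u2 = nu0 + nu1*y + nu2*y**2; region-4 pair from the text
nu0, nu1, nu2 = 0.1, 0.1, -0.5

def rhs(t, s):
    x, y = s
    u2 = nu0 + nu1 * y + nu2 * y**2
    return [c1 * x**3 + c2 * y**3 + c3 * x * y**2 + c4 * y * x**2,
            -x + c5 * x**3 + c6 * y**3 + c7 * x * y**2 + c8 * y * x**2 + u2]

# A few (arbitrary) initial conditions close to the small equilibria near the origin;
# trajectories started far from this region may leave the local picture.
fig, ax = plt.subplots()
for s0 in ([0.2, 0.0], [0.05, 0.05], [-0.1, 0.2]):
    sol = solve_ivp(rhs, (0.0, 60.0), s0, max_step=0.02)
    ax.plot(sol.y[0], sol.y[1], lw=0.8)
ax.set_xlabel("x"); ax.set_ylabel("y")
ax.set_title("System (4.1) with u1 = 0, (nu0, nu1, nu2) = (0.1, 0.1, -0.5)")
plt.show()
```

Sweeping \((\nu_{0},\nu_{2})\) over the six pairs listed above should reproduce the qualitative transitions between regions 1-6 described in the text.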
**Remark 4.3** (Subcritical and supercritical type switching for bifurcation varieties).: Linear stability analysis provides a local approach and its neighborhood of validity is essential for real-life problems. This neighborhood of validity can be very small if nonlinear terms give rise to a subcritical Hopf bifurcation. The basin of attraction for the asymptotically stable equilibrium in subcritical cases is merely the interior of the small bifurcated unstable limit cycle. Therefore, the _linear stability of this kind_ fails in many control engineering applications. A _primary supercritical Hopf bifurcation_ has been widely considered as a _safe controller design_; _e.g.,_ see [26]. This is due to its expected large basin of attraction for the bifurcated asymptotically stable limit cycle. Figure 5 illustrates that controller coefficient choices from region 4 correspond to the unstable limit cycle in Figure 8(d), while choices from region 6 give rise to the stable limit cycle in Figure 8(f). The same phenomenon occurs for regions \(g\) and \(i\) in Figure 7(a). We have a stable limit cycle \(\mathscr{C}_{-}^{1}\) in Figure 6(g), while \(\mathscr{C}_{-}^{1}\) is unstable in Figure 6(i). Stable limit cycles of this kind play a potential role in stabilizing the system. Controller coefficients from region 6 in Figure 5 and region \(i\) in Figure 7(a) fail the restrictions in Theorem 3.3. These imply the existence of the Bautin bifurcation in Theorem 3.7.

Figure 6: Controlled phase portraits for the linearly controllable case of (4.1). Figures 6(a)-6(i) are associated with control coefficients chosen from regions (a)-(i) in Figure 7(a).

Figure 7: Symmetry-breaking controller set for the linearly controllable case of (4.1), _i.e.,_ \(u_{2}:=0\).

Figure 8: Controlled phase portraits 8(a)-8(e) corresponding with regions 1-5 partitioned by the estimated controller sets in Figure 5 for the uncontrollable linearization case of (4.1).

## 5 Chua system

The main goal in this section is to illustrate how the controlled Chua system experiences a rich list of bifurcation scenarios by appropriately choosing small controller coefficients. There is an extensive literature on the dynamics of Chua circuit systems; _e.g.,_ see [34, 40, 43]. For example, Zhao et al. [43] investigated Chua systems for Hopf and generalized Hopf bifurcations. Yang and Zhao [40] considered the modified Chua circuit system with a delayed feedback: the delayed system undergoes Hopf and Hopf-zero bifurcations, and the feedback stabilizes the system through either a stable periodic orbit or a stable equilibrium. Puebla et al. [34] implemented a linear PI compensator for tracking control. The Chua circuit is an electrical circuit that experiences Bogdanov-Takens, Hopf-zero and Hopf bifurcations. Consider the controlled Chua system \[\dot{x}=\alpha(y-ax^{3}-cx),\quad\dot{y}=x-y+z,\quad\dot{z}=-\beta y+u,\quad u=\nu_{0}+\nu_{1}x+\nu_{2}y+\nu_{3}xy, \tag{5.1}\] where \(u\) stands for a state-feedback input with small gain parameters \(\nu_{i}\) for \(i=0,1,2,3\). State variables \(x\) and \(y\) represent the voltages across the two capacitors and \(z\) is the electric current in the inductor. The uncontrolled system (\(\nu_{i}=0\) for \(i=0,\ldots,3\)) is \(\mathbb{Z}_{2}\)-equivariant, whose symmetry is given by the reflection \((x,y,z)\longrightarrow(-x,-y,-z)\).
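Before analyzing (5.1), its stated \(\mathbb{Z}_{2}\)-equivariance is easy to check numerically. The short sketch below is only an illustrative aid (the parameter values are placeholders); it evaluates the uncontrolled right-hand side of (5.1) at random points and verifies that it is odd under the reflection \((x,y,z)\mapsto(-x,-y,-z)\).

```python
import numpy as np

def chua_rhs(state, alpha=0.8, beta=0.8, a=1.0, c=0.0, nu=(0.0, 0.0, 0.0, 0.0)):
    """Right-hand side of the controlled Chua system (5.1)."""
    x, y, z = state
    nu0, nu1, nu2, nu3 = nu
    u = nu0 + nu1 * x + nu2 * y + nu3 * x * y
    return np.array([alpha * (y - a * x**3 - c * x),
                     x - y + z,
                     -beta * y + u])

rng = np.random.default_rng(0)
for _ in range(5):
    s = rng.uniform(-1.0, 1.0, size=3)
    # Z2-equivariance of the uncontrolled field: f(-s) = -f(s)
    assert np.allclose(chua_rhs(-s), -chua_rhs(s))
print("uncontrolled Chua field is odd under (x, y, z) -> (-x, -y, -z)")
```

Note that the bilinear gain \(\nu_{3}xy\) is the term that breaks this reflection symmetry once it is switched on.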
Uncontrolled Chua circuit system has three meaningful equilibria: the origin and \(e_{\pm}:(x_{\pm},y,z_{\pm})=\left(\pm\sqrt{-\frac{c}{a}},0,\mp\sqrt{-\frac{c} {a}}\right).\) The origin experiences Hopf and Hopf-zero bifurcations when \((\beta=-\alpha(c-1)(1+\alpha c),0<c<1)\) and \((\beta=0,\alpha=-\frac{1}{c},0<c<1)\), respectively. The equilibria \(e_{\pm}\) undergo Hopf and Hopf-zero bifurcations for \((\beta=\alpha(2c+1)(1-2\alpha c),-\frac{1}{2}<c<0)\) and \((\beta=0,\alpha=\frac{1}{2c},-\frac{1}{2}<c<0)\). These four singularities are not treated here; see [34, 40]. We remark that our Hopf bifurcations here are caused by controller coefficient choices and thus, they are essentially different from Hopf singular cases caused by the system's choices for \(\alpha,\beta\) and \(c\). **Proposition 5.1**.: _When \(c=0,\) the origin is the only equilibrium of the system. System (5.1) has a \(\mathbb{Z}_{2}\)-equivariant Bogdanov-Takens singularity at the origin for \(\beta=\alpha\) and \(c=0\)._ Proof.: Jacobian matrix associated with (5.1) at \((0,0)\) for \(u=0\) is \(J:=[-\alpha c,\alpha,0;1,-1,1;0,-\beta,0]\). Eigenvalues for \(c=0\) are \(0\) and \(-\frac{1\pm\sqrt{1+4\alpha-4\beta}}{2}\). Thus for \(\alpha=\beta\), we have a none semi-simple double zero eigenvalue. **Theorem 5.2** (Controlled bifurcations).: _Consider controlled Chua differential system (5.1) for \(\beta=\alpha\) and \(c=0\). Then by varying small controller coefficients \(\nu_{i}\), the controlled system undergoes a pitchfork bifurcation, three Hopf bifurcations and two homoclinic bifurcations._ Proof.: By Proposition 5.1, there is a two-dimensional invariant center manifold \(\mathscr{M}\) for Chua system (5.1). The reduction of (5.1) on center manifold \(\mathscr{M}\) and then, primary shift of coordinates give rise to \(\dot{x}=v_{1}+a\alpha(\alpha-1)^{3}x^{3}+3a\alpha^{2}(\alpha-1)^{2}x^{2}y+a \alpha^{4}y^{3}\) and \(\dot{y}=-x+\nu_{3}xy+v_{2}\), for \[v_{1}:=(3a\nu_{0}^{2}-6a\alpha\nu_{0}^{2}-\alpha^{2}\nu_{1}^{2}- \nu_{1}\alpha-\nu_{3}\nu_{0}+\nu_{1}\nu_{2}\alpha+3\nu_{2}^{2}a\alpha^{4}-6a \alpha^{3}\nu_{0}^{2}+\nu_{3}\nu_{0}\alpha+9a\alpha^{2}\nu_{0}^{2})y+3a\alpha ^{2}(-\alpha\nu_{0}+\nu_{0}\alpha^{2}+\nu_{0})y^{2}\] \[+(\alpha-1)(3a\alpha^{3}\nu_{0}-6a\alpha^{2}\nu_{0}+6a\nu_{0}a \alpha-3a\nu_{0}+\nu_{3})x^{2}+\alpha(6a\alpha^{3}\nu_{0}-12a\alpha^{2}\nu_{0 }+\nu_{3}-6\nu_{0}a+12\nu_{0}\alpha)xy+3a\alpha^{3}(\alpha-1)xy^{2}\] \[+\frac{2\nu_{3}\nu_{0}+9a\alpha\nu_{0}^{2}-3a\nu_{0}^{2}-155a \alpha^{2}\nu_{0}^{2}+155a\alpha^{3}\nu_{0}^{2}-\nu_{2}^{2}\alpha+\nu_{2} \alpha-1\nu_{2}\alpha-2a\nu_{3}\nu_{0}^{2}-\nu_{1}^{2}\alpha^{2}+3a\nu_{1} \alpha^{2}+3a\alpha^{5}\nu_{0}^{2}-9\nu_{0}^{2}\alpha^{4}+2\nu_{3}\nu_{0} \alpha^{2}-3\nu_{3}\nu_{0}+\alpha^{2}\nu_{1}^{2}+\nu_{1}\alpha}x,\] \[v_{2}:=\frac{\nu_{3}(\alpha-1)}{\alpha}x^{2}-\frac{\alpha^{2}\nu _{1}^{2}-2\alpha^{3}\nu_{1}^{2}+3\alpha^{2}\nu_{1}\nu_{2}-\alpha\nu_{1}\nu_{2}- \alpha\nu_{1}\nu_{2}-\alpha\nu_{2}^{2}+2\alpha^{2}\nu_{3}\nu_{0}-3a\nu_{3}\nu_{ 0}+2\nu_{3}\nu_{0}-\alpha^{2}\nu_{1}+\alpha\nu_{1}+\alpha\nu_{2}}x-\frac{\alpha ^{2}\nu_{1}^{2}-\nu_{3}\nu_{0}+\nu_{3}\nu_{0}-\nu_{1}\nu_{2}\alpha+\nu_{1}\alpha }{\alpha}y.\] The first level normal form is obtained through a finite sequence of flow time-one maps generated by the initial value problems \(\dot{x}\frac{\partial}{\partial x}+\dot{y}\frac{\partial}{\partial y}=T_{i}\), \(x(0,\nu,X,Y)=X\), and \(y(0,\nu,X,Y)=Y\) for \(i=1,2,3.\) Further, we assign each homogenous monomial vector field with a grade equal 
to its degree minus one plus two times degree of its parameters, _e.g.,_\(d(x^{2}y^{3}\nu_{1}\nu_{2}{}^{3})=2+3+2+6=13.\) The image of homological operator \(L^{k}:\mathscr{L}_{k}\rightarrow\mathscr{L}_{k}\) defined by \(L^{k}(w):=[w,-x\frac{\partial}{\partial y}]\) can be used to simplify the system. Hence, we consider the Lie brackets given by \[\left[y\frac{\partial}{\partial x},-x\frac{\partial}{\partial y} \right]=x\frac{\partial}{\partial x}-y\frac{\partial}{\partial y},\left[\frac{ 1}{2}x\frac{\partial}{\partial x}-\frac{1}{2}y\frac{\partial}{\partial y},-x \frac{\partial}{\partial y}\right]=-x\frac{\partial}{\partial y},\left[y^{3} \frac{\partial}{\partial x},-x\frac{\partial}{\partial y}\right]=3xy^{2} \frac{\partial}{\partial x}-y^{3}\frac{\partial}{\partial y},\] \[\left[\frac{1}{2}x^{2}y\frac{\partial}{\partial x}-\frac{1}{2} xy^{2}\frac{\partial}{\partial y},-x\frac{\partial}{\partial y}\right]=\frac{1}{2}x^{3} \frac{\partial}{\partial x}-\frac{3}{2}x^{2}y\frac{\partial}{\partial y}, \left[x^{2}y\frac{\partial}{\partial x}+xy^{2}\frac{\partial}{\partial y},-x \frac{\partial}{\partial y}\right]=x^{3}\frac{\partial}{\partial x}+x^{2}y \frac{\partial}{\partial y}, \tag{5.2}\] \[\left[\frac{3}{4}xy^{2}\frac{\partial}{\partial x}-\frac{1}{4}y^ {3}\frac{\partial}{\partial y},-x\frac{\partial}{\partial y}\right]=\frac{3}{ 2}x^{2}y\frac{\partial}{\partial x}-\frac{3}{2}xy^{2}\frac{\partial}{\partial y },\left[xy^{2}\frac{\partial}{\partial x}+y^{3}\frac{\partial}{\partial y},-x \frac{\partial}{\partial y}\right]=2x^{2}y\frac{\partial}{\partial x}+2xy^{2} \frac{\partial}{\partial y}.\] Hence, terms of the form \(x^{3}\frac{\partial}{\partial x},x^{2}y\frac{\partial}{\partial y}\), \(x^{2}y\frac{\partial}{\partial x},xy^{2}\frac{\partial}{\partial y}\) and parametric terms associated with \(x\frac{\partial}{\partial y}\) can be simplified from the system. However, we can choose between \(xy^{2}\frac{\partial}{\partial x}\) or \(y^{3}\frac{\partial}{\partial y}\) to simplify from the system. This is also true for parametric terms \(x\frac{\partial}{\partial x}\) and \(y\frac{\partial}{\partial y}\), where only one of them can be simplified from the system. 
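The Lie brackets listed in (5.2) can be verified mechanically. The following small sympy sketch (our illustration; the authors used Maple) computes \([V,W]=(DW)V-(DV)W\) for planar polynomial vector fields and reproduces, for example, \([y\frac{\partial}{\partial x},-x\frac{\partial}{\partial y}]=x\frac{\partial}{\partial x}-y\frac{\partial}{\partial y}\) and \([y^{3}\frac{\partial}{\partial x},-x\frac{\partial}{\partial y}]=3xy^{2}\frac{\partial}{\partial x}-y^{3}\frac{\partial}{\partial y}\).

```python
import sympy as sp

x, y = sp.symbols("x y")
XY = sp.Matrix([x, y])

def lie_bracket(V, W):
    """Lie bracket [V, W] = (DW) V - (DV) W of planar vector fields."""
    V, W = sp.Matrix(V), sp.Matrix(W)
    return sp.simplify(W.jacobian(XY) * V - V.jacobian(XY) * W)

W0 = (0, -x)  # the principal linear part -x d/dy used in the homological operator

print(lie_bracket((y, 0), W0).T)                  # [x, -y]:        x d/dx - y d/dy
print(lie_bracket((y**3, 0), W0).T)               # [3*x*y**2, -y**3]
print(lie_bracket((x**2*y/2, -x*y**2/2), W0).T)   # [x**3/2, -3*x**2*y/2]
```

The same routine can be applied to the quadratic brackets used below for the grade-2 simplification.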
Given the Lie brackets in equation (5.2) and the grading function, we choose \(T_{1}\) to simplify terms of grade \(2\) as follows \[T_{1}^{x}=\left(\frac{1}{2}\alpha\nu_{1}-\nu_{1}-\frac{1}{2}\nu_ {2}\right)y-\frac{(\alpha\nu_{1}-\nu_{1}-\nu_{2})x}{2\alpha}-a\alpha\,\,( \alpha-1)^{3}x^{2}y-\frac{3a\alpha^{2}(\alpha-1)^{2}xy^{2}}{2}-\frac{3a\alpha^ {3}(\alpha-1)y^{3}}{4},\] \[T_{1}^{y}=\frac{(\alpha\nu_{1}-\nu_{1}-\nu_{2})y}{2\alpha}-\frac {a\alpha\,(\alpha-1)^{3}xy^{2}}{2}-\frac{a\alpha^{2}(\alpha-1)^{2}y^{3}}{2}.\] The updated system is given by \(\dot{X}\frac{\partial}{\partial X}+\dot{Y}\frac{\partial}{\partial Y}=\exp \operatorname{ad}_{T_{1}^{x}}\frac{\partial}{\partial x}+T_{1}^{y}\frac{ \partial}{\partial y}\left(v_{1}\frac{\partial}{\partial x}+v_{2}\frac{ \partial}{\partial y}\right)\) whose grade two terms are associated with \[\left(a\alpha^{4}Y^{3}-\alpha\nu_{1}Y-\frac{\alpha\nu_{1}-\nu_{2}}{2}X+\frac{3 a\alpha^{3}(\alpha-1)}{4}XY^{2}\right)\frac{\partial}{\partial X}+\left(\frac{3a \alpha^{3}(\alpha-1)}{4}Y^{3}-\frac{(\alpha\nu_{1}-\nu_{2})}{2}Y\right)\frac{ \partial}{\partial Y}\] For simplicity of notations, we replace \((X,Y)\) with \((x,y).\) Due to the Lie brackets \[\left[\frac{1}{3}x^{2}\frac{\partial}{\partial x}-\frac{2}{3}xy \frac{\partial}{\partial y},-x\frac{\partial}{\partial y}\right]=-x^{2}\frac{ \partial}{\partial y},\left[\frac{2}{3}xy\frac{\partial}{\partial x}-\frac{1} {3}y^{2}\frac{\partial}{\partial y},-x\frac{\partial}{\partial y}\right]=\frac{ 2}{3}x^{2}\frac{\partial}{\partial x}-\frac{4}{3}xy\frac{\partial}{\partial y}\] \[\left[xy\frac{\partial}{\partial x}+y^{2}\frac{\partial}{\partial y },-x\frac{\partial}{\partial y}\right]=x^{2}\frac{\partial}{\partial x}+xy \frac{\partial}{\partial y},\left[y^{2}\frac{\partial}{\partial x},-x\frac{ \partial}{\partial y}\right]=2xy\frac{\partial}{\partial x}-y^{2}\frac{\partial} {\partial y},\] we can eliminate all of the terms \(x^{2}\frac{\partial}{\partial y}\), \(xy\frac{\partial}{\partial y}\), \(x^{2}\frac{\partial}{\partial x}\), while only one of \(xy\frac{\partial}{\partial x}\) and \(y^{2}\frac{\partial}{\partial y}\) can be simplified. The calculations show that only the following terms of grade \(3\) remain in the system \[\left(\frac{3\alpha^{3}a\nu_{0}\left(1+\alpha^{2}\right)y^{2}-\nu_{0}\left(\nu_{ 2}-2\alpha\nu_{2}+\nu_{1}-\alpha\nu_{1}+2\alpha^{2}\nu_{1}\right)}{2\alpha}+ \frac{\alpha\left(3\nu_{0}a(\alpha-1)\left(1+\alpha^{2}\right)+\nu_{3}\right) xy}{3}\right)\frac{\partial}{\partial x}+\frac{\alpha\left(3a\nu_{0}(\alpha-1) \left(\alpha^{2}+1\right)+\nu_{3}\right)y^{2}}{3}\frac{\partial}{\partial y}.\] Let \(A_{r-i}^{-1}(y):=y^{r-i+1}\frac{\partial}{\partial x}.\) Then, \[A_{r}^{-1}(y+f(\nu))=\sum_{i=0}^{r+1}\binom{r+1}{i}f(\nu)^{i}y^{r-i+1}\frac{ \partial}{\partial x}.\] Thus, term \(\frac{3}{2}\alpha^{2}a(1+\alpha^{2})\nu_{0}y^{2}\frac{\partial}{\partial x}\) can be cancelled by replacing \(y\) with \(y+f(\nu)\) where \(f(\nu)=-\frac{1}{2}\alpha^{2}a(1+\alpha^{2})\nu_{0}\). 
Thereby, the system is normalized to (1.3), where \(\mu_{0}=\frac{9}{8}\alpha\nu_{1}\nu_{0}-\frac{3}{32}\nu_{1}\nu_{0}-\frac{33}{32} \nu_{1}\nu_{0}-\frac{33}{32}\nu_{2}\nu_{0}-\frac{11}{32}\alpha\nu_{2}\nu_{0}- \nu_{0},\) \[\begin{array}{c}\mu_{1}=-\frac{5a^{2}\nu_{1}^{2}}{16}-\frac{3\nu_{1}^{2}}{ 16}-\frac{3\alpha\nu_{1}\nu_{2}}{16}-\frac{5\nu_{1}\nu_{2}}{16}-\frac{\nu_{2} ^{2}}{4}-\frac{1047}{5120}a\alpha^{4}\nu_{0}^{2}+\frac{2583}{1280}a\alpha^{3} \nu_{0}^{2}-\frac{1047}{120}a\nu_{0}^{2}-\frac{1857}{512}a\alpha^{2}\nu_{0} ^{2}+\frac{2583}{1280}a\alpha\nu_{0}^{2}-\alpha\nu_{1},\\ \mu_{2}=-\frac{490^{2}+3\alpha+12}{64}\nu_{1}^{2}+\frac{2\alpha^{2}-\alpha+6}{3 2\alpha}\nu_{1}\nu_{2}-\frac{5(1-\nu_{2})\nu_{2}}{64\alpha}+\frac{(63a-31)( \alpha-1)\nu_{3}\nu_{0}}{64\alpha}+\frac{3a(2829\alpha^{2}-3226\alpha+2829)( \alpha-1)^{3}\nu_{0}}{10240\alpha}-\frac{\nu_{1}}{2}+\frac{\nu_{2}}{2},\\ \mu_{3}=\frac{1}{3}\alpha\nu_{3}-\frac{9}{6}\alpha^{3}a\nu_{0}+\frac{9}{9 For \(\nu_{0}=0,\,\mu_{0}=0.\) Thus, we follow Proposition 3.1, Theorem 3.3, and Theorem 3.5 for deriving symbolic estimates of \(T_{P},\,T_{H},\,T_{H\pm}\) and \(T_{HmC\pm}.\) For simplicity of the formulaes, we take \(\nu_{3}=0.3,\)\(\alpha:=0.8,\,a:=1.\) Then, we obtain the following equations: \[T_{H}=\left\{\left(\nu_{1},\nu_{2}\right)\right|\nu_{1}=\tfrac{5} {4}\nu_{2}\right\},\;\;\;T_{HmC}=\left\{\left(\nu_{1},\nu_{2}\right)\right| \nu_{1}=\tfrac{320}{1771}+\tfrac{35045}{28336}\nu_{2}-\tfrac{5}{28336}\sqrt{1 048576+5286912\nu_{2}+13861929{\nu_{2}}^{2}}\right\},\] \[T_{P}=\left\{\left(\nu_{1},\nu_{2}\right)\right|\nu_{1}=0\right\}, \;\;\;T_{HmC\pm}=\left\{\left(\nu_{1},\nu_{2}\right)\right|\pm\tfrac{4989\sqrt {10}}{16000000}\pi\nu_{1}\sqrt{\nu_{1}}\pm\tfrac{21\sqrt{10}}{160000}\pi\sqrt{ \nu_{1}}\nu_{2}-\tfrac{16}{25}\nu_{1}+\tfrac{1}{2}\nu_{2}\pm\tfrac{9\sqrt{10 }}{1000}\pi\sqrt{\nu_{1}}=0\right\},\] \[T_{H\pm}=\left\{\left(\nu_{1},\nu_{2}\right)\right|\tfrac{529162 496}{3051758125}\nu^{2}-\tfrac{67277349184}{15255875890625}\nu_{1}\nu_{2}\pm \tfrac{978767872\sqrt{5}}{3051758125}\sqrt{\nu_{1}}\nu_{2}-\tfrac{1841299456}{3 0517578125}\nu_{1}\] \[+\tfrac{2097152}{48828125}\nu_{2}\pm\tfrac{209753728\sqrt{5}}{3051 7578125}\sqrt{\nu_{1}}=0\right\}.\] These are derived from equations (3.1), (3.9), and (3.10), respectively. Let \(\nu_{3}=0.3\) and \(\nu_{0}=0\). Then, controller coefficient choices \((\nu_{1},\nu_{2})\) from regions (a)-(h) from Figure 9(a) give rise to the controlled trajectories of Chua system (5.1) in Figures 9(b)-9(d), 10(a)-10(d), and 11(a)-11(b), respectively. ### Controller objectives Different bifurcation scenarios can be realised by appropriate choices of controller coefficients and these provide an effective approach to meet many desired control objectives. For example, we Figure 9: Regularization of the origin using bifurcation control and stabilization via supercritical Hopf bifurcation; controller coefficients are from regions (a) and (b) in Figure 9(a) when \(\alpha:=0.8,\)\(a:=1,\,\nu_{3}=0.3,\) and \(\nu_{0}=0.\) explain how these objectives can include feedback regularization, stabilization, amplitude size and frequency control for oscillations of the controlled differential Chua system. #### Region (a): A feedback regularization by bifurcation control: Controller coefficient choices from region (a) in Figure 9(a) turn the origin into a spiral sink. Hence, the feedback control \(u\) in (5.1) regularizes the origin and all trajectories converge to the origin; the origin is globally asymptotically stable. 
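The feedback regularization described for region (a) can be illustrated with a direct simulation of the controlled Chua system (5.1). The sketch below is our illustration only: it uses \(\alpha=\beta=0.8\), \(a=1\), \(c=0\), \(\nu_{3}=0.3\), \(\nu_{0}=0\), together with the region-(a) gains and the initial condition quoted for Figure 9(b); according to the text, the resulting trajectories should spiral into the origin while the controller remains small.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Controlled Chua system (5.1) at the Bogdanov-Takens point beta = alpha, c = 0
alpha = beta = 0.8
a, c = 1.0, 0.0
nu0, nu1, nu2, nu3 = 0.0, -0.01, -0.02, 0.3   # region-(a)-type gains from the text

def rhs(t, s):
    x, y, z = s
    u = nu0 + nu1 * x + nu2 * y + nu3 * x * y
    return [alpha * (y - a * x**3 - c * x), x - y + z, -beta * y + u]

sol = solve_ivp(rhs, (0.0, 800.0), [-0.6, -0.54, -0.1], max_step=0.05)

plt.plot(sol.t, sol.y[0], label="x(t)")
plt.plot(sol.t, sol.y[1], label="y(t)")
plt.plot(sol.t, sol.y[2], label="z(t)")
plt.xlabel("t"); plt.legend()
plt.title("Region-(a) gains: feedback regularization of the origin")
plt.show()
```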
Figure 9(b) depicts the trajectories \(x(t),\,y(t),\) and \(z(t)\) for initial values \((-0.6,-0.54,-0.1)\) and controller coefficients \((\nu_{1},\nu_{2})=(-0.01,-0.02)\) chosen from region (a). These trajectories converge to the origin while the controller \(u\) remains very small.

#### Region (b): Stabilization and oscillation control via a supercritical Hopf bifurcation

There is a stable limit cycle \(\mathscr{C}_{0}\) for region (b) and trajectories converge to \(\mathscr{C}_{0}.\) By Proposition 3.1, there is a supercritical Hopf bifurcation at the border between regions (a) and (b), which is estimated by \(\nu_{1}=\frac{5}{4}\nu_{2}.\) This provides a stabilization approach for the system. Furthermore, the leading estimate for the radius of the bifurcated limit cycle is given by \(\sqrt{\frac{4(\alpha\nu_{1}-\nu_{2})}{3a(\alpha-1)\alpha^{3}}}\) while the leading estimate for its angular frequency is \(\sqrt{\alpha\nu_{1}}.\) Figures 9(c)-9(d) illustrate trajectories converging to the limit cycle \(\mathscr{C}_{0}\) from the initial values \((0.3,0.54,0.1)\) for controller coefficient choices \((\nu_{1},\nu_{2})\) from region (b). Since the angular frequency and amplitude size of the oscillations are proportional to the angular frequency and radius of the bifurcated limit cycle, we can control the oscillating trajectories accordingly. Hence, an increase in \(\nu_{1}\) leads to an increase in the angular frequency of the oscillations, while \(|\alpha\nu_{1}-\nu_{2}|\) is the deciding factor for the amplitude sizes of the oscillations. Therefore, an increase from \(|\nu_{1}|=0.01\) to \(|\nu_{1}|=0.02\) gives rise to an amplification in both amplitude sizes and angular frequencies in Figures 9(c)-9(d). Notice that the controller \(u\) in Figure 9(c) is very small while it causes oscillations with moderate magnitudes.

Region (c): A pitchfork bifurcation of equilibria inside \(\mathscr{C}_{0}\) within center manifold \(\mathscr{M}\). There is a pitchfork bifurcation at the border of regions (b) and (c). The system for controller coefficients from region (b) has a limit cycle \(\mathscr{C}_{0}.\) The limit cycle \(\mathscr{C}_{0}\) lives on the center manifold \(\mathscr{M}\) of the Chua system and, within the center manifold \(\mathscr{M}\), \(\mathscr{C}_{0}\) encircles the origin. As the controller coefficients cross the pitchfork variety \(T_{p}\), two new equilibria \(e_{\pm}\) are born from the origin. Hence, for controller coefficients chosen from region (c), the controlled Chua system (5.1) has two spiral sources and the origin as a hyperbolic saddle. All these equilibria live inside the stable limit cycle \(\mathscr{C}_{0}\) within the center manifold. Trajectories in Figure 10(a) converge to the asymptotically stable limit cycle \(\mathscr{C}_{0}\) for initial values \((-0.7,-0.7,-0.3)\) and \((\nu_{1},\nu_{2})=(0.01,0.1).\) The oscillation frequency and amplitude of the oscillating trajectories are again controlled through the management of the stable limit cycle \(\mathscr{C}_{0}\). Limit cycle \(\mathscr{C}_{0}\) is controllable in a way similar to the controller coefficient cases from region (b).

Figure 10: Controlled trajectories 10(a)-10(d) associated with regions (c)-(f) from Figure 9(a) for (5.1), \(\alpha:=0.8\), \(a:=1\), \(\nu_{3}=0.3\), \(\nu_{0}=0.\)

Region (d): A secondary subcritical Hopf bifurcation of \(\mathscr{C}_{-}^{1}\) inside \(\mathscr{C}_{0}\) within \(\mathscr{M}\). By Theorem 3.3, a secondary Hopf bifurcation of limit cycles from \(e_{-}\) occurs inside the limit cycle \(\mathscr{C}_{0}\).
Remark that equilibria \(e_{\pm}\), the origin, and limit cycles \(\mathscr{C}_{-}^{1}\) and \(\mathscr{C}_{0}\) all live on the center manifold \(\mathscr{M}\). Hence for controller coefficient choices from region (d), the controlled system (5.1) has a spiral source, a saddle, a spiral sink and two limit cycles. The leading estimates for angular frequency and radius of \(\mathscr{C}_{-}^{1}\) are given by \(\frac{\sqrt{2}}{4}(16\alpha\nu_{1}-(\nu_{2}-\alpha\nu_{1})^{2})\) and \(-\frac{8\sqrt{2}}{7\alpha\alpha^{4}}\sqrt{\frac{14}{3(\alpha-1)}(\nu_{2}- \alpha\nu_{1})-184((\nu_{2}-\alpha\nu_{1})^{2}-4\alpha\nu_{1})}-\frac{32}{7} \frac{\sqrt{4\alpha\nu_{1}-(\nu_{2}-\alpha\nu_{1})^{2}}}{a_{1}}\), respectively. However, \(\mathscr{C}_{-}^{1}\) is an unstable limit cycle. Hence, amplitude size control of the oscillating dynamics is typically determined by the stable limit cycle \(\mathscr{C}_{0}\). Figure 10(b) represents controlled dynamics associated with region (d). Controlled trajectories for initial values \((0.004,-0.1,0)\) converge to \(e_{-}\) when controller coefficients \((\nu_{1},\nu_{2})=(0.02,0.068)\) are taken from region (d). #### Region (e): Disappearance of \(\mathscr{C}_{-}^{1}\) via homoclinic bifurcation at \(T_{HmC-}\). The limit cycle \(\mathscr{C}_{-}^{1}\) collides with the origin and constructs a homoclinic cycle \(\Gamma_{-}\) exactly when controller coefficients are taken from homoclinic controller set \(T_{HmC-}\); see Theorem 3.5. Therefore, both of \(\mathscr{C}_{-}^{1}\) and \(\Gamma_{-}\) disappear when we take controller coefficients from region (e) in Figure 9(a). Hence, there exist a source, a saddle, a sink and an enlarged stable limit cycle \(\mathscr{C}_{0}\). Trajectories in Figure 10(c) for initial values \((-0.015,-0.001,0.015)\) converge to \(e_{-}\) for \((\nu_{1},\nu_{2})=(0.02,0.03)\) from region (e). Controlled dynamics of region (f): \(\mathscr{C}_{0}\) disappears via homoclinic bifurcation at \(T_{HmC-}\). The limit cycle \(\mathscr{C}_{0}\) collides with origin for controller coefficients taken from transition variety \(T_{HmC}\). Hence, limit cycle \(\mathscr{C}_{0}\) disappears from the local dynamics of controlled Chua system when we take controller coefficients from region (f). Therefore, there are a source, a saddle, a sink and no limit cycle. Figure 10(d) depicts a trajectory that converges to \(e_{-}\) from initial values \((-0.02,-0.001,0.1)\) for controllers \((\nu_{1},\nu_{2})=(0.02,0)\) corresponding with region (f). For controller coefficients from region (f), system (5.1) has three equilibria and no limit cycle. Region (g): Limit cycle \(\mathscr{C}_{+}^{1}\) appears when controller coefficients cross \(T_{HmC+}\). By Theorem 3.5, there is a stable limit cycle \(\mathscr{C}_{+}^{1}\) for controller coefficients from region (g), _i.e.,_ there is a homoclinic variety \(T_{HmC+}\) at the border between regions (f) and (g). In fact, there is a homoclinic cycle \(\Gamma_{+}\) (encircling \(e_{+}\)) for controller coefficients from \(T_{HmC+}\). There are a stable limit cycle \(\mathscr{C}_{+}^{1}\) encircling unstable equilibrium \(e_{+}\), and two more equilibria outside of \(\mathscr{C}_{+}^{1}\). The latter equilibria are a saddle and a sink \(e_{-}\). 
All equilibria, the homoclinic cycle \(\Gamma_{+}\), and the limit cycle \(\mathscr{C}_{+}^{1}\) live on the center manifold \(\mathscr{M}.\) Trajectories in Figure 11(a) converge to the stable limit cycle \(\mathscr{C}_{+}^{1}\) and to \(e_{-}\) for controller coefficients \(\nu_{1}=0.018\) and \(\nu_{2}=-0.016\) from region (g) and initial values \((0.02,0.005,-0.02)\) and \((-0.015,-0.001,0.3)\), respectively.

Region (h): Limit cycle \(\mathscr{C}_{+}^{1}\) shrinks in size to coalesce with \(e_{+}\) and disappear. Theorem 3.3 concludes that there is a supercritical Hopf bifurcation at \(T_{H+}\), where \(\mathscr{C}_{+}^{1}\) is born from \(e_{+}.\) Controller coefficients crossing \(T_{H+}\) and entering region (h) lead to a change of stability for \(e_{+}\) and the disappearance of \(\mathscr{C}_{+}^{1}.\) Hence, for controller coefficients from region (h), there exist two stable equilibria \(e_{\pm}\), a saddle at the origin, and no limit cycle. Trajectories in Figure 11(b) for initial values \((0.013,-0.03,0.5)\) and \((-0.013,-0.001,0.1)\) converge to \(e_{+}\) and \(e_{-}\), respectively.

**Remark 5.3** (Stabilization and regularization for controller coefficient choices from regions (g) and (h)).: Controller coefficients associated with region (h) lead to two asymptotically stable equilibria \(e_{+}\) and \(e_{-}\), whose combined basin of attraction is the whole state space except the stable and unstable manifolds of the saddle origin. Therefore, controller coefficients chosen from region (h) give a regularization approach for the system, where the controlled trajectories converge to equilibria close to the origin; also see [31] for a similar phenomenon. Since the Hopf bifurcation at the border between regions (h) and (g) is supercritical, controller coefficient choices from region (g) have a stabilization property. In fact, every trajectory converges to either \(\mathscr{C}_{+}^{1}\) or \(e_{-}\), except when a trajectory falls exactly on the stable or unstable manifolds of the saddle origin. The latter is practically impossible due to imperfections, errors, etc., and the fact that the stable and unstable manifolds are structurally unstable and have zero Lebesgue measure.

## 6 Conflict of interest

There is no conflict of interest to report.

## 7 Data availability statement

There is no data associated with this paper to report.
2309.02314
The role of three-nucleon potentials within the shell model: past and present
We survey the impact of nuclear three-body forces on structure properties of nuclei within the shell model. It has long been acknowledged, since the seminal works of Zuker and coworkers, that three-body forces play a fundamental role in making the monopole component of shell-model Hamiltonians, derived from realistic nucleon-nucleon potentials, able to reproduce the observed evolution of the shell structure. In the vast majority of calculations, however, their effects have been taken into account by shell-model practitioners by introducing ad hoc modifications of the monopole matrix elements. During last twenty years, a new theoretical approach, framed within the chiral perturbation theory, has progressed in developing nuclear potentials, where two- and many-body components are naturally and consistently built in. This new class of nuclear forces allows to carry out nuclear structure studies that are improving our ability to understand nuclear phenomena in a microscopic approach. We provide in this work an update on the status of the nuclear shell model based on realistic Hamiltonians that are derived from two- and three-nucleon chiral potentials, focusing on the role of the three-body component to provide the observed shell evolution and closure properties, as well as the location of driplines. To this end, we present the results of shell-model calculations and their comparison with recent experimental measurements, which enlighten the relevance of the inclusion of three-nucleon forces to master our knowledge of the physics of atomic nuclei.
L. Coraggio, G. De Gregorio, T. Fukui, A. Gargano, Y. Z. Ma, Z. H. Cheng, F. R. Xu
2023-09-05T15:29:25Z
http://arxiv.org/abs/2309.02314v1
# The role of three-nucleon potentials within the shell model: past and present ###### Abstract We survey the impact of nuclear three-body forces on structure properties of nuclei within the shell model. It has long been acknowledged, since the seminal works of Zuker and coworkers, that three-body forces play a fundamental role in making the monopole component of shell-model Hamiltonians, derived from realistic nucleon-nucleon potentials, able to reproduce the observed evolution of the shell structure. In the vast majority of calculations, however, their effects have been taken into account by shell-model practitioners by introducing _ad hoc_ modifications of the monopole matrix elements. During last twenty years, a new theoretical approach, framed within the chiral perturbation theory, has progressed in developing nuclear potentials, where two- and many-body components are naturally and consistently built in. This new class of nuclear forces allows to carry out nuclear structure studies that are improving our ability to understand nuclear phenomena in a microscopic approach. We provide in this work an update on the status of the nuclear shell model based on realistic Hamiltonians that are derived from two- and three-nucleon chiral potentials, focusing on the role of the three-body component to provide the observed shell evolution and closure properties, as well as the location of driplines. To this end, we present the results of shell-model calculations and their comparison with recent experimental measurements, which enlighten the relevance of the inclusion of three-nucleon forces to master our knowledge of the physics of atomic nuclei. ###### Contents * 1 Introduction * 2 Three-body forces * 2.1 _Backgrounds_ * 2.2 _Chiral effective field theory for three-nucleon forces_ * 3 Shell model * 3.1 _Generalities_ * 3.2 _The derivation of realistic effective interactions and operators_ * 3.2.1 _The perturbative expansion of effective shell-model Hamiltonian_ * 3.2.2 _The perturbative expansion of effective shell-model decay operators_ * 3.3 _Gamow shell model with three-body forces_ * 4 Applications and comparison with experiment * 4.1 _Benchmark calculations in the \(0p\)-shell region_ * 4.2 _Approaching the weakly bound systems_ * 4.2.1 _The limit of oxygen isotopes_ * 4.2.2 _3NF and continuum in neutron-rich oxygen isotopes_ * 4.2.3 _3NF and continuum in proton-rich Borromean \({}^{17}\)Ne_ * 4.2.4 _The calcium isotopes dripline_ * 4.3 _Shell evolution and the role of three-body forces_ * 4.3.1 _Overview: the \(fp\) shell region_ * 4.3.2 _Monopole interaction and effective single-particle energies_ * 4.3.3 _Spin-tensor decomposition of the shell-model interaction_ * 5 Summary and conclusions * A Three-body matrix elements for shell-model calculations * A.1 Three-body states * A.2 Antisymmetrization * A.3 Structures of three-body matrix elements * A.3.1 \(JT\)-coupled three-body matrix elements * A.3.2 Chiral three-body potentials and nonlocal regularization * A.3.3 Contact term * A.3.4 One-pion exchange plus contact term * A.3.5 Two-pion exchange term ## 1 Introduction The awareness of a defined role of many-body forces in the study of nuclear systems traces back to the early stages of meson theory [1]. 
As a matter of fact, there is no guarantee that the picture of the meson degrees of freedom to be frozen as soon as they have created the interaction between two nucleons - and then being responsible only for two-nucleon forces (2NFs) - may work in any nuclear environment, in any energy regime, and, more significantly, within any desired degree of accuracy [2, 3]. The theoretical progress in the construction of high-quality nucleon-nucleon (\(NN\)) potentials able to reproduce large sets of two-nucleon data [4, 5], as well as the advancement of high-precision nuclear structure approaches [6, 7], has established that the sole use of two-body nuclear forces does not provide a fully satisfactory reproduction of the low-energy spectroscopy of light nuclear systems [8]. However, two main issues have slowed the development of nuclear structure calculations employing three-nucleon forces (3NFs). First, the little knowledge of a mechanism providing many-body forces consistently with the nature of the \(NN\) interaction, as, for example, the pion-nucleon (\(\pi N\)) scattering amplitude which is a fundamental quantity to construct 3NF contributions in meson theory [3]. Second, the difficulty to manage three-nucleon (3N) potentials within a many-body system, whose solution requires formalisms that are computationally extremely demanding. As regards the first issue, a major breakthrough in the last two decades has been the derivation of nuclear potentials in terms of the chiral perturbation theory (ChPT) to build realistic \(NN\) and 3N forces starting from a chiral Lagrangian. This idea goes back to the seminal work of Weinberg [9, 10, 11], where the concept of an effective field theory (EFT) has been introduced to study the \(S\)-matrix for processes involving arbitrary numbers of low-momentum pions and nucleons. Within such an approach, the long-range component of the potential is ruled by the symmetries of low-energy quantum chromodynamics (QCD) - as the spontaneously broken chiral symmetry - and the short-range dynamics is absorbed into a complete basis of contact terms that are proportional to low-energy constants (LECs). The LECs may be fitted to \(NN\) and 3N data, but in a future they could be determined by extrapolating them from lattice QCD (LQCD) results for light nuclear systems at the physical \(\pi\) mass [12, 13]. The main advantage of ChPT, as regards the need of consistency between \(NN\) and 3N potentials, is that it generates nuclear two- and many-body forces on an equal footing [14, 15, 16, 17]. In fact, most interaction vertices that appear in the three- and four-nucleon forces also occur in the two-nucleon ones. Since the LECs associated to these vertices are shared with the chiral \(NN\) potential, then consistency requires that for the same vertices the same parameter values are used in the many-body components of the nuclear Hamiltonian. This new generation of two- and three-nucleon forces has been successfully employed to study both spectroscopic properties and scattering processes of light systems within the framework of the _ab initio_ no-core shell model (NCSM) method [6, 18, 19, 20]. However, the computational difficulty to manage a full treatment of three-body correlations increases rapidly with the mass of the nuclei under investigation making calculations unfeasible. A successful approach to overcome such a hindrance is to resort to the so-called normal-ordered decomposition of the three-body component of the nuclear Hamiltonian [21]. 
This is a convenient procedure in nuclear many-body methods which starts from an unperturbed reference state. The basic idea is to use the Wick's theorem [22] and re-arrange, with respect to the reference state, the three-body component of the nuclear Hamiltonian into a sum of zero-, one-, two-, and three-body terms [23]. Then, a truncation is performed neglecting the residual three-body term, which arises from the normal-ordering decomposition, and retaining only the zero-, one-, and two-body contributions. This approximation is obviously advantageous to simplify the theoretical expressions characterizing different nuclear many-body methods, and to drastically reduce the computational complexity. The validity of the normal-ordering approximation has been tested in light- and medium-mass nuclei [24, 25, 26], and it is currently a building block of _ab initio_ nuclear structure calculations where chiral \(NN\) and 3N potentials are employed [27, 21, 28, 29, 30, 31, 32, 33]. The question of the significance of including the effects of 3NFs in the derivation of the effective shell-model (SM) Hamiltonians \(H_{\text{effs}}\) becomes crucial when they are obtained by way of the many-body theory starting from realistic nuclear potentials [34, 35, 36, 37, 38]. In fact, for phenomenological \(H_{\text{effs}}\), where the single-particle (SP) energies and the two-body matrix elements (TBMEs) of the residual interaction are fitted or adjusted so as to reproduce a certain set of spectroscopic observables [39, 40], it is reasonable to conclude that 3N forces are implicitly taken into account. The first studies about the role of 3NFs in nuclear SM calculations have been carried out by Zuker and coworkers (see Ref. [41], where a complete list of reference can be found), who have extensively investigated the characteristics of the TBMEs of the residual SM interaction derived within the many body perturbation theory (MBPT) from realistic 2NFs by Kuo and Brown [42, 43]. They have shown that these \(H_{\rm eff}\)s need to be modified in their monopole component to reproduce the experimental evolution of shell closures as a function of the number of valence nucleons [44, 45, 46]. The inferred conclusion is that this deficiency traces back to the lack of a 3NF component in the nuclear realistic Hamiltonian, which affects negatively the \(H_{\rm eff}\) monopole component, as discussed in Ref. [47] that was inspired by the NCSM results of Ref. [48] indicating the need of 3NF in describing \(p\)-shell nuclei and in particular the ground state of \({}^{10}\)B. This is not a negligible drawback for a SM calculation, since the ability to describe the evolution of the nuclear spectroscopic properties along isotopic and isotonic chains, and consequently the formation of magic numbers, is the feature that has placed the SM in a central role within the structure of atomic nuclei, and represents also its main success [49, 50, 51]. Moreover, the \(H_{\rm eff}\) monopole component affects also the evolution of the calculated binding energies as a function of the number of valence nucleons along isotopic and isotonic chains, and then the ability of SM calculations to reproduce or predict correctly the edge of the nuclear chart. This is the reason why it is fundamental that \(H_{\rm eff}\)s should be able to reproduce the observed shell evolution and closures. 
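Coming back to the normal-ordering decomposition recalled earlier in this section, it can be made concrete with a short schematic sketch. The Python toy example below (our illustration, using a random antisymmetrized three-body tensor in a four-orbit basis rather than realistic matrix elements) extracts the zero-, one-, and two-body pieces of a 3N interaction with respect to a filled reference state; the residual normal-ordered three-body term is simply discarded, which is the approximation adopted in the calculations discussed below.

```python
import itertools
import numpy as np

def parity(perm):
    """Signature of a permutation given as a tuple of indices."""
    perm, sign = list(perm), 1
    visited = [False] * len(perm)
    for i in range(len(perm)):
        if not visited[i]:
            length, j = 0, i
            while not visited[j]:
                visited[j] = True
                j = perm[j]
                length += 1
            if length % 2 == 0:
                sign = -sign
    return sign

nsp = 4            # tiny single-particle basis {0, 1, 2, 3}
hole = [0, 1, 2]   # orbits occupied in the reference state

# Random antisymmetrized matrix elements W[p,q,r,s,t,u] = <pqr|W|stu>
rng = np.random.default_rng(1)
raw = rng.normal(size=(nsp,) * 6)
W = np.zeros_like(raw)
perms = [(p, parity(p)) for p in itertools.permutations(range(3))]
for pb, sb in perms:
    for pk, sk in perms:
        W += sb * sk * raw.transpose(list(pb) + [3 + k for k in pk])
W /= 36.0

# Normal-ordered zero-, one-, and two-body contributions w.r.t. the reference:
E0 = sum(W[a, b, c, a, b, c] for a, b, c in itertools.combinations(hole, 3))
f = np.zeros((nsp, nsp))
for a, b in itertools.combinations(hole, 2):
    f += W[:, a, b, :, a, b]          # induced one-body (monopole-like) term
Gamma = np.zeros((nsp,) * 4)
for a in hole:
    Gamma += W[:, :, a, :, :, a]      # induced two-body interaction

print("zero-body shift:", E0)
print("one-body correction f:\n", f)
```

In realistic applications \(W\) would contain the \(JT\)-coupled three-body matrix elements discussed in appendix A and the reference state would be the closed core; the sketch only shows the bookkeeping of the truncation.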
The first SM study where the effects of 3NFs have been explicitly included in the derivation of \(H_{\rm eff}\) has been carried out by Otsuka and coworkers in order to reproduce the oxygen-isotope dripline [52]. In this work, the authors aimed to study what are the underlying conditions to reproduce the limit of oxygen isotopes as bound systems, which is experimentally very close to the stability line [53, 54]. To this end, they derived an effective Hamiltonian for the \(sd\)-shell model space within the MBPT framework [36], starting from a realistic \(NN\) potential as well as from an \(NN\)+3N potential. The two-body component of the nuclear Hamiltonian was chosen to be a potential constructed by way of the ChPT at next-to-next-to-next-to-leading order (N\({}^{3}\)LO) [55], whose high-momentum components were renormalized by way of the \(V_{\rm low-}\)\(k\) procedure [56], while the three-body contribution was given by the Fujita-Miyazawa \(\Delta\)-exchange force [57] or the chiral 3NF term at N\({}^{2}\)LO. In order to manage the 3N component in SM calculations, its contribution was evaluated by way of the previously mentioned normal-ordering approximation, and such a contribution was shown to be crucial to obtain \({}^{24}\)O as the last bound oxygen isotope. Starting from the work of Ref. [52] extensive studies have then carried out on the role of 3NFs in SM calculations to reproduce the spectroscopy and the binding energies of nuclei belonging to the \(sd\)-shell region [58, 59, 60, 61, 62], as well as to investigate and predict the nuclear structure of heavy calcium isotopes [63, 59, 61, 25]. For all these works, the \(H_{\rm eff}\)s have been derived starting from two- and three-body potentials built up within the chiral perturbative expansion and softened by way of the \(V_{\rm low-}\)\(k\) technique [56] or the similarity renormalization-group (SRG) approach [64, 65]. Chiral \(NN\) and 3N forces have been also the starting point of non-perturbative approaches to the derivation of \(H_{\rm eff}\), such as the SM coupled cluster (SMCC) [66, 67, 68] and the valence-space in-medium SRG (VS-IMSRG) [69, 70], and a comprehensive review about tackling the problem of deriving \(H_{\rm eff}\) within _ab initio_ methods can be found in Ref. [37] where an extension of the normal-ordering approximation using a multi-reference state is also outlined. In particular, calculations within the VS-IMSRG approach have validated the need of 3NFs to reproduce the experimental behavior of ground-state (g.s.) energies of oxygen and calcium isotopes [37], and have been employed to provide theoretical insight in many experimental works [71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81]. In 2018, the authors of the present work have started a research plan aimed to employ chiral two- and three-body potentials both in standard SM calculations as well as in Gamow SM (GSM) [82, 83, 84], the latter being focused on the description of weakly-bound nuclei by coupling bound to resonant states (Gamow states). The main goal of such an investigation is to single out the role of 3NF among the main components of the residual SM interaction, starting from nuclear potentials based on ChPT and linked to the QCD, namely the fundamental theory of strong interactions [85]. 
In the first work of our project, we have studied the spectroscopy of \(p\)-shell nuclei [86] by employing an \(H_{\rm eff}\) derived from an \(NN\) potential obtained within the ChPT at N\({}^{3}\)LO [55], and including also one- and two-body components of a normal-ordered 3N chiral potential at N\({}^{2}\)LO. The matrix elements of the 3NF have been constructed using consistently the same LECs belonging to the components of the 2NF, the only two extra LECs \(c_{D},c_{E}\) - which are attached to the 3N 1-\(\pi\) exchange and contact terms, respectively - have been chosen to be the same as fixed in Ref. [87]. With this choice of the \(NN\) and 3N forces we have intended to benchmark our SM results with those of _ab initio_ NCSM for the same class of nuclei [87, 88] in order to verify the quality of the approximations which characterize our calculations. The results reported in that paper evidenced a substantial agreement between the SM and NCSM calculations, and a positive assessment for the MBPT to provide reliable \(H_{\rm eff}\)s starting from \(NN\) and 3N forces. Furthermore, it is worth emphasizing that in Ref. [86] we have shown that the 3NF is essential to make explicit the spin-orbit splitting of the \(0p\) orbitals. The success of this work has been the starting point for further SM studies, and the focus has been shifted to nuclei belonging to \(fp\) shell, which provide the best paradigm to investigate the role of 3NFs to reproduce the observed shell evolution of isotopic chains belonging to this region. In Ref. [89], it has been shown that 3NFs are crucial to obtain the shell closure at \(N=28\), and to provide SP-energy splittings which are consistent with the interpretation of \({}^{56}\)Ni as a doubly magic nucleus. This feature has been the cornerstone to obtain the evolution of the excitation energies of the yrast \(J=2^{+}\) states of calcium, titanium, chromium, iron, and nickel isotopes in agreement with the experimental one [89]. The paper on \(fp\) nuclei has triggered two following studies on the location of the neutron dripline for calcium [90] and titanium [91] isotopic chains, both aimed to assess the relevance of accounting for induced many-body forces, which originate from the 2NF in nuclei with more than two valence nucleons (see Section 2.1). For a thorough SM investigation of Ca and Ti isotopes, which are experimentally found to be stable beyond \(N=40\), these two works adopt a model space larger than the standard \(fp\) shell including the neutron \(0g_{9/2}\) orbital. Such an enlargement of the number of configurations of the nuclear wave functions, which pushes the performance of the available SM codes to their computational limits, has been proven to be important to improve the agreement between theory and experiment. As mentioned previously, the role of 3NFs has been investigated in the framework of the GSM [82, 83, 84] for weakly-bound nuclei. In Ref. [92], the three-body component of the nuclear Hamiltonian and continuum states have been shown to be fundamental to reproduce the dripline position and the unbound properties of oxygen isotopes beyond the dripline. The GSM, with the inclusion of 3NFs, reproduces the experimental resonance widths of the \({}^{24}\)O excited states, and predicts the particle-emission widths for other resonant states in \({}^{24,25,26}\)O. 
The GSM with chiral \(NN\) and 3N forces has been employed also to study the structure of the halo nucleus \({}^{17}\)Ne, and it has been observed that the repulsive behavior of the 3NF is decisive to raise the energy of \({}^{16}\)F over the threshold of the proton emission, leading to the Borromean nature of \({}^{17}\)Ne [93]. In a recent work, the focus has been spotted on the mirror-symmetry breaking of \(sd\)-shell mass region, and it has been found that, for \(Z(N)=8\) isotopes (isotones), 2NFs only cannot provide the correct binding energies, nucleon separation energies and excitation spectra [94]. Let us now outline the structure of the present work. In the following section, we sketch out the essentials of our understanding of nuclear many-body forces and the framework of deriving nuclear potentials in terms of chiral EFT. In Section 3 we present the nuclear SM, which is the many-body method where our calculations are grounded on, and the theoretical aspects related to the derivation of effective SM operators starting from realistic nuclear forces. In Section 4, we review a large variety of the latest nuclear structure calculations that have been performed in terms of SM employing realistic two- and three-body potentials. Results for nuclei ranging from light and weakly-bound systems up to intermediate-mass nuclei belonging to the \(fp\) shell are presented, with the aim to investigate the role of 3NFs in explaining phenomena such as the location of neutron dripline of isotopic chains, the Borromean structure of halo nuclei, as well as the shell evolution as a function of the number of valence nucleons. The last section is devoted to summarizing our considerations about the current status of SM with three-body potentials, and to looking over the future evolution of this kind of approach to the study of nuclear structure problems. In the appendix A, we present a few of the theoretical details of the methods which are needed to employ 3NFs in SM calculations. ## 2 Three-body forces ### _Backgrounds_ As mentioned in the Introduction, the first example of three-body force was the electromagnetic force introduced by Primakoff and Holstein in their seminal paper in Ref. [1], where they introduced a three-body term in a non-relativistic Schrodinger equation to account for the creation of relativistic particle-antiparticle pairs. In the very same paper, the authors employed the meson theory of the nuclear force among nucleons, analogously to the electromagnetic one among charged particles, to introduce the contribution of a three-nucleon potential. Actually, a theoretical framework that assumes the meson exchange as the source of the two-nucleon potential but, at the same time, freezes the meson degrees of freedom out of many-body systems is not a consistent picture [3]. This means that meson exchange should be also considered the source of three-, four-,..., many-nucleon forces, whose relevance with respect to the two-nucleon one is somehow model-dependent. There are a few indications that the inclusion of a 3NF, alongside a 2NF, may improve the quality of theoretical calculations in reproducing the observables of three-nucleon systems. The role of 3NFs may be considered circumstantial, in other words, there is evidence that some specific theoretical results cannot be improved, when compared with data, by considering only a two-body component for the nuclear potential [2]. 
More precisely, the introduction of a 3NF may lower the discrepancy between the experimental and theoretical binding energies (BEs), charge radii, and charge form factors of \({}^{3}\)H and \({}^{3}\)He in terms of non-relativistic nuclear models. Calculations of three-nucleon systems in terms of a realistic 2NF only suffer from an unfavorable correlation between BE and charge radius. In fact, 2NFs that reproduce the experimental size of the nucleus underestimate the BE and, vice versa, if the theory reproduces the correct BE then the radius is too small. Another observable whose reproduction evidences the need of resorting to three-body forces is the charge form factor of the three-nucleon system \({}^{3}\)He [95]. As a matter of fact, the \({}^{3}\)He charge density, which can be obtained from the experimental charge form factor as a function of the momentum transfer, exhibits a "hole" in the inner part of the nuclear volume at variance with a flat profile that is obtained with calculations performed with 2NF only (see Fig. 3 in Ref. [2]). The inclusion of 3NFs, as well as of two-body electromagnetic meson-exchange currents [96, 97], to calculate \({}^{3}\)H, \({}^{3}\)He charge form factors has proved to be crucial in order to reproduce the data [98]. Both of them - 3NF and meson-exchange currents - originate from the same source of many-body nuclear forces: nucleons are not point-like particles, and their quark structure needs to be accounted for effectively in terms of meson exchange and extra degrees of freedom. These concepts of extra degrees of freedom and meson exchange among nucleons introduce us to the basic question: what is the definition of a 3NF in terms of microscopic degrees of freedom? A sound definition of nuclear many-body forces is the one which can be found in Refs. [2, 3], namely that we should express them as irreducible functions of coordinates or momenta of N nucleons by way of irreducible Feynman diagrams which cannot be generated by iterating \(NN\) interactions. The contributions that are represented by diagrams constructed merely as products of 2NF vertices are then named induced many-body forces, and they are taken into account by the diagonalization of the nuclear Hamiltonian of the many-body method employed to calculate energies and wave functions. To exemplify, in Fig. 1 we have reported a diagram which, in a nuclear model based on a \(NN\) potential including a one-pion exchange component, corresponds to an induced 3NF that is distinct from a genuine 3NF. The distinction between many-body forces and correlations is based on the choice of the internal degrees of freedom of a nucleus. For example, if the adopted nuclear model freezes out the \(\Delta\)-isobar degree of freedom, then many-body diagrams that include the \(\Delta\) as an intermediate state between two-body vertices are to be considered as a component of a real many-body force. The most commonly considered 3NF in meson-exchange nuclear models is the so-called \(2\pi\)-exchange three-nucleon force (\(2\pi\)-3NF), which is reported as diagram \(a\) in Fig. 2. Two main approaches have been followed to derive such a potential. The first is to use the current algebra and partially-conserved axial-current (PCAC) constraints to extrapolate the off-mass-shell parametrization of the \(\pi\)-N scattering amplitude from on-mass-shell properties [99]. This route has generated the well-known Tucson-Melbourne 3N potential [100, 101], which has been employed for calculations of infinite nuclear matter [100].
The other way to develop 3NFs is to resort to field theory by carrying out a diagrammatic expansion. A relevant example of such an approach is the celebrated Fujita-Miyazawa force [57], where a component of the off-shell amplitude is the scattering into an intermediate \(\Delta\)-isobar state, which is reported as diagram \(b\) in Fig. 2. This pioneering potential has been the first attempt to frame nuclear 3NFs in terms of meson exchange, and has been considered for calculations of the equation of state (EOS) of infinite nuclear matter [102, 103] as well as for nuclear structure studies of finite nuclei [52]. It is worth mentioning that a good description of the nuclear matter saturation properties can also be provided by a relativistic meson-exchange potential, as discussed in Ref. [104].

Figure 1: Induced 3NF constructed by iterating two one-pion exchange 2NFs.

Figure 2: Diagram _a_: \(2\pi\)-exchange three-nucleon force. Diagram _b_: Fujita-Miyazawa force. See text for details.

The above-mentioned approaches suffer from both the lack of a proper hierarchy in the perturbative expansion of the nuclear Hamiltonian and the lack of a tight connection between the derivation of the \(NN\) and 3N forces. This drawback has been overcome in the last twenty years with the advent of a new theoretical approach to the derivation of nuclear forces, based on ChPT, where the starting point is a chiral Lagrangian that traces back to the work of Weinberg [9, 10, 11], calling in the concept of an EFT for the \(S\) matrix of processes involving an arbitrary number of low-momentum pions and nucleons. In the following section, the approach to many-nucleon forces in the framework of ChPT will be sketched out in its essentials.

### _Chiral effective field theory for three-nucleon forces_

As mentioned in Section 1, the derivation of high-precision nuclear potentials based on ChPT represents a major breakthrough of the last two decades [16, 17, 55, 105, 106]. Nowadays, this class of theoretical potentials is widely employed to link the fundamental theory of strong interactions, QCD, to nuclear many-body systems. The derivation of nuclear forces starting from a chiral Lagrangian is framed within the EFT, by employing an arbitrary number of low-momentum pions and nucleons. The long-range forces are then ruled by the symmetries of low-energy QCD and, particularly, by the spontaneously broken chiral symmetry, while the short-range dynamics is absorbed into a complete basis of contact terms that are proportional to LECs, which are determined in order to reproduce few-body-system data. In our perspective - which focuses on the role of 3NFs in the study of the structure of finite nuclei - the main advantage of ChPT is that it generates nuclear two- and many-body forces on an equal footing [14, 15, 16]. Moreover, most interaction vertices that appear in the 3NF and in the four-nucleon force (4NF) also occur in the 2NF. This means that the parameters carried by these vertices, as well as the LECs of the 2NF contact terms, are determined by the construction of the chiral 2NF, and consistency then requires that the same parameter values are inserted at the same vertices in the many-body-force contributions. When the chiral perturbation expansion is performed, the first non-vanishing 3NF occurs at N\({}^{2}\)LO [16, 17]. At this order, there are three 3NF topologies: two-pion exchange (2PE), one-pion exchange plus a \(NN\)-contact interaction (1PE), and a pure 3N-contact interaction. The three topologies are reported in Fig. 3.
The 2PE 3N-potential is given by \[v_{3N}^{(2\pi)}=\left(\frac{g_{A}}{2f_{\pi}}\right)^{2}\frac{1}{2}\sum_{i\neq j\neq k}\frac{(\mathbf{\sigma}_{i}\cdot\mathbf{q}_{i})(\mathbf{\sigma}_{j}\cdot\mathbf{q}_{j})}{(q_{i}^{2}+m_{\pi}^{2})(q_{j}^{2}+m_{\pi}^{2})}\;F_{ijk}^{ab}\;\tau_{i}^{a}\tau_{j}^{b}. \tag{1}\] Here, \(\mathbf{\sigma}_{i}(\mathbf{\tau}_{i})\) is the Pauli spin (isospin) matrix of nucleon \(i\), and the transferred momentum is \(\mathbf{q}_{i}\equiv\mathbf{p}_{i}^{\prime}-\mathbf{p}_{i}\), with \(\mathbf{p}_{i}\) and \(\mathbf{p}_{i}^{\prime}\) being the initial and final momenta, respectively. The other quantities entering the expression are the axial coupling constant \(g_{A}\), the pion mass \(m_{\pi}\), and the pion-decay constant \(f_{\pi}=92.4\) MeV, in natural units (\(c=\hbar=1\)).

Figure 3: The three-nucleon potential at N\({}^{2}\)LO. From left to right: 2PE, 1PE, and contact diagrams.

In the above expression we have used the definition \[F^{ab}_{ijk}=\delta^{ab}\left[-\frac{4c_{1}m_{\pi}^{2}}{f_{\pi}^{2}}+\frac{2c_{3}}{f_{\pi}^{2}}\ \mathbf{q}_{i}\cdot\mathbf{q}_{j}\right]+\frac{c_{4}}{f_{\pi}^{2}}\sum_{c}\epsilon^{abc}\ \tau_{k}^{c}\ \mathbf{\sigma}_{k}\cdot(\mathbf{q}_{i}\times\mathbf{q}_{j}). \tag{2}\] It is worth noticing that the 2PE contribution to the structure of the 3NF does not contain any new parameters with respect to those appearing in the expression of the N\({}^{2}\)LO 2NF, because the LECs \(c_{1}\), \(c_{3}\), and \(c_{4}\) have to be determined when fitting the 2NF to the data of the two-nucleon system. The 1PE contribution is \[v_{3N}^{(1\pi)}=-\frac{c_{D}}{f_{\pi}^{2}\Lambda_{\chi}}\ \frac{g_{A}}{8f_{\pi}^{2}}\sum_{i\neq j\neq k}\frac{(\mathbf{\sigma}_{i}\cdot\mathbf{q}_{j})(\mathbf{\sigma}_{j}\cdot\mathbf{q}_{j})}{q_{j}^{2}+m_{\pi}^{2}}\mathbf{\tau}_{i}\cdot\mathbf{\tau}_{j}, \tag{3}\] with \(\Lambda_{\chi}=700\) MeV. The 3N contact potential reads \[v_{3N}^{(\rm ct)}=\frac{c_{E}}{f_{\pi}^{4}\Lambda_{\chi}}\ \frac{1}{2}\sum_{j\neq k}\mathbf{\tau}_{j}\cdot\mathbf{\tau}_{k}. \tag{4}\] The last two 3NF terms involve the two new LECs \(c_{D}\) and \(c_{E}\), which do not appear in the two-body problem. There are many ways to constrain these two parameters. For example, the triton binding energy and the \(nd\) doublet scattering length \({}^{2}a_{nd}\), or the \({}^{4}\)He binding energy, can be used. However, because it is well known that these observables are correlated, an optimal overall fit of the properties of light nuclei is needed, as has been done in Ref. [87]. Another approach to fix \(c_{D}\) and \(c_{E}\) is to exploit the consistency of interactions and currents in chiral EFT [107, 108], since the LEC \(c_{D}\) appearing in \(v_{3N}^{(1\pi)}\) is also involved in the two-nucleon contact term of the \(NN\) axial current operator derived up to N\({}^{2}\)LO. Therefore, \(c_{D}\) may be constrained using the accurate experimental value of one observable from weak processes involving two- or few-nucleon systems. This procedure has been followed in Ref. [109], where the triton \(\beta\)-decay half-life, in particular its Gamow-Teller (GT) component, was used. The same choice was already adopted in a variety of previous studies to constrain the two-body axial current operator [110]. Because of the great success of ChPT nuclear forces in nuclear structure calculations, the N\({}^{2}\)LO three-body potential is the one that has been mostly employed in SM calculations with realistic 3NFs.
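To make the spin-isospin structure of Eqs. (1) and (2) more concrete, the following minimal Python sketch builds the 2PE operator for a single permutation term, \((i,j,k)=(1,2,3)\), and fixed momentum transfers, as a matrix in the three-nucleon spin \(\otimes\) isospin space. The LEC values and the momenta are placeholders chosen only for illustration, and the sketch deliberately omits the regulator functions and the partial-wave machinery needed to produce the HO-basis matrix elements discussed in Appendix A.

```python
import numpy as np

# Pauli matrices, used both for spin (sigma) and isospin (tau)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]
id2 = np.eye(2, dtype=complex)

def one_body(op, particle):
    """Embed a 2x2 operator acting on nucleon 'particle' (0, 1 or 2) in the 3-nucleon space."""
    mats = [id2, id2, id2]
    mats[particle] = op
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def dot_sigma(q, particle):
    """(sigma_particle . q) as an 8x8 matrix."""
    return sum(q[a] * one_body(pauli[a], particle) for a in range(3))

# Levi-Civita symbol for the c4 term of Eq. (2)
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[c, b, a] = 1.0, -1.0

# Constants in natural units (MeV); the LECs c1, c3, c4 (MeV^-1) are placeholder values
gA, fpi, mpi = 1.29, 92.4, 138.0
c1, c3, c4 = -0.81e-3, -3.2e-3, 5.4e-3

# Fixed momentum transfers (MeV) for the single permutation term (i, j, k) = (1, 2, 3)
q1 = np.array([100.0, 0.0, 0.0])
q2 = np.array([0.0, 150.0, 0.0])

prefactor = (gA / (2 * fpi))**2 * 0.5 / ((q1 @ q1 + mpi**2) * (q2 @ q2 + mpi**2))

# Spin structures: (sigma_1.q_1)(sigma_2.q_2) and sigma_3.(q_1 x q_2)
S12 = dot_sigma(q1, 0) @ dot_sigma(q2, 1)
S3 = dot_sigma(np.cross(q1, q2), 2)

# Isospin structures: tau_1.tau_2 and eps^{abc} tau_1^a tau_2^b tau_3^c
tau12 = sum(one_body(pauli[a], 0) @ one_body(pauli[a], 1) for a in range(3))
tau123 = sum(eps[a, b, c] * one_body(pauli[a], 0) @ one_body(pauli[b], 1) @ one_body(pauli[c], 2)
             for a in range(3) for b in range(3) for c in range(3))

# Assemble the (c1, c3) isoscalar piece and the c4 piece of F^{ab}_{ijk}, Eq. (2)
f_scalar = -4.0 * c1 * mpi**2 / fpi**2 + 2.0 * c3 / fpi**2 * (q1 @ q2)
v2pe = prefactor * (np.kron(S12, f_scalar * tau12) + np.kron(S12 @ S3, (c4 / fpi**2) * tau123))

print("2PE term, single permutation: 64 x 64 matrix, hermitian:",
      np.allclose(v2pe, v2pe.conj().T))
```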
The N\({}^{2}\)LO 3N potential is defined in momentum space and, in order to employ this potential in the derivation of the effective SM interaction, its matrix elements must be calculated in the harmonic-oscillator (HO) basis following a procedure that has been first reported in Ref. [87], for a potential characterized by a local regulator in momentum space. As concerns 3NFs beyond N\({}^{2}\)LO, their contributions remain unclear. At N\({}^{3}\)LO, the two-pion exchange 3NFs produce rather weak effects for the nucleon-deuteron system [111], while the short-range 3NFs may be significant [112]. The short-range 3NFs at N\({}^{4}\)LO, where thirteen LECs newly appear, may give sizeable contributions [113, 114]. In Appendix A, one can find the details on the calculation of the N\({}^{2}\)LO 3NF matrix elements in the HO basis, but employing a nonlocal regulator function. ## 3 Shell model ### _Generalities_ The nuclear SM represents one the most powerful tools for understanding the structure of atomic nuclei, in which the complexity of the nuclear many-body problem is reduced by considering only a limited number of microscopic degrees of freedom while the missing ones are taken into account by employing effective operators. This model is based on the assumption that each nucleon moves in a mean field created by the remaining \(A-1\) nucleons. The mean field gives rise to a shell structure, composed of single-particle states (_orbitals_) grouped in shells, which are well separated in energy from each other. To a first approximation, the nucleus can be considered as an inert core, made up of shells filled with neutrons and protons paired to total angular momentum \(J=0\), with the remaining nucleons (_valence nucleons_) located in orbitals on top of the inert core. Within the SM, which is also called the "interacting shell model" to underline the difference with the simple independent-particle description, the valence nucleons interact in a truncated space (_valence space_) spanned in general by a single proton and/or neutron shell above the inert core. Then, all the orbitals above the valence space are regarded as empty and constitute the external space. It is worth pointing out that in modern SM calculations larger valence spaces are considered, including proton and neutron cross-shell excitations, to describe some features of exotic neutron-rich nuclei. The SM is nowadays a well-established approach to investigate nuclei in different mass regions, as testified by the large number of successful calculations carried out during the 70 years since its introduction. The real beginning of the SM dates back to the end of 1940s and is connected to the publication of the papers by Mayer [49] and Haxel _et al._[50], even if the existence of shell structure was already evidenced during the previous two decades. It was, in fact, realized that nuclei with specific numbers of protons and/or neutrons (_magic numbers_) are more stable than others, which can be interpreted as a manifestation of an independent particle behavior. However, it was crucial for explaining the experimental regularities associated with the magic numbers the addition of a strong attractive spin-orbit term to the central mean field, as proposed in Refs. [49, 50]. 
It became immediately clear that the description of nuclei only in terms of a simple mean-field potential was a very crude approximation, and the inclusion of the interaction between valence nucleons was indispensable to break up the degeneracies when considering systems with two or more particles outside doubly-closed nuclei. Therefore, soon after the publication of the Mayer and Haxel's papers, a variety of two-body interactions was developed - essentially for single-orbital configurations - using central forces with different radial dependencies and including spin and isospin terms. A review of these first SM calculations can be found in Ref. [115]. The construction of interactions to be used in SM has always been a central issue within this approach, and still it is. The use of phenomenological interactions, which contain a certain number of parameters adjusted to reproduce the experimental data, has been largely adopted in SM calculations, but at the same time significant efforts have been focused on the derivation of microscopic interactions starting from realistic nuclear forces, namely from the bare nuclear potentials determined from scattering experiments. Before discussing this point in more detail, we would like to introduce the effective SM Hamiltonian, that in the coupled representation can be written as \[H_{\rm eff}=\sum_{a}\epsilon_{a}N_{a}-\frac{1}{4}\sum_{abcdJ}\langle ab;J\mid V _{\rm eff}\mid cd;J\rangle(-1)^{J}[a_{a}^{\dagger}a_{b}^{\dagger}]^{J}\cdot[ \tilde{a}_{c}\tilde{a}_{d}]^{J}, \tag{5}\] where the symbol \((\cdot)\) indicates the scalar product as usual, while the latin indices run over the orbitals of the neutron and proton valence spaces and stand for \((nlj)\), with \(n\) being the radial quantum number, \(l\) and \(j\), respectively, the orbital and total angular momentum. We have used the \(a_{am_{a}}^{\dagger}\) and \(\tilde{a}_{am_{a}}=(-1)^{j_{a}+m_{a}}a_{a-m_{a}}\) operators which, respectively, creates and annihilates one particle in a state of the underlying mean field, with \(m_{a}\) associated to the \(z\) component of \(j_{a}\). The one-body component of \(H_{\rm eff}\) is written in terms of the SP energies and the number operator \(N_{a}=(-1)^{j_{a}}a_{a}^{\dagger}\cdot\tilde{a}_{a}\). The TBMEs are antisymmetrized but unnormalized. The eigenvalue problem of the Hamiltonian (5) can be solved by employing as basis states appropriate combinations of the Slater determinants \[[\underbrace{a^{\dagger}_{am_{a}}a^{\dagger}_{bm_{0}}a^{\dagger}_{ cm_{e}}\ldots}_{\mathfrak{N}}]|C\rangle, \tag{6}\] where the set of SP orbitals \((a,b,c\ldots)\) corresponds to a given configuration and \(\mathfrak{N}\) is the number of valence nucleons. The unperturbed doubly-closed core, \(|C\rangle\), can be explicitly written as \[|C\rangle=\prod_{am_{a}\in\ filled\ shells}a^{\dagger}_{am_{a}}|0\rangle. \tag{7}\] In present days, there are several open source codes for performing SM calculations, such as NuShellX [116], BIGSTICK [117], ANTOINE or NATHAN [41], KSHELL [118], MFDN [119] and others (see [120] for more details). Some of them are developed for massive parallel computation and, therefore, can run on high-performance computing clusters. These type of codes are able to handle up to \(\sim 100\) billion dimensions, making it possible to approach nuclei with many valence nucleons in large valence spaces. 
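As a concrete, if schematic, illustration of the eigenvalue problem defined by Eqs. (5)-(7), the short Python sketch below builds and diagonalizes \(H_{\rm eff}\) for two identical valence nucleons in a toy two-orbital valence space. The SP energies and TBMEs are invented numbers, and normalized antisymmetrized TBMEs are assumed as input for simplicity; the production codes quoted above implement the same logic for many valence nucleons and huge basis dimensions.

```python
import numpy as np
from itertools import combinations_with_replacement

# Toy valence space for two identical valence nucleons: orbital -> (2j, SP energy in MeV).
# All numbers are invented for illustration only.
orbitals = {"0d5/2": (5, -4.14), "1s1/2": (1, -3.27)}

# Normalized, antisymmetrized TBMEs <ab;J|V_eff|cd;J> in MeV (invented values)
tbme = {
    (("0d5/2", "0d5/2"), ("0d5/2", "0d5/2"), 0): -2.82,
    (("0d5/2", "0d5/2"), ("1s1/2", "1s1/2"), 0): -1.33,
    (("1s1/2", "1s1/2"), ("1s1/2", "1s1/2"), 0): -2.19,
    (("0d5/2", "0d5/2"), ("0d5/2", "0d5/2"), 2): -0.95,
    (("0d5/2", "0d5/2"), ("0d5/2", "1s1/2"), 2): -0.40,
    (("0d5/2", "1s1/2"), ("0d5/2", "1s1/2"), 2): -1.10,
}

def pair_states(J):
    """Antisymmetrized two-particle states |(ab)J> with a <= b allowed by angular momentum."""
    states = []
    for a, b in combinations_with_replacement(orbitals, 2):
        ja, jb = orbitals[a][0] / 2, orbitals[b][0] / 2
        if not abs(ja - jb) <= J <= ja + jb:
            continue
        if a == b and J % 2 == 1:   # two identical nucleons in the same orbital: even J only
            continue
        states.append((a, b))
    return states

def v(bra, ket, J):
    for key in ((bra, ket, J), (ket, bra, J)):
        if key in tbme:
            return tbme[key]
    return 0.0

def hamiltonian(J):
    """Matrix of H_eff (Eq. (5)) in the two-particle basis for a given J."""
    basis = pair_states(J)
    H = np.zeros((len(basis), len(basis)))
    for i, bra in enumerate(basis):
        for k, ket in enumerate(basis):
            H[i, k] = v(bra, ket, J)
        H[i, i] += orbitals[bra[0]][1] + orbitals[bra[1]][1]   # one-body (SP energy) part
    return basis, H

for J in (0, 2):
    basis, H = hamiltonian(J)
    print(f"J={J} basis {basis} -> eigenvalues (MeV):", np.round(np.linalg.eigvalsh(H), 3))
```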
One component of \(H_{\rm eff}\) relevant for the following discussion is the monopole interaction, which, using explicitly the indices \(\tau,\tau^{\prime}\) for protons or neutrons, takes the form \[H_{\rm mon}=\sum_{a\tau}\epsilon_{a\tau}N_{a\tau}+\frac{1}{2}\sum_{ab\tau\tau^{\prime}}\bar{V}^{\tau\tau^{\prime}}_{ab}\,N_{a\tau}(N_{b\tau^{\prime}}-\delta_{ab}\delta_{\tau\tau^{\prime}}), \tag{8}\] where the notation \(\hat{J}=\sqrt{2J+1}\) is used in the following. The matrix elements \(\bar{V}^{\tau\tau^{\prime}}_{ab}\) are defined as \[\bar{V}^{\tau\tau^{\prime}}_{ab}=\frac{\sum_{J}\hat{J}^{2}\,\langle a\tau\,b\tau^{\prime};J\mid V_{\rm eff}\mid a\tau\,b\tau^{\prime};J\rangle}{\sum_{J}\hat{J}^{2}}\,, \tag{9}\] where the angular momentum \(J\) runs over all allowed values; they represent the angular-momentum-averaged TBMEs, or centroids, of the interaction. The monopole interaction corresponds to the spherical mean field as extracted from the SM Hamiltonian in open-shell nuclei [41] and therefore governs the evolution of the SP energies along isotopic and isotonic chains. Starting from the monopole interaction, effective single-particle energies (ESPEs) are defined as \[{\rm ESPE}(a\tau)=\epsilon_{a\tau}+\sum_{b\tau^{\prime}}\bar{V}^{\tau\tau^{\prime}}_{ab}\,n^{\tau^{\prime}}_{b}, \tag{10}\] which, as compared to the SP energies of the SM Hamiltonian (Eq. (5)), incorporate the mean effects of the other nucleons outside the inert core (see, for instance, Ref. [121]). Here the g.s. occupation number of the proton/neutron orbital \(b\) is denoted by \(n^{\tau^{\prime}}_{b}\) (a schematic numerical illustration of centroids and ESPEs is given below). Once the energies \(E_{\alpha}\) and wave functions \(|\psi_{\alpha}\rangle\) of the system under consideration are determined by solving the eigenvalue problem for the Hamiltonian of Eq. (5), it is possible to compute the matrix elements of operators which are related to physical quantities, like electromagnetic transition and decay strengths, or which appear in the form factors needed to evaluate reaction cross sections. The matrix elements of one-body operators can be written as \[\langle\psi_{f}||\Theta^{\lambda}_{\rm eff}||\psi_{i}\rangle=\sum_{ab}\frac{\langle\psi_{f}||[a^{\dagger}_{a}\tilde{a}_{b}]^{\lambda}||\psi_{i}\rangle}{\hat{\lambda}}\langle a||\Theta^{\lambda}_{\rm eff}||b\rangle=\sum_{ab}{\rm OBTD}(fiab\lambda)\langle a||\Theta^{\lambda}_{\rm eff}||b\rangle, \tag{11}\] where the one-body transition densities (OBTDs) represent in a compact form the nuclear structure information on the initial and final states involved in the process. Note that we have added the label "eff" to \(\Theta^{\lambda}\) to point out that a different operator with respect to the bare one should be used with SM wave functions since, as for the Hamiltonian, renormalizations due to the adopted truncated space are needed. As mentioned above, the choice of the Hamiltonian is one of the most crucial issues in the SM approach, which has attracted the attention of the nuclear community from the very beginning. At first, interest was essentially addressed to the development of phenomenological schematic interactions, which are parametrized functions of the nucleon coordinates with very simple or more complicated structures, depending on the included exchange operators. Among them, we mention the \(\delta\) and pairing forces, which, even if very simple, account for the short-range nature of the residual interaction and its tendency to correlate nucleons in zero-coupled pairs (\(J^{\pi}=0^{+}\)).
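The following minimal sketch illustrates the centroids of Eq. (9) and the ESPEs of Eq. (10) for a single nucleon species (isospin labels suppressed); all TBMEs, SP energies and occupation numbers are invented for illustration only.

```python
# Diagonal TBMEs <ab;J|V_eff|ab;J> in MeV, listed for the allowed J values (invented numbers)
diag_tbme = {
    ("0d5/2", "0d5/2"): {0: -2.5, 2: -1.0, 4: -0.3},
    ("0d5/2", "1s1/2"): {2: -0.9, 3: -0.6},
    ("1s1/2", "1s1/2"): {0: -2.0},
}

sp_energy = {"0d5/2": -4.1, "1s1/2": -3.3}      # bare SP energies epsilon_a (MeV)
occupation = {"0d5/2": 4.0, "1s1/2": 0.0}       # g.s. occupation numbers n_b

def centroid(a, b):
    """Angular-momentum-averaged TBME of Eq. (9): sum_J (2J+1) V_J / sum_J (2J+1)."""
    mes = diag_tbme.get((a, b), diag_tbme.get((b, a)))
    return sum((2 * J + 1) * v for J, v in mes.items()) / sum(2 * J + 1 for J in mes)

def espe(a):
    """Effective single-particle energy of Eq. (10)."""
    return sp_energy[a] + sum(centroid(a, b) * n for b, n in occupation.items())

for a in sp_energy:
    print(f"ESPE({a}) = {espe(a):6.2f} MeV   (bare SP energy {sp_energy[a]:5.2f} MeV)")
```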
Contemporaneously, there were efforts to derive the SM Hamiltonian from a realistic interaction between nucleons. It was soon evident that the peculiar properties of the bare potential, containing a strong repulsive core at short distances, prevent the description of the nucleus in terms of mean field and consequently within the SM framework. However, the introduction of the \(G\) matrix by Keith Brueckner and coworkers [122] was the first milestone for the development of a microscopic interpretation of the SM. In the \(G\) matrix, strong short-range correlations are renormalized by summing all two-particle ladder-type interactions. It can be, therefore, used to perform Hartree-Fock self-consistent calculations or taken, in principle, as residual interaction in SM calculations. A further relevant step along this line is represented by the paper of Kuo and Brown [42], in which a new effective interaction was derived for the \(sd\) shell starting from the \(G\) matrix by a perturbative expansion including terms up to second order in \(G\), namely the so-called _bubble_ diagrams corresponding to one particle-one hole (\(1p-1h\)) core-polarization excitations. These pioneering works made it evident that the interaction used in SM calculations cannot be the bare one between free nucleons. In fact, the SM Hamiltonian is defined in a reduced space, and should therefore account - in an effective way - for the omitted degrees of freedom, namely for the excitations of core particles into the valence and external spaces as well as for the excitations of valence particles in the external space. Then, as testified by the large number of papers published on the subject, great attention was dedicated to the derivation of the effective SM Hamiltonian within a perturbative approach and to the assessment of its role in the study of nuclear structure. Nowadays, substantial progress has been made regarding the starting bare potential as well as the many-body technique for constructing the effective interaction, that will be presented in detail in forthcoming sections. Within the class of phenomenological interactions, in which the introduced parameters are adjusted to reproduce a selected set of experimental data, an alternative way to the schematic interactions, introduced by Talmi in Ref. [123], consists in considering the Hamiltonian matrix elements themselves as free parameters. This approach is quite successful, and a comprehensive discussion and presentation of such a procedure can be found in Refs. [41, 115, 124, 125, 125]. Here, we report just a few examples of this kind of interactions, as the \(p\)-shell interaction by Cohen and Kurath [39], the so-called universal interaction developed for the \(sd\) shell by Brown _et al._[40, 126], or the GXPF1 [127] and JUN45 [128] interaction by M. Honma _et al._, respectively, for the \(fp\) and \(f_{5/2}pg_{9/2}\) valence spaces. For large valence spaces, the number of matrix elements increases drastically and they are determined by choosing a starting Hamiltonian, as for instance the \(G\) matrix of a realistic \(NN\) potential, and using the linear combination method of Ref. [40], where only selected linear combinations of matrix elements are varied to fit experimental data. All these phenomenological interactions have been largely used in nuclear structure studies providing a successful description of a certain variety of phenomena. It is worthwhile to point out that in the vast majority of SM calculations, only two-body interactions have been used. 
As mentioned in the Introduction, the explicit inclusion of 3NFs has historically been neglected in SM treatment due to the ambiguity in producing a 3N term consistent with the \(NN\) one, as well as to the difficulty in handling such a term in many-body systems. On the other hand, the good agreement between theory and experiment obtained with SM calculations employing phenomenological interactions suggests that the effects of 3NFs can be empirically taken into account. We already mentioned that Zucker and coworkers [47] argued that the main effects of 3NFs concentrate in the monopole component of the effective interaction. As a matter of fact, modifications of the monopole component were first proposed, without a clear connection with 3NFs, to cure the deficiencies of effective Hamiltonians derived from realistic \(NN\) potentials related to their bad saturation and shell formation properties [129]. The phenomenological adjustment of monopole terms in realistic effective Hamiltonians has been largely applied by the Strasbourg-Madrid group to various mass regions providing interactions able to give an accurate description of the nuclear spectroscopy. Examples of monopole corrected interactions are KB3 [129] in the \(fp\) shell, SDPF-U [130] in the \(sdfp\) shell, and LNPS [131] including the \(fp\) shell for protons and the \(f_{5/2}pg_{9/2}d_{5/2}\) orbitals for neutrons. In the last decade SM calculations that explicitly take into account 3NF have been carried out and they will be discussed in detail in the following sections. ### _The derivation of realistic effective interactions and operators_ The SM is grounded on the ansatz that each nucleon belonging to a nucleus moves independently in a spherically symmetric auxiliary potential, which accounts for the average interaction with the other protons and neutrons. This potential is usually described by a Woods-Saxon (WS) or an HO potential including a spin-orbit term. Actually, it is clear that the residual interaction between valence nucleons, which is not explicitly included in the one-body auxiliary potential, has to be considered to describe quantitatively the low-energy structure of nuclei with two or more valence nucleons confined to move in the valence space. In fact, the action of the residual interaction generates a mixing of different configurations thus removing the degeneracy of states belonging to the same configuration. In general, when considering only the valence nucleons interacting in the reduced number of orbitals of the valence space, one is left with the problem to construct a SM Hamiltonian and decay operators defined in a truncated space, but whose matrix elements should account for the neglected degrees of freedom. Namely, we need an effective Hamiltonian \(H_{\rm eff}\) and effective decay operators \(\Theta_{\rm eff}\). We start by sketching out the fundamentals of the derivation of \(H_{\rm eff}\) in a formal way, by considering, without loss of generality, an \(A\)-body Hamiltonian with no transitional invariance. This is appropriate in SM calculations performed including only one proton and/or neutron major shell above the closed core. A purely intrinsic Hamiltonian will be introduced in Section 3.3. The full Hilbert-space eigenvalue problem is then written as \[H|\Psi_{\alpha}\rangle=E_{\alpha}|\Psi_{\alpha}\rangle, \tag{12}\] where \[H=H_{0}+H_{1}, \tag{13}\] and \[H_{0}=\sum_{i=1}^{A}\left(\frac{p_{i}^{2}}{2m}+U_{i}\right), \tag{14}\] \[H_{1}=\sum_{i<j=1}^{A}V_{ij}^{NN}-\sum_{i=1}^{A}U_{i}\;. 
\tag{15}\] As mentioned before, to introduce the SM framework we must consider an auxiliary one-body potential \(U\) to break up the nuclear Hamiltonian as the sum of a one-body term \(H_{0}\), which describes the independent motion of the nucleons, and the residual interaction \(H_{1}\). Without any loss of generality and for the sake of simplicity, we assume that the interaction between the nucleons is described only by a two-body force, and neglect 3NF contributions. The generalization of the formalism to include a three-body potential will be considered later. The solution of Eq. (12) requires the diagonalization of the infinite matrix \(H\), a task that is obviously unfeasible. Then, one has to reduce this huge matrix to a smaller one \(H_{\rm eff}\) - defined in a model space made of the only configurations allowed by the valence nucleons within the adopted valence space - by requiring that its eigenvalues belong to the set of the eigenvalues of \(H\). This model space is defined in terms of a subset of eigenvectors of \(H_{0}\), \(|\Phi_{i}\rangle\), namely as appropriate combinations of the Slater determinants of Eq. (6), written, in general, in the angular momentum-coupled scheme. It is worth introducing the projection operators \(P\) and \(Q=1-P\), which project from the complete Hilbert space onto the model space and its complementary space, respectively. The operator \(P\) can be expressed by way of the states \(\Phi_{i}\) as \[P=\sum_{i=1}^{d}|\Phi_{i}\rangle\langle\Phi_{i}|, \tag{16}\] where \(d\) is the dimension of the model space. The projection operators \(P\) and \(Q\) then satisfy the properties \[P^{2}=P,\ \ Q^{2}=Q,\ \ PQ=QP=0. \tag{17}\] The aim of the effective SM interaction theory is to reduce the eigenvalue problem of Eq. (12) to a model-space eigenvalue problem \[H_{\rm eff}P|\Psi_{\alpha}\rangle=E_{\alpha}P|\Psi_{\alpha}\rangle, \tag{18}\] where \(\alpha=1,\ldots,d\) and \(H_{\rm eff}\) is defined only in the model space. As mentioned in Section 3.1, there are two ways to tackle the problem of deriving \(H_{\rm eff}\), namely by 1. employing a phenomenological approach, 2. starting from bare realistic nuclear forces and then resorting to the many-body theory. References about the phenomenological approach and some examples of this kind of interactions were given in Section 3.1, while here we discuss in some detail the second way. Nowadays, novel non-perturbative methods have been developed to derive a effective SM Hamiltonian starting from the bare nuclear interaction, like valence-space in-medium SRG (VS-IMSRG) [132], SM coupled cluster (SMCC) [67], or NCSM with a core [133, 134, 135, 136], all of them based on similarity transformations. These non-perturbative approaches are rooted in many-body theory and provide somehow different paths to \(H_{\rm eff}\). However, they can all be derived in a general theoretical framework, that consists in expressing \(H_{\rm eff}\) as the result of a similarity transformation acting on the original Hamiltonian \[H_{\rm eff}=Pe^{\cal G}He^{-\cal G}P, \tag{19}\] where the transformation is parametrized as the exponential of a generator \({\cal G}\), which needs to satisfy the decoupling condition \[QH_{\rm eff}P=0. \tag{20}\] An extended and up-to-date presentation of non-perturbative approaches to the derivation of \(H_{\rm eff}\) from realistic nuclear interactions can be found in Ref. 
[37], showing how the different methods can be derived in such a general framework and describing the approximation schemes that have to be employed in each of them. As mentioned in the Introduction, in the present review we focus on the MBPT approach to \(H_{\rm eff}\), since it is at the moment the one that has been most largely adopted in SM calculations where the role of 3NFs has been investigated. #### 3.2.1 _The perturbative expansion of effective shell-model Hamiltonian_ Here, we introduce the formalism of the perturbative derivation of the effective SM Hamiltonian \(H_{\rm eff}\), using the similarity transformation introduced by Lee and Suzuki [34, 137]. The starting point is the Schrodinger equation for the \(A\)-nucleon system in the whole Hilbert space as defined in Eq.(12). Then, by following Eqs. (13)-(15), we introduce an auxiliary one-body potential \(U\) to break up the nuclear Hamiltonian as the sum of an unperturbed one-body term \(H_{0}\), that describes the independent motion of the nucleons, and the residual interaction Hamiltonian \(H_{1}\). The robust energy gap between the shells allows considering as inert the \(A-\mathfrak{N}\) core nucleons, which fill the energy orbitals below the Fermi surface. The SP states that are accessible to the \(\mathfrak{N}\) valence nucleons are those included in the major shell placed in energy above the closed core, and constitute the valence space. The configurations allowed by the valence nucleons within this valence space define a reduced Hilbert space, the so-called model space, by way of a finite subset of \(d\) eigenvectors of \(H_{0}\). The operators that project the wave functions from the complete Hilbert space onto the model space and its complementary space are, respectively, \(P\) and \(Q\), satisfying the properties of Eq. (17). As already mentioned before, the goal is to reduce the eigenvalue problem of Eq. (12) to the model-space eigenvalue problem of Eq. (18). Therefore, we need to obtain a new Hamiltonian \(\mathcal{H}\) whose eigenvalues are the same of the Hamiltonian \(H\) for the \(A\)-nucleon system, but satisfying the decoupling equation between the model space \(P\) and its complement \(Q\) \[Q\mathcal{H}P=0, \tag{21}\] in order to guarantee that the effective Hamiltonian is \(H_{\rm eff}=P\mathcal{H}P\). Clearly, the Hamiltonian \(\mathcal{H}\) has to be obtained by way of a similarity transformation defined in the whole Hilbert space \[\mathcal{H}=X^{-1}HX. \tag{22}\] The class of transformation operators \(X\) such that \(\mathcal{H}\) satisfies the decoupling equation (21) is infinite, and Lee and Suzuki [34, 137] have suggested an operator \(X\) defined as \(X=e^{\omega}\). Without loss of generality, \(\omega\) is chosen to satisfy the following properties: \[\omega=Q\omega P, \tag{23}\] \[P\omega P=Q\omega Q=P\omega Q=0, \tag{24}\] with Eq. (23) implying that \[\omega^{2}=\omega^{3}=\...\ =0. \tag{25}\] The above equation leads us to write the operator \(X\) as \(X=1+\omega\), and, consequently, \(H_{\rm eff}\) takes the form \[H_{\rm eff}=P\mathcal{H}P=PHP+PHQ\omega, \tag{26}\] while the decoupling equation (21) can be expressed as \[QHP+QHQ\omega-\omega PHP-\omega PHQ\omega=0. \tag{27}\] This is a non-linear matrix equation and can easily provide a solution for \(\omega\) as long as the Hamiltonian \(H\) is explicitly expressed in the whole Hilbert space. Actually, this is not feasible and some sort of approximations would be necessary. A successful way to solve Eq. 
(27) and derive \(H_{\rm eff}\) for SM calculations is to introduce a vertex function, the \(\hat{Q}\) box, which is suitable for a perturbative expansion. In the following, we make explicit the \(\hat{Q}\)-box approach assuming, for the sake of simplicity, that the unperturbed Hamiltonian \(H_{0}\) is degenerate for those states belonging to the model space, \[PH_{0}P=\epsilon_{0}P. \tag{28}\] This limitation may be overcome by introducing multi-energy \(\hat{Q}\) boxes, which are able to account for non-degenerate spaces [35]. Actually, this approach is quite complicated for practical applications, but recently two methods have been introduced, which may be implemented straightforwardly in the derivation of \(H_{\rm eff}\)s with eigenstates of \(H_{0}\) non-degenerate in the model space. The details of these procedures are reported in Refs. [138, 139]. Starting from Eq. (26), the effective interaction \(H_{1}^{\rm eff}=H_{\rm eff}-PH_{0}P\) can be written in terms of \(\omega\) as \[H_{1}^{\rm eff}=P\mathcal{H}P-PH_{0}P=PH_{1}P+PH_{1}Q\omega, \tag{29}\] where we have used the diagonality of \(H_{0}\) in the \(P\) and \(Q\) states which implies \[QHP=QH_{1}P+QH_{0}P=QH_{1}P, \tag{30}\] \[PHQ=PH_{1}Q+PH_{0}Q=PH_{1}Q. \tag{31}\] Similarly, the decoupling equation (27) takes the form \[QH_{1}P+QHQ\omega-\omega(PH_{0}P+PH_{1}P+PH_{1}Q\omega)=QH_{1}P+QHQ\omega- \omega(\epsilon_{0}P+H_{1}^{\rm eff})=0, \tag{32}\] which leads to a new identity for the operator \(\omega\) \[\omega=Q\frac{1}{\epsilon_{0}-QHQ}QH_{1}P-Q\frac{1}{\epsilon_{0}-QHQ}\omega H _{1}^{\rm eff}. \tag{33}\] Finally, by inserting Eq. (33) into the identity (29) we obtain a recursive equation for \(H_{1}^{\rm eff}\) \[H_{1}^{\rm eff}(\omega)=PH_{1}P+PH_{1}Q\frac{1}{\epsilon_{0}-QHQ}QH_{1}P-PH_{ 1}Q\frac{1}{\epsilon_{0}-QHQ}\omega H_{1}^{\rm eff}(\omega). \tag{34}\] Then, by defining the \(\hat{Q}\)-box vertex function as \[\hat{Q}(\epsilon)=PH_{1}P+PH_{1}Q\frac{1}{\epsilon-QHQ}QH_{1}P, \tag{35}\] the recursive equation (34) can be written as \[H_{1}^{\rm eff}(\omega)=\hat{Q}(\epsilon_{0})-PH_{1}Q\frac{1}{\epsilon_{0}- QHQ}\omega H_{1}^{\rm eff}(\omega). \tag{36}\] Lee and Suzuki suggested two possible iterative schemes to solve Eq. (36), which are based on the calculation of the \(\hat{Q}\) box and its derivatives, known as the Krenciglowa-Kuo (KK) and the Lee-Suzuki (LS) techniques [34]. Let us start from the KK iterative method, which traces back to the coupling of Eqs. (36) and (33), providing the relation \[H_{1}^{\rm eff}(\omega_{n})=\sum_{m=0}^{\infty}\left[-PH_{1}Q\left(\frac{-1}{ \epsilon_{0}-QHQ}\right)^{m+1}QH_{1}P\right]\left[H_{1}^{\rm eff}(\omega_{n-1 })\right]^{m}. \tag{37}\] The quantity inside the square brackets of Eq. (37), which is commonly dubbed as \(\hat{Q}_{m}(\epsilon_{0})\), is proportional to the \(m\)-th derivative of the \(\hat{Q}\) box calculated in \(\epsilon=\epsilon_{0}\) \[\hat{Q}_{m}(\epsilon_{0})=-PH_{1}Q\left(\frac{-1}{\epsilon_{0}-QHQ}\right)^{m+1} QH_{1}P=\frac{1}{m!}\left[\frac{d^{m}\hat{Q}(\epsilon)}{d\epsilon^{m}}\right]_{ \epsilon=\epsilon_{0}}. \tag{38}\] We may then rewrite Eq. (37), according to the above identity, as \[H_{1}^{\rm eff}(\omega_{n})=\sum_{m=0}^{\infty}\frac{1}{m!}\left[\frac{d^{m} \hat{Q}(\epsilon)}{d\epsilon^{m}}\right]_{\epsilon=\epsilon_{0}}\left[H_{1}^{ \rm eff}(\omega_{n-1})\right]^{m}=\sum_{m=0}^{\infty}\hat{Q}_{m}(\epsilon_{0} )\left[H_{1}^{\rm eff}(\omega_{n-1})\right]^{m}. 
\tag{39}\] The starting point of the KK iterative method is the assumption \(H_{1}^{\rm eff}(\omega_{0})=\hat{Q}(\epsilon_{0})\), leading to rewrite Eq. (39) as \[H_{1}^{\rm eff}=\sum_{i=0}^{\infty}F_{i}, \tag{40}\] where \[F_{0} = \hat{Q}(\epsilon_{0})\] \[F_{1} = \hat{Q}_{1}(\epsilon_{0})\hat{Q}(\epsilon_{0})\] \[F_{2} = \hat{Q}_{2}(\epsilon_{0})\hat{Q}(\epsilon_{0})\hat{Q}(\epsilon_{ 0})+\hat{Q}_{1}(\epsilon_{0})\hat{Q}_{1}(\epsilon_{0})\hat{Q}(\epsilon_{0}) \tag{41}\] \[...\] The above expression is a different form of the well-known folded-diagram expansion of the effective Hamiltonian as introduced by Kuo and Krenciglowa, since in Ref. [140] it has been demonstrated the operatorial identity \[\hat{Q}_{1}\hat{Q}=-\hat{Q}\int\hat{Q}, \tag{42}\] where the integral sign corresponds to the so-called folding operation as introduced by Brandow in Ref. [141]. An alternative approach to the solution of Eq. (36) is to resort to the LS technique. This can be carried out by rearranging Eq. (36) in order to obtain an explicit expression of the effective Hamiltonian \(H_{1}^{\rm eff}\) as a function of the operators \(\omega\) and \(\hat{Q}\)[34] \[H_{1}^{\rm eff}(\omega)=\left(1+PH_{1}Q\frac{1}{\epsilon_{0}-QHQ}\omega\right) ^{-1}\hat{Q}(\epsilon_{0}). \tag{43}\] The iterative form of this equation is \[H_{1}^{\rm eff}(\omega_{n})=\left(1+PH_{1}Q\frac{1}{\epsilon_{0}-QHQ}\omega_{ n-1}\right)^{-1}\hat{Q}(\epsilon_{0}), \tag{44}\] while an iterative expression of Eq. (33) is given by \[\omega_{n}=Q\frac{1}{\epsilon_{0}-QHQ}QH_{1}P-Q\frac{1}{\epsilon_{0}-QHQ} \omega_{n-1}H_{1}^{\rm eff}(\omega_{n}). \tag{45}\] The starting point of the procedure is to choose \(\omega_{0}=0\), so that we may write \[H_{1}^{\rm eff}(\omega_{1}) = \hat{Q}(\epsilon_{0})\] \[\omega_{1} = Q\frac{1}{\epsilon_{0}-QHQ}QH_{1}P.\] Using some algebra, the following identity can be demonstrated \[\hat{Q}_{1}(\epsilon_{0})=-PH_{1}Q\frac{1}{\epsilon_{0}-QHQ}Q\frac{1}{\epsilon_{ 0}-QHQ}QH_{1}P=-PH_{1}Q\frac{1}{\epsilon_{0}-QHQ}\omega_{1}, \tag{46}\] and for the iteration step \(n=2\) we have \[H_{1}^{\rm eff}(\omega_{2}) = \left(1+PH_{1}\frac{1}{\epsilon_{0}-QHQ}\omega_{1}\right)^{-1} \hat{Q}(\epsilon_{0})=\frac{1}{1-\hat{Q}_{1}(\epsilon_{0})}\hat{Q}(\epsilon_{ 0}),\] \[\omega_{2} = Q\frac{1}{\epsilon_{0}-QHQ}QH_{1}P-Q\frac{1}{\epsilon_{0}-QHQ} \omega_{1}H_{1}^{\rm eff}(\omega_{2}). \tag{47}\] Finally, the LS iterative expression of \(H_{\rm eff}\) is \[H_{1}^{\rm eff}(\omega_{n})=\left[1-\hat{Q}_{1}(\epsilon_{0})\sum_{m=2}^{n-1} \hat{Q}_{m}(\epsilon_{0})\prod_{k=n-m+1}^{n-1}H_{1}^{\rm eff}(\omega_{k}) \right]^{-1}\hat{Q}(\epsilon_{0}). \tag{48}\] It is important noting that the KK and LS iterative techniques, even if they have been both conceived to solve the decoupling equation (32), do not provide necessarily the same \(H_{\rm eff}\). Suzuki and Lee have shown that by way of the KK iterative approach one obtains eigenstates that have a large overlap with the model space. On the other side, when \(H_{\rm eff}\) is derived by employing the LS technique, its eigenvalues are the lowest in energy among those belonging to the set of the full Hamiltonian \(H\)[34]. The heart of the matter is now the calculation of the \(\hat{Q}\)-box vertex function defined in Eq. (35). Within a perturbative framework, the term \(1/(\epsilon-QHQ)\) appearing in Eq. (35) should be expanded as a power series \[\frac{1}{\epsilon-QHQ}=\sum_{n=0}^{\infty}\frac{1}{\epsilon-QH_{0}Q}\left( \frac{QH_{1}Q}{\epsilon-QH_{0}Q}\right)^{n}. 
\tag{49}\] It is common to employ a diagrammatic approach to the perturbative expansion by representing the \(\hat{Q}\) box as a collection of irreducible Goldstone diagrams - diagrams with at least one line between two successive vertices not belonging to the model space - that have at least one \(H_{1}\) vertex and are linked to at least one external valence line [142]. Usually, the derivation of \(H_{\rm eff}\) is performed for systems with one and two valence nucleons. Single valence-nucleon nuclei supply the one-body component of \(H_{\rm eff}\), \(H_{\rm eff}^{1b}\), namely the theoretical effective SP energies, while the TBMEs are obtained from the effective Hamiltonian for systems with two valence nucleons, which we indicate by \(H_{\rm eff}^{2b}\). In particular, the TBMEs are obtained by way of a subtraction procedure, which consists in removing from \(H_{\rm eff}^{2b}\) the diagonal component of \(H_{\rm eff}^{1b}\) [143]. In Ref. [144], the topic of the calculation of \(\hat{Q}\)-box diagrams in the angular-momentum-coupled representation is extensively treated. It should be noted that in the literature the effective SM Hamiltonians are derived accounting for \(\hat{Q}\)-box diagrams up to at most third order in perturbation theory, and their complete list is reported in Ref. [145]. This limitation is dictated by the computational cost of performing calculations that include complete higher-order sets of diagrams. The diagrammatics reported in Refs. [36, 145] is constrained to the derivation of \(H_{\rm eff}\)s for one- and two-valence-nucleon systems, but many-body diagrams must be included to obtain \(H_{\rm eff}\)s for systems with three or more valence nucleons. At present, few codes can perform the diagonalization of SM Hamiltonians including a three-body component, like BIGSTICK [117] and MFDN [119]. They are, however, mainly oriented to the NCSM and limited to light nuclei. Then, in order to include in \(H_{\rm eff}\) the contribution of \(\hat{Q}\)-box diagrams with at least three incoming and outgoing valence particles, one can resort to the same approximation used to manage the input of 3NFs, namely the so-called normal-ordering decomposition of the three-body component of a many-body Hamiltonian [146]. To this end, we start with a \(\hat{Q}\) box including second-order three-body diagrams, which, for nuclei with more than two valence nucleons, account for the interaction via the two-body force of the valence nucleons with core excitations as well as with virtual intermediate nucleons scattered above the valence space (see Fig. 4). According to the definition of 3NFs introduced in Section 2.1, the contributions reported in Fig. 4 are proper three-body forces, since the intermediate states are orbitals belonging to shells outside the valence space and, consequently, they cannot be constructed by iterating 2NF diagrams with valence-space external lines. The analytical expressions of these diagrams can be found in Ref. [147]. As discussed in Ref. [37], the main advantage of expressing many-body operators in normal-ordered form is to include as much information as possible from the higher-particle-rank operators into the lower-rank ones. Then, after the normal-ordering decomposition, the approximation consists in neglecting the residual three-body component and, consequently, \(H_{\rm eff}\) may be employed in standard SM codes.
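Although production calculations evaluate the \(\hat{Q}\) box through its diagrammatic expansion, the logic of Eqs. (35), (38) and (39) can be illustrated on a small toy matrix, where the \(\hat{Q}\) box and its energy derivatives are computed exactly and the Krenciglowa-Kuo iteration is run to self-consistency. In the sketch below all model parameters are arbitrary, the \(m=0\) term of the sum in Eq. (39) is identified with the full \(\hat{Q}\) box, and the series over \(m\) is truncated at a finite order.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, d, eps0, m_max = 10, 2, 0.0, 12          # full space, model space, degenerate P energy
h0 = np.diag(np.concatenate([np.full(d, eps0), np.linspace(10.0, 20.0, dim - d)]))
h1 = rng.normal(scale=0.6, size=(dim, dim))
h1 = 0.5 * (h1 + h1.T)                        # Hermitian residual interaction
H = h0 + h1

# P/Q blocks: the model space spans the first d (degenerate) unperturbed states
h1_pp, h1_pq, h1_qp = h1[:d, :d], h1[:d, d:], h1[d:, :d]
qhq = H[d:, d:]
green = np.linalg.inv(eps0 * np.eye(dim - d) - qhq)     # 1/(eps0 - QHQ) on the Q space

# Q-hat box of Eq. (35) and its derivatives Q-hat_m of Eq. (38), here computed exactly
qbox = h1_pp + h1_pq @ green @ h1_qp
qm = [qbox] + [(-1.0)**m * h1_pq @ np.linalg.matrix_power(green, m + 1) @ h1_qp
               for m in range(1, m_max + 1)]

# Krenciglowa-Kuo iteration of Eq. (39), starting from H1_eff = Q-hat(eps0)
h1_eff = qbox.copy()
for _ in range(200):
    new, power = qbox.copy(), np.eye(d)
    for m in range(1, m_max + 1):
        power = power @ h1_eff
        new = new + qm[m] @ power
    if np.max(np.abs(new - h1_eff)) < 1e-12:
        h1_eff = new
        break
    h1_eff = new

# Compare with the exact eigenvalues of H having the largest model-space overlap
kk_eigs = np.sort(np.linalg.eigvals(eps0 * np.eye(d) + h1_eff).real)
evals, evecs = np.linalg.eigh(H)
overlaps = np.sum(np.abs(evecs[:d, :])**2, axis=0)
exact = np.sort(evals[np.argsort(overlaps)[-d:]])
print("KK effective eigenvalues :", np.round(kk_eigs, 6))
print("exact (largest P overlap):", np.round(exact, 6))
```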
This normal-ordering procedure allows one to obtain, starting from the calculation of each \((A,B)\) topology in Fig. 4, nine one-loop diagrams represented by the graph \((\alpha)\) in Fig. 5. Their explicit form, in terms of the three-body graphs \((A,B)\), is \[V^{\alpha}_{ab,cd,J}=\sum_{mJ^{\prime}}\rho_{m}\,\frac{\hat{J}^{\prime 2}}{\hat{J}^{2}}\,{}_{A}\big\langle(a,b),m;JJ^{\prime}\big|V^{A,B}\big|(c,d),m;JJ^{\prime}\big\rangle_{A}\,, \tag{50}\] where the summation over \(m\) runs over the valence space, and \(\rho_{m}\) is the unperturbed occupation density of the orbital \(m\) according to the number of valence nucleons. The definition of the antisymmetrized but unnormalized three-body states, \(\left|\left(a,b\right),c;JJ^{\prime}\right\rangle_{A}\), can be found in Appendix A.

Figure 4: Second-order three-body diagrams. The sum over the intermediate lines runs over particle and hole states outside the valence space, shown by A and B, respectively. For the sake of simplicity, for each topology we report only one of the diagrams which correspond to the permutations of the external lines.

Figure 5: Density-dependent two-body contribution that is obtained from a three-body one. The graph \(\alpha\) is obtained by summing over one incoming and one outgoing particle of the three-body graphs \(A,B\) reported in Fig. 4.

Finally, the perturbative expression of the \(\hat{Q}\) box contains one- and two-body diagrams up to third order in \(V_{NN}\), and a density-dependent two-body contribution that includes the effect of three-body diagrams at second order in \(V_{NN}\) [147, 148]. Obviously, this means that a specific effective SM Hamiltonian has to be derived for a given system, depending on the number of valence protons and neutrons, and the obtained set of \(H_{\rm eff}\)s differs only in the TBMEs. The role played by density-dependent \(H_{\rm eff}\)s in the calculation of g.s. energies of nuclei with many valence nucleons has been investigated in Refs. [89, 90, 91], and is discussed in Section 4.2.4. We now turn our attention to the calculation of \(H_{\rm eff}\) accounting also for the contributions of the 3NF component of a realistic nuclear potential, such as, for example, the N\({}^{2}\)LO 3N potential reported in Fig. 3. In SM calculations where \(H_{\rm eff}\) has been derived perturbatively, this contribution is introduced at first order in many-body perturbation theory only for the one- and two-valence-nucleon systems. In Fig. 6, both the first-order one- and two-body diagrams of the \(\hat{Q}\) box arising from a 3N potential are shown, and their explicit expressions are \[\epsilon_{a}^{\rm(3NF)}=\sum_{\begin{subarray}{c}h_{1},h_{2}\\ J_{12}J\end{subarray}}\frac{\hat{J}^{2}}{2\hat{j}_{a}^{\,2}}\,{}_{A}\langle(h_{1},h_{2}),a;J_{12}J|V_{3N}|(h_{1},h_{2}),a;J_{12}J\rangle_{A}, \tag{51}\] \[V_{ab,cd,J}^{\rm(3NF)}=\sum_{h,J^{\prime}}\,\frac{\hat{J}^{\prime 2}}{\hat{J}^{2}}\,{}_{A}\langle(a,b),h;JJ^{\prime}|V_{3N}|(c,d),h;JJ^{\prime}\rangle_{A}\,, \tag{52}\] where the indices \(h\) refer to core states, while the three-body matrix elements on the right-hand side of both equations, expressed within the proton-neutron formalism, are antisymmetrized but not normalized; their explicit form is given, for example, by Eq. (110).
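For illustration, the angular-momentum algebra of Eq. (52) amounts to a weighted sum over core orbitals \(h\) and total angular momenta \(J^{\prime}\). The short sketch below carries it out for a handful of invented three-body matrix elements; in a realistic application these would be the HO-basis 3NF matrix elements whose construction is outlined in Appendix A.

```python
# Illustrative antisymmetrized (unnormalized) three-body matrix elements
#   A<(a,b),h;JJ'|V_3N|(c,d),h;JJ'>_A  in MeV, keyed by (a, b, c, d, h, J, Jprime).
# All numbers are invented placeholders.
v3n = {
    ("0d5/2", "0d5/2", "0d5/2", "0d5/2", "0p3/2", 0, 0.5): 0.21,
    ("0d5/2", "0d5/2", "0d5/2", "0d5/2", "0p3/2", 0, 1.5): 0.34,
    ("0d5/2", "0d5/2", "0d5/2", "0d5/2", "0p1/2", 0, 0.5): 0.18,
    ("0d5/2", "0d5/2", "1s1/2", "1s1/2", "0p3/2", 0, 0.5): -0.05,
    ("0d5/2", "0d5/2", "1s1/2", "1s1/2", "0p1/2", 0, 0.5): -0.03,
}

core_orbitals = ["0p3/2", "0p1/2"]      # the sum over h in Eq. (52) runs over core states

def v3nf_two_body(a, b, c, d, J):
    """Normal-ordered two-body piece of the 3NF, Eq. (52):
       sum_{h,J'} (2J'+1)/(2J+1) * A<(a,b),h;JJ'|V3N|(c,d),h;JJ'>_A."""
    total = 0.0
    for (oa, ob, oc, od, h, j, jp), me in v3n.items():
        if (oa, ob, oc, od, j) == (a, b, c, d, J) and h in core_orbitals:
            total += (2 * jp + 1) / (2 * J + 1) * me
    return total

print("V^(3NF)_{d5/2 d5/2, d5/2 d5/2; J=0} =",
      round(v3nf_two_body("0d5/2", "0d5/2", "0d5/2", "0d5/2", 0), 3), "MeV")
print("V^(3NF)_{d5/2 d5/2, s1/2 s1/2; J=0} =",
      round(v3nf_two_body("0d5/2", "0d5/2", "1s1/2", "1s1/2", 0), 3), "MeV")
```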
The three-body component of a many-body Hamiltonian is therefore written in terms of one- and two-body pieces, which correspond, respectively, to interactions among one valence and two core nucleons, or two valence and one core nucleon, with coefficients given by the expressions in Eqs. (51) and (52). It is worth noting that these two pieces arise from the normal-ordering two-body decomposition of the 3NF with respect to the core as reference state. This means that the 3NF among valence nucleons is neglected, which may lead to an underestimation of the 3NF repulsion and to an overbinding that acquires more relevance with an increasing number of valence particles, as pointed out in Refs. [69, 37]. In Ref. [69], an ensemble or mixed-state reference was introduced to account for these interactions in an approximate way within the VS-IMSRG framework, and it was shown to cure this deficiency. In this connection, it will certainly be relevant to investigate the relative importance of the missing 3NF contributions in our approach. However, we would like to mention here, as discussed in detail in Section 4.1, that our SM results for \(p\)-shell nuclei obtained by using an \(NN\)+3N potential are in close agreement with those of the NCSM. In particular, we are able to reproduce the experimental sequence of observed states in \({}^{10}\)B, as is done by NCSM calculations. The agreement between the results of the SM and NCSM calculations including 3NFs is on the whole of the same quality as that obtained when using the \(NN\) force only. Larger discrepancies are found only for the energies of highly excited states, which may be related to our approximation in the treatment of the 3NF.

Figure 6: First-order one- and two-body diagrams with a three-body-force vertex. See text for details.

In concluding this section, it may be useful to give a summary of the diagrams we include in the derivation of our SM effective Hamiltonians. We arrest the \(\hat{Q}\)-box expansion at the one- and two-body Goldstone diagrams at third order in the \(NN\) potential, which are explicitly shown in the Appendices of Ref. [145], while we consider only diagrams at first order in the 3N force, whose expressions are given in Eqs. (51) and (52). In addition, to account for the progressive filling of the valence space in systems with more than two valence particles, we include the density-dependent two-body contributions of Eq. (50) arising from the second-order three-body diagrams of Fig. 4.

#### 3.2.2 _The perturbative expansion of effective shell-model decay operators_

Besides the calculation of energy spectra, SM wave functions may also provide the matrix elements of operators \(\Theta\) which are related to physical observables, such as electromagnetic transition rates, multipole moments, etc. As has been previously pointed out, the wave functions \(|\psi_{\alpha}\rangle\) obtained by diagonalizing \(H_{\rm eff}\) are not the true ones \(|\Psi_{\alpha}\rangle\), but their projections onto the model space (\(|\psi_{\alpha}\rangle=P|\Psi_{\alpha}\rangle\)). Then, it is necessary to renormalize \(\Theta\) in order to account for the neglected degrees of freedom belonging to configurations outside the model space. Formally, an effective operator \(\Theta_{\rm eff}\) has to be derived, such that \[\langle\tilde{\Psi}_{\alpha}|\Theta|\Psi_{\beta}\rangle=\langle\tilde{\psi}_{\alpha}|\Theta_{\rm eff}|\psi_{\beta}\rangle.
\tag{53}\] The perturbative expansion of effective operators has been approached in the early attempts to employ realistic potentials for SM calculations by L. Zamick for the problematics of electromagnetic transitions [149, 150, 151] and by I. S. Towner for the study of the quenching of spin-dependent decay-operator matrix elements [152, 153]. A formally improved structure to the derivation of non-Hermitian effective operators has been elaborated by Suzuki and Okamoto in Ref. [154], where they have introduced an expansion formula for the effective operators in terms of a vertex function \(\hat{\Theta}\) box that, analogously to the \(\hat{Q}\) box in the effective Hamiltonian theory, is the building block for constructing effective operators. We outline now some details about this procedure. According to Eq. (26) and keeping in mind that \(\omega\equiv Q\omega P\), \(H_{\rm eff}\) may be written as \[H_{\rm eff}=PH(P+\omega), \tag{54}\] so that the true eigenstates \(|\Psi_{\alpha}\rangle\) and their orthonormal counterparts \(\langle\tilde{\Psi}_{\alpha}|\) are given by \[|\Psi_{\alpha}\rangle=(P+\omega)|\psi_{\alpha}\rangle\ \ \ \ \,\ \ \ \ \ \langle\tilde{\Psi}_{\alpha}|=\langle\tilde{\psi}_{\alpha}|(P+\omega^{ \dagger}\omega)(P+\omega^{\dagger}). \tag{55}\] Actually, a general effective operator expression in the bra-ket representation is written as \[\Theta_{\rm eff}=\sum_{\alpha\beta}|\psi_{\alpha}\rangle\langle\tilde{\Psi}_{ \alpha}|\Theta|\Psi_{\beta}\rangle\langle\tilde{\psi}_{\beta}|, \tag{56}\] where \(\Theta\) is a given time-independent Hermitian operator. Then, \(\Theta_{\rm eff}\) in an operator form is \[\Theta_{\rm eff}=(P+\omega^{\dagger}\omega)^{-1}(P+\omega^{\dagger})\Theta(P+ \omega). \tag{57}\] It is worth noting that Eq. (53) holds whatever it is the normalization of \(|\Psi_{\alpha}\rangle\) and \(|\psi_{\alpha}\rangle\), but if the true eigenvectors are normalized, then \(\langle\tilde{\Psi}_{\alpha}|=\langle\Psi_{\alpha}|\) and \(|\psi_{\alpha}\rangle\) should be normalized as \[\langle\tilde{\psi}_{\alpha}|(P+\omega^{\dagger}\omega)|\psi_{\alpha}\rangle=1. \tag{58}\] To calculate \(\Theta_{\rm eff}\), it is convenient to introduce the vertex function \(\hat{\Theta}\) box, which is defined as \[\hat{\Theta}=(P+\omega^{\dagger})\Theta(P+\omega), \tag{59}\] in order to factorize \(\Theta_{\rm eff}\) as follows \[\Theta_{\rm eff}=(P+\omega^{\dagger}\omega)^{-1}\hat{\Theta}. \tag{60}\] Therefore, to derive \(\Theta_{\rm eff}\) one needs to calculate both \(\hat{\Theta}\) and \(\omega^{\dagger}\omega\). Let us tackle the first issue: according to Eq. (59) and to the following expression of \(\omega\) in terms of \(H_{\rm eff}\) \[\omega=\sum_{n=0}^{\infty}(-1)^{n}\left(\frac{1}{\epsilon_{0}-QHQ}\right)^{n +1}QH_{1}P(H_{1}^{\rm eff})^{n}, \tag{61}\] the following relation can be written for \(\hat{\Theta}\) \[\hat{\Theta}=\hat{\Theta}_{PP}+(\hat{\Theta}_{PQ}+h.c.)+\hat{\Theta}_{QQ}, \tag{62}\] where \[\hat{\Theta}_{PP}=P\Theta P, \tag{63}\] \[\hat{\Theta}_{PQ}=P\Theta\omega P=\sum_{n=0}^{\infty}\hat{\Theta}_{n}(H_{1}^{ \rm eff})^{n}, \tag{64}\] \[\hat{\Theta}_{QQ}=P\omega^{\dagger}\Theta\omega P=\left.\sum_{n,m=0}^{\infty} (H_{1}^{\rm eff})^{n}\hat{\Theta}_{nm}(H_{1}^{\rm eff})^{m},\right. 
\tag{65}\] and \(\hat{\Theta}_{m}\), \(\hat{\Theta}_{mn}\) are defined as \[\hat{\Theta}_{m} = \left.\frac{1}{m!}\frac{d^{m}\hat{\Theta}(\epsilon)}{d\epsilon^{ m}}\right|_{\epsilon=\epsilon_{0}}, \tag{66}\] \[\hat{\Theta}_{mn} = \left.\frac{1}{m!n!}\frac{d^{m}}{d\epsilon_{1}^{m}}\frac{d^{n}}{d \epsilon_{2}^{n}}\hat{\Theta}(\epsilon_{1};\epsilon_{2})\right|_{\epsilon_{1} =\epsilon_{0},\epsilon_{2}=\epsilon_{0}}, \tag{67}\] with \[\hat{\Theta}(\epsilon)= P\Theta P+P\Theta Q\frac{1}{\epsilon-QHQ}QH_{1}P, \tag{68}\] \[\hat{\Theta}(\epsilon_{1};\epsilon_{2})= PH_{1}Q\frac{1}{\epsilon_{1}-QHQ}Q\Theta Q\frac{1}{\epsilon_{2}-QHQ}QH_{1}P. \tag{69}\] By way of definition (38), the product \(\omega^{\dagger}\omega\) takes the form \[\omega^{\dagger}\omega=-\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}((H_{1}^{\rm eff })^{\dagger})^{n-1}\hat{Q}(\epsilon_{0})_{n+m-1}(H_{1}^{\rm eff})^{m-1}. \tag{70}\] Using now the expression of \(H_{1}^{\rm eff}\) in terms of the \(\hat{Q}\) box and its derivatives - Eqs. (40) and (41) - the above quantity may be rewritten as \[\omega^{\dagger}\omega=-\hat{Q}_{1}+(\hat{Q}_{2}\hat{Q}+h.c.)+(\hat{Q}_{3} \hat{Q}\hat{Q}+h.c.)+(\hat{Q}_{2}\hat{Q}_{1}\hat{Q}+h.c.)+\cdots \tag{71}\] Melting together Eqs. (68) and (71), the final perturbative expansion form of the effective operator \(\Theta_{\rm eff}\) is \[\Theta_{\rm eff}=(P+\hat{Q}_{1}+\hat{Q}_{1}\hat{Q}_{1}+\hat{Q}_{2}\hat{Q}+\hat{Q }\hat{Q}_{2}+\cdots)\times(\chi_{0}+\chi_{1}+\chi_{2}+\cdots), \tag{72}\] where \[\chi_{0} = (\hat{\Theta}_{0}+h.c.)+\hat{\Theta}_{00}, \tag{73}\] \[\chi_{1} = (\hat{\Theta}_{1}\hat{Q}+h.c.)+(\hat{\Theta}_{01}\hat{Q}+h.c.),\] (74) \[\chi_{2} = (\hat{\Theta}_{1}\hat{Q}_{1}\hat{Q}+h.c.)+(\hat{\Theta}_{2}\hat{ Q}\hat{Q}+h.c.)+(\hat{\Theta}_{02}\hat{Q}\hat{Q}+h.c.)+\hat{Q}\hat{\Theta}_{11} \hat{Q}.\] (75) \[\cdots\] It is worth to evidence the link existing between \(H_{\rm eff}\), derived in terms of the \(\hat{Q}\) box, and any effective operator. This is achieved by inserting the identity \(\hat{Q}\hat{Q}^{-1}={\bf 1}\) in Eq. (72), and consequently obtaining the following expression: \[\Theta_{\rm eff} = (P+\hat{Q}_{1}+\hat{Q}_{1}\hat{Q}_{1}+\hat{Q}_{2}\hat{Q}+\hat{Q} \hat{Q}_{2}+\cdots)\hat{Q}\hat{Q}^{-1}\times(\chi_{0}+\chi_{1}+\chi_{2}+\cdots) \tag{76}\] \[= H_{\rm eff}\hat{Q}^{-1}(\chi_{0}+\chi_{1}+\chi_{2}+\cdots).\] The \(\chi_{n}\) series must be arrested to a finite order, and the starting point is the derivation of a perturbative expansion of \(\hat{\Theta}_{0}\equiv\hat{\Theta}(\epsilon_{0})\) and \(\hat{\Theta}_{00}\equiv\hat{\Theta}(\epsilon_{0};\epsilon_{0})\), including diagrams up to a finite order in the perturbation theory, consistently with the \(\hat{Q}\)-box expansion. The issue of the convergence of the \(\chi_{n}\) series and of the perturbative expansion of \(\hat{\Theta}_{0}\) and \(\hat{\Theta}_{00}\) has been investigated in Refs. [155, 156, 157], and in Ref. [38], which report details about the calculation of the diagrams appearing in the \(\hat{\Theta}_{0}\) expansion for a one-body operator \(\Theta\). ### _Gamow shell model with three-body forces_ As the number of protons or neutrons in the nucleus increases to the existence limit of the dripline, exotic phenomena such as halo and resonances can emerge. With extreme proton-neutron imbalance, the nuclei around the dripline are weakly bound or unbound. They belong to open quantum systems in which the coupling to the continuum can be significant and should be properly treated. 
Due to the large spatial distributions of wave functions in resonance and continuum states, using a standard spatially-localized HO basis would not be a good option. In the past two decades, several methods have been developed to overcome this challenge. The conventional SM has been extended to include the continuum effect, e.g., SM embedded in the continuum [158, 159, 160, 161] and the continuum-coupled SM [162]. An elegant treatment of the continuum coupling is to use the Berggren representation [163] in which the one-body Schrodinger equation is generalized to a complex-momentum (complex-\(k\)) plane. This creates naturally bound, resonance and continuum single-particle states on an equal footing, see Fig. 7. The many-body Hamiltonian can be expressed in the Berggren basis, and tackled by many-body methods in the complex-\(k\) space. The SM has been successfully extended to the complex-\(k\) Berggren basis, leading to the so-called GSM, in which the continuum effect is included at the basis level. In the first GSM applications to atomic nuclei, phenomenological interactions were used with the potential parameters determined by fitting nuclear structure data [164, 165, 166, 167, 168, 82]. Soon, the GSM was developed with realistic nuclear interactions [170, 171, 172, 173, 174, 175]. Meanwhile, the Berggren technique was also used in CC [176, 177, 178] and IMSRG approaches [179]. Along with the continuum coupling, 3NFs also play an important role in the descriptions of exotic nuclei. In Refs. [92, 93, 94], the realistic GSM was extended with 3NFs considered. The inclusions of both the continuum coupling and 3NFs can give a better quantitative description of exotic nuclei [92]. With 3NFs included, two steps of development towards a full self-consistent _ab initio_ GSM have been made: i) the WS potential has been used to generate the complex-\(k\) Berggren basis, and the many-body GSM with realistic 2NFs and 3NFs has been performed in the WS basis [92, 93], whose WS parameters have been determined by fitting data; ii) for a more self-consistent calculation, the Berggren basis has been created starting from realistic interaction itself using the complex-\(k\) Gamow Hartree-Fock (GHF) method, and the complex GSM has been performed in the GHF basis. The 3NF has been included in both the GHF and GSM calculations, as done in Ref. [94]. Similar to standard SM calculations, an auxiliary one-body potential \(U\) is usually introduced into the Hamiltonian to obtain a one-body term \(H_{0}\) describing the independent motions of the nucleons and a residual interaction \(H_{1}\). However, here we rewrite the Hamiltonian (13) to include the 3NF and remove the center of mass kinetic energy thus obtaining an intrinsic transitionally invariant Hamiltonian, \[\begin{split} H&=\sum_{i<j}\frac{(\mathbf{p}_{i}-\mathbf{p }_{j})^{2}}{2mA}+\hat{V}_{\rm NN}+\hat{V}_{\rm 3N}\\ &=\left[\sum_{i=1}^{A}\left(\frac{p_{i}^{2}}{2m}+U_{i}\right) \right]+\left[\sum_{i<j}^{A}\left(V_{\rm NN}^{(ij)}-\frac{\mathbf{p_{i}}\cdot\mathbf{p _{j}}}{mA}\right)-\sum_{i=1}^{A}\left(U_{i}+\frac{p_{i}^{2}}{2mA}\right)+ \sum_{i<j<k}^{A}V_{\rm 3N}^{(ijk)}\right]\\ &=H_{0}+H_{1},\end{split} \tag{77}\] To generate the Berggren basis, \(U\) is usually taken as the WS potential produced by the core for the GSM calculation with a core [92, 93]. The radial wave functions of the Berggren SP states are obtained by solving the SP Schrodinger equation in the complex-\(k\) space with the WS potential \(U\). 
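A minimal sketch of how a contour \(L^{+}\) of the kind shown in Fig. 7 may be discretized is given below, assuming straight segments and Gauss-Legendre quadrature on each of them; the vertices are placeholder values, and the sketch does not include the bound and resonant WS states that complete the Berggren basis in an actual GSM calculation.

```python
import numpy as np

def berggren_contour(vertices, n_per_segment=15):
    """Discretize the complex-k contour L+ with Gauss-Legendre points on each straight segment.
    Returns complex momenta k_i and complex weights w_i such that sum_i w_i f(k_i)
    approximates the contour integral of f along L+."""
    nodes, weights = np.polynomial.legendre.leggauss(n_per_segment)
    ks, ws = [], []
    for za, zb in zip(vertices[:-1], vertices[1:]):
        ks.append(za + 0.5 * (zb - za) * (nodes + 1.0))
        ws.append(0.5 * (zb - za) * weights)
    return np.concatenate(ks), np.concatenate(ws)

# Illustrative L+ contour (fm^-1): it dips into the fourth quadrant, so that narrow
# resonant poles lie between L+ and the real-k axis, then returns to the real axis
# and extends to a momentum cutoff.  The vertices are placeholder values.
vertices = [0.0 + 0.0j, 0.25 - 0.12j, 0.5 + 0.0j, 3.0 + 0.0j]
k, w = berggren_contour(vertices)

# Sanity check: the integral of dk along the contour equals the endpoint momentum
print("number of discretized scattering states:", k.size)
print("sum of weights:", np.round(np.sum(w), 6), "(should equal", vertices[-1], ")")
```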
The Berggren SP states form a complete set of basis states comprising discrete bound, resonant and continuum scattering states, as shown in Fig. 7.

Figure 7: Schematic Berggren complex-\(k\) plane. The bound, resonant and scattering states construct the Berggren completeness relation. The contour \(L^{+}\) has to be chosen in such a way that all the discrete narrow resonant states are contained in the domain between \(L^{+}\) and the real-\(k\) axis [173].

Due to the complexity and computational cost of many-body calculations with the full 3NF, as discussed in the Introduction and in Section 3.2.1, we adopt the normal-ordering approximation and neglect the residual three-body term [92, 93, 86, 24, 89]. The normal-ordered zero-body and one-body terms are absorbed into the core Hamiltonian which generates the Berggren basis, while the final GSM Hamiltonian has a two-body form with the normal-ordered two-body term of the 3NF included (see Eq. (52)). With the overlap method of wave functions [173, 180], the Hamiltonian can be transformed to the complex-\(k\) Berggren basis. Once the Hamiltonian matrix elements are given in the Berggren basis, the \(\hat{Q}\)-box folded-diagram method is used to construct the realistic complex effective interaction for a chosen model space. In general, such a model space should include the relevant bound, resonant and continuum states. The basis states are therefore certainly not degenerate, and we exploit an extension of the Kuo-Krenciglowa (EKK) method [181] to the complex-\(k\) space to build the effective GSM interaction. Finally, the complex-symmetric GSM Hamiltonian is diagonalized in the model space using the Jacobi-Davidson method [182]. Although the Hamiltonian is intrinsic, the GSM wave function is not factorized into center-of-mass (CoM) and intrinsic parts, which means that the effect of the CoM motion has not been removed exactly. In the HO basis, the CoM effect can be treated using the Lawson method [183]. In principle, the Lawson method can be generalized, as done, for example, in the real-energy CC [184] and IMSRG [185] calculations that adopt the Hartree-Fock basis. Unfortunately, such a generalization is not valid in the complex-energy Berggren basis, because the \(R^{2}\) matrix elements (\(R\) being the CoM position) cannot be regularized for resonance and continuum states, which are not square integrable. However, as discussed in previous works [173, 174, 179], the CoM effect is not significant in low-lying states.

## 4 Applications and comparison with experiment

In this section, we shall review recent results of SM calculations based on effective Hamiltonians derived from realistic two- and three-body potentials. The aim of the presentation is essentially to highlight the role of 3NFs in explaining phenomena such as the location of the neutron dripline in isotopic chains and the shell evolution as a function of the number of valence nucleons, as well as to investigate the combined effects of 3NFs and the coupling with the continuum for the description of unbound nuclei and unbound resonance states in the vicinity of the dripline, and of the Borromean structure of halo nuclei. We shall focus on systems ranging from the very light-mass \(p\)-shell nuclei to those with intermediate mass belonging to the \(fp\) shell. In the subsection devoted to \(p\)-shell nuclei, we have found it useful to demonstrate the validity of our perturbative approach in deriving the effective SM Hamiltonian.
To this end, in the first part we illustrate the convergence properties of the \(\hat{Q}\)-box vertex function as concerns the truncation of the intermediate-state space, the order-by-order convergence, and the dependence on the HO parameter. The analyses is based on an \(NN\) potential since as concerns 3NFs only first-order contributions of the normal-ordered one- and two-body parts are taken into account. Then, once shown how the introduced approximations can be taken under control, we compare our SM results arising from the chiral \(NN\)-only and \(NN\)+3N forces with the corresponding ones obtained by the _ab initio_ NCSM. ### _Benchmark calculations in the \(0p\)-shell region_ Benchmark calculations are very important to test methods as well as computational approaches. More specifically, they may be helpful to understand to what extent a many-body method is working and to estimate the impact of the necessary truncations and approximations that have been introduced. In Ref. [145], an extensive study has been carried out to compare the results of SM calculations for \(p\)-shell nuclei obtained by using an effective Hamiltonian derived from an \(NN\) chiral potential at N\({}^{3}\)LO [16, 55] with those obtained with the _ab initio_ NCSM [18, 87, 88, 186]. In Ref. [86], this investigation has been extended by including in the derivation of \(H_{\rm eff}\) for \(p\)-shell nuclei the one- and two-body components of a normal-ordered 3N chiral potential at N\({}^{2}\)LO. The work in Ref. [145] has also tried to assess the behavior of the perturbative expansion of \(H_{\rm eff}\) with respect to the dimension of the intermediate state space and the order-by-order convergence, when starting from chiral potentials. As regards this aspect, we recall that the \(Q\) space appearing in Eq. (35) is the complement of the model space in the whole Hilbert space, therefore it is composed by an infinite number of configurations. It is clearly unfeasible to employ an infinite \(Q\) space, and consequently the perturbative expansion of the \(\hat{Q}\) box implies a truncation of the space of the intermediate states, which belong to the \(Q\) space by definition. The common procedure is to employ an energy truncation, which consists in including only those intermediate states whose unperturbed excitation energy is smaller than a fixed value \(E_{\rm max}\) expressed in terms of the number of oscillator quanta \(N_{\rm max}\), namely as an integer multiple of the HO parameter \(\hbar\omega\) \[E_{\rm max}=N_{\rm max}\hbar\omega.\] In Ref. [145] the theoretical energies of the yrast states in \({}^{6}\)Li, corresponding to the absolute energy values relative to \({}^{4}\)He, have been calculated as a function of \(E_{\rm max}\) (see Fig. 8), using an effective Hamiltonian derived from the N\({}^{3}\)LO \(NN\) potential and including in the \(\hat{Q}\)-box diagrams up to the second order in \(H_{1}\). As can be seen in Fig. 8, the convergence is reached when intermediate states at least up to \(E_{\rm max}=20\)\(\hbar\omega\), with \(\hbar\omega=19\) MeV, are included. This value of the HO parameter is close to the one provided by the expression [187]\(\hbar\omega=45A^{-1/3}-25A^{-2/3}\) for \(A=4\). This is no surprise if we consider that the \(NN\) potential is characterized (in momentum-space representation) by a certain cutoff momentum \(\Lambda\), which is also the maximum relative momentum of the two-nucleon system. 
Consequently, the maximum value of the energy corresponding to the relative motion of two nucleons is \[E_{\rm max}=\frac{\hbar^{2}\Lambda^{2}}{M}, \tag{78}\] where \(M\) is the nucleon mass. This relation may be rewritten in terms of \(N_{\rm max}\) and \(\hbar\omega\), \[N_{\rm max}\hbar\omega=\frac{\hbar^{2}\Lambda^{2}}{M}. \tag{79}\] Figure 8: Theoretical energies of \({}^{6}\)Li yrast states relative to \({}^{4}\)He, obtained with the N\({}^{3}\)LO \(NN\) potential, as a function of \(N_{\rm max}\)[145]. Equation (79) constrains the value of \(N_{\rm max}\) for a chosen HO parameter and depends on the cutoff \(\Lambda\) of the \(NN\) potential. The chiral N\({}^{3}\)LO potential under consideration is characterized by a cutoff \(\Lambda=2.5,2.6\) fm\({}^{-1}\)[16, 55], and, therefore, if \(\hbar\omega=\)19 MeV, one should include in the \(\hat{Q}\)-box expansion the contributions of the \(Q\)-space configurations at least up to \(N_{\rm max}=16\). It is worth pointing out that the N\({}^{3}\)LO potential is multiplied by a smooth regulator function with a gaussian shape [188]. This characteristic slows down the convergence behavior of the \(NN\) potential and justifies the need to include a larger number of \(Q\)-space configurations. Actually, in Ref. [145] the convergence with respect to the number of intermediate states has been studied also considering an \(NN\) potential with a sharp cutoff as regulator function, a chiral potential dubbed N\({}^{3}\)LOW [189]. This potential, whose cutoff is \(\Lambda=2.1\) fm\({}^{-1}\), is characterized by a faster convergence, and the convergence is reached at \(N_{\rm max}=10\), which is consistent with the relationship (79) for \(\hbar\omega=19\) MeV. Another important aspect to be studied is the order-by-order convergence properties of the \(H_{\rm eff}\) expansion, namely the dependence of SM results on the order at which the perturbative expansion of the \(\hat{Q}\) box is arrested. It is worth mentioning that this relevant topic has been first investigated by Barrett and Kirson in the pioneering period of the perturbative expansion of \(H_{\rm eff}\)[190], and it has been reprised in Ref. [145] within the \(\hat{Q}\)-box approach, using \(\hat{Q}\) boxes at second (\(H_{\rm 2nd}^{\rm eff}\)) and third (\(H_{\rm 3rd}^{\rm eff}\)) order in perturbation theory. Moreover, in order to estimate the value to which the perturbative series may converge, \(H_{\rm eff}\) has been derived by calculating the Pade approximant [2][191, 192] of the \(\hat{Q}\) box (\(H_{\rm pade}^{\rm eff}\)). In Fig. 9 the energies of \({}^{6}\)Li yrast states with respect to \({}^{4}\)He, obtained with the N\({}^{3}\)LO \(NN\) potential, are reported as calculated with \(H_{\rm 1st}^{\rm eff}\), \(H_{\rm 2nd}^{\rm eff}\), \(H_{\rm 3rd}^{\rm eff}\), and \(H_{\rm pade}^{\rm eff}\), \(H_{\rm 1st}^{\rm eff}\) representing the one-body first-order \(\hat{Q}\)-box diagrams plus the \(NN\) bare potential. There are a couple of aspects that should be evidenced from the inspection of Fig. 9: 1. the large gap between the results at first order in perturbation theory and those at higher orders shows that the employment of a bare \(NN\) potential in SM calculations, without any renormalization due to long-range correlations, leads to a poor description of the physics of atomic nuclei; 2. 
the results for \({}^{6}\)Li obtained with \(H^{\rm eff}_{\rm 3rd}\) are very close to those with \(H^{\rm eff}_{\rm p_{\rm side}}\), and this supports the hypothesis that SM calculations may have a weak dependence on higher-order \(\hat{Q}\)-box perturbative terms. On the above grounds, in all subsequent SM calculations the effective Hamiltonians have been derived by calculating the Pade approximant \([2|1]\) of the \(\hat{Q}\) box. Finally, it is worth to examine another aspect that has been evidenced in Ref. [145]. We recall that, because of Eqs. (13-15), there could be a dependence of \(H_{\rm eff}\) on the choice of the auxiliary potential \(U\) introduced to construct the SP basis employed to expand the matrix elements of the input interaction. More precisely, since we choose the HO basis, the results of the SM calculations may depend on the choice of the HO parameter \(\hbar\omega\). This is due to the approximations inherent our calculations, namely, as discussed above, to the truncation of the space of intermediate states and of the perturbative expansion of the \(\hat{Q}\)-box at a certain order. In Fig. 10, the theoretical energies of the yrast states in \({}^{6}\)Li are reported as a function of \(\hbar\omega\) for three effective Hamiltonians derived from the N\({}^{3}\)LO potential by using as HO parameter \(\hbar\omega\)= 18, 19, and 20 MeV. The panel (a) of Fig. 10 refers to effective Hamiltonians derived including all third-order diagrams in the \(\hat{Q}\) box, and then calculating its Pade approximant \([2|1]\). On the other hand, the spectra in panel (b) are obtained by retaining in the \(\hat{Q}\) box only the first-order (\(V\)-\(U\))-insertion diagram (see Fig. 1 in Ref. [145]) and neglecting higher-order terms of the same class of diagrams, and again calculating its Pade approximant \([2|1]\). These results show very clearly that (\(V\)-\(U\))-insertion diagrams play a crucial role to reduce the dependence on the HO parameter. Once a complete survey of all possible sources of approximations induced by the perturbative expansion has been completed, a comparison of the SM results with those provided by the _ab initio_ NCSM can be performed [18, 186]. Actually, it should be pointed out that the results reported in Fig. 8-10 have been obtained starting from the \(A\)-body Hamiltonian of Eqs. (13)-(15), which is not translationally invariant, while NCSM calculations employ a purely intrinsic Hamiltonian. Therefore, to compare our SM results with the NCSM ones we have to remove the center of mass kinetic energy from Eqs.(13)-(15), namely we have to use the Hamiltonian defined in Eq. (77). Figure 10: Theoretical energies of \({}^{6}\)Li yrast states relative to \({}^{4}\)He, obtained with the N\({}^{3}\)LO \(NN\) potential, as a function of \(\hbar\omega\)[145]. See text for details. The calculated energies of the yrast states in \({}^{6}\)Li relative to \({}^{4}\)He are reported in Fig. 11; the results labelled with (a) refer to a SM calculation with an effective Hamiltonian derived from Eqs. (13)-(15), the spectrum (b) corresponds to an effective Hamiltonian derived from the translationally invariant Hamiltonian of Eq. (77) retaining only the \(NN\) component. The NCSM spectrum (c) is obtained considering the calculated binding energy of \({}^{6}\)Li in Ref. [87] with respect to the \({}^{4}\)He ground state energy [18], and the \({}^{6}\)Li excitation energies reported in Ref. [186]. The results in Fig. 
11 evidence how relevant is to employ a purely intrinsic Hamiltonian to compare correctly the ground-state energies of SM and NCSM. This choice of the Hamiltonian does not affect, however, the energy spacings, and it should be noted that the difference between the not transitionally invariant and intrinsic Hamiltonians rapidly decreases with growing \(A\). Moreover, we can conclude that the agreement between the SM and NCSM results for \({}^{6}\)Li, which is a two-valence nucleon system with respect to \({}^{4}\)He core, is quite good. This conclusion may be extended also to calculations of nuclei with a number of valence nucleons larger than two. Figure 11: Theoretical energies of \({}^{6}\)Li yrast states relative to \({}^{4}\)He, obtained with N\({}^{3}\)LO \(NN\) potential. (a) SM calculation with an effective intrinsic plus center of mass Hamiltonian. (b) SM calculation with an effective intrinsic Hamiltonian. (c) NCSM calculation [145]. Figure 12: Theoretical and experimental spectra for \({}^{8}\)Li, \({}^{8}\)B, \({}^{8}\)Be, and \({}^{10}\)B. The theoretical energies have been obtained using the N\({}^{3}\)LO \(NN\) potential within SM and NCSM calculations. Figure adapted from Ref. [86]. In Figs. 12 and 13 the low-energy excitation spectra of \({}^{8}\)Li, \({}^{8}\)B, \({}^{8}\)Be, \({}^{10}\)B, \({}^{11}\)B, \({}^{12}\)C, and \({}^{13}\)C, calculated with SM [86] and NCSM [87, 88] are reported and compared with experiment [193]. From the inspection of these two figures we see that, as regards the excitation spectra of many-valence nucleon systems, the comparison between SM calculations with \(H_{\rm eff}\) derived within the perturbative approach and _ab initio_ calculations is quite satisfactory. In Fig. 14 (a), the ground-state energies, relative to \({}^{4}\)He, for the \(N=Z\) nuclei with mass \(6\leq A\leq 12\) calculated within the SM (dot-dashed line) [145] are compared with those of NCSM calculations (dotted line) and the experimental ones (continuous line) [194]. We see that discrepancies between SM and NCSM results increase with the number of valence nucleons, and this may be ascribed to the fact that many-body (\(>2\)) components of \(H_{\rm eff}\) have not been taken into account. As mentioned in Section 3.2.1, for nuclei with a number of valence nucleons larger than two, the \(\hat{Q}\) box should contain diagrams with at least three incoming and outcoming valence particles, as for Figure 14: Experimental ground-state energies for \(N=Z\) nuclei with mass \(6\leq A\leq 12\) are compared with theoretical values obtained using the N\({}^{3}\)LO \(NN\) potential within the NCSM and SM. SM results refer to calculations (a) without and (b) with contributions from 3N induced forces [86, 145]. Figure 13: Same as in Fig. 12, but for \({}^{11}\)B, \({}^{12}\)C, and \({}^{13}\)C. Figure adapted from [86]. example the second-order three-body diagrams in Fig. 4. In order to include the effects of these contributions, in Ref. [86] the monopole component of the diagrams in Fig. 4 has been calculated and added to the theoretical g.s. energies. The results of this procedure are reported in Fig. 14 (b), where the new calculated SM g.s. energies (black squares) are compared with both the experimental ones (red triangles) and those obtained with NCSM (blue bullets) [86]. As it can be seen, the comparison between SM and NCSM has been efficiently improved with respect to that of Fig. 14 (a), the largest discrepancy being about 4% for \({}^{8}\)Be. 
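As a quick numerical cross-check of the truncation estimate in Eq. (79) for the two potentials discussed above (the constants are the standard values of \(\hbar c\) and of the nucleon mass; the rounding and the comparison are ours):

```python
# N_max estimate from Eq. (79): N_max * (hbar omega) = (hbar c * Lambda)^2 / (M c^2).
HBARC = 197.327      # MeV fm
MC2 = 938.92         # average nucleon mass, MeV

def n_max_estimate(cutoff_fm_inv, hbar_omega_mev):
    e_max = (HBARC * cutoff_fm_inv) ** 2 / MC2   # Eq. (78), in MeV
    return e_max / hbar_omega_mev

for label, cutoff in [("N3LO,  Lambda = 2.6 fm^-1", 2.6), ("N3LOW, Lambda = 2.1 fm^-1", 2.1)]:
    print(f"{label}:  N_max ~ {n_max_estimate(cutoff, 19.0):.1f}")
# Gives roughly 15 and 10, respectively, in line with the N_max values quoted above
# (the smooth N3LO regulator pushes the truncation actually needed somewhat higher).
```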
So far, we have shown that the derivation of \(H_{\rm eff}\) via a perturbative expansion of the \(\hat{Q}\)-box vertex function provides SM results that are in a satisfactory agreement with those of the _ab initio_ method NCSM, when accounting for realistic \(NN\) potential only. In Ref. [86] a step forward has been made by including in the derivation of \(H_{\rm eff}\) contributions from a chiral 3NF [18]. It is worth recalling that in the chiral perturbative expansion the 3N potentials appear from N\({}^{2}\)LO on, and at this order the 3N potential consists of three components (see Fig. 3), which are the 2PE, the 1PE, and the contact terms. The adopted intrinsic Hamiltonian is defined in Eq. (77). As reported in Section 2.2, a great advantage of ChPT is that it generates nuclear two- and many-body forces on an equal footing [14, 15, 16], namely most interaction vertices that appear in the 3NF also occur in the \(NN\) potential. The parameters carried by these vertices are fixed in the construction of the chiral 2NF, and for the N\({}^{2}\)LO 3N potential they are the LECs \(c_{1}\), \(c_{3}\), and \(c_{4}\), appearing in \(v_{3N}^{(2\pi)}\). However, the 3N 1PE term and the contact interaction are characterized by two extra LECs (known as \(c_{D}\) and \(c_{E}\)), which cannot be constrained by two-body observables, and should be fitted by reproducing observables in systems with mass \(A>2\). The goal of the work of Ref. [86] has been to benchmark SM calculations, now including also the contribution from a N\({}^{2}\)LO 3N potential, against those obtained with NCSM [87, 88], and consequently the adopted \(c_{D},c_{E}\) values are -1, -0.34, respectively, as those employed in Ref. [87] (see Fig. 1 in [87]). As already mentioned before, \(H_{\rm eff}\) is calculated introducing the contribution of the N\({}^{2}\)LO 3N potential at first-order in many-body perturbation theory only for one- and two-valence nucleon systems. The contribution at first order to the single-particle and two-body components of the \(\hat{Q}\) box from a three-body potential are shown in Fig. 6 and their expression is reported in Eqs. (51) and (52). In Section 3.2.1 we have also pointed out that these expressions give the coefficients which multiply the one-body and two-body terms, respectively, arising from the normal-ordering decomposition of the three-body component of a many-body Hamiltonian [146]. In Figs. 15 and 16, we show the low-energy spectra of \({}^{6}\)Li, \({}^{8}\)Li, \({}^{8}\)B, \({}^{8}\)Be, \({}^{10}\)B, \({}^{11}\)B, \({}^{12}\)C, and \({}^{13}\)C, calculated within the SM framework, now including also the contributions from the N\({}^{2}\)LO 3N potential. They are compared with the experimental ones [193] and the NCSM results [87, 88]. Figure 15: Theoretical and experimental spectra for \({}^{6}\)Li, \({}^{8}\)Li, \({}^{8}\)B, and \({}^{8}\)Be. The theoretical energies have been obtained using the N\({}^{3}\)LO \(NN\) plus N\({}^{2}\)LO 3N potentials within SM and NCSM calculations. Figure adapted from Ref. [86]. From the inspection of Figs. 15 and 16, the results obtained with \(H_{\rm eff}\) derived by expanding perturbatively the \(\hat{Q}\) box and the NCSM ones are in a close agreement, as in the case with only the \(NN\) potential. Moreover, the theory with the 3NF compares far better with experiment, as can be seen in all the reported spectra. 
In this regard, it is paramount, among other observations, to note that the experimental sequence of observed states in \({}^{10}\)B is restored, and the degeneracies of \(J^{\pi}=1/2_{1}^{-},3/2_{1}^{-}\) and \(J^{\pi}=3/2_{2}^{-},5/2_{1}^{-}\) states in \({}^{11}\)B are removed. This supports the crucial role played by the 3N potential to improve the spectroscopic description of \(p\)-shell nuclei. ### _Approaching the weakly bound systems_ #### 4.2.1 _The limit of oxygen isotopes_ The oxygen isotopic chain exhibits, in addition to the conventional doubly magic \({}^{16}\)O, the occurrence of two new shell closures in \({}^{22}\)O [195] and \({}^{24}\)O [196, 54, 197] with \(N=14\), and 16, respectively. Of particular interest is that the dripline is located at \({}^{24}\)O [198] very close to stability line, the so-called "Oxygen anaomaly". The heaviest experimentally identified isotopes, \({}^{25}\)O and \({}^{26}\)O, are indeed unbound with respect to neutron emission [199, 200]. As a matter of fact, the doubly-closed nature of \({}^{24}\)O was suggested, before to be experimentally confirmed, by various SM calculations employing phenomenological interactions, which were also able to explain the occurrence of the neutron dripline at \(N=16\)[201, 202, 203]. In these papers, it was shown that the \(N=16\) shell gap arises from an upward shift of the \(0d_{3/2}\) orbital, whose energy increases rapidly becoming close to zero while neutrons fill the \(sd\) shell. As for the \(0s_{1/2}\) orbital, these calculations indicate that it remains bound and the \(0d_{5/2}-1s_{1/2}\) spacing opens up from \(N=8\) and 14 making \(N=14\) a magic number. On the other hand, realistic SM calculations based on 2NF-only predict that the \(0d_{3/2}\) orbital comes down in energy with increasing number of valence neutrons and remains well bound in \({}^{24}\)O and beyond, putting the neutron dripline in an incorrect position. The first study that has explicitly included the 3NF in the derivation of the effective SM Hamiltonian for oxygen isotopes has been carried out by Otsuka and coworkers [52]. In the Introduction, we referred to this work by mentioning calculations using a chiral \(NN\) N\({}^{3}\)LO potential plus 3N contributions. However, Ref. [52] also reports on the comparison with calculations employing phenomenological and realistic \(NN\) interactions, which more clearly highlights the effects of 3N forces on the limit of oxygen isotopes. Specifically, g.s. energies of the oxygen isotopes are also obtained by using i) phenomenological SDPF-M [204] and USD-B [205] forces; ii) \(NN\) effective interactions derived by way of the MBPT at second order from a \(G\) matrix and including Fujita-Miyazawa 3N forces due to \(\Delta\) excitations. The comparison between results of these calculations is illustrated in Fig. 17. We see that the energies from phenomenological interactions well agree with experiment and the minimum value correctly locates at \(N=16\). By contrast, the energies based on the \(NN\) force-only do not stop to decrease putting the dripline at \(N=20\), regardless of the renormalization procedure employed for the \(NN\) potential (the same results are obtained by using the \(V_{\rm low-k}\) instead of the G matrix). Then, the effects introduced by including the Fujita-Miyazawa 3N contributions, which become more relevant with increasing neutron number, correct the behavior of the binding energies bringing a significant raise from \(N=16\) to 18. Results of Ref. 
[52] have been confirmed by more recent calculations based on large many-body spaces and with improved MBPT and nonperturbative valence-space Hamiltonians. A summary of these calculations performed with different _ab initio_ approaches, including references, is given in Ref. [59], where it is shown that they all predict the correct dripline position at \({}^{24}\)O when large many-body spaces are adopted, with binding energies differing by only a few percent. As concerns oxygen isotopes beyond the dripline, the description of their binding energies and excitation spectra requires considering the coupling with the continuum in addition to the 3NF contribution. This issue is presented within the GSM framework in Section 4.2.2, where unbound resonance states in \({}^{24}\)O are also discussed. The role played by the coupling with the continuum and the 3NF contribution is also evidenced for the proton-rich Borromean \({}^{17}\)Ne in Section 4.2.3.

Figure 17: Ground-state energies of oxygen isotopes measured from \({}^{16}\)O, including experimental values of the bound \({}^{16-24}\)O. Energies obtained from (a) phenomenological forces SDPF-M [201] and USD-B [126], (b) a \(G\) matrix and including Fujita-Miyazawa 3N forces due to \(\Delta\) excitations. Figure adapted from Ref. [52].

#### 4.2.2 _3NF and continuum in neutron-rich oxygen isotopes_

Neutron-rich oxygen isotopes have been attracting much interest from both experiment and theory, not only because of the famous "Oxygen Anomaly" [52], discussed above, but also because of their many peculiar phenomena. The \({}^{26}\)O ground state has been found barely unbound, with a two-neutron separation energy of only \(-18\) keV [206], and an excited state in the unbound resonant isotope \({}^{25}\)O has been observed in a recent experiment [207]. Even though many theoretical works [52, 208, 209, 66] have shown the importance of 3NFs in nuclear structure calculations to reproduce the oxygen dripline position, the coupling to the continuum still needs to be considered, since it is crucial to the understanding of the loose structures of exotic nuclei. Using the GSM with chiral \(NN\) and 3N forces, we have investigated oxygen isotopes up to and beyond the neutron dripline, taking into account both 3NF and continuum effects. For neutron-rich oxygen isotopes, we choose the doubly magic system \({}^{16}\)O as core, with its ground-state Slater determinant as reference state for the normal-ordering decomposition of the 3NF. The Berggren basis is generated by the WS potential. We adopt the universal WS parameters [210], but reduce the depth parameter by 2.3 MeV to obtain a reasonable \(0d_{3/2}\) resonance width compared with the experimental width extracted in \({}^{17}\)O. This WS potential gives bound \(0d_{5/2}\) and \(1s_{1/2}\) orbitals and a resonant \(0d_{3/2}\) orbital. In this Berggren basis produced by the WS potential, we construct the effective GSM interaction for the model space {\(0d_{5/2}\), \(1s_{1/2}\), \(0d_{3/2}\) pole plus continuum}, using the MBPT of \(\hat{Q}\)-box folded diagrams in its EKK formulation [181]. Details about the complex-\(k\) MBPT can be found in our previous papers [173, 180]. The continuum effect enters the model through both the complex effective interaction and the active model space, which includes continuum partial waves. As discussed in Section 3.3, the complex-symmetric GSM Hamiltonian is diagonalized in the complex model space via the Jacobi-Davidson method.
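To make the normal-ordering step mentioned above more concrete, the sketch below spells out the standard contractions of an antisymmetrized three-body interaction over a filled reference state, keeping the zero-, one- and two-body normal-ordered pieces and discarding the residual three-body term. It is written in an uncoupled (m-scheme-like) notation with invented arrays and an invented filled reference, purely for illustration; the actual GSM calculations use angular-momentum-coupled matrix elements as in Eqs. (51) and (52).

```python
import numpy as np
from itertools import permutations

n_orb, n_hole = 4, 2            # toy single-particle space; the first n_hole orbits are filled
rng = np.random.default_rng(1)

def antisymmetrize(t):
    """Antisymmetric projection of a rank-6 tensor over bra (0,1,2) and ket (3,4,5) indices."""
    def sign(p):
        return (-1) ** sum(1 for i in range(3) for j in range(i) if p[j] > p[i])
    out = np.zeros_like(t)
    for pb in permutations(range(3)):
        for pk in permutations(range(3)):
            out += sign(pb) * sign(pk) * t.transpose(pb + tuple(3 + i for i in pk))
    return out / 36.0

# Toy antisymmetrized 3N matrix elements  W[p,q,r,s,t,u] = <pqr|W|stu>_AS
W = antisymmetrize(rng.normal(size=(n_orb,) * 6))
h = slice(0, n_hole)            # occupied (hole) orbits of the reference Slater determinant

E0 = np.einsum('abcabc->', W[h, h, h, h, h, h]) / 6.0    # zero-body (vacuum) shift
f1 = np.einsum('pabqab->pq', W[:, h, h, :, h, h]) / 2.0  # normal-ordered one-body piece
G2 = np.einsum('pqarsa->pqrs', W[:, :, h, :, :, h])      # normal-ordered two-body piece
print(E0, f1.shape, G2.shape)
```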
We have calculated the binding energies of oxygen isotopes, shown in Fig. 18. The comparison of the different calculations given in the figure shows that both the 3NF and the continuum play important roles in reproducing the experimental binding energies, especially in the vicinity of the dripline. The 3NF gives repulsive contributions to the g.s. energies, and the effect increases with the number of valence neutrons. In the inset of Fig. 18, the 3NF effect has been dissected in \({}^{24-26}\)O. We find that the attractive \(v_{\rm 3N}^{(1\pi)}\) and repulsive \(v_{\rm 3N}^{(\rm ct)}\) terms have similar values but opposite signs. Consequently, their effects almost cancel out, implying that the \(v_{\rm 3N}^{(2\pi)}\) term is responsible for the observed 3NF repulsive effect in this mass region.

Figure 18: Calculated \({}^{18-28}\)O ground-state energies with respect to the \({}^{16}\)O core [92], compared with experimental data and other calculations: conventional SM (HO) with 3NF but without continuum [52] and continuum CC with a density-dependent effective 3NF [178]. The inset shows the 3NF contributions from 2PE (\(2\pi\)), 1PE (\(1\pi\)) and contact terms.

Fig. 19 displays the calculated excitation spectra of \({}^{24-26}\)O. We see that the 3NF also improves the calculated spectra compared with experimental data, especially for unbound resonance states, such as the excited \(2_{1}^{+}\) and \(1_{1}^{+}\) states in \({}^{24}\)O and the ground state of \({}^{25}\)O. The calculations with the 3NF included also give better agreement with the measured resonance widths.

#### 4.2.3 _3NF and continuum in proton-rich Borromean \({}^{17}\)Ne_

Due to the Coulomb repulsion, the proton dripline, which has been reached experimentally up to \(Z\approx 90\) more than a decade ago [198], is not so far from the stability line compared with the neutron dripline. However, when focusing on the light proton-rich region, where the Coulomb barrier is not so high and the continuum effect is prominent, interesting phenomena (such as halo and Borromean structures) may emerge. Among these nuclei, the Borromean \({}^{17}\)Ne is of particular interest in both experiment [211, 212, 213] and theory [214, 215, 216]. It may bear a similarity to the two-neutron halo nucleus \({}^{6}\)He, which can be described as a three-body \({}^{4}\)He + 2n system. To investigate the weakly bound character of \({}^{17}\)Ne, we performed a realistic GSM calculation with the chiral N\({}^{3}\)LO 2NF and N\({}^{2}\)LO 3NF already discussed in Section 4.2.2. In this calculation, we choose the closed-shell \({}^{14}\)O as the core. The GSM valence space contains the neutron bound states \(\nu\{0p_{1/2},0d_{5/2},1s_{1/2},0d_{3/2}\}\) and the proton resonances \(\pi\{1s_{1/2},0d_{5/2}\}\) plus continua \(\pi\{s_{1/2},d_{5/2}\}\). To reproduce the inverted positions of the proton 1s\({}_{1/2}\) and 0d\({}_{5/2}\) orbitals in \({}^{15}\)F, the WS parameters need to be adjusted [93]. In detail, we reduce the spin-orbit coupling strength to 2 MeV, and reduce the depth parameter by 2 MeV for the proton WS potential. The valence-space Hamiltonian was obtained using MBPT as described in Sec. 4.2.2, and diagonalized by the Jacobi-Davidson method. More details can be found in Ref. [93]. The calculated low-lying levels of \({}^{17}\)Ne and its subsystem \({}^{16}\)F are presented in Fig. 20, with respect to the \({}^{15}\)O ground-state energy.
It is seen that the 3NF lifts the whole spectra of \({}^{17}\)Ne and \({}^{16}\)F, which makes \({}^{16}\)F unbound and leads to a Borromean structure of \({}^{17}\)Ne.

Figure 19: GSM calculations of excitation spectra for \({}^{24-26}\)O [92]. The experimental data are taken from [197, 200, 206, 207].

Figure 20: Calculated excitation spectra of \({}^{17}\)Ne along with its isotone \({}^{16}\)F, with respect to the \({}^{15}\)O ground-state energy [93]. Blue and red lines are the GSM calculations with 2NF only and 2NF+3NF, respectively. Experimental data are taken from Refs. [194, 212].

In the g.s. of \({}^{17}\)Ne, we have found strong configuration mixing, with a 54% \(s\)-wave component of \(\pi 1s_{1/2}^{2}\otimes\nu 0p_{1/2}^{1}\), which is consistent with that in Ref. [216]. The small two-proton separation energy and the large weight of the \(s\)-wave component suggest a halo structure in the ground state of \({}^{17}\)Ne. We have calculated the one-body density by GSM, shown in Fig. 21. Compared with \({}^{15}\)O, there exists a long tail in the density of \({}^{17}\)Ne, which is direct evidence supporting its halo nature. From Fig. 21, we see that the 3NF effect on the density is small, while the effect from the continuum coupling is significant. More detailed discussions can be found in Ref. [93].

#### 4.2.4 _The calcium isotopes dripline_

The Ca isotopic chain, from the experimental point of view, presents characteristics similar to the oxygen one. In analogy with the shell gaps found at \(N=14\) and 16 for oxygen isotopes, it exhibits two neutron shell closures at \(N=32\)[217, 218, 219] and 34 [220], in addition to the standard ones at \(N=20\) and 28. However, while oxygen isotopes have been studied experimentally even beyond the neutron dripline [199, 206], in the Ca isotopic chain not even the position of the dripline is known. Indeed, experimental studies have reached only \({}^{60}\)Ca [221], with \(N/Z=2\) and 12 neutrons more than the last stable isotope. From the theoretical side, calculations for Ca isotopes have been performed by both mean-field and microscopic approaches - such as relativistic Hartree-Bogoliubov or density functional theories (DFT) in the first case and SM or _ab initio_ methods in the second - but their results provide ambiguous indications about the behavior of the g.s. energies as well as about the location of the neutron dripline. As a matter of fact, some studies predict the neutron dripline around \({}^{60}\)Ca [63, 222, 223, 224], while others push it up to \({}^{70}\)Ca [225, 226, 227, 228, 229, 230]. However, the available experimental data, including the recent evidence of a bound \({}^{60}\)Ca [221] and the mass measurements of heavy odd calcium isotopes [78, 219], may be very helpful to narrow the spread of the theoretical predictions. As an illustrative example, we compare in Fig. 22 the experimental two-neutron separation energies (\(S_{2n}\)s) for even Ca isotopes with results from different calculations. We see that DFT using the Skyrme interaction of Ref. [231] predicts that \({}^{60}\)Ca is well bound and that the neutron dripline is around \({}^{70}\)Ca, as also found in other mean-field calculations [225, 226, 227, 228, 229]. It is worth mentioning, however, that Figure 21: The \({}^{17}\)Ne density in the valence space, calculated by GSM with 2NF only (blue dot-dashed line) and 2NF+3NF (red line).
SM (WS) stands for the conventional SM calculations performed in the non-continuum discrete WS basis, with 2NF+3NF (purple line) and 2NF only (grey dot-dashed line). The \({}^{15}\)O density (black dash line) in the valence space is also displayed for comparison. the results depend on the symmetry energy, as clearly underlined in Ref. [227]. The last bound Ca isotope was located well beyond \(N=40\) also by GSM calculations which start from the CD-Bonn \(NN\) potential [230]. On the other hand, the position of the dripline predicted by the IM-SRG approach [224] is around \({}^{60}\)Ca, as it is confirmed by other microscopic approaches (see Refs. [63, 223]). However, Holt and coworkers [25], who updated the SM results of Ref. [63], have underlined the difficulty of a precise prediction for the dripline due to the flat evolution of the g.s. energies beyond \({}^{60}\)Ca. Figure 22 also reports \(S_{2n}\)s from SM calculations in the \(fp\) valence space based on the KB3G phenomenological interaction [232], and we see that the theoretical values nicely overlap the available experimental data and locate between the DFT and GSM curves. The study of the evolution of the nuclear masses in isotopic chains has pushed forward our understanding of the nuclear forces, drawing attentions on new aspects of nuclear forces that develop going from the valley of stability to the limits of existence. In this connection, particular attention is focused on 3NFs, which have shown to be critical in calculations of extreme neutron-rich systems. The effect of 3NFs on the dripline and on the shell evolution in Ca region has been the subject of investigations only in the last decade or so. The g.s. energies along the Ca isotopic chain have been studied starting from chiral \(NN\) and 3N potentials by way of the SM [25, 63, 89, 90], CC model [223], IM-SRG [224] and self-consistent Green function theory [33] approaches. In particular, SM calculations of Ref. [63] - performed within the \(fp\) space - have first evidenced that the inclusion of the three-body component in the derivation of the effective Hamiltonian leads to a repulsive contribution needed to correct the overbinding obtained when considering the \(NN\) force only, and provides a better agreement with the available experimental data, as observed for oxygen nuclei [52]. In Ref. [63], it was also shown that predictions with \(NN\)+3N forces are quite close to those resulting from the phenomenological GXPF1 [127] and KB3G [232]\(NN\) interactions, which show similar monopole components [127] although developed by employing different techniques. This supports the relevant role of 3N forces in removing deficiencies of the two-body monopole component arising from \(NN\)-only theory. A careful analysis of the contribution of chiral 3NFs to the monopole component of the effective SM Hamiltonian is reported in Section 4.3.2. Here, we shall discuss predictions of Ca dripline based on the realistic SM calculations of Refs. [89, 90] performed in the \(0f_{7/2}\), \(0f_{5/2}\), \(1p_{3/2}\), \(1p_{1/2}\) (\(fp\)) neutron space as well as in the extended space including the neutron \(0g_{9/2}\) orbital (\(fpg_{9/2}\)). In both cases, a chiral \(NN\)+3N potential is chosen as starting point Figure 22: Experimental [233] two-neutron separation energies as a function of the neutron number \(N\) for Ca isotopes compared with the results of a variety on many-body methods. See text for details. 
to construct the effective SM Hamiltonians, but for the \(fp\) space we also report results obtained with the \(NN\) force only. The \(NN\) and 3N forces were derived within the ChPT framework [55] stopping the perturbative series at N\({}^{3}\)LO and at N\({}^{2}\)LO, respectively (see Sections 2.2 and 4.1). We would like to reiterate here that the \(NN\) and 3N forces consistently share the same nonlocal regulator function and some LECs which are determined by the renormalization procedure described in Ref. [16], while the values of the additional LECs appearing in the 3N force, \(c_{D}\) and \(c_{E}\), are taken from Ref. [87]. The effective Hamiltonians were derived within the framework of the MBPT outlined in Section 3.2.1 by arresting the \(\hat{Q}\)-box expansion of the one- and two-body Goldstone diagrams at third order in the \(NN\) potential and at first order in the 3N one, the latter diagrams corresponding to the normal-ordered one- and two-body parts of the 3N force. Moreover, calculations were carried out, as described in Section 3.2.1, by employing density-dependent \(H_{\rm eff}\)s, whose TBMEs change according to the number of valence nucleons and take into account the interactions via two-body force of clusters of three-valence nucleons with configurations outside the model space. This means that, in addition to genuine 3N forces, we consider also induced 3N contributions, that come into play for systems with more than two-valence nucleons. It is worth mentioning that similar calculations, in both the \(fp\) and \(fpg_{9/2}\) valence spaces, were performed in [25, 63], where, however, the \(NN\) chiral potential was renormalized through the \(V_{\rm low-k}\) technique [56] and the effects of induced three-body contributions in the derivation of \(H_{\rm eff}\) were neglected. The experimental \(S_{2n}\)s are compared with calculated values in Fig. 23. Results within the \(fp\) space obtained by using the \(NN\) and \(NN\)+3N force are dubbed, respectively, as \(H^{\rm 2N}-fp\) and \(H^{\rm 3N}-fp\), while \(H^{\rm 3N}-fpg_{9/2}\) indicates results in the \(fpg_{9/2}\) space with the \(NN\)+3N force. By comparing the \(H^{\rm 2N}-fp\) and \(H^{\rm 3N}-fp\) results, we see that both calculations reproduce the rather flat experimental \(S_{2n}\) behavior up to \(N=28\). Then, the measured values are overestimated when using the \(NN\)-only force, while, in line with the results of Refs. [25, 63], the repulsion due to the 3NF leads to less bound g.s. energies and improves the agreement with experiment. However, at \(N=36\) a too sudden drop is found by \(H^{\rm 3N}-fp\), at variance with the experimental finding, which may be ascribed to the missing contributions arising from the \(0g_{9/2}\) orbital. We find, indeed, that a larger \(S_{2n}\), quite close to the experimental value, is predicted by \(H^{\rm 3N}-fpg_{9/2}\) calculations at \(N=36\). As a matter of fact, when including both the neutron \(0g_{9/2}\) orbital and the 3N force we are able to well describe the available experimental data, and in particular to predict \({}^{60}\)Ca as a bound system, consistently with the recent Figure 23: Experimental [233] two-neutron separation energies as a function of the neutron number \(N\) for Ca isotopes compared with SM results obtained within the \(fp\) and \(fpg_{9/2}\) spaces. Calculations are based on the \(NN\) and \(NN\)+3N forces. See text for details. experiment of Ref. [221]. 
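For reference, the \(S_{2n}\) values plotted in Figs. 22 and 23 follow from the calculated (or measured) ground-state energies simply as \(S_{2n}(Z,N)=E_{\rm g.s.}(Z,N-2)-E_{\rm g.s.}(Z,N)\); a minimal sketch, with invented placeholder energies rather than the values discussed here, that also reads off the heaviest two-neutron-bound isotope is:

```python
# Two-neutron separation energies from ground-state energies relative to the core (MeV).
# The numbers below are invented placeholders, not the calculated or measured values.
gs_energy = {30: -95.0, 32: -103.5, 34: -109.0, 36: -112.5, 38: -112.0}   # N -> E_gs(Z, N)

def s2n(energies, n):
    """S_2n(Z, N) = E_gs(Z, N - 2) - E_gs(Z, N)."""
    return energies[n - 2] - energies[n]

for n in sorted(gs_energy)[1:]:
    print(f"N = {n:2d}   S_2n = {s2n(gs_energy, n):+.1f} MeV")

# The heaviest isotope with S_2n > 0 marks the (two-neutron) dripline candidate.
print("last two-neutron-bound N in this toy table:",
      max(n for n in sorted(gs_energy)[1:] if s2n(gs_energy, n) > 0))
```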
We have also found that calcium isotopic chain is bound at least up to \({}^{70}\)Ca in line with the results of the DFT [231] and GSM [230] calculations mentioned above, and the recent Bayesian analysis of different DFT calculations [228]. In Fig. 23, labelled by \(\bar{H}^{3\rm N}-fpgg_{9/2}\), we have also reported the \(S_{2n}\)s obtained by considering only a genuine chiral 3N force without accounting for induced 3N contributions. In this case, the same Hamiltonian derived for the two-body system is adopted for all Ca isotopes. We see that results of the two Hamiltonians, \(H^{3\rm N}-fpgg_{9/2}\) and \(\bar{H}^{3\rm N}-fpgg_{9/2}\), almost overlap up to \(N=32\), but then differences between them start to grow and become larger and larger with increasing number of valence neutrons. Actually, from \(N=34\) on the inclusion of induced 3NFs brings an upshift of the two-neutron separation energies evidencing their attractive contribution which counterbalances in part the one arising from a genuine 3NF. We may therefore conclude that the effects of these two 3NFs with very different origin are both equally important in determining the dripline of Ca isotopes. It is worth mentioning that similar results are obtained for Ti isotopes as shown in Ref. [91]. ### _Shell evolution and the role of three-body forces_ #### 4.3.1 _Overview: the \(fp\) shell region_ The access to experimental information for nuclear systems with a large unbalanced number of neutrons and protons, the so-called exotic nuclei, has opened up new possibilities to advance our understanding of nuclear physics. One of the key questions that modern research has allowed to be address concerns the robustness of the standard magic numbers, namely the evolution of the shell structure as a function of \(N/Z\), in particular when moving far from the stability line and approaching the driplines. Significant theoretical and experimental efforts have been devoted in the last four decades to this issue by investigating nuclear structure properties along isotopic and isotonic chains. Experiments have been performed with radioactive beams to identify the disappearance of conventional magic numbers or the appearance of new ones, and at the same time a number of theoretical papers have been published aimed at understanding the underlying mechanisms determining such behaviour and the specific role of the various components of the nuclear interaction (see for instance [121, 234]). Noteworthy examples of disappearance or weakening of canonical magic numbers has been observed in light nuclei at \(N=8\), \(N=20\), and \(N=28\) for \({}^{12}\)Be, \({}^{32}\)Mg and \({}^{42}\)Si, respectively [235, 236, 237, 238, 239, 240], while the onset of new shell closures at \(N=14\), \(16\) and \(N=32\), \(34\) has been evidenced, respectively, in neutron-rich oxygen [197, 54] and calcium isotopes [241, 242, 243, 244, 245, 246, 247, 248, 219, 250, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 212, 214, 215, 217, 219, 221, 216, 217, 218, 219, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240]. As an illustration, we report in Fig. 24 the behaviour of the experimental excitation energy of the first \(2^{+}\) state and its \(E2\) transition rate to the g.s. in the even \(Z=8\) isotopes and \(N=20\) isotones as a function of increasing \(N\) and \(Z\), respectively [193]. We see that the \(2^{+}\) energy of oxygen isotopes in the left-hand side of Fig. 
24 drops by a factor of about three from \(N=8\) to \(10\) before rising up at \(N=14\), with a more dramatic increase at \(N=16\). The behavior of the \(2^{+}\) energy may be seen as a manifestation of the doubly-magic nature of \({}^{24}\)O\({}_{16}\), corresponding to the complete filling of the neutron \(0d_{5/2}\) and \(1s_{1/2}\) orbitals, and to the appearance of a significant \(0d_{3/2}-1s_{1/2}\) subshell gap. The increase in energy from \(N=12\) to \(N=14\) together with a decrease in the \(B(E2,2^{+}_{1}\to 0^{+}_{1})\) value testifies the existence of a consistent gap also between the neutron \(0d_{5/2}\) and \(1s_{1/2}\) orbitals. The evolution of the \(N=20\) shell gap is illustrated on the right-hand side of Fig. 24. The persistence of the shell closure at \(N=20\) and the existence of significant \(\pi 1s_{1/2}-\pi 0d_{5/2}\) and \(\pi 0d_{3/2}-\pi 1s_{1/2}\) gaps account for the behavior of the \(2^{+}_{1}\) energy and \(B(E2,2^{+}_{1}\to 0^{+}_{1})\) transition rate in the \(N=20\) isotones from \({}^{34}\)Si to \({}^{40}\)Ca. However, no SM calculations limited to the \(sd\) space can account for the sudden and strong change from \({}^{32}\)Mg to \({}^{34}\)Si, where the \(2^{+}\) increases from \(\sim 0.9\) to \(3.3\) MeV and the \(B(E2)\) value decreases by a factor of about \(4\). This indicated a collective nature for \({}^{32}\)Mg, which can be explained only by taking in account the correlation energy due to \(2p-2h\) neutron excitations [244]. These excitations are favoured over the normal \(sd\) configurations by the lowering of the gap between the \(sd\) and \(fp\) shells, that is produced as soon as protons are removed from the \(0d_{5/2}\) orbital. These changes in the shell structure, called shell evolution, and the interplay between spherical configurations and deformation are strictly connected to the monopole component of the interaction (Eq. (8)). In fact, as mentioned in Section 3.1, this component governs the behaviour of the ESPEs (see Eq. (10)) by accounting for the variations in the SP energies arising from the residual interaction between the valence nucleons. In this connection, a particularly appealing subject is the role of the different components of the nuclear force in determining the monopole interaction, that was first raised in 2001 by T. Otsuka and collaborators [202]. More specifically, attention has been focused in literature on the evaluation of the contributions originating from the central, vector, tensor components as obtained from the spin-tensor decomposition of the SM Hamiltonian [245]. In Refs. [246, 247], it has been shown that the splitting of the spin-orbit partners is essentially due to the tensor component, although any part of the Hamiltonian can give a relevant contribution to the shell evolution. Another important question that has been recently addressed is the relevance of 3NFs in the shell formation, as well as in the location of the neutron dripline as discussed in Section 4.2. In particular, its role in the shell formation has been investigated within the SM and IM-SRG approach for light- and medium-mass nuclei [52, 58, 59, 60, 61, 63, 69, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91], focusing mainly on the oxygen [52, 58, 59, 60, 69] and calcium [60, 63, 25, 89, 90] isotopic chains. 
In all these calculations, the effective Hamiltonians are derived from \(NN\) and 3N potentials built up within the chiral perturbative theory, while in [52, 249] the Fujita-Miyazawa three-body force is employed to study the low-lying states of neutron-rich nuclei with \(Z=8\), \(Z=10-14\) and \(N\sim 20\). The main results of all these works is that 3N forces give rise to a repulsive interaction between the valence particles that improves the agreement with experimental data. For instance, in Ref. [52], it was shown that the 3NF is able to correct the strong attractiveness of the monopole interaction between the \(0d_{3/2}\) and \(0d_{5/2}\) neutron orbitals, which explains the change in the location of the neutron dripline from \({}^{28}\)O to the experimentally observed \({}^{24}\)O, as discussed in Section 4.2.1. Here, to illustrate the role of the 3NF in providing a reliable monopole component of the effective SM Hamiltonian we choose as physics case the nuclei of the \(fp\) shell, namely Ca, Ti, Cr, Fe, Ni isotopes Figure 24: Experimental (a) \(2_{1}^{+}\) excitation energies and (b) \(B(E2,2_{1}^{+}\to 0_{1}^{+})\) transition rates in the \(Z=8\) isotopes (left) and \(N=20\) isotones (right). from \(N=20\) to 32. Within this mass region, two doubly magic nuclei, \({}^{48}\)Ca and \({}^{56}\)Ni, are known, while an enhancement in the collectivity is observed for nuclei with \(22\leq Z\leq 26\) and \(N=28\), as can be seen in Fig. 25 reporting the experimental \(2^{+}_{1}\) excitation energies and the \(B(E2,2^{+}_{1}\to 0^{+}_{1})\) values for the \(N=28\) isotones [193]. This case may be, therefore, a good testing ground to investigate the relevance of 3NFs in generating gaps, and it may be of particular interest to study how 3NFs affect the \(N=28\) shell closure when the proton \(0f_{7/2}\) orbital is getting filled. In the next section, on the basis of the results obtained in Ref. [89], we discuss physical quantities as the excitation energies of the yrast \(2^{+}\) states, the \(B(E2,2^{+}_{1}\to 0^{+}_{1})\) values, and the two neutron separation energies focusing on the changes produced by the the 3N force on the monopole component of the effective interaction and on the ESPEs. In Section 4.3.3, the 3NF monopole component is analyzed in terms of the central, vector, and tensor contributions. #### 4.3.2 _Monopole interaction and effective single-particle energies_ Calculations have been performed within the SM framework by considering valence protons and neutrons interacting in the valence space composed by the \(0f_{7/2}\), \(0f_{5/2}\), \(1p_{3/2}\), and \(1p_{1/2}\) orbitals outside doubly magic \({}^{40}\)Ca. The adopted effective Hamiltonians are derived within the MBPT approach starting from the chiral \(NN\)-only and \(NN\)+3N forces, as described in Section 4.2.4. As for the proton-proton channel the Coulomb force is added. We limit to consider nuclei from \(N=20\) to 32 to avoid that our predictions are affected by the choice of the adopted model space (see Section 4.2.4). In the following, we focus on the \(S_{2n}\)s and excitation energies of the yrast \(2^{+}\) states by comparing the experimental data with the results of SM calculations obtained by employing effective Hamiltonians derived from the \(NN\) force only (\(H^{2\rm N}\)) and the complete \(NN\) +3N force (\(H^{3\rm N}\)). 
Furthermore, to better pinpoint the role of the monopole component and how it is affected by the 3NF we have also introduced a third Hamiltonian, \(H^{\rm mon}\), which is made up by summing the monopole component of \(H^{3\rm N}\) to the multipole component of \(H^{2\rm N}\). However, as a preliminary step, we should comment on the SP energies of the adopted effective Hamiltonians. As mentioned in Section 3.2.1, the derivation of \(H_{\rm eff}\) for one-valence nucleon systems provides the theoretical SP energies, which are then subtracted from the diagonal matrix elements of \(H_{\rm eff}\) derived for the two-valence nucleon systems to obtain the TBMEs of the residual interaction. The SP energies arising from \(NN\)-only theory do not supply a reasonable proton and neutron gap between the Figure 25: Experimental (a) \(2^{+}_{1}\) excitation energies and (b) \(B(E2,2^{+}_{1}\to 0^{+}_{1})\) transition rates in the \(N=28\) isotones. \(0f_{7/2}\) orbital and the remaing three orbitals [89], which might prevent the description of the observed shell closure at \(Z\), \(N=28\). Therefore, in order to remove the effects due to the SP energies and concentrate on those produced by the two-body part of the effective Hamiltonians, specifically on their monopole components, calculations in all three cases (\(H^{\rm 2N}\), \(H^{\rm 3N}\), and \(H^{\rm mon}\)) are carried out starting from the same set of SP energies, namely those derived from the \(NN\)+3N force. The values of the neutron and proton SP energies can be found in Ref. [89]. In Figs. 26 - 30, the experimental data for the \(S_{2n}\)[233] and the excitation energies of the yrast \(2^{+}\) states [193] for Ca, Ti, Cr, Fe, and Ni isotopes from \(N=20\) to \(32\) are compared with the theoretical values obtained with \(H^{\rm 2N}\), \(H^{\rm 3N}\), and \(H^{\rm mon}\). Note that empty red circles for the experimental \(S_{2n}\) values refer to the estimated values reported in Ref. [233]. By comparing the theoretical \(S_{2n}\)s from \(H^{\rm 2N}\) and \(H^{\rm 3N}\) we observe, as already pointed out in Section 4.2.4, that the repulsive contributions of the 3NF is essential to quench the overbinding induced by the \(NN\) force only, thus producing a downshift of the \(S_{2n}\) curve and improving the agreement with experiment. When these contributions are taken into account, the calculated results follow closely the experimental behaviour for all the considered isotopes, while their omission leads to a bad reproduction Figure 27: Experimental and calculated (a) two-neutron separation energies and (b) \(2_{1}^{+}\) excitation energies for titanium isotopes from \(N=20\) to \(32\). See text for details. Figure 26: Experimental and calculated (a) two-neutron separation energies and (b) \(2_{1}^{+}\) excitation energies for calcium isotopes from \(N=22\) to \(32\). See text for details. Figure 28: Experimental and calculated (a) two-neutron separation energies and (b) \(2_{1}^{+}\) excitation energies for chromium isotopes from \(N=20\) to \(32\). See text for details. Figure 29: Experimental and calculated (a) two-neutron separation energies and (b) \(2_{1}^{+}\) excitation energies for iron isotopes from \(N=20\) to \(32\). See text for details. of the observed energy drop between \(N=28\) and 30. 
This drop can be seen as a manifestation of the shell closure at \(N=28\), corresponding to the filling of the \(0f_{7/2}\) neutron orbital, and the deficiency of \(H^{\rm 2N}\) ascribed to the inadequate gap between the \(0f_{7/2}\) and \(1p_{3/2}\) neutron ESPE provided by this Hamiltonian. On the other hand, \(H^{\rm mon}\) and \(H^{\rm 3N}\) give very similar results, thus confirming the inaccuracy of the monopole components obtained by considering the \(NN\) force only, as well as the central role of these components in determining the \(S_{2n}\) evolution and the \(N=28\) shell closure. At the end of this section, we shall analyze the changes introduced by the 3NF in the neutron and proton ESPEs, which only depend on the monopole components of the Hamiltonian. Similar considerations follow also from the excitation energies of the yrast \(2^{+}\) states. As shown in panel (b) of Fig. 26, the shell closure at \(N=28\) in Ca isotopes is very well reproduced by \(H^{\rm 3N}\) and \(H^{\rm mon}\), while the \(2^{+}_{1}\) state predicted by \(H^{\rm 2N}\) lies about 0.7 MeV below the experimental one. For Ti isotopes, we see in Fig. 27 that the experimental behavior is, overall, well reproduced by all three SM Hamiltonians, while for Cr and Fe isotopes (Figs. 28 and 29) the energy gap at \(N=28\) predicted by \(H^{\rm 2N}\) is underestimated by \(\sim 0.6\) MeV, and the difference increases up \(\sim 1.8\) MeV in Ni isotopes (Fig. 30). Another subshell closure at \(N=32\) is observed in Ca,Ti, Cr isotopes, although not so strong as that at \(N=28\), corresponding to the filling of the \(1p_{3/2}\) neutron orbital. Also in this case results from \(H^{\rm 2N}\) provide, in general, a less satisfactory agreement with experimental data. For all considered isotopic chains, calculations with \(H^{\rm 2N}\) underestimate the experimental excitation energy of the \(2^{+}_{1}\) states at both \(N=28\) and \(N=32\) providing too much collectivity. When moving from Ca isotopes with only identical valence nucleons to systems with \(Z>20\), we see a change in the closure properties that may arise from the collectivity induced from the proton-neutron channel of the residual interaction. This change reflects on the evolution of the \(N=28\) shell closure as a function of \(Z\), which shows a lowering of the yrast \(2^{+}\) state and an increase of the \(B(E2;2^{+}_{1}\to 0^{+}_{1})\) value for nuclei with \(22\leq Z\leq 26\). as discussed at the end of Section 4.3 from the experimental point of view. In Fig. 31, the experimental excitation energies of the \(2^{+}_{1}\) states and the \(B(E2;2^{+}_{1}\to 0^{+}_{1})\) values are compared with the \(H^{\rm 2N}\), \(H^{\rm 3N}\), and \(H^{\rm mon}\) results. The proton and neutron effective charges to calculate the \(B(E2)\)s have been consistently obtained with the same perturbation approach of the Hamiltonian, without any empirical adjustment, as described in Section 3.2.1. The collectivity evolution between \({}^{48}\)Ca and \({}^{56}\)Ni is well reproduced by \(H^{\rm 3N}\) and \(H^{\rm mon}\), but not by \(H^{\rm 2N}\). In particular, the latter Hamiltonian is not able to describe the doubly magic nature of \({}^{56}\)Ni. The Figure 30: Experimental and calculated (a) two-neutron separation energies and (b) \(2^{+}_{1}\) excitation energies for nickel isotopes from \(N=20\) to 32. See text for details. 
Figure 31: Experimental and calculated (a) \(2_{1}^{+}\) excitation energies and (b) \(B(E2,2_{1}^{+}\to 0_{1}^{+})\) transition rates for \(N=28\) isotones from \(Z=20\) to \(28\). See text for details.

Figure 32: Neutron ESPEs from (a) \(H^{\rm 2N}\) and (b) \(H^{\rm 3N}\) for calcium isotopes as a function of the neutron number. See text for details.

The monopole component of \(H^{\rm 2N}\), responsible for the evolution of the neutron and proton ESPEs, cannot indeed balance the collectivity induced by higher multipole components in the proton-neutron channel. To better elucidate this point we examine, in the following, the proton and neutron ESPEs as a function of the number of valence neutrons. In particular, we compare the neutron ESPEs for Ca isotopes, and both the neutron and proton ESPEs for Ni isotopes, obtained by employing the monopole components of \(H^{\rm 2N}\) and \(H^{\rm 3N}\). The ESPEs are defined in Eq. (10), with the g.s. occupation numbers, \(n_{b}^{\tau}\), fixed by employing the normal filling scheme, namely by putting the valence nucleons into the lowest available orbit one by one. The results are reported in Figs. 32, 33, and 34, where the ESPEs are referred to the lowest-lying \(0f_{7/2}\) orbital. It is worth recalling that in all cases the starting SP energies are those derived by adopting the \(NN\)+3N force.

From the inspection of Fig. 32, for calcium isotopes, we can observe that the inclusion of the 3NF does not affect the general behavior of the neutron ESPEs, but provides specific features that give rise to the differences between the \(H^{\rm 2N}\) and \(H^{\rm 3N}\) results discussed above. We see, in fact, that the neutron monopole component of \(H^{\rm 3N}\) produces an increase in the \(1p_{3/2}-0f_{7/2}\) energy gap at \(N=28\), inducing a stronger shell closure, and also a larger \(1p_{1/2}-1p_{3/2}\) splitting in correspondence with the \(N=32\) subshell closure. It is also interesting to note that a larger energy splitting is found for both pairs of the \(1p\) and \(0f\) spin-orbit partners when 3NFs are taken into account. This effect grows with increasing neutron number.

Figure 33: Neutron ESPEs from (a) \(H^{\rm 2N}\) and (b) \(H^{\rm 3N}\) for nickel isotopes as a function of the neutron number. See text for details.

Figure 34: Proton ESPEs from (a) \(H^{\rm 2N}\) and (b) \(H^{\rm 3N}\) for nickel isotopes as a function of the neutron number. See text for details.

Similar comments can be made for the neutron and proton ESPEs in Ni isotopes, which are shown in Figs. 33 and 34. It can be seen that the inclusion of the 3NF provides an increase in the \(0f_{7/2}-1p_{3/2}\) and \(1p_{1/2}-1p_{3/2}\) splittings at \(N=28\) and \(N=32\), respectively, for both the neutron and proton ESPEs. In general, the contribution of the 3NF leads to a substantial expansion of the orbital separations with respect to the \(NN\) force only. In particular, the \(0f_{7/2}-0f_{5/2}\) spin-orbit splitting at \(N=28\) increases by about 2 and 3 MeV for neutrons and protons, respectively. Furthermore, the strong narrowing we observe at \(N=28\) for the \(0f_{5/2}\) and \(1p_{3/2}\) orbitals with the \(NN\) force is significantly attenuated by including the 3NF.

To summarize, we have shown that the monopole component of the 3NF is crucial to correct the behavior of the ESPEs and to temper the excessive collectivity resulting from \(H^{\rm 2N}\), thus leading to results able to reproduce the experimental data and the doubly magic nature of \({}^{56}\)Ni.
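To make the construction of \(H^{\rm mon}\) and the role of the monopole component more tangible, the following minimal Python sketch splits a set of diagonal TBMEs into monopole (centroid) and multipole parts, recombines them into an \(H^{\rm mon}\)-like interaction, and evaluates a schematic neutron \(0f_{7/2}\)-\(0f_{5/2}\) ESPE gap at \(N=28\) with the normal filling scheme. The centroid is assumed here to be the usual angular-momentum average of the diagonal TBMEs (cf. Eq. (9)), the ESPE expression is schematic, and all numbers are placeholders rather than the matrix elements of Ref. [89].

```python
# Schematic sketch (illustrative numbers only): split TBMEs into monopole (centroid)
# and multipole parts, build a hybrid H^mon = monopole(H^3N) + multipole(H^2N),
# and estimate the neutron 0f7/2-0f5/2 ESPE gap at N = 28 with normal filling.
SPE = {"0f7/2": 0.0, "0f5/2": 6.0}          # MeV, placeholder single-particle energies

def centroid(tbme_ab):                       # tbme_ab: {J: <ab;J|V|ab;J>} in MeV
    return (sum((2 * J + 1) * v for J, v in tbme_ab.items())
            / sum(2 * J + 1 for J in tbme_ab))

def split(tbme):                             # tbme: {(a, b): {J: value}}
    mono = {ab: centroid(d) for ab, d in tbme.items()}
    multi = {ab: {J: v - mono[ab] for J, v in d.items()} for ab, d in tbme.items()}
    return mono, multi

# placeholder diagonal TBMEs for the two pairs that control the schematic N = 28 gap
tbme_2N = {("0f7/2", "0f7/2"): {0: -2.2, 2: -1.0, 4: -0.3, 6: 0.1},
           ("0f7/2", "0f5/2"): {1: -1.2, 2: -0.4, 3: -0.7, 4: -0.1, 5: -0.5, 6: 0.2}}
# toy "repulsive 3NF": larger shift for the off-diagonal pair, as discussed in the text
tbme_3N = {ab: {J: v + (0.20 if ab == ("0f7/2", "0f5/2") else 0.05) for J, v in d.items()}
           for ab, d in tbme_2N.items()}

mono2, mult2 = split(tbme_2N)
mono3, _ = split(tbme_3N)
hmon = {ab: {J: mono3[ab] + mult2[ab][J] for J in mult2[ab]} for ab in mult2}

def gap(mono):   # ESPE(0f5/2) - ESPE(0f7/2) at N = 28, schematically (8 neutrons in 0f7/2)
    return ((SPE["0f5/2"] + 8 * mono[("0f7/2", "0f5/2")])
            - (SPE["0f7/2"] + 8 * mono[("0f7/2", "0f7/2")]))

for name, tb in [("H2N", tbme_2N), ("H3N", tbme_3N), ("Hmon", hmon)]:
    print(name, round(gap(split(tb)[0]), 3))   # H3N and Hmon give the same, larger gap
```

By construction, the centroids of the recombined interaction coincide with those of the toy \(H^{\rm 3N}\), so the two give the same monopole-driven gap, mirroring the behavior of \(H^{\rm mon}\) and \(H^{\rm 3N}\) discussed above.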
A careful analysis of the difference in the monopole components of \(H^{\rm 2N}\) and \(H^{\rm 3N}\) will be presented in the next section.

#### 4.3.3 _Spin-tensor decomposition of the shell-model interaction_

The difference in the behavior of the ESPEs resulting when employing the \(NN\) force only and the complete \(NN\) + 3N force lies in the effects produced by the 3N component on the monopole matrix elements of the effective SM Hamiltonian. In order to better substantiate this statement, we consider the \({}^{56}\)Ni case, whose neutron and proton shell gaps are strongly affected by the inclusion of the 3NF, as discussed in the previous section. We therefore compare the monopole matrix elements, namely the centroids, of the two density-dependent Hamiltonians, \(H^{\rm 2N}\) and \(H^{\rm 3N}\), that we have used for the calculations of this nucleus. In particular, for the sake of simplicity we focus on the matrix elements \(\bar{V}_{ab}^{\tau\tau^{\prime}}\) (see Eq. (9)) with at least one of the two indices, \(a\) or \(b\), representing the \(0f_{7/2}\) orbital. In fact, since in the calculation of the ESPEs we adopt the normal filling scheme, only these matrix elements come into play for \({}^{56}\)Ni.

In Fig. 35, we report the centroids \(\bar{V}_{0f_{7/2}b}^{\tau\tau^{\prime}}\) of \(H^{\rm 2N}\) and \(H^{\rm 3N}\) for the neutron-neutron, proton-neutron, and proton-proton channels. It is worth mentioning that the neutron-neutron and proton-neutron interactions determine the neutron ESPEs, while the proton ones depend on the proton-proton and proton-neutron interactions.

Figure 35: (a) Neutron-neutron, (b) proton-neutron, and (c) proton-proton monopole matrix elements of the effective interactions with and without 3N force. See text for details.

We see that the 3NF provides a repulsive contribution to all matrix elements, which makes the neutron-neutron and proton-neutron matrix elements less attractive and the proton-proton ones more repulsive. However, the size of the contributions depends on the involved orbitals, ranging from a few tens of keV to about 200 keV, which produces a substantial change in the spacings between the ESPEs and, consequently, in the shell structure once a specific orbital becomes sizably occupied, as is the case for the \(0f_{7/2}\) orbital in \({}^{56}\)Ni. In all three channels, the changes produced by \(H^{\rm 3N}\) are larger for the \(\bar{V}_{0f_{7/2}b}^{\tau\tau^{\prime}}\) matrix elements with \(b\neq 0f_{7/2}\) than for the diagonal ones. This leads to a larger gap between the \(0f_{7/2}\) orbital and the remaining orbitals in both the proton and neutron space when the complete \(NN\) + 3N force is adopted, which results in a stronger shell closure at \(N=28\) for Ni isotopes, as discussed in Section 4.3.2. It can also be observed that the increased spacing between the \(0f_{7/2}-0f_{5/2}\) spin-orbit partners and between the \(0f_{5/2}\) and \(1p_{3/2}\) orbitals, obtained when including the 3NF, is directly related to the stronger effect that this force has on the \(\bar{V}_{0f_{7/2}0f_{5/2}}^{\tau\tau^{\prime}}\) matrix elements as compared to the other ones.

In closing this section, we have found it instructive to analyze the monopole matrix elements of \(H^{\rm 2N}\) and \(H^{\rm 3N}\) in terms of their tensorial structure.
As mentioned in Section 4.3.1, several studies aimed at understanding the mechanism behind the ESPE variations have been devoted to investigating the role of the central, vector, and tensor components of the effective interactions in governing the shell evolution (see, for instance, Refs. [246, 247, 250, 251, 121]). As a main result, it has been found that the behavior of each ESPE is essentially controlled by the central component, while it is the interplay of all three components that determines the evolution of the spacings between the ESPEs, with the tensor one contributing significantly to the changes in the spin-orbit splittings. However, these studies have essentially concerned phenomenological effective interactions and microscopic effective interactions derived from the \(NN\) force only. In particular, it has been shown that empirical adjustments of the latter interactions introduce, in general, more significant changes in the central and vector components than in the tensor one, especially for the proton-neutron matrix elements [247]. Here, we are interested in explicitly investigating the effects of the 3N force on the effective interaction, to see whether its contribution affects one specific component of the effective interaction in particular, and to verify the conclusions drawn from the investigations of empirically adjusted interactions. To this end, we employ the spin-tensor decomposition to extract the central, vector, and tensor contributions from the monopole matrix elements of our effective interaction, following the procedure presented in Ref. [245] and outlined below for the sake of completeness.

Figure 36: (a) Central, (b) vector, and (c) tensor contributions to \(\bar{V}_{0f_{7/2}b}^{\nu\nu}\) with and without 3N force. See text for details.

Any scalar two-body interaction \(V\) for spin-1/2 fermions can be written in terms of spherical tensors by coupling the spin tensor operators (\(S^{k}\)) with the corresponding rank-\(k\) tensors in the configuration space (\(Q^{k}\)) as \[V(1,2)=\sum_{k=0,1,2}(S^{k}\cdot Q^{k})=\sum_{k=0,1,2}V^{k}, \tag{80}\] where \(V^{0}\), \(V^{1}\), and \(V^{2}\) are, respectively, the central, vector, and tensor components of the interaction \(V\). Their matrix elements take the expression \[\langle a\tau b\tau^{\prime};J|V^{k}|c\tau d\tau^{\prime};J\rangle =\sum_{LL^{\prime}SS^{\prime}}U\begin{pmatrix}l_{a}&1/2&j_{a}\\ l_{b}&1/2&j_{b}\\ L&S&J\end{pmatrix}U\begin{pmatrix}l_{c}&1/2&j_{c}\\ l_{d}&1/2&j_{d}\\ L^{\prime}&S^{\prime}&J\end{pmatrix}\] \[\times\hat{k}^{2}\begin{cases}L&S&J\\ S^{\prime}&L^{\prime}&k\end{cases}\sum_{J^{\prime}}(-1)^{J^{\prime}}\hat{J^{\prime}}\begin{cases}L&S&J^{\prime}\\ S^{\prime}&L^{\prime}&k\end{cases}\langle n_{a}l_{a}\tau n_{b}l_{b}\tau^{\prime};LSJ^{\prime}|V|n_{c}l_{c}\tau n_{d}l_{d}\tau^{\prime};L^{\prime}S^{\prime}J^{\prime}\rangle, \tag{81}\] with the coefficients \(U\) representing the generalized \(9\)-\(j\) symbols \[U\begin{pmatrix}l_{a}&1/2&j_{a}\\ l_{b}&1/2&j_{b}\\ L&S&J\end{pmatrix}=\hat{j}_{a}\hat{j}_{b}\hat{L}\hat{S}\begin{cases}l_{a}&1/2&j_{a}\\ l_{b}&1/2&j_{b}\\ L&S&J\end{cases}. \tag{82}\]
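For practical evaluations, the coefficients of Eq. (82) reduce to standard Wigner \(9\)-\(j\) symbols; a minimal sketch using SymPy is given below. The quantum numbers in the example are arbitrary, chosen only to satisfy the triangle conditions.

```python
# Sketch of the generalized 9-j coefficient U of Eq. (82) via SymPy's Wigner symbols.
from sympy import S, sqrt
from sympy.physics.wigner import wigner_9j

def hat(j):
    return sqrt(2 * j + 1)

def U(la, ja, lb, jb, L, Sp, J):
    """U(la 1/2 ja / lb 1/2 jb / L S J) = ja^ jb^ L^ S^ x 9j symbol."""
    half = S(1) / 2
    return hat(ja) * hat(jb) * hat(L) * hat(Sp) * wigner_9j(la, half, ja,
                                                            lb, half, jb,
                                                            L, Sp, J)

# example: two f7/2 nucleons (l = 3, j = 7/2) recoupled to L = 6, S = 1, J = 6
print(U(3, S(7) / 2, 3, S(7) / 2, 6, 1, 6).evalf())
```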
The \(LS\)-coupling matrix elements of \(V\) in Eq. (81) are obtained from the \(jj\)-coupling scheme in the standard way \[\langle n_{a}l_{a}\tau n_{b}l_{b}\tau^{\prime};LSJ|V|n_{c}l_{c}\tau n_{d}l_{d}\tau^{\prime};L^{\prime}S^{\prime}J\rangle=\sum_{j_{a}j_{b}j_{c}j_{d}}U\begin{pmatrix}l_{a}&1/2&j_{a}\\ l_{b}&1/2&j_{b}\\ L&S&J\end{pmatrix}U\begin{pmatrix}l_{c}&1/2&j_{c}\\ l_{d}&1/2&j_{d}\\ L^{\prime}&S^{\prime}&J\end{pmatrix}\] \[\times\langle a\tau b\tau^{\prime};J|V|c\tau d\tau^{\prime};J\rangle. \tag{83}\]

By employing Eq. (80), the monopole matrix elements \(\bar{V}_{ab}^{\tau\tau^{\prime}}\) presented in Fig. 35 are decomposed into their central, vector, and tensor contents, which are reported in Figs. 36, 37, and 38 for the neutron-neutron, proton-neutron, and proton-proton channels, respectively.

Figure 37: (a) Central, (b) vector, and (c) tensor contributions to \(\bar{V}_{0f_{7/2}b}^{\pi\nu}\) with and without 3N force. See text for details.

Note that a different scale is used for the vector and tensor components with respect to the central one. We see that the tensor content of all matrix elements is rather small with respect to the central and vector ones, for \(H^{\rm 2N}\) as well as for \(H^{\rm 3N}\). As a matter of fact, the tensor components in both cases are of the order of tens of keV, and the changes due to the 3NF are limited to a few keV in the vast majority of cases. The effects of the 3NF are instead more relevant for the central and vector components. However, while the 3NF always provides a repulsive contribution to the central components, this is not the case for the vector ones. The nature of these matrix elements is, in fact, enhanced by the inclusion of the 3NF, with the exception of \(\bar{V}^{\pi\nu}_{0f_{7/2}0f_{5/2}}\), which changes from negative to positive. As also evidenced in prior studies, the behavior of the ESPEs is largely determined, for both \(H^{\rm 2N}\) and \(H^{\rm 3N}\), by the central and vector components, and in particular by the central monopole proton-neutron interaction. It is this component that is mainly responsible for pushing down all the single-particle orbitals, and the attenuation of its attractiveness induced by the 3NF leads to a reduction of this phenomenon.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{9}{|c|}{\(V^{\nu\nu}\)} \\
\hline
 & \multicolumn{4}{|c|}{\(H^{\rm 2N}\)} & \multicolumn{4}{|c|}{\(H^{\rm 3N}\)} \\
\hline
\(b\) & C & V & T & Tot & C & V & T & Tot \\
\hline
\(0f_{5/2}\) & 0.056 & 0.090 & -0.138 & 0.008 & 0.085 & 0.227 & -0.126 & 0.186 \\
\(1p_{3/2}\) & 0.084 & 0.066 & -0.037 & 0.113 & 0.187 & 0.033 & -0.043 & 0.177 \\
\(1p_{1/2}\) & 0.063 & 0.070 & -0.066 & 0.067 & 0.171 & 0.030 & -0.061 & 0.140 \\
\hline
\hline
\multicolumn{9}{|c|}{\(V^{\pi\nu}\)} \\
\hline
 & \multicolumn{4}{|c|}{\(H^{\rm 2N}\)} & \multicolumn{4}{|c|}{\(H^{\rm 3N}\)} \\
\hline
\(b\) & C & V & T & Tot & C & V & T & Tot \\
\hline
\(0f_{5/2}\) & -0.505 & 0.203 & 0.053 & -0.249 & -0.453 & 0.224 & 0.058 & -0.171 \\
\(1p_{3/2}\) & 0.137 & 0.025 & 0.047 & 0.209 & 0.198 & 0.013 & 0.046 & 0.257 \\
\(1p_{1/2}\) & 0.144 & 0.052 & -0.050 & 0.146 & 0.204 & 0.022 & -0.045 & 0.181 \\
\hline
\hline
\multicolumn{9}{|c|}{\(V^{\pi\pi}\)} \\
\hline
 & \multicolumn{4}{|c|}{\(H^{\rm 2N}\)} & \multicolumn{4}{|c|}{\(H^{\rm 3N}\)} \\
\hline
\(b\) & C & V & T & Tot & C & V & T & Tot \\
\hline
\(0f_{5/2}\) & 0.087 & 0.206 & -0.153 & 0.140 & 0.117 & 0.315 & -0.101 & 0.331 \\
\(1p_{3/2}\) & 0.220 & -0.031 & -0.066 & 0.123 & 0.300 & -0.066 & -0.058 & 0.176 \\
\(1p_{1/2}\) & 0.209 & -0.037 & -0.082 & 0.090 & 0.295 & -0.095 & -0.055 & 0.145 \\
\hline
\end{tabular}
\end{table}
Table 1: Spin-tensor contents of the centroid differences \(\Delta_{b}^{\tau\tau^{\prime}}\) (in MeV). See text for details.
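A quick numerical reading of Table 1, which collects the spin-tensor content of the centroid differences \(\Delta_{b}^{\tau\tau^{\prime}}\) discussed below: in each row the central, vector, and tensor entries add up to the total, e.g.
\[0.056+0.090-0.138=0.008\ {\rm MeV}\qquad{\rm and}\qquad 0.085+0.227-0.126=0.186\ {\rm MeV}\]
for \(\Delta^{\nu\nu}_{0f_{5/2}}\) with \(H^{\rm 2N}\) and \(H^{\rm 3N}\), respectively. A rough estimate of the corresponding 3NF-induced change in the neutron \(0f_{7/2}-0f_{5/2}\) gap of \({}^{56}\)Ni, assuming eight neutrons and eight protons filling \(0f_{7/2}\) and neglecting the self-interaction correction in the filled orbital, is
\[8\times\big[(0.186-0.008)+(-0.171+0.249)\big]\simeq 2.0\ {\rm MeV},\]
consistent with the increase quoted in the following discussion.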
Figure 38: (a) Central, (b) vector and (c) tensor contributions to \(\bar{V}^{\pi\pi}_{0f_{7/2}b}\) with and without 3N force. See text for details. neutron interaction. It is this component that is mainly responsible for pushing down all the single-particle orbitals, and the attenuation of its attractiveness induced by the 3NF leads to a reduction of this phenomenon. However, in studying the shell-structure evolution we are more interested in the spacings between ESPEs than in their absolute values, and therefore attention should be focused on the differences between the centroids. These differences, calculated with respect to \(\bar{V}^{\tau\tau^{\prime}}_{0f_{7/2}0f_{7/2}}\), are denoted by \(\Delta^{\tau\tau^{\prime}}_{b}\) and reported in Table 4.3.3. It can be seen that the tensor content of the \(\Delta^{\tau\tau^{\prime}}_{\beta}\) is more relevant as compared to that of the single centroids and the central component loses in part its dominant role, all three components of the monopole interaction becoming important in determining the energy gaps. In particular, as observed in Section 4.3.2, the inclusion of the 3NF brings an increase of about 2 MeV in the neutron \(0f_{7/2}-0f_{5/2}\) spin-orbit splitting at \(N=28\). As a matter of fact, the 2NF force only leads to a decrease of the same amount with respect to the original SP value (7.4 MeV) that is determined by the almost complete balancing of the central, vector, tensor components of \(\Delta^{\nu\nu}_{0f_{5/2}}\) and by the consequent dominance of central monopole proton-neutron matrix element, whose attraction is only partially mitigated by the vector and tensor parts. The inclusion of the 3NF provides an increase of the repulsive neutron-neutron component, arising essentially from the increase of the vector term, that cancels the proton-neutron contribution. A similar mechanism explains the increase of \(\sim 3\) MeV in the proton \(0f_{7/2}-0f_{5/2}\) spin-orbit splitting resulting from the 3NF. As concerns the neutron and proton \(0f_{7/2}-1p_{3/2}\) spacings, a significant growth at \(N=28\) is produced already by the 2NF only (see Figs. 33 and 34). The \(\Delta^{\tau\tau^{\prime}}_{b}\) values of Table 4.3.3 evidence that it is related to the overall positive interference of all components of the monopole terms, while the further increase we find with \(H^{\rm 3N}\) comes out from the changes that the 3NF induces in the central terms. ## 5 Summary and conclusions In this review paper, we have discussed the state of current developments to account for 3NFs within the SM framework and their impact on our understanding of nuclear structure properties. We have focused on realistic SM calculations with effective interactions derived from the QCD level and shown how crucial these forces are for describing binding energies and formation of shell structure. Particular attention has been devoted to the effects of 3NFs on the monopole component of the SM effective interaction by highlighting how relevant they are in correcting the monopole interaction derived from realistic the 2NF only. As widely discussed in the text, starting from 1990s it was realized that the problems arising when using effective SM Hamiltonians derived from \(NN\) potentials by means of MBPT should be associated to deficiencies in its monopole component and adjustments were introduced so to obtain results of a quality comparable with the ones provided by phenomenological interactions [41]. 
However, the connection with the lack of the 3NF was only suggested a few years later [47], and subsequently the first SM study explicitly including the effects of 3NFs was carried out for \(sd\)-shell nuclei [52].

We have focused on the standard and Gamow shell models employing effective Hamiltonians derived within the MBPT approach from chiral \(NN\) and 3N forces, and reported calculations for nuclei ranging from light to intermediate mass. In addition to the effects of genuine chiral 3NFs, we have also discussed the role of induced 3NFs due to the interaction of clusters of three valence nucleons with configurations outside the model space via the 2NF. Results are presented in Section 4 and compared with experiment, as well as with results from other approaches, whenever possible and/or useful. In this section, we have also discussed in detail the interplay between 3NFs and the coupling with the continuum for the description of weakly bound states in the \(sd\) region and of the structure of \({}^{17}\)Ne by means of the realistic GSM. In the following, we summarize the main features emerging from the results presented in Section 4.

1. We have evidenced the validity of the MBPT in deriving the effective SM Hamiltonian by using \(p\)-shell nuclei as a testing ground. The perturbative behavior of \(H_{\rm eff}\)s derived from chiral \(NN\) potentials has been assessed by showing that the convergence of the \(\hat{Q}\)-box vertex function with respect to the dimension of the intermediate state space, as well as the order-by-order convergence, can be kept under control. Then, the quality of the results has been checked by benchmark calculations comparing SM and ab initio NCSM results obtained from chiral \(NN\)-only and \(NN\)+3N forces.

2. The detailed discussion presented for \(fp\)-shell nuclei clearly emphasizes the relevance of the 3NF contribution in determining the location of the neutron dripline and the evolution of the shell structure. We have shown that genuine 3NFs provide a repulsive interaction which, although partially counterbalanced by induced 3N forces accounting for excitations outside the valence space, is essential to quench the overbinding of the g.s. energies produced by the \(NN\) force. Our results for the \(S_{2n}\)s and the \(2^{+}_{1}\) excitation energies evidence that their effect increases with an increasing number of valence nucleons and is non-negligible in the formation of the shell structure, reflecting, in particular, on the closure properties of \({}^{58}\)Ni. Furthermore, our calculations with a modified interaction, obtained by combining the multipole and monopole components of effective Hamiltonians derived, respectively, from the \(NN\) and \(NN\)+3N potentials, soundly confirm that 3NFs essentially affect the monopole component. It has turned out that the monopole component of the 3NF is crucial to correct the behavior of the ESPEs and to produce the needed increase in the \(0f_{7/2}\)-\(0f_{5/2}\) spin-orbit splitting at \(N=28\) for both protons and neutrons. The main feature of the 3NF is its repulsive nature, as also evidenced in prior studies. However, the size of the contribution depends on the orbitals involved, which produces a substantial change in the spacings between the ESPEs and, consequently, in the shell structure whenever a specific orbital becomes largely occupied.
From our analysis based on the spin-tensor decomposition of the monopole SM interaction, it has emerged that the central component, which acquires in all channels a significant repulsive contribution from the 3NF, is mainly responsible for the behavior of the ESPEs. On the other hand, when focusing on the spacings between the ESPEs rather than on their absolute values, we have seen that the vector and tensor components also come into play, and therefore the shell structure depends on the interplay among all three components.

3. Calculations performed within the GSM framework, in which continuum and resonance effects are included, have shown that 3NFs are non-negligible in explaining the dripline position in the oxygen chain and the unbound properties of the isotopes beyond the dripline, as well as the Borromean structure of the proton-rich \({}^{17}\)Ne. As in standard SM calculations, the 3NF gives repulsive contributions to the g.s. energies, and its effect increases with the number of valence neutrons. As a matter of fact, it becomes crucial in the neutron dripline region by pushing up the \(0d_{3/2}\) and \(1s_{1/2}\) orbitals when they start to be heavily occupied. Furthermore, a dissection of the 3NF clearly evidences the main role of the two-pion-exchange term with respect to the one-pion-exchange and contact ones which, having the same values but opposite signs, almost cancel out. An improved agreement between theory and experiment is found also for spectra and resonance widths when the 3NF is included. We have found that the repulsive contribution of the 3N force is also essential to explain the Borromean structure of \({}^{17}\)Ne, by inducing an increase in the energy of \({}^{16}\)F above the proton-emission threshold.

Important demonstrations of the effect of 3NFs in the SM context are now available, which show their origin and importance in nuclear spectroscopy and explain the empirical modifications one should introduce in effective interactions derived from realistic \(NN\) potentials. A major step in this direction has been the derivation of nuclear potentials in terms of chiral EFT, which provides many-body forces consistently with the nature of the \(NN\) interaction. These are the so-called genuine 3N forces, which originate from neglecting subnucleonic degrees of freedom. However, it is worth mentioning that, as long as we choose to use an inert core, there are also induced 3N forces accounting for excitations outside the valence space, which give rise to the mass dependence of effective SM interactions.

In the future, important advances for a better understanding of the role of 3NFs within the realistic SM include the extension of the calculations to heavier systems. Improvements in our treatment of the 3NF are also desirable, such as the inclusion of higher-order contributions with 3N vertices in the perturbative expansion of the effective Hamiltonian and the development of a technique to account for the 3NF among valence particles. We would also point out that completely consistent calculations require that all effective operators for general observables, not just the Hamiltonian, be constructed from \(NN\)+3N potentials derived within the framework of chiral EFT, including two-body meson-exchange corrections originating from subnucleonic degrees of freedom. In closing, it may be interesting to mention, as a possible development, the inclusion of subleading 3NFs beyond N\({}^{2}\)LO.
While the two-pion-exchange 3NFs at N\({}^{3}\)LO are expected to produce rather weak effects, as evidenced by the calculations for the nucleon-deuteron system of Ishikawa _et al._ [111], the short-range 3NFs at N\({}^{3}\)LO [112] and N\({}^{4}\)LO [114] could play a non-negligible role in few-body systems. Therefore, the subleading short-range 3NFs should be considered in order to include such possibly sizeable contributions.

## Acknowledgements

This work was supported by the Japan Society for the Promotion of Science KAKENHI under Grant No. JP21K13919; the National Key R&D Program of China under Grant No. 2018YFA0404401; the National Natural Science Foundation of China under Grants No. 11835001, No. 11921006, No. 12035001, No. 121051062, and No. 121051063; and the China Postdoctoral Science Foundation under Grants No. BX20200136 and No. 2020M682747. The authors thank Nunzio Itaco for helpful comments and fruitful discussions.

## Appendix A Three-body matrix elements for shell-model calculations

Here we present the formalism for the three-body matrix elements (MEs) of the chiral 3NF at N\({}^{2}\)LO. First, we describe the three-body states in terms of the HO basis functions, which enables us to easily factor out the center-of-mass (c.m.) motion of the three-body states. Next, the antisymmetrization of the three-body states is explained. By introducing the Jacobi coordinates, the three-body MEs are reduced to simple forms, which we call the Jacobi-HO MEs. Finally, the Jacobi-HO MEs of each term of the chiral 3NF are given.

### Three-body states

First, we define the single-particle state \(\ket{nljm_{j}m_{\tau}}\) of a nucleon as \[\ket{nljm_{j}m_{\tau}} =\ket{\Phi_{nljm_{j}}}\ket{\varphi^{(\tau)}_{\frac{1}{2}m_{\tau}}}, \tag{84}\] \[\ket{\Phi_{nljm_{j}}} =\ket{\left[\phi_{nl}\otimes\varphi^{(\sigma)}_{\frac{1}{2}}\right]_{jm_{j}}}, \tag{85}\] where \(\Phi_{nljm_{j}}\) is specified by the principal quantum number \(n\), the orbital angular momentum \(l\), and the total spin \(j\). The projections onto the \(z\) axis associated with \(l\) and \(j\) are \(m_{l}\) and \(m_{j}\), respectively. In Eq. (85), the wave function in the \(j\) scheme is obtained by coupling \(\phi_{nlm_{l}}\) with the nucleon spin wave function \(\varphi^{(\sigma)}_{\frac{1}{2}m_{\sigma}}\). The isospin component is expressed by \(\varphi^{(\tau)}_{\frac{1}{2}m_{\tau}}\). Here \(m_{\sigma}\) and \(m_{\tau}\) are the \(z\) components of the nucleon spin and isospin, respectively. The spatial wave function \(\phi_{nlm_{l}}\) is expressed in terms of the HO basis functions, \[\phi_{nlm_{l}}(\mathbf{r})=\frac{R_{nl}(r)}{r}Y_{lm_{l}}(\hat{\mathbf{r}}), \tag{86}\] with the spherical harmonics \(Y_{lm_{l}}\) and \(\mathbf{r}\) specifying the position of the nucleon. The HO radial function \(R_{nl}\) is written as \[R_{nl}(r)=\left[\frac{2n!}{b_{0}^{3}\Gamma\big{(}l+n+\frac{3}{2}\big{)}}\right]^{\frac{1}{2}}r\left(\frac{r}{b_{0}}\right)^{l}\exp\!\left[-\bigg{(}\frac{r}{\sqrt{2}b_{0}}\bigg{)}^{2}\right]L_{n}^{l+\frac{1}{2}}\!\left(\frac{r^{2}}{b_{0}^{2}}\right), \tag{87}\] where the so-called size parameter is \(b_{0}=\sqrt{\hbar/(m_{N}\omega)}\), with \(\omega\) the HO frequency and \(m_{N}\) the average nucleon mass, while \(\Gamma\) and \(L_{n}^{l+\frac{1}{2}}\) are the gamma function and the associated Laguerre polynomial, respectively.

Next, we consider three interacting nucleons. In the following, we explicitly put the subscripts of the quantum numbers to distinguish the particles, \(a\), \(b\), and \(c\).
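As a quick numerical aside on the single-particle ingredients just defined, the sketch below evaluates the reduced radial function of Eq. (87) with SciPy and verifies its normalization, \(\int_{0}^{\infty}R_{nl}^{2}(r)\,dr=1\), consistent with Eq. (86); the oscillator length used is an arbitrary illustrative value.

```python
# Sketch: reduced radial HO function of Eq. (87) and a check of its normalization,
# int_0^inf R_nl(r)^2 dr = 1 (consistent with phi_nlm = (R_nl/r) Y_lm of Eq. (86)).
import numpy as np
from scipy.integrate import simpson
from scipy.special import gammaln, genlaguerre

def R_nl(r, n, l, b0):
    norm = np.sqrt(2.0 * np.exp(gammaln(n + 1) - gammaln(n + l + 1.5)) / b0**3)
    x = (r / b0) ** 2
    return norm * r * (r / b0) ** l * np.exp(-x / 2.0) * genlaguerre(n, l + 0.5)(x)

b0 = 1.7                                   # fm, illustrative oscillator length
r = np.linspace(0.0, 20.0, 4001)
for n, l in [(0, 0), (1, 2), (2, 3)]:
    print(n, l, round(simpson(R_nl(r, n, l, b0) ** 2, x=r), 6))   # all close to 1.0
```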
We symbolically express the single-particle quantum numbers as \(a=\{n_{a},l_{a},j_{a}\}\). Thus, in the \(jj\)-coupling scheme, the product of the three single-particle wave functions reads \[|(a,b)\,,c;J_{12}JT_{12}T\rangle\] \[=(-)^{J}\hat{j}_{a}\hat{j}_{b}\hat{j}_{c}\hat{J}_{12}\sum_{ \begin{subarray}{c}L_{12}S_{12}\\ LS\end{subarray}}(-)^{L+S}\hat{L}_{12}^{2}\hat{S}_{12}\hat{L}^{2}\hat{S}^{2} \left\{\begin{array}{ccc}l_{a}&\frac{1}{2}&j_{a}\\ l_{b}&\frac{1}{2}&j_{b}\\ L_{12}&S_{12}&J_{12}\end{array}\right\}\left\{\begin{array}{ccc}L_{12}&S_{ 12}&J_{12}\\ l_{c}&\frac{1}{2}&j_{c}\\ L&S&J\end{array}\right\}\] \[\times\sum_{\begin{subarray}{c}n_{12}^{l}l_{2}\\ N_{12}\mathcal{L}_{12}\end{subarray}}(-)^{l_{12}+l_{c}+L_{12}}\left\langle \!\mathcal{N}_{12}\mathcal{L}_{12}n_{12}l_{12},L_{12}\,|\,n_{a}l_{a}n_{b}l_{b },L_{12}\rangle\!\right\rangle_{d_{1}}\sum_{\mathcal{J}}(-)^{J}\hat{\mathcal{J }}^{2}\left\{\begin{array}{ccc}l_{12}&\mathcal{L}_{12}&L_{12}\\ l_{c}&L&\mathcal{J}\end{array}\right\}\] \[\times\sum_{\begin{subarray}{c}n_{12}^{l}\\ N\mathcal{L}\end{subarray}}\left\langle\!\mathcal{N}\mathcal{L}nl,\mathcal{J }\left|\mathcal{N}_{12}\mathcal{L}_{12}n_{c}l_{c},\mathcal{J}\right\rangle\! \right\rangle_{d_{2}}\sum_{\begin{subarray}{c}\mathcal{K}T\\ l_{12}L\end{subarray}}\hat{K}^{2}\hat{\mathcal{I}}I_{12}\hat{I}\left\{ \begin{array}{ccc}\begin{matrix}\mathcal{L}&l&\mathcal{J}\\ l_{12}&L&\mathcal{K}\end{matrix}\right\}\left\{\begin{matrix}\mathcal{L}& \mathcal{K}&L\\ S&J&\mathcal{I}\end{matrix}\right\}\left\{\begin{matrix}l_{12}&l&\mathcal{K} \\ S_{12}&\frac{1}{2}&S\\ I_{12}&I&\mathcal{I}\end{matrix}\right\}\] \[\times\sum_{\begin{subarray}{c}\mathcal{M}_{\mathcal{L}} \mathcal{M}_{\mathcal{I}}\end{subarray}}\left(\mathcal{LM}_{\mathcal{L}} \mathcal{IM}_{\mathcal{I}}|JM_{J}\right)\left|\phi_{\mathcal{N}\mathcal{L} \mathcal{M}_{\mathcal{L}}}\right\rangle\left|(n_{12}l_{12}S_{12})I_{12}T_{12} \left(nl\right)I;\mathcal{I}T\right\rangle, \tag{88}\] where \(\left|(n_{12}l_{12}S_{12})I_{12}T_{12}\left(nl\right)I;\mathcal{I}T\right\rangle\) is the Jacobi-HO state defined by \[|(n_{12}l_{12}S_{12})I_{12}T_{12}\left(nl\right)I;\mathcal{I}T\rangle\] \[=\left|\left[\left[\phi_{n_{12}l_{12}}\otimes\left[\varphi_{ \frac{1}{2}}^{(\sigma)}\otimes\varphi_{\frac{1}{2}}^{(\sigma)}\right]_{S_{12} }\right]_{I_{12}}\otimes\left[\phi_{nl}\otimes\varphi_{\frac{1}{2}}^{(\sigma) }\right]_{I}\right]_{\mathcal{IM}_{\mathcal{I}}}\right\rangle\] \[\times\left|\left[\left[\varphi_{\frac{1}{2}}^{(\tau)}\otimes \varphi_{\frac{1}{2}}^{(\tau)}\right]_{T_{12}}\otimes\varphi_{\frac{1}{2}}^{( \tau)}\right]_{TM_{T}}\right\rangle. \tag{89}\] The total angular momentum (total isospin) and its projection to the \(z\) axis are represented by \(J\) and \(M_{J}\) (\(T\) and \(M_{T}\)), respectively. We also introduce \(J_{12}\) (\(T_{12}\)), which are obtained by coupling \(j_{a}\) and \(j_{b}\) (the isospins of \(a\) and \(b\)). The harmonic-oscillator bracket, \(\left\langle\!\left\langle\cdots\right|\cdots\right\rangle\!\right\rangle_{d_{n}}\), originates from the Talmi transformation [252, 253, 254], and its explicit expression is given, for example, in Refs. [255, 256, 257]. The subscript \(d_{n}\) specifies the mass ratio relevant to the Talmi transformation of the \(n\)-body system, namely, \(d_{1}=1\) and \(d_{2}=2\). One can see that the c.m. motion described by \(\left|\phi_{\mathcal{N}\mathcal{L}\mathcal{M}_{\mathcal{L}}}\right\rangle\) is factored out. 
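The product states of Eq. (88) are not yet antisymmetric under particle exchange. The antisymmetrizer introduced in the next subsection is a projector, so its eigenvalues are 0 or 1 and its trace counts the physical states; the toy sketch below, which uses a plain product basis of \(d\) single-particle states with no angular-momentum or isospin structure, illustrates exactly this property.

```python
# Toy illustration: the three-body antisymmetrizer is a projector, its eigenvalues are
# 0 or 1, and its trace equals the number of fully antisymmetric states, here C(d,3).
import itertools
import numpy as np

d = 5                                                # toy single-particle dimension
basis = list(itertools.product(range(d), repeat=3))
index = {b: i for i, b in enumerate(basis)}

def perm_matrix(perm):
    """Matrix permuting the three particle labels according to `perm`."""
    P = np.zeros((len(basis), len(basis)))
    for b, i in index.items():
        P[index[tuple(b[p] for p in perm)], i] = 1.0
    return P

signs = {(0, 1, 2): +1, (1, 2, 0): +1, (2, 0, 1): +1,   # identity and 3-cycles
         (1, 0, 2): -1, (0, 2, 1): -1, (2, 1, 0): -1}   # transpositions
A3 = sum(s * perm_matrix(p) for p, s in signs.items()) / 6.0

eigvals = np.linalg.eigvalsh(A3)
print(np.unique(np.round(eigvals, 6)))                  # eigenvalues cluster at 0 and 1
print(int(round(np.trace(A3))), d * (d - 1) * (d - 2) // 6)   # trace = C(d,3) = 10
```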
### Antisymmetrization The three-body antisymmetrizer is given by \[\hat{\mathcal{A}}_{3}=\frac{1}{3!}\left[\mathbb{1}-\hat{\mathcal{P}}_{ab}-\hat{ \mathcal{P}}_{bc}-\hat{\mathcal{P}}_{ca}+\hat{\mathcal{P}}_{ab}\hat{\mathcal{P}} _{bc}+\hat{\mathcal{P}}_{ab}\hat{\mathcal{P}}_{ca}\right], \tag{90}\] where \(\mathbb{1}\) is the unity operator and \(\hat{\mathcal{P}}_{ab}\) is the permutation operator with respect to the particles \(a\) and \(b\). Using Eq. (88), the antisymmetrized \(jj\)-coupled state reads \[\ket{(a,b)\,,c;J_{12}JT_{12}T}_{A}\] \[\quad=\sqrt{6}\hat{\mathcal{A}}_{3}\ket{(a,b)\,,c;J_{12}JT_{12}T}\] \[\quad=(-)^{J}\hat{\bar{j}}_{a}\hat{j}_{b}\hat{j}_{c}\hat{J}_{12} \sum_{\begin{subarray}{c}L_{12}^{2}\mathcal{S}_{12}\\ LS\end{subarray}}(-)^{L+S}\hat{L}_{12}^{2}\hat{S}_{12}\hat{L}^{2}\hat{S}^{2} \left\{\begin{array}{ccc}l_{a}&\frac{1}{2}&j_{a}\\ l_{b}&\frac{1}{2}&j_{b}\\ L_{12}&S_{12}&J_{12}\end{array}\right\}\left\{\begin{array}{ccc}L_{12}&S_{12 }&J_{12}\\ l_{c}&\frac{1}{2}&j_{c}\\ L&S&J\end{array}\right\}\] \[\quad\times\sum_{\begin{subarray}{c}n_{12}l_{12}\\ N_{12}\mathcal{L}_{12}\end{subarray}}(-)^{l_{12}+l_{c}+L_{12}}\bra{\mathcal{N }_{12}\mathcal{L}_{12}n_{12}l_{12},L_{12}\,\ket{n_{a}l_{a}n_{b}l_{b},L_{12}}}_ {d_{1}}\sum_{\mathcal{J}}(-)^{\mathcal{J}}\hat{\mathcal{J}}^{2}\left\{ \begin{array}{ccc}l_{12}&\mathcal{L}_{12}&L_{12}\\ l_{c}&L&\mathcal{J}\end{array}\right\}\] \[\quad\times\sum_{\begin{subarray}{c}\eta\\ N\mathcal{L}\\ N\mathcal{L}\end{subarray}}\bra{\mathcal{N}\mathcal{L}nl,\mathcal{J}\,|\, \mathcal{N}_{12}\mathcal{L}_{12}n_{c}l_{c},\mathcal{J}}_{d_{2}}\sum_{ \begin{subarray}{c}\mathcal{K}\mathcal{I}\\ I_{12}I\end{subarray}}\hat{\mathcal{K}}^{2}\hat{\mathcal{I}}\hat{I}_{12}\hat{I} \left\{\begin{array}{ccc}\mathcal{L}&l&\mathcal{J}\\ I_{12}&L&\mathcal{K}\end{array}\right\}\left\{\begin{array}{ccc}\mathcal{L}& \mathcal{K}&L\\ S&J&\mathcal{I}\end{array}\right\}\left\{\begin{array}{ccc}l_{12}&l&\mathcal{ K}\\ S_{12}&\frac{1}{2}&S\\ I_{12}&I&\mathcal{I}\end{array}\right\}\] \[\quad\times\sum_{\mathcal{M}_{\mathcal{L}}\mathcal{M}_{\mathcal{I }}}\left(\mathcal{LM}_{\mathcal{L}}\mathcal{LM}_{\mathcal{I}}|JM_{J}\right) \ket{\phi_{\mathcal{N}\mathcal{LM}_{\mathcal{L}}}}_{A}, \tag{91}\] with \[\ket{i;\mathcal{I}T}_{A}=\sqrt{6}\hat{\mathcal{A}}_{3}\ket{i; \mathcal{I}T}, \tag{92}\] where the Jacobi-HO state \(\ket{i;\mathcal{I}T}\) is defined by Eq. (89), and we simplify the set of quantum numbers: \[i=\left\{n_{12},l_{12},S_{12},I_{12},T_{12},n,l,I\right\}. \tag{93}\] Now our task is to antisymmetrize the Jacobi-HO states. To this end, we expand the antisymmetrizer \(\hat{\mathcal{A}}_{3}\) using the spectral decomposition as \[\hat{\mathcal{A}}_{3}=\sum_{\eta}\epsilon_{\eta}\ket{\eta}\bra{ \eta}, \tag{94}\] where \(\ket{\eta}\) are the eigenfunctions of \(\hat{\mathcal{A}}_{3}\) characterized by a quantum number \(\eta\). The eigenvalue \(\epsilon_{\eta}\) should be \(0\) or \(1\) because the antisymmetrizer \(\hat{\mathcal{A}}_{3}\) is idempotent. The eigenstates corresponding to \(\epsilon_{\eta}=1\) form physical antisymmetrized states, while the other eigenstates with \(\epsilon_{\eta}=0\) give spurious states [258, 259]. By selecting the physical states only, Eq. 
(92) can be written as \[\ket{i;\mathcal{I}T}_{A} =\sqrt{6}\sum_{j}D_{ij}^{(\mathcal{I}T)}\ket{j;\mathcal{I}T}, \tag{95}\] \[D_{ij}^{(\mathcal{I}T)} =\sum_{\eta}^{N_{\mathrm{P}}}C_{\eta}^{i(\mathcal{I}T)}C_{\eta}^{ j(\mathcal{I}T)*},\] (96) \[C_{\eta}^{i(\mathcal{I}T)} =\bra{\eta}i;\mathcal{I}T\rangle\,, \tag{97}\] with \[j=\left\{n_{12}^{\prime},l_{12}^{\prime},S_{12}^{\prime},I_{12}^{ \prime},T_{12}^{\prime},n^{\prime},l^{\prime},I^{\prime}\right\}. \tag{98}\] Here, \(\ket{\eta}\) is expanded in terms of the partially antisymmetrized states, i.e., the antisymmetrization only for the first two particles (\(ab\)) is considered. Thus, the condition \((-)^{l_{12}^{\prime}+S_{12}^{\prime}+T_{12}^{\prime}}=-1\) is always satisfied. How to evaluate the number of the physical states \(N_{\mathrm{P}}\) is given later. The coefficient \(C_{\eta}^{i(LT)}\) is computed as follows. Since \(\left|\eta\right\rangle\) satisfies the eigenvalue equation, \[\left(\hat{\mathcal{A}}_{3}-\epsilon_{\eta}\right)\left|\eta\right\rangle=0, \tag{99}\] we obtain \[\sum_{j}\left\langle i;\mathcal{I}T\left|\left(\hat{\mathcal{A}}_{3}-\epsilon_ {\eta}\right)\right|j;\mathcal{I}T\right\rangle C_{\eta}^{j(\mathcal{I}T)*}=0. \tag{100}\] Thus, \(C_{\eta}^{j(\mathcal{I}T)*}\) is obtained by diagonalizing the antisymmetrizer matrix, the matrix element of which is given by \[\mathcal{A}_{ij}=\left\langle i;\mathcal{I}T\left|\hat{\mathcal{A}}_{3}\right| j;\mathcal{I}T\right\rangle=\frac{1}{3}\left\langle i;\mathcal{I}T\left| \left(\mathbb{1}-2\hat{\mathcal{P}}_{bc}\right)\right|j;\mathcal{I}T\right\rangle, \tag{101}\] and \[\left\langle i;\mathcal{I}T\left|\hat{\mathcal{P}}_{bc}\right|j; \mathcal{I}T\right\rangle =(-)^{l_{12}+l_{12}^{\prime}}\hat{I}_{12}\hat{I}_{12}^{\prime} \hat{I}^{\prime}\hat{S}_{12}\hat{S}_{12}^{\prime}\hat{T}_{12}\hat{I}_{12}^{ \prime}\begin{cases}\frac{1}{2}&\frac{1}{2}&T_{12}\\ \frac{1}{2}&T&T_{12}^{\prime}\end{cases}\] \[\times\sum_{\lambda\sigma}\hat{\lambda}^{2}\hat{\sigma}^{2}\left\{ \begin{matrix}\frac{1}{2}&\frac{1}{2}&S_{12}\\ \frac{1}{2}&\sigma&S_{12}^{\prime}\end{matrix}\right\}\begin{cases}l_{12}&S_{1 2}&I_{12}\\ l&\frac{1}{2}&I\\ \lambda&\sigma&\mathcal{I}\end{cases}\end{cases}\begin{cases}l_{12}^{\prime} &S_{12}^{\prime}&I_{12}^{\prime}\\ l^{\prime}&\frac{1}{2}&I^{\prime}\\ \lambda&\sigma&\mathcal{I}\end{cases}\] \[\times\left\langle\!\left\langle n_{12}l_{12}nl,\lambda\left|\, n_{12}^{\prime}l_{12}^{\prime}n^{\prime}l^{\prime},\lambda\right\rangle\! \right\rangle_{d_{3}}. \tag{102}\] Note that the Jacobi-HO states are orthonormal: \(\left\langle i;\mathcal{I}T\left|\,j;\mathcal{I}T\right\rangle=\delta_{ij}\). The mass ratio in the harmonic-oscillator bracket is now \(d_{3}=1/3\). The number of the physical states \(N_{\mathrm{P}}\) is given as a sum of the eigenvalues, i.e., the trace of the antisymmetrizer matrix after the diagonalization. ### Structures of three-body matrix elements #### a.3.1 \(Jt\)-coupled three-body matrix elements From Eq. 
(91), the antisymmetrized \(JT\)-coupled matrix elements for the three-body interaction \(V_{3N}\) is given by \[{}_{A}\langle\left(d,e\right),f;J_{12}^{\prime}JT_{12}^{\prime}T \left|V_{3N}\right|\left(a,b\right),c;J_{12}JT_{12}T\rangle_{A}\] \[\quad=\sum_{\begin{subarray}{c}n_{12}l_{12}S_{12}I_{12}\\ nI\end{subarray}}\sum_{\begin{subarray}{c}n_{12}^{\prime}l_{12}^{\prime}S_{12} ^{\prime}I_{12}^{\prime}\\ nI^{\prime}I^{\prime}I\end{subarray}}\sum_{\mathcal{I}}\,{}_{A}\left\langle \kappa^{\prime};\mathcal{I}T\left|V_{3N}\right|\kappa;\mathcal{I}T\right\rangle _{A}\] \[\quad\times\sum_{\mathcal{N}\mathcal{L}}T_{abcJ_{12}J_{2}J_{2}N \mathcal{L}}^{n_{12}l_{12}^{\prime}l_{12}^{\prime}S_{12}^{\prime}I_{12}^{ \prime}n^{\prime}l^{\prime}}, \tag{103}\] where the coefficient \(T_{abcJ_{12}J\mathcal{L}\mathcal{N}\mathcal{L}}^{n_{12}l_{23}l_{12}nlI}\), called the \(T\) coefficient [260, 261], is defined by \[T_{abcJ_{12}J\mathcal{L}\mathcal{N}\mathcal{L}}^{n_{12}l_{23}l_{12 }nlI2nlI} =(-)^{l_{c}+l_{12}+J}\hat{\bar{a}}_{j}\hat{\bar{b}}_{j}\hat{\bar{b} }_{j}\hat{c}_{j}\hat{I}_{12}\hat{S}_{12}\hat{I}_{12}\hat{I}\mathcal{I}\sum_{L_{ 12}LS\mathcal{J}}(-)^{L_{12}+L+S\mathcal{J}}\hat{L}_{12}^{2}\hat{L}^{2}\hat{S}^{ 2}\hat{\mathcal{J}}^{2}\] \[\times\begin{cases}l_{12}&\mathcal{L}_{12}&L_{12}\\ l_{c}&L&\mathcal{J}\end{cases}\begin{cases}l_{a}&\frac{1}{2}&j_{a}\\ l_{b}&\frac{1}{2}&j_{b}\\ L_{12}&S_{12}&J_{12}\end{cases}\begin{cases}l_{12}&S_{12}&J_{12}\\ l_{c}&\frac{1}{2}&j_{c}\\ L&S&J\end{cases}\] \[\times\sum_{\mathcal{N}_{12}\mathcal{L}_{12}}\left\langle \!\left\langle\mathcal{N}_{12}\mathcal{L}_{12}n_{12}l_{12},L_{12}\,|\,n_{a}l_{ a}n_{b}l_{b},L_{12}\right\rangle\!\right\rangle_{d_{1}}\left\langle\!\left\langle \mathcal{N}\mathcal{L}nl,\mathcal{J}\,\right|\mathcal{N}_{12}\mathcal{L}_{12} n_{c}l_{c},\mathcal{J}\right\rangle\!\right\rangle_{d_{2}}\] \[\times\sum_{\mathcal{K}}\hat{\mathcal{K}}^{2}\begin{cases} \mathcal{L}&l&\mathcal{J}\\ l_{12}&L&\mathcal{K}\end{cases}\begin{cases}\mathcal{L}&\mathcal{K}&L\\ S&J&\mathcal{I}\end{cases}\begin{cases}l_{12}&l&\mathcal{K}\\ S_{12}&\frac{1}{2}&S\\ I_{12}&I&\mathcal{I}\end{cases}. \tag{104}\] Note that Eq. 104 can be simplified by using the \(12\)-\(j\) symbol of the first kind [262]. Through diagonalizing the antisymmetrizer, the antisymmetrized Jaobi-HO matrix element is expressed by \[{}_{A}\!\left\langle\kappa^{\prime};\mathcal{I}T\left|V_{3N}\right|\kappa; \mathcal{I}T\right\rangle_{A} =6\sum_{\bar{\kappa}\bar{\kappa}^{\prime}}D_{\kappa\bar{\kappa}}^{ (\mathcal{I}T)}D_{\kappa^{\prime}\bar{\kappa}^{\prime}}^{(\mathcal{I}T)} \left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|V_{3N}\right|\bar{\kappa}; \mathcal{I}T\right\rangle. 
\tag{105}\] The indices \(\kappa\), \(\bar{\kappa}\), \(\kappa^{\prime}\), and \(\bar{\kappa}^{\prime}\), respectively, stand for \[\kappa =\{n_{12},l_{12},S_{12},I_{12},T_{12},n,l,I\} \left((-)^{l_{12}+S_{12}+T_{12}}=-1\right), \tag{106}\] \[\bar{\kappa} =\big{\{}\bar{n}_{12},\bar{l}_{12},\bar{S}_{12},\bar{T}_{12}, \bar{\bar{n}},\bar{\bar{l}},\bar{\bar{I}}\big{\}} \left((-)^{\bar{l}_{12}+\bar{S}_{12}+\bar{\bar{T}}_{12}}=-1\right),\] (107) \[\kappa^{\prime} =\{n^{\prime}_{12},l^{\prime}_{12},S^{\prime}_{12},I^{\prime}_{ 12},T^{\prime}_{12},n^{\prime},l^{\prime},I^{\prime}\} \left((-)^{l^{\prime}_{12}+S^{\prime}_{12}+T^{\prime}_{12}}=-1\right),\] (108) \[\bar{\kappa}^{\prime} =\big{\{}\bar{n}^{\prime}_{12},\bar{l}^{\prime}_{12},\bar{S}^{ \prime}_{12},\bar{\bar{l}}^{\prime}_{12},\bar{\bar{l}}^{\prime}_{12},\bar{n}^{ \prime},\bar{\bar{l}}^{\prime},\bar{\bar{l}}^{\prime}\big{\}} \left((-)^{\bar{\bar{p}}_{12}+\bar{\bar{S}}^{\prime}_{12}+\bar{ \bar{T}}^{\prime}_{12}}=-1\right), \tag{109}\] where \(\bar{\kappa}\) and \(\bar{\kappa}^{\prime}\) are the quantum numbers originating from the expansion by Eq. (95). Once the three-body MEs are obtained withing the \(JT\)-coupled scheme by Eq. (103), those within the proton-neutron formalism can be constructed via \[{}_{A}\!\left\langle\left(d,e\right),f;J^{\prime}_{12}J\left|V_{3N }\right|\left(a,b\right),c;J_{12}J\right\rangle_{A} =\sum_{TT_{12}T^{\prime}_{12}}\left(\frac{1}{2}m_{\tau_{\alpha}} \frac{1}{2}m_{\tau_{\alpha}}\right|T_{12}M_{T_{12}}\right)\,\left(T_{12}M_{T_{1 2}}\frac{1}{2}m_{\tau_{\epsilon}}\right|TM_{T}\right)\] \[\times\,\left(\frac{1}{2}m_{\tau_{d}}\frac{1}{2}m_{\tau_{ \epsilon}}\right|T^{\prime}_{12}M^{\prime}_{T_{12}}\left)\,\left(T^{\prime}_{12} M^{\prime}_{T_{12}}\frac{1}{2}m_{\tau_{\prime}}\right|TM_{T}\right)\] \[\times\,{}_{A}\!\left\langle\left(d,e\right),f;J^{\prime}_{12}JT^{ \prime}_{12}T\left|V_{3N}\right|\left(a,b\right),c;J_{12}JT_{12}T\right\rangle_{A}. \tag{110}\] #### a.3.2 Chiral three-body potentials and nonlocal regularization The three-nucleon force appears at N\({}^{2}\)LO of the chiral EFT, as explained in Sec. 2.2. It consists of the two-pion exchange (2PE) term, the one-pion exchange plus the two-body contact (1PE) term, and the three-body contact term, as shown in Fig. 3. In the momentum space, the potential \(v_{3N}\) of the operator \(V_{3N}\) is explicitly given by Eqs. (1) to (4). Now we can show that the antisymmetrized three-body MEs can be simplified by \[{}_{A}\!\left\langle\kappa^{\prime};\mathcal{I}T\left|V_{3N}\right| \kappa;\mathcal{I}T\right\rangle_{A} =3\,\left\langle\kappa^{\prime};\mathcal{I}T\left|W_{3N}\right| \kappa;\mathcal{I}T\right\rangle_{A}\] \[=18\sum_{\bar{\kappa}\bar{\kappa}^{\prime}}D_{\kappa\bar{\kappa}}^{ (\mathcal{I}T)}D_{\kappa^{\prime}\bar{\kappa}^{\prime}}^{(\mathcal{I}T)}\left\langle \bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}\right|\bar{\kappa};\mathcal{I}T \right\rangle. \tag{111}\] This is owing to the symmetry of the three-nucleon force with respect to the permutation of particles. In the following sections, the explicit form of \(\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}\right|\bar{\kappa}; \mathcal{I}T\right\rangle\) is presented. 
The reduced operator \(W_{3N}\) is defined by \[\left\langle\mathbf{p}_{a}^{\prime},\mathbf{p}_{b}^{\prime},\mathbf{p}_{c}^{ \prime}\left|W_{3N}\right|\mathbf{p}_{a},\mathbf{p}_{b},\mathbf{p}_{c}\right\rangle =w_{3N}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right)\delta(\mathbf{q} _{a}+\mathbf{q}_{b}+\mathbf{q}_{c})\,, \tag{112}\] \[w_{3N}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) =w_{3N}^{(2\pi)}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right)+w_{3 N}^{(1\pi)}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right)+w_{3N}^{(\rm ct)} \left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right). \tag{113}\] The two-pion exchange potential \(w_{3N}^{(2\pi)}\) is given by \[w_{3N}^{(2\pi)}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) =w_{3N}^{(2\pi;c_{1})}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c} \right)+w_{3N}^{(2\pi;c_{3})}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right)+w_ {3N}^{(2\pi;c_{4})}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right), \tag{114}\] \[w_{3N}^{(2\pi;c_{1})}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) =-\frac{1}{(2\pi)^{6}}\frac{g_{A}^{2}c_{1}m_{\pi}^{2}}{f_{\pi}^{4 }}\frac{\left(\mathbf{\sigma}_{b}\cdot\mathbf{q}_{b}\right)\left(\mathbf{\sigma}_{c}\cdot \mathbf{q}_{c}\right)}{\left(q_{b}^{2}+m_{\pi}^{2}\right)\left(q_{c}^{2}+m_{\pi}^{ 2}\right)}\mathbf{\tau}_{b}\cdot\mathbf{\tau}_{c},\] (115) \[w_{3N}^{(2\pi;c_{3})}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) =\frac{1}{(2\pi)^{6}}\frac{g_{A}^{2}c_{3}}{2f_{\pi}^{4}}\frac{ \left(\mathbf{\sigma}_{b}\cdot\mathbf{q}_{b}\right)\left(\mathbf{\sigma}_{c}\cdot\mathbf{q}_{ c}\right)}{\left(q_{b}^{2}+m_{\pi}^{2}\right)\left(q_{c}^{2}+m_{\pi}^{2} \right)}\left(\mathbf{q}_{b}\cdot\mathbf{q}_{c}\right)\left(\mathbf{\tau}_{b}\cdot\mathbf{ \tau}_{c}\right),\] (116) \[w_{3N}^{(2\pi;c_{4})}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) =\frac{1}{(2\pi)^{6}}\frac{g_{A}^{2}c_{4}}{4f_{\pi}^{4}}\frac{ \left(\mathbf{\sigma}_{b}\cdot\mathbf{q}_{b}\right)\left(\mathbf{\sigma}_{c}\cdot\mathbf{q}_{ c}\right)}{\left(q_{b}^{2}+m_{\pi}^{2}\right)\left(q_{c}^{2}+m_{\pi}^{2} \right)}\left\{\left(\mathbf{q}_{b}\times\mathbf{q}_{c}\right)\cdot\mathbf{\sigma}_{a} \right\}\left\{\left(\mathbf{\tau}_{b}\times\mathbf{\tau}_{c}\right)\cdot\mathbf{\tau}_{a} \right\}, \tag{117}\] while the other potentials \(w_{3N}^{(1\pi)}\) and \(w_{3N}^{(\rm ct)}\) are written as \[w_{3N}^{(1\pi)}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) =-\frac{1}{(2\pi)^{6}}\frac{g_{A}^{4}c_{D}}{4f_{\pi}^{4}\Lambda_{ \chi}}\frac{\left(\mathbf{\sigma}_{c}\cdot\mathbf{q}_{c}\right)\left(\mathbf{\sigma}_{b} \cdot\mathbf{q}_{c}\right)}{q_{c}^{2}+m_{\pi}^{2}}\mathbf{\tau}_{b}\cdot\mathbf{\tau}_{c}, \tag{118}\] \[w_{3N}^{(\rm ct)}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) =\frac{1}{(2\pi)^{6}}\frac{c_{E}}{f_{\pi}^{4}\Lambda_{\chi}}\mathbf{ \tau}_{a}\cdot\mathbf{\tau}_{b}. \tag{119}\] Note that \(w_{3N}\) is one component of Eqs. (1) to (4). Also pay attention that our potential \(w_{3N}\) contains a prefactor \(1/(2\pi)^{6}\), which originates from our convention of the normalization:\(\left\langle\mathbf{p}_{a}^{\prime}\left|\,\mathbf{p}_{a}\right\rangle=\delta(\mathbf{q}_{a})\). See Refs. [87, 99] for more details. 
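To make the structure of these expressions concrete, the sketch below builds the \(c_{1}\) two-pion-exchange vertex of Eq. (115) as an explicit matrix in the three-nucleon spin and isospin space for fixed transferred momenta. The numerical constants (\(g_{A}\), \(f_{\pi}\), \(m_{\pi}\), \(c_{1}\)) and the momenta are illustrative values only, and no regulator is applied.

```python
# Sketch: the c1 two-pion-exchange vertex of Eq. (115), for fixed q_b and q_c, as a
# matrix in the three-nucleon spin (x) isospin space (illustrative constants, GeV units).
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
I2 = np.eye(2, dtype=complex)

def op_on(particle, op):
    """Embed a 2x2 single-particle operator on particle a=0, b=1, c=2."""
    mats = [I2, I2, I2]
    mats[particle] = op
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def sigma_dot_q(particle, q):
    return sum(q[i] * op_on(particle, sig[i]) for i in range(3))

def tau_dot_tau(p1, p2):
    return sum(op_on(p1, sig[i]) @ op_on(p2, sig[i]) for i in range(3))

g_A, f_pi, m_pi, c1 = 1.29, 92.4e-3, 0.1396, -0.81     # c1 in GeV^-1 (illustrative)
q_b = np.array([0.10, 0.00, 0.20])                     # GeV, illustrative momenta
q_c = np.array([0.00, -0.15, 0.05])

prefac = -(g_A**2 * c1 * m_pi**2) / (f_pi**4 * (2 * np.pi)**6)
prop = 1.0 / ((q_b @ q_b + m_pi**2) * (q_c @ q_c + m_pi**2))
spin_part = sigma_dot_q(1, q_b) @ sigma_dot_q(2, q_c)            # particles b and c
w_c1 = prefac * prop * np.kron(spin_part, tau_dot_tau(1, 2))     # 64 x 64 matrix
print(w_c1.shape, np.allclose(w_c1, w_c1.conj().T))              # (64, 64) True
```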
It is convenient to define the Jacobi-HO momenta as \[\mathbf{k}=\frac{1}{\sqrt{2}}\left(\mathbf{p}_{a}-\mathbf{p}_{b}\right),\quad\mathbf{K}=\sqrt {\frac{2}{3}}\left[\frac{1}{2}\left(\mathbf{p}_{a}+\mathbf{p}_{b}\right)-\mathbf{p}_{c} \right],\quad\mathbf{K}_{0}=\frac{1}{\sqrt{3}}\left(\mathbf{p}_{a}+\mathbf{p}_{b}+\mathbf{p}_{c }\right). \tag{120}\] Then, one finds that the transferred momenta are expressed with the Jacobi-HO momenta: \[\mathbf{q}_{a} =\frac{1}{\sqrt{3}}\left(\mathbf{K}_{0}^{\prime}-\mathbf{K}_{0}\right)+ \frac{1}{\sqrt{2}}\left(\mathbf{k}^{\prime}-\mathbf{k}\right)+\frac{1}{\sqrt{6}}\left( \mathbf{K}^{\prime}-\mathbf{K}\right), \tag{121}\] \[\mathbf{q}_{b} =\frac{1}{\sqrt{3}}\left(\mathbf{K}_{0}^{\prime}-\mathbf{K}_{0}\right)- \frac{1}{\sqrt{2}}\left(\mathbf{k}^{\prime}-\mathbf{k}\right)+\frac{1}{\sqrt{6}}\left( \mathbf{K}^{\prime}-\mathbf{K}\right),\] (122) \[\mathbf{q}_{c} =\frac{1}{\sqrt{3}}\left(\mathbf{K}_{0}^{\prime}-\mathbf{K}_{0}\right)- \sqrt{\frac{2}{3}}\left(\mathbf{K}^{\prime}-\mathbf{K}\right). \tag{123}\] Thus, the operator \(W_{3N}\) is given as \[W_{3N} =\iiiint\limits d\mathbf{k}d\mathbf{K}d\mathbf{K}_{0}d\mathbf{k}^{\prime}d\mathbf{K}^{ \prime}d\mathbf{K}^{\prime}d\mathbf{K}_{0}^{\prime}\left|\mathbf{k}^{\prime},\mathbf{K}^{ \prime},\mathbf{K}_{0}^{\prime}\right\rangle\] \[\quad\times w_{3N}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right) \delta\Big{(}\sqrt{3}\left[\mathbf{K}_{0}^{\prime}-\mathbf{K}_{0}\right]\Big{)} \left\langle\mathbf{k},\mathbf{K},\mathbf{K}_{0}\right|\] \[=\frac{1}{\left(\sqrt{3}\right)^{3}}\iiiint\limits d\mathbf{k}d\mathbf{K}d \mathbf{k}^{\prime}d\mathbf{K}^{\prime}\left|\mathbf{k}^{\prime},\mathbf{K}^{\prime}\right\rangle w _{3N}^{(\rm c.m.)}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right)\left\langle\bm {k},\mathbf{K}\right|. \tag{124}\] Here \(w_{3N}^{\rm(c.m.)}\left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right)=\left.w_{3N} \left(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c}\right)\right|_{\mathbf{K}_{0}=\mathbf{K}_{0}^{ \prime}}\). Below we adopt always the condition \(\mathbf{K}_{0}=\mathbf{K}_{0}^{\prime}\), and therefore, we omit the superscript (c.m.) from \(w_{3N}\) for simplicity. It should be paid attention that, in Eq. (124), we have the prefactor \(1/\left(\sqrt{3}\right)^{3}\) originating from the delta function in Eq. (112). Since the chiral EFT is valid only in the low-momentum region, we introduce the regularization to suppress the high-momentum component of the potential. In our approach, the nonlocal regulator [263] depending on the sum of the Jacobi momenta is employed. Note that, consistently, we employ the chiral two-nucleon potential nonlocally regularized [16, 188]. Moreover, in the nonlocal regularization, there is an advantage over the local regularization that the Fierz rearrangement freedom holds exactly [264, 265, 266]. The nonlocal regulator has the form, \[u_{\nu_{0}}(k,K,\Lambda_{0})=\exp\!\left[-\!\left(\frac{k^{2}+K^{2}}{2\Lambda _{0}^{2}}\right)^{\!\nu_{0}}\right]. \tag{125}\] The cutoff momentum \(\Lambda_{0}\) and the power \(\nu_{0}\) must be fixed consistently with the LECs, \(c_{D}\) and \(c_{E}\). Thus, the nonlocally regularized potential reads \[w_{3N}(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c})\to u_{\nu_{0}}(k^{\prime},K^{\prime}, \Lambda_{0})\,w_{3N}(\mathbf{q}_{a},\mathbf{q}_{b},\mathbf{q}_{c})u_{\nu_{0}}(k,K,\Lambda_ {0})\,. 
\tag{126}\] #### a.3.3 Contact term Here, we formulate the Jacobi-HO matrix element \(\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}^{\rm(ct)}\right| \bar{\kappa};\mathcal{I}T\right\rangle\) of the contact term. Using (124), we obtain \[\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}^{\rm( ct)}\right|\bar{\kappa};\mathcal{I}T\right\rangle\] \[\quad=\frac{1}{2\sqrt{3}\pi^{4}}\frac{c_{E}}{f_{\pi}^{4}\Lambda_{ \chi}}\delta_{\bar{l}_{12}0}\delta_{\bar{l}0}\delta_{\bar{l}_{12}0}\delta_{ \bar{l}_{10}^{\prime}}\delta_{\bar{l}_{12}}\delta_{\bar{l}_{12}}\delta_{\bar{ S}_{12}\bar{l}_{12}}\delta_{\bar{S}_{12}\bar{l}_{12}}\delta_{\bar{I}_{1 2}}\delta_{\bar{I}_{12}}\delta_{\bar{S}_{12}\bar{S}_{12}}\delta_{\bar{T}_{12} \bar{T}_{12}}(-)^{\bar{\bar{T}}_{12}+1}\begin{Bmatrix}\frac{1}{2}&\frac{1}{2}& \bar{T}_{12}\\ \frac{1}{2}&\frac{1}{2}&1\end{Bmatrix}\right\rangle\] \[\quad\times\iint dkdKkKP_{\bar{n}_{12}0}(k)\,P_{\bar{n}0}(K)\,u_{ \nu_{0}}(k,K,\Lambda_{0})\] \[\quad\times\iint dk_{1}^{\prime}dk_{1}^{\prime}k_{1}^{\prime}K_{1 }^{\prime}P_{\bar{n}_{12}0}(k_{1}^{\prime})\,P_{\bar{n}^{\prime}0}(K_{1}^{ \prime})\,u_{\nu_{0}}(k^{\prime},K^{\prime},\Lambda_{0})\,, \tag{127}\] where HO-wave function in the momentum space is written as \[P_{nl}(k)=\left(\frac{2}{\pi}\right)^{\frac{1}{2}}k\int drrj_{l}(kr)R_{nl}(r,b _{0})=(-)^{n}R_{nl}\!\left(k,\frac{1}{b_{0}}\right). \tag{128}\] Here \(j_{l}\) is the spherical Bessel function and we explicitly put the argument \(b_{0}\) in \(R_{nl}\) defined by Eq. (87). The momentum integration in Eq. (127) needs to be carried out numerically. #### a.3.4 One-pion exchange plus contact term The Jacobi-HO matrix element \(\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}^{(1\pi)}\right|\bar{ \kappa};\mathcal{I}T\right\rangle\) of the 1PE term is slightly complicated: \[\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}^{(1\pi) }\right|\bar{\kappa};\mathcal{I}T\right\rangle\] \[\quad=\frac{\sqrt{3}}{4\pi^{4}}\frac{g_{A}c_{D}}{f_{A}^{4} \Lambda_{\chi}}\delta_{\bar{l}_{12}0}\delta_{\bar{l}_{12}0}^{\bar{l}_{12}0} \delta_{\bar{S}_{12}\bar{l}_{2}}\delta_{\bar{S}_{12}^{\prime}\bar{l}_{12}^{ \prime}}\bar{i}^{\bar{l}\bar{l}^{\prime}}(-)^{\bar{l}+\bar{l}+\mathcal{I}+ \mathcal{I}+\frac{1}{2}}\hat{\bar{S}}_{12}\hat{\bar{S}}_{12}^{\hat{\bar{l}}} \hat{\bar{l}^{\prime}}\hat{\bar{T}}_{12}^{\hat{\bar{l}}^{\prime}}\] \[\quad\times\begin{cases}\bar{T}_{12}&\bar{T}_{12}^{\prime}&1\\ \frac{1}{2}&\frac{1}{2}\end{cases}\begin{cases}\bar{T}_{12}&\bar{T}_{12}^{ \prime}&1\\ \frac{1}{2}&\frac{1}{2}\end{cases}\begin{cases}\bar{S}_{12}&\bar{S}_{12}^{ \prime}&1\\ \frac{1}{2}&\frac{1}{2}\end{cases}\begin{cases}\bar{S}_{12}&\bar{S}_{12}^{ \prime}&1\\ \bar{I}^{\prime}&\bar{I}&\mathcal{I}\end{cases}\] \[\quad\times\sum_{\lambda_{0}\lambda_{1}\lambda_{2}}\hat{\lambda }_{0}\hat{\lambda_{0}-\lambda_{1}}\binom{2\lambda_{0}+1}{2\lambda_{1}}^{\frac {1}{2}}\left(101|\lambda_{0}0\right)\left(\lambda_{0}-\lambda_{1},0\lambda_{2} 0|\bar{l}0\right)\left(\lambda_{1}0\lambda_{2}0|\bar{l}0\right)\] \[\quad\times\begin{cases}\lambda_{0}-\lambda_{1}&\lambda_{1}& \lambda_{0}\\ \bar{l}^{\prime}&\bar{l}&\lambda_{2}\end{cases}\begin{cases}\frac{1}{2}&\bar{ l}^{\prime}&\bar{I}^{\prime}\\ \frac{1}{2}&\bar{l}&\bar{I}\\ 1&\lambda_{0}&1\end{cases}\] \[\quad\times\iiint dkdk^{\prime}dKdk^{\prime}kk^{\prime}K^{\lambda _{0}-\lambda_{1}+1}K^{\prime\lambda_{1}+1}f_{\lambda_{2}}^{(\lambda_{0})}(K,K ^{\prime})\] \[\quad\times\left.P_{\bar{n}_{12}0}(k)\,P_{\bar{n}^{\prime}_{12}0} 
(k^{\prime})\,P_{\bar{n}\bar{l}}(K)\,P_{\bar{n}^{\prime}\bar{l}^{\prime}}(K^{ \prime})\,u_{v_{0}}(k,K,\Lambda_{0})\,u_{\nu_{0}}(k^{\prime},K^{\prime}, \Lambda_{0})\,,\right. \tag{129}\] with the binomial coefficient \(\binom{n_{1}}{n_{2}}=n_{1}!/\left[(n_{1}-n_{2})!n_{2}!\right]\). The function \(f_{\lambda_{2}}^{(\lambda_{0})}\) originating from the multipole expansion of the propagator, together with the factor \((2/3)^{\lambda_{0}/2}q_{c}^{2-\lambda_{0}}\) coming from \(\left(\boldsymbol{\sigma}_{c}\cdot\boldsymbol{q}_{c}\right)\left(\boldsymbol{ \sigma}_{b}\cdot\boldsymbol{q}_{c}\right)\) is given by \[f_{\lambda_{2}}^{(\lambda_{0})}(K,K^{\prime})=\frac{\hat{\lambda}_{2}^{2}}{2} \int_{-1}^{1}dwP_{\lambda_{2}}(w)\frac{\left(\frac{2}{3}\right)^{\frac{\lambda _{0}}{2}}q_{c}^{2-\lambda_{0}}}{q_{c}^{2}+m_{\pi}^{2}}, \tag{130}\] where \(P_{\lambda_{2}}\) is the Legendre polynomial as a function of \(w=\cos\theta\) with the angle \(\theta\) between \(\boldsymbol{K}\) and \(\boldsymbol{K}^{\prime}\). #### a.3.5 Two-pion exchange term The 2PE potentials given by Eqs. (115) to (117) depend on two momenta, \(\boldsymbol{q}_{b}\) and \(\boldsymbol{q}_{c}\), which make the matrix elements cumbersome. After some manipulations, one can derive the Jacobi-HO matrix elements as \[\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}^{(2\pi;c_{ 1})}\right|\bar{\kappa};\mathcal{I}T\right\rangle\] \[\quad=3c_{1}m_{\pi}^{2}S_{\bar{\kappa}\bar{\kappa}^{\prime}}^{ \mathcal{I}T}\left\{\begin{array}{ccc}\bar{S}_{12}&\bar{S}_{12}^{\prime}&1 \\ \frac{1}{2}&\frac{1}{2}\end{array}\right\}\begin{cases}\bar{T}_{12}&\bar{T}_{12}^ {\prime}&1\\ \frac{1}{2}&\frac{1}{2}\end{cases}\] \[\quad\times\sum_{\lambda_{b}\lambda_{c}}(-)^{\lambda_{b}+1}\sum_{ \lambda_{1}\lambda_{2}\lambda_{3}}\,I_{\bar{\kappa}\bar{\kappa}^{\prime}}^{ \prime\lambda_{b}\lambda_{c}\lambda_{b}^{\prime}\lambda_{b}^{\prime}\lambda_{1} \lambda_{2}\lambda_{3}\lambda_{3}^{\prime}\lambda_{3}^{\prime\prime}}\] \[\quad\times\sum_{l_{1}}(-)^{l_{1}}\bar{l}_{1}^{2}X_{\bar{\kappa} \bar{\kappa}^{\prime}}^{\lambda_{b}\lambda_{c}\lambda_{b}^{\prime}\lambda_{1} \lambda_{2}\lambda_{3}\lambda_{3}^{\prime}\lambda_{3}^{\prime\prime}} \tag{131}\] \[\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}^{(2\pi;c_{ 3})}\right|\bar{\kappa};\mathcal{I}T\right\rangle\] \[\quad=\frac{\sqrt{3}}{2}c_{3}S_{\bar{\kappa}\bar{\kappa}^{ \prime}}^{\mathcal{I}T}\left\{\begin{array}{ccc}\bar{S}_{12}&\bar{S}_{12}^{ \prime}&1\\ \frac{1}{2}&\frac{1}{2}\end{array}\right\}\begin{cases}\bar{T}_{12}&\bar{T}_{1 2}^{\prime}&1\\ \frac{1}{2}&\frac{1}{2}\end{cases}\] \[\quad\times\sum_{l_{b}L_{c}}\hat{L}_{b}\hat{L}_{c}\left(1010|L_{b }0\right)\left(101|L_{c}0\right)\sum_{\begin{subarray}{c}\lambda_{b}\lambda_{ c}\lambda_{b}^{\prime}\lambda_{b}^{\prime}\lambda_{1}\lambda_{2}\lambda_{3} \lambda_{3}^{\prime}\lambda_{3}^{\prime\prime}\\ \lambda_{b}^{\prime}\lambda_{b}^{\prime\prime}\lambda_{b}^{\prime\prime} \lambda_{1}\lambda_{2}\lambda_{3}\lambda_{3}^{\prime}\lambda_{3}^{\prime} \end{subarray}}I_{\bar{\kappa}\bar{\kappa}^{\prime}\bar{\kappa}^{\prime} \bar{\kappa}^{\prime}\bar{\kappa}^{\prime}\bar{\kappa}^{\prime}\bar{\kappa}^{ \prime}\bar{\kappa}^{\prime}L_{b}L_{c}L_{b}^{\prime}=L_{b},L_{c}^{\prime}=L_{c }}^{\left(\Lambda_{0}\right)}\] \[\quad\times\sum_{l_{0}l_{1}}\bar{l}_{1}^{2}\bar{l}_{0}^{2}\begin{cases} L_{b}-\lambda_{b}&\lambda_{b}&L_{b}\\ 1&1&l_{0}\end{cases}\begin{cases}l_{0}&l_{1}&1\\ 
L_{c}&1&\lambda_{b}\end{cases}X_{\bar{\kappa}\bar{\kappa}^{\prime}\mathcal{I},L_{0}=1,L_{b}^{\prime}=L_{b},L_{c}^{\prime}=L_{c},l_{0}l_{1}}^{\lambda_{b} \lambda_{c}\lambda_{b}^{\prime}\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{3}^{ \prime}\lambda_{3}^{\prime\prime}}\end{cases}\] \[\left\langle\bar{\kappa}^{\prime};\mathcal{I}T\left|W_{3N}^{(2\pi; c_{4})}\right|\bar{\kappa};\mathcal{I}T\right\rangle\] \[\quad=9\sqrt{3}c_{4}(-)^{\bar{l}_{12}+1}S_{\bar{\kappa}\bar{ \kappa}^{\prime}}^{\mathcal{I}T}\left\{\begin{array}{ccc}\frac{1}{2}&\frac{1 }{2}&\bar{T}_{12}^{\prime}\\ \frac{1}{2}&\frac{1}{2}&\bar{T}_{12}\\ 1&1&1\end{array}\right\}\] \[\quad\times\sum_{L_{0}L_{b}L_{c}}\hat{L}_{0}^{2}\hat{L}_{b}\hat{L }_{c}\left(101|L_{b}0\right)\left(101|L_{c}0\right)\left\{\begin{array}{ccc}L_ {0}&L_{b}&1\\ 1&1&1\end{array}\right\}\begin{cases}\frac{1}{2}&\frac{1}{2}&\bar{S}_{12}^{ \prime}\\ \frac{1}{2}&\frac{1}{2}&\bar{S}_{12}\\ 1&1&L_{0}\end{cases}\] \[\quad\times\sum_{\begin{subarray}{c}\lambda_{b}\lambda_{c}\\ \lambda_{b}^{\prime}\lambda_{b}^{\prime}\end{subarray}}\sum_{\begin{subarray}{c} \lambda_{1}\lambda_{2}\lambda_{3}\\ \lambda_{b}^{\prime}\lambda_{b}^{\prime}\lambda_{3}^{\prime}\end{subarray}}I_{\bar{ \kappa}\bar{\kappa}^{\prime}\mathcal{I},L_{0}L_{b}^{\prime}=L_{b},L_{c}^{ \prime}=L_{c}}^{\left(\Lambda_{0}\right)}\] \[\quad\times\sum_{l_{0}l_{1}}\bar{l}_{0}^{2}\hat{l}_{1}^{2}\begin{cases} L_{b}-\lambda_{b}&\lambda_{b}&L_{b}\\ 1&L_{0}&l_{0}\end{cases}\begin{cases}l_{0}&l_{1}&1\\ L_{c}&1&\lambda_{b}\end{cases}X_{\bar{\kappa}\bar{\kappa}^{\prime}\mathcal{I},L _{0}L_{b}^{\prime}=L_{b},L_{c}^{\prime}=L_{c},l_{0}l_{1}}^{\lambda_{b} \lambda_{c}\lambda_{b}^{\prime}\lambda_{b}^{\prime}\lambda_{1}\lambda_{2}\lambda_{3 }\lambda_{3}^{\prime}\lambda_{3}^{\prime\prime}}\end{cases}\] \[\quad\times\sum_{l_{0}l_{1}}\bar{l}_{0}^{2}\hat{l}_{1}^{2}\begin{cases} L_{b}-\lambda_{b}&\lambda_{b}&L_{b}\\ 1&L_{0}&l_{0}\end{cases}\begin{cases}l_{0}&l_{1}&1\\ L_{c}&1&\lambda_{b}\end{cases}X_{\bar{\kappa}\bar{\kappa}^{\prime}\mathcal{I},L _{0}L_{b}^{\prime}=L_{b},L_{c}^{\prime}=L_{c},l_{0}l_{1}}^{\lambda_{b}\lambda_{c} \lambda_{b}^{\prime}\lambda_{b}^{\prime}\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{3} ^{\prime}\lambda_{3}^{\prime\prime}}\end{cases}\] \[\quad\times\sum_{l_{0}l_{1}}\bar{l}_{0}^{2}\hat{l}_{1}^{2}\begin{cases} L_{b}-\lambda_{b}&\lambda_{b}&L_{b}\\ 1&L_{0}&l_{0}\end{cases}\begin{cases}l_{0}&l_{1}&1\\ L_{c}&1&\lambda_{b}\end{cases}X_{\bar{\kappa}\bar{\kappa}^{\prime}\mathcal{I},L _{0}L_{b}^{\prime}=L_{b},L_{c}^{\prime}=L_{c},l_{0}l_{1}}^{\lambda_{b} \lambda_{c}\lambda_{b}^{\prime}\lambda_{b}^{\prime}\lambda_{1}\lambda_{2} \lambda_{3}\lambda_{3}^{\prime}\lambda_{3}^{\prime\prime}}\end{cases}\] \[\quad\times\sum_{l_{0}l_{1}}\bar{l}_{0}^{2}\hat{l}_{1}^{2}\begin{cases} L_{b}-\lambda_{b}&\lambda_{b}&L_{b}\\ 1&L_{0}&l_{0}\end{cases}\begin{cases}l_{0}&l_{1}&1\\ L_{c}&1&\lambda_{b}\end{cases}X_{\bar{\kappa}\bar{\kappa}^{\prime}\mathcal{I},L _{0}L_{b}^{\prime}=L_{b},L_{c}^{\prime}=L_{c},l_{0}l_{1}}^{\lambda_{b}\lambda_{c} \lambda_{b}^{\prime}\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{3}^{\prime}\lambda_{3 }^{\prime\prime}}\end{cases}\] \[\quad\times\sum_{l_{0}l_{1}}\bar{l}_{0}^{2}\hat{l}_{1}^{2}\begin{cases} L_{b}-\lambda_{b}&\lambda_{b}&L_{b}\\ 1&L_{0}&l_{0}\end{cases}\begin{cases}l_{0}&l_{1}&1\\ L_{c}&1&\lambda_{b}\end{cases}X_{\bar{\kappa}\bar{\kappa}^{\prime}\mathcal{I},L _{0}L_{b}^{\prime}=L_{b},L_{c}^{\prime}=L_{c},l_{ \[I^{\nu_{0}\lambda_{0}\lambda_{0}\lambda_{0}^{\prime}\lambda_{0}^{ 
\prime}\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{3}^{\prime}\lambda_{3}^{\prime \prime}}_{\bar{\kappa}\bar{\kappa}^{\prime}\bar{\kappa}L_{D}L^{\prime}_{L}L^{ \prime}_{L}L^{\prime}_{L}}\] \[\quad=\sum_{l_{2}l_{3}}\hat{l}_{2}\hat{l}_{3}\left(L^{\prime}_{c}- \lambda_{c},0,\lambda_{b}-\lambda_{b}^{\prime\prime},0|l_{2}0\right)\left( \lambda_{c}0\lambda_{0}^{\prime\prime}0|l_{3}0\right)\left\{\begin{matrix} \lambda_{b}-\lambda_{b}^{\prime\prime}&\lambda_{b}^{\prime\prime}&\lambda_{ b}\\ L^{\prime}_{c}-\lambda_{c}&\lambda_{c}&L^{\prime}_{c}\\ l_{2}&l_{3}&l_{1}\end{matrix}\right\}\] \[\quad\times\sum_{\begin{matrix}\lambda_{0}^{\prime}\\ \lambda_{\Lambda}\lambda^{\prime}\end{matrix}}\hat{\lambda}\hat{\lambda}^{ \prime}\hat{\Lambda}\hat{\Lambda}^{\prime}\left(L^{\prime}_{b}-\lambda_{b}- \lambda^{\prime}_{b},0\lambda 0|\bar{l}_{12}0\right)\left(\lambda^{\prime}_{0}0\lambda^{ \prime}0|\bar{l}^{\prime}_{12}0\right)\left(l_{2}0\Lambda 0|\bar{l}0\right)\left(l_{3}0 \Lambda^{\prime}0|\bar{l}^{\prime}0\right)\] \[\quad\times\left(\lambda_{2}0,\lambda_{3}-\lambda_{3}^{\prime},0 |\lambda 0\right)\left(\lambda_{2}0\lambda_{3}^{\prime}0|\lambda^{\prime}0 \right)\left(\lambda_{1}0,\lambda_{3}-\lambda_{3}^{\prime\prime},0|\Lambda 0\right)\left(\lambda_{1}0\lambda_{3}^{\prime\prime}0|\Lambda^{\prime}0\right)\] \[\quad\times\left\{\begin{matrix}\lambda_{3}-\lambda_{3}^{\prime}& \lambda_{3}^{\prime}&\lambda_{3}\\ \lambda^{\prime}&\lambda&\lambda_{2}\end{matrix}\right\}\left\{\begin{matrix} \lambda_{3}-\lambda_{3}^{\prime\prime}&\lambda_{3}^{\prime\prime}&\lambda_{3 }\\ \Lambda^{\prime}&\Lambda&\lambda_{1}\end{matrix}\right\}\] \[\quad\times\sum_{\begin{matrix}L_{1}L_{2}L_{3}\\ \end{matrix}}(-)^{L_{1}+L_{2}+L_{3}}\hat{L}_{1}^{2}\hat{L}_{2}^{2}\hat{L}_{3} ^{2}\left\{\begin{matrix}\bar{l}_{12}&\bar{l}_{12}^{\prime}&L_{1}\\ \bar{l}^{\prime}&\bar{l}&\mathcal{I}\end{matrix}\right\}\left\{\begin{matrix} L_{0}&L^{\prime}_{b}-\lambda_{b}&l_{0}\\ \lambda_{3}&L_{1}&L_{2}\end{matrix}\right\}\left\{\begin{matrix}1&l_{1}&l_{0}\\ \lambda_{3}&L_{1}&L_{3}\end{matrix}\right\}\] \[\quad\times\left\{\begin{matrix}\bar{S}_{12}^{\prime}&\bar{l}_{12} &\bar{I}_{12}^{\prime}\\ \bar{S}_{12}&\bar{l}_{12}&\bar{I}_{12}\\ L_{0}&L_{2}&L_{1}\end{matrix}\right\}\left\{\begin{matrix}\frac{1}{2}&\bar{l} ^{\prime}&\bar{l}^{\prime}\\ \frac{1}{2}&\bar{l}&\bar{I}\\ 1&L_{3}&L_{1}\end{matrix}\right\}\left\{\begin{matrix}L^{\prime}_{b}-\lambda_{ b}-\lambda^{\prime}_{b}&\lambda^{\prime}_{b}&L^{\prime}_{b}-\lambda_{b}\\ \lambda&\lambda^{\prime}&\lambda_{3}\\ \bar{l}_{12}&\bar{l}_{12}^{\prime}&L_{2}\end{matrix}\right\}\left\{\begin{matrix} l_{2}&l_{3}&l_{1}\\ \Lambda&\Lambda^{\prime}&\lambda_{3}\\ \bar{l}&\bar{l}^{\prime}&L_{3}\end{matrix}\right\}, \tag{136}\] as well as \[f^{\left(L_{b}L_{c}\right)}_{\lambda_{1}\lambda_{2}\lambda_{3}}(k, k^{\prime},K,K^{\prime}) =\frac{\hat{\lambda}_{1}^{2}\hat{\lambda}_{2}^{2}\hat{\lambda}_{3} ^{2}}{8}\int_{-1}^{1}\int_{-1}^{1}\int_{-1}^{1}dw_{1}dw_{2}dw_{3}P_{\lambda_{1 }}(w_{1})P_{\lambda_{2}}(w_{2})P_{\lambda_{3}}(w_{3})\] \[\times\left(|\mathbf{k}-\mathbf{k}^{\prime}|\,|\mathbf{K}-\mathbf{K}^{\prime}| \right)^{-\lambda_{3}}\frac{2^{-\frac{L_{b}}{2}}\left(\frac{2}{3}\right)^{ \frac{L_{b}}{2}}q_{b}^{2-L_{b}}q_{c}^{2-L_{c}}}{\left(q_{b}^{2}+m_{\pi}^{2} \right)\left(q_{c}^{2}+m_{\pi}^{2}\right)}, \tag{137}\] which originates from the triple-fold multipole expansion with respect to \(w_{1}=\cos\theta_{1}\), \(w_{2}=\cos\theta_{2}\), and \(w_{3}=\cos\theta_{3}\) respectively defined by the angles \(\theta_{1}\), 
\(\theta_{2}\), and \(\theta_{3}\) between \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\), \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\), and \(\mathbf{K}-\mathbf{K}^{\prime}\) and \(\mathbf{k}-\mathbf{k}^{\prime}\).
2303.07237
The atomic-to-molecular hydrogen transition in the TNG50 simulation: Using realistic UV fields to create spatially resolved HI maps
Cold gas in galaxies provides a crucial test to evaluate the realism of cosmological hydrodynamical simulations. To extract the atomic and molecular hydrogen properties of the simulated galaxy population, postprocessing methods taking the local UV field into account are required. We improve upon previous studies by calculating realistic UV fields with the dust radiative transfer code SKIRT to model the atomic-to-molecular transition in TNG50, the highest-resolution run of the IllustrisTNG suite. Comparing integrated quantities such as the HI mass function, we study to what detail the UV field needs to be modelled in order to calculate realistic cold gas properties. We then evaluate new, spatially resolved comparisons for cold gas in galaxies by exploring synthetic maps of atomic hydrogen at redshift zero and compare them to 21-cm observations of local galaxies from the WHISP survey. In terms of non-parametric morphologies, we find that TNG50 HI maps are less concentrated than their WHISP counterparts (median $\Delta C\approx0.3$), due in part to central HI deficits related to the ejective character of supermassive black hole feedback in TNG. In terms of the HI column density distribution function, we find discrepancies between WHISP and IllustrisTNG that depend on the total HI abundance in these datasets as well as the postprocessing method. To fully exploit the synergy between cosmological simulations and upcoming deep HI/H2 data, we advocate the use of accurate methods to estimate the UV radiation field and to generate mock maps.
Andrea Gebek, Maarten Baes, Benedikt Diemer, W. J. G. de Blok, Dylan Nelson, Anand Utsav Kapoor, Peter Camps, Omphile Rabyang, Lerothodi Leeuw
2023-03-13T16:10:12Z
http://arxiv.org/abs/2303.07237v1
The atomic-to-molecular hydrogen transition in the TNG50 simulation: Using realistic UV fields to create spatially resolved H i maps ###### Abstract Cold gas in galaxies provides a crucial test to evaluate the realism of cosmological hydrodynamical simulations. To extract the atomic and molecular hydrogen properties of the simulated galaxy population, postprocessing methods taking the local UV field into account are required. We improve upon previous studies by calculating realistic UV fields with the dust radiative transfer code SKIRT to model the atomic-to-molecular transition in TNG50, the highest-resolution run of the IllustrisTNG suite. Comparing integrated quantities such as the H i mass function, we study to what detail the UV field needs to be modelled in order to calculate realistic cold gas properties. We then evaluate new, spatially resolved comparisons for cold gas in galaxies by exploring synthetic maps of atomic hydrogen at redshift zero and compare them to 21-cm observations of local galaxies from the WHISP survey. In terms of non-parametric morphologies, we find that TNG50 H i maps are less concentrated than their WHISP counterparts (median \(\Delta C\approx 0.3\)), due in part to central H i deficits related to the ejective character of supermassive black hole feedback in TNG. In terms of the H i column density distribution function, we find discrepancies between WHISP and IllustrisTNG that depend on the total H i abundance in these datasets as well as the postprocessing method. To fully exploit the synergy between cosmological simulations and upcoming deep H i /H\({}_{2}\) data, we advocate the use of accurate methods to estimate the UV radiation field and to generate mock maps. keywords: galaxies: ISM - galaxies: structure - ISM: molecules - radio lines: ISM - methods: numerical ## 1 Introduction The physical processes regulating the interstellar medium (ISM) are integral to understand the evolution of galaxies. The most important process in galaxy evolution, star formation, occurs in the cold and dense molecular ISM. For molecular hydrogen to form, enough atomic gas (potentially supported by the presence of dust) is required to cool and shield the molecular clouds against energetic ultraviolet (UV) radiation. This leads to the empirical correlation between the surface densities of gas and star-formation rate (the Kennicutt-Schmidt law, Schmidt, 1959; Kennicutt, 1998), which is significantly tighter when using the surface density of molecular gas on sub-kpc scales (e.g. Bigiel et al., 2008). On the other hand, the ISM is actively shaped by various processes of galaxy evolution such as momentum and energy injections from supernovae and active galactic nuclei, chemical evolution due to metal return from evolved stars, large-scale galactic inflows from the gas residing in the halo, and interactions with other galaxies like ram-pressure stripping. These processes shape the ISM into a complex, multi-phase environment which is a prime target for observational campaigns targeted at understanding galaxy evolution. As a dominant fraction of the mass of the ISM exists in the form of atomic hydrogen (H i ), observations targeted at the 21-cm line of H i are useful to study the structure of nearby galaxies. Unlike Ly-\(\alpha\) radiation, photons emitted due to the H i spin-flip transition which gives rise to the 21-cm line penetrate both interstellar dust clouds and Earth's atmosphere. 
H i exists in the ISM mostly in two equilibrium states, the warm (\(T\sim 5000\) K) and the cold (\(T\sim 100\) K) neutral media (WNM and CNM, e.g. Saintonge and Catinella, 2022). 21-cm emission from the WNM is generally assumed to be optically thin which gives a trivial relation between the observable 21-cm flux density and the physical quantity of interest, the H i mass surface density. 21-cm observations of the Milky Way indicate that the WNM contains most of the H i mass (Murray et al., 2018). The CNM is dense and cold enough for H i self-absorption to become important which complicates the relation between 21-cm flux and H i mass. For 21-cm emission observations, correction factors to account for the 'hidden' H i mass in the CNM are rather uncertain. While earlier measurements in the LMC, M31 and M33 indicated a correction factor of \(\approx 35\,\%\)(Braun, 2012), the more recent study by Koch et al. (2021) finds for the same objects a correction factor of \(\approx 10\,\%\), in line with the ALFALFA survey of local galaxies (Jones et al., 2018). Observational campaigns using 21-cm observations to measure the galactic H i content of local galaxies include blind H i-selected surveys (HIPASS, Barnes et al., 2001; ALFALFA, Giovanelli et al., 2005) and surveys targeting samples representative of the galaxy population (GASS, Catinella et al., 2010; xGASS, Catinella et al., 2018). These spatially unresolved surveys made use of the superior sensitivity of the single-dish Parkes and Arecibo telescopes to measure galactic H i masses. With these data, multiple scaling relations between the atomic gas mass and other galactic properties such as stellar mass and star formation rate have been established (see Saintonge and Catinella, 2022 for a review), with far-reaching implications for galaxy evolution. Interferometric observations obtained spatially resolved 21-cm fluxes to map the structure and kinematics of H i (e.g. WHISP, van der Hulst et al., 2001; THINGS, Walter et al., 2008; BLUEDISK, Wang et al., 2013). Such resolved data has been used to study radial H i surface density profiles (Wang et al., 2014), H i morphologies in the context of mergers (Holwerda et al., 2011), and the Tully-Fisher relation (Tully and Fisher, 1977) constructed from spatially resolved H i kinematics (Ponomareva et al., 2017). These observational insights into the cold gas content of galaxies provide a vital test for simulations of galaxy evolution. Modelling atomic and molecular hydrogen within galaxies numerically requires resolving the molecular clouds in the CNM to capture the formation of H\({}_{2}\) within a chemical network. For cosmological hydrodynamical simulations that follow the evolution of baryons and dark matter of large volumes (\(\sim 100^{3}\,\mathrm{\,Mpc^{3}}\)), modelling this chemical network on the required spatial scales is not feasible at present. Moreover, this chemical network depends on the local radiation field, which for current large-volume cosmological simulations run to \(z=0\) is not explicitly followed. Hence, as a critical test for cosmological simulations, postprocessing of the simulation output is required to extract the atomic and molecular hydrogen content and compare to existing or future observational data. 
Such postprocessing studies (EAGLE: Lagos et al., 2015; Bahe et al., 2016; Marasco et al., 2016; Crain et al., 2017; AURIGA: Marinacci et al., 2017; IllustrisTNG: Diemer et al., 2018; Villaescusa-Navarro et al., 2018; Popping et al., 2019; Diemer et al., 2019; Stevens et al., 2019; Watts et al., 2020; Inoue et al., 2020; Stevens et al., 2021; Yates et al., 2021; SIMBA: Dave et al., 2020; FireBOX: Gensior et al., 2022) mostly focus on redshift zero, where observational data is most available. Despite qualitative agreement, inferred discrepancies include the presence of spurious H i holes in the EAGLE simulation (Bahe et al., 2016), an excess of H i and too large H i disks in the AURIGA simulation (Marinacci et al., 2017) and an overabundance of H i at \(z=0\) for TNG50 (Diemer et al., 2019). For molecular hydrogen, a robust comparison between simulations and observations is much more challenging (e.g. Popping et al., 2019; Inoue et al., 2020). In contrast to the direct detectability of H i, molecular hydrogen is very difficult to observe directly due to the lack of a permanent dipole moment and high excitation temperatures. The molecular gas is usually observed through transitions of the second most abundant molecule, CO, introducing a rather uncertain factor to convert the CO line luminosity to the H\({}_{2}\) mass (see Bolatto et al., 2013 for a review). Key to H i/H\({}_{2}\) postprocessing studies is a scheme to partition the neutral hydrogen simulation output into its atomic and molecular phases. Partitioning schemes based on analytical arguments or high-resolution galaxy simulations both consider the formation of H\({}_{2}\) on dust grains and H\({}_{2}\) photodissociation from UV radiation in the Lyman-Werner band (\(11.2-13.6\,\mathrm{eV}\), \(912-1108\,\mathrm{\AA}\)). Hence, applying such hydrogen partitioning models to cosmological simulations requires estimating the local radiation field strength at 1000 A since the partitioning models use this value as a proxy for the radiation field strength in the Lyman-Werner band. Two different approximations to estimate the UV field exist in the literature: Lagos et al. (2015) introduced a scaling with the local star-formation rate of gas cells. Cosmological simulations typically use a Kennicutt-Schmidt type relation above a specific density threshold to model star formation, while lower-density cells are not star-forming (see Section 2.1). This leads to a strong discontinuity in the estimated UV field. To overcome this limitation, Diemer et al. (2018) introduced a method which spreads the UV flux from star-forming gas cells without attenuation (but with an escape fraction of 10%) throughout the galaxy. Since attenuation by dust is significant in the UV these radiation field estimates introduce large modelling uncertainties. In this study we improve upon previous methods by using the Monte Carlo dust radiative transfer code SKIRT (Baes et al., 2011; Camps and Baes, 2015; Camps and Baes, 2020) to obtain a realistic estimate for the UV field. We describe our simulation methods and the simulation and observational datasets in Section 2. Using the highest-resolution installment of the IllustrisTNG simulation suite, TNG50-1, we explore how the different UV field estimates affect the H i/H\({}_{2}\) statistics of the simulated galaxy population in terms of mass functions and average radial profiles in Section 3. 
Exploiting the realistic UV fields and the high resolution of TNG50-1, we evaluate the realism of cold gas in cosmological simulations by generating spatially resolved H i maps. We compare the simulated H i maps to 21-cm data from the WHISP (Westerbork H i survey of Spiral and Irregular Galaxies) survey in terms of their non-parametric morphologies in Section 4. As complementary metric to non-parametric morphologies, we consider H i column density distribution functions (tracing _how much_ H i exists per column density instead of how the H i columns are spatially distributed) in Section 5. We discuss and contextualize our results in Section 6, and conclude in Section 7. ## 2 Methods ### The IllustrisTNG simulations The IllustrisTNG suite (Pillepich et al., 2018; Springel et al., 2018; Nelson et al., 2018; Naiman et al., 2018; Marinacci et al., 2018) is a set of cosmological, magnetohydrodynamical simulations run using the moving-mesh code AREPO (Springel, 2010). The simulation suite consists of three different volumes with box sizes of approximately 50, 100, and 300 comoving Mpc, each realized with three to four different resolutions. All of these simulations were run with the same physical model, which means that their subgrid parameters were not recalibrated (unlike the EAGLE simulation, Schaye et al., 2015). For the cosmological parameters, the simulations use the 2015 results measured by the Planck satellite (Planck Collaboration et al., 2016). Since we explore synthetic H i maps which are sensitive to the spatial resolution we use the simulation with the highest resolution, TNG50-1, hereafter referred to as TNG50 (Pillepich et al., 2019; Nelson et al., 2019). We also consider the lower-resolution TNG50-2 and the larger-volume TNG100-1 (hereafter referred to as TNG100) runs to test the convergence of some of our results against resolution and box size. We summarize the box sizes and resolutions of the different IllustrisTNG simulations considered in this study in Table 1. In the following, we briefly describe the aspects of IllustrisTNG and its galaxy formation model (Weinberg et al., 2017; Pillepich et al., 2018) that are most relevant to this study. TNG50 simulates a cube with box size of 51.7 comoving Mpc from \(z=127\) to \(z=0\). This volume is resolved with \(2160^{3}\) baryonic and dark matter particles, corresponding to a mean particle mass of \(8.5\times 10^{4}\,M_{\odot}\) and \(4.5\times 10^{5}\,M_{\odot}\), respectively. This mass resolution enables a spatial resolution of \(70-140\) pc for the densest star-forming regions of galaxies. Galaxies are identified using the SUBFIND algorithm (Springel et al., 2001). The IllustrisTNG model incorporates gas radiative processes including metal-line cooling, evolution of stellar populations and chemical enrichment, and feedback from supernovae and black holes that drives galactic outflows and winds. Since molecular clouds cannot be resolved in the simulation, star formation is modelled stochastically for gas with \(n_{\rm H}>0.106\,{\rm cm}^{-3}\) according to the two-phase model of Springel & Hernquist (2003). The ISM above this density threshold is emulated as cold, star-forming clouds with \(T=1000\,{\rm K}\) embedded in hot, ionized gas. This model prescribes an effective equation of state by calculating effective pressures and temperatures as averages over the cold clouds and the hot gas. 
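As a small illustration of the star-formation threshold quoted above (this sketch is not from the paper; the hydrogen mass fraction of 0.76 is an assumed, illustrative value), the effective equation-of-state model flags a gas cell as star-forming purely by its hydrogen number density:

```python
import numpy as np

PROTON_MASS_G = 1.6726e-24   # proton mass [g]
N_H_THRESHOLD = 0.106        # star-formation threshold [cm^-3] quoted in the text

def is_star_forming(density_cgs, x_hydrogen=0.76):
    """Flag gas cells above the Springel & Hernquist (2003) density threshold.

    density_cgs : gas mass density in g cm^-3
    x_hydrogen  : hydrogen mass fraction (illustrative value, not taken from the paper)
    """
    n_h = x_hydrogen * np.asarray(density_cgs) / PROTON_MASS_G   # hydrogen number density [cm^-3]
    return n_h > N_H_THRESHOLD
```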
As we are interested in the cold gas properties of local galaxies we select all subhalos in TNG50 at \(z=0\) with a gas mass larger than \(10^{7}\,M_{\odot}\), such that the galaxies are resolved by at least \(\approx 100\) gas cells. Furthermore, the SKIRT calculation requires star particles to estimate the radiation field. We choose a minimum total stellar mass of \(10^{7}\,M_{\odot}\) such that galaxies are resolved by at least \(\approx 100\) stellar particles. These criteria lead to a sample of \(12\hbox{$.\!\!^{\prime}$}431\) galaxies, comprising both centrals and satellites. This provides a broad base sample which contains the vast majority of galaxies detectable in 21-cm observations. To facilitate the comparison between the different simulation runs we apply the same galaxy sample criteria (\(M_{\rm gas}>10^{7}\,{\rm M}_{\odot}\), \(M_{\star}>10^{7}\,{\rm M}_{\odot}\)) to TNG50-2 and TNG100 as well (the sizes of the galaxy samples are given in Table 1). For all calculations in this study we consider the subhalos in the simulation individually, meaning that when modelling the H i/H\({}_{2}\) content of a galaxy we only select gas cells and star particles that are bound to this specific galaxy as identified by the SUBFIND algorithm. Villaescusa-Navarro et al. (2018) find that for TNG100 at \(z=0\), \(\approx 98\,\%\) of the H i gas mass is bound to subhalos (for H\({}_{2}\) we expect an even higher fraction), hence we do not miss substantial amounts of H i/H\({}_{2}\) gas with our methodology. ### UV field estimates Radiation in the Lyman-Werner band (\(912-1108\) A) can photodissociate molecular hydrogen. Consequently, the radiation field strength at \(1000\,{\rm\AA}\) (generally used as a proxy for the radiation of the entire Lyman-Werner band) is a key parameter for many partitioning schemes. We use three different methods to calculate the \(U_{\rm MW}\) parameter, which is the UV field at \(1000\,{\rm\AA}\) normalized to the average Milky Way value of \(3.43\times 10^{-8}\,{\rm photons\,s^{-1}cm^{-2}Hz^{-1}}\)(Draine, 1978). All methods rely on estimating the flux from stars and/or star-forming regions. For all gas cells we set a floor on \(U_{\rm MW}\) of \(0.00137\) which corresponds to the homogeneous UV background (UVB) of Faucher-Giguere et al. (2009) (in the updated 2011 version) at \(z=0\) (the same UVB model is implemented in IllustrisTNG for the calculation of the ionization state of the gas). Dense gas could be self-shielded against the UVB which would lower the floor value of \(U_{\rm MW}\). However, such gas cells are typically near high star-formation areas so that the local UV radiation surpasses the UVB, rendering the actual floor value irrelevant. We consider three different schemes to estimate the UV field, summarized in Table 2. The simplest method to estimate the UV field follows Lagos et al. (2015). The 'Lagos' approximation is based on the insight that the largest fraction of the UV flux typically comes from very young stars. \(U_{\rm MW}\) is calculated by scaling the local star formation rate surface density by the typical Milky Way value: \[U_{\rm MW}=\frac{\rm SFR\cdot\rho/m\cdot\lambda_{\rm J}}{10^{-9}\,M_{\odot}\, {\rm pc}^{-2}}, \tag{1}\] where \(\rho\) is the total gas mass density, \(m\) the gas cell mass, SFR its star-formation rate, and \(\lambda_{\rm J}\) the Jeans length which approximates the size of a self-gravitating gas cloud (Schaye, 2001; Schaye & Dalla, 2008): \[\lambda_{\rm J}=\sqrt{\frac{\gamma(\gamma-1)u}{G\rho}}. 
\tag{2}\] In this equation, \(\gamma=5/3\) is the ratio of specific heat capacities, \(u\) the internal energy per unit mass, and \(G\) the gravitational constant. Due to temporal and mass resolution limits in the simulation, it is impossible to resolve arbitrarily small star-formation rates for the gas cells. This leads to a minimum value for \(U_{\rm MW}\) in the Lagos approach for the star-forming gas cells on the order of unity, while all gas cells with zero star-formation rate have \(U_{\rm MW}=0.00137\) (the UV background). Hence, this UV field approximation creates a substantial, unphysical dichotomy in the UV field distribution of the gas cells. Diemer et al. (2018) introduce an improved UV field calculation to overcome this limitation. In the 'Diemer' approximation, star-forming gas cells are assigned a UV flux based on a Starburst99 (Leitherer et al., 1999) calculation of a continuously forming population of stars following a Kroupa IMF (Kroupa, 2001). This UV flux is scaled to the star-formation rate of the gas cell: the flux of a \(1\,M_{\odot}\,{\rm yr}^{-1}\) cell at a distance of \(1\,{\rm kpc}\) is \(3.3\times 10^{-6}\,{\rm photons\,s^{-1}cm^{-2}Hz^{-1}}\), corresponding to \(U_{\rm MW}=96.2\). It is assumed that a certain fraction of this radiation is absorbed by dust within the star-forming region and the remaining fraction propagates through a transparent medium. Diemer et al. (2018) calibrated this escape fraction to \(10\,\%\) based on the SFR-UV relation in the solar neighbourhood. The propagation of the UV flux is calculated via a Fourier transform on a regular cubic grid. We refer the reader to Appendix A of Diemer et al. (2018) for details. While the 'Diemer' method models in-situ absorption of UV flux generated from star-forming regions, it does not capture the significant absorption by diffuse dust in the ISM. Furthermore, older stellar populations can also contribute significantly to the galactic UV flux (Viaene et al., 2016; Bianchi et al., 2018; Nersesian et al., 2019).
\begin{table} \begin{tabular}{c c c c} Simulation & \(V_{\rm sim}\) [\(\rm cMpc^{3}\)] & \(m_{\rm baryon}\) [\(10^{4}\,{\rm M}_{\odot}\)] & \(N_{\rm gal}\) \\ \hline TNG50-1 & \(51.7^{3}\) & 8.5 & \(12\,431\) \\ TNG50-2 & \(51.7^{3}\) & 68 & \(9\,044\) \\ TNG100-1 & \(106.5^{3}\) & 140 & \(74\,460\) \\ \end{tabular} \end{table} Table 1: Runs of the IllustrisTNG suite that we consider in this study. For each simulation, we list the volume, the target baryon mass (the resolution), and the number of galaxies that conform to our sample selection criteria.
\begin{table} \begin{tabular}{c c c} UV field scheme & UV source & Radiation distribution \\ \hline Lagos\({}^{1}\) & Star-forming gas cells & None \\ Diemer\({}^{2}\) & Star-forming gas cells & Optically thin \\ SKIRT\({}^{3}\) & Star particles & Dust radiative transfer \\ \end{tabular} \end{table} Table 2: Overview of the different UV field schemes considered in this work.
To generate realistic UV fields taking the complex galactic star-dust geometries into account, 3D radiative transfer modelling is required. Here we use the Monte Carlo dust radiative transfer code SKIRT 9 (Camps & Baes, 2015; Camps & Baes, 2020). We run SKIRT in the oligochromatic (i.e. single wavelength) mode at 1000 Å including dust attenuation.
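Before describing the SKIRT setup in detail, the simplest of these estimates, the 'Lagos' scaling of Eqs. (1) and (2), can be written down as a short sketch. This is an illustrative implementation, not the code used in the paper: units are handled explicitly, and the \(10^{-9}\,\rm M_{\odot}\,yr^{-1}\,pc^{-2}\) Milky Way normalisation and the \(U_{\rm MW}=0.00137\) background floor are the values quoted above.

```python
import numpy as np

G_CGS = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
PC_CM = 3.0857e18       # parsec [cm]
MSUN_G = 1.989e33       # solar mass [g]
U_MW_FLOOR = 0.00137    # homogeneous UV background at z = 0
SIGMA_SFR_MW = 1.0e-9   # Milky Way normalisation [Msun yr^-1 pc^-2]

def jeans_length_cm(u_cgs, rho_cgs, gamma=5.0 / 3.0):
    """Jeans length of Eq. (2) for internal energy u [erg g^-1] and density rho [g cm^-3]."""
    return np.sqrt(gamma * (gamma - 1.0) * u_cgs / (G_CGS * rho_cgs))

def u_mw_lagos(sfr_msun_yr, cell_mass_msun, u_cgs, rho_cgs):
    """Lagos-style U_MW of Eq. (1), floored at the uniform UV background."""
    lambda_j_pc = jeans_length_cm(u_cgs, rho_cgs) / PC_CM        # Jeans length [pc]
    rho_msun_pc3 = rho_cgs / MSUN_G * PC_CM**3                   # gas density [Msun pc^-3]
    # SFR surface density: SFR * (rho / m) * lambda_J  [Msun yr^-1 pc^-2]
    sigma_sfr = sfr_msun_yr * rho_msun_pc3 / cell_mass_msun * lambda_j_pc
    return np.maximum(sigma_sfr / SIGMA_SFR_MW, U_MW_FLOOR)
```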
A SKIRT simulation requires defining a box that sets the domain for the radiative transfer calculation. We use a cube with side length \(2\cdot r_{\rm A}\)1, with the aperture radius \(r_{\rm A}\) chosen such that a sphere with this radius contains at least 99.9 % of the neutral hydrogen gas mass bound to a specific subhalo (see Section 2.3 for the calculation of the neutral hydrogen fraction). We choose the different components in a SKIRT simulation following Kapoor et al. (2021); Trcka et al. (2022) as follows: Footnote 1: The same cube size is used to compute the Diemer UV field. * Evolved stellar populations: We select star particles bound to the subhalo within the spherical aperture to model the UV emission. Star particles with ages above 10 Myr are treated as evolved stellar populations and modelled with a Bruzual-Charlot spectral energy distribution (Bruzual & Charlot, 2003) with a Chabrier initial mass function (Chabrier, 2003). We have verified that the choice of an alternative stellar population library (BPASS, Eldridge et al., 2017; Stanway & Eldridge, 2018) does not affect our results. * Star-forming regions: Bound star particles within the spherical aperture and with ages below 10 Myr are treated as star-forming regions and modelled with Mappings-III templates (Groves et al., 2008). These templates contain the emission of the young stellar population and its subsequent attenuation by the surrounding dusty birth cloud. We refer to Trcka et al. (2022) for the determination of the various parameters required for the Mappings-III templates such as the compactness of the star-forming region or the ISM pressure. * Diffuse dust: To track the radiation field and its attenuation by diffuse dust we select all gas cells bound to the subhalo within a cube of side length \(2\cdot r_{\rm A}\). We allocate dust only in relatively dense and cold gas cells that are in line with the criterion from Torrey et al. (2012). We assume that 40 % of the metallic mass in these gas cells exists as dust (Liang et al., 2023 and references therein), which we model with the THEMIS dust mix (Jones et al., 2017). The spatial discretization within SKIRT uses the imported Voronoi cell centers of the gas cells (Camps et al., 2013). This allows SKIRT to reconstruct the same Voronoi mesh that was used in IllustrisTNG. Tracking the photon packages through a Voronoi mesh is computationally more expensive than through an octree grid, but the advantage of importing a grid instead of calculating one outweighs the additional cost here (see also Camps & Baes, 2015). We find converged results when running SKIRT with \(10^{5}\) photon packages, which takes two minutes for a large galaxy with \(M_{\star}\approx 10^{11}\,\rm M_{\odot}\) and eight seconds for a smaller galaxy with \(M_{\star}\approx 10^{8}\,\rm M_{\odot}\) on a 16-core machine. With this 'SKIRT' method we can calculate realistic 3D UV fields for IllustrisTNG galaxies taking dust attenuation into account.
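For orientation, the bookkeeping that prepares these SKIRT inputs can be sketched as below. This is not the SKIRT configuration itself: the Torrey et al. (2012) density-temperature criterion is represented by a pre-computed boolean mask, while the 10 Myr age split and the 40 % dust-to-metal ratio are the values quoted above.

```python
import numpy as np

AGE_SPLIT_MYR = 10.0   # star particles younger than this become MAPPINGS-III star-forming regions
DUST_TO_METAL = 0.4    # fraction of the metal mass assumed to be locked in dust

def split_stellar_sources(star_age_myr):
    """Return boolean masks (star-forming regions, evolved populations) for the SKIRT sources."""
    young = np.asarray(star_age_myr) < AGE_SPLIT_MYR
    return young, ~young

def diffuse_dust_mass(gas_mass, metal_mass_fraction, dust_eligible):
    """Dust mass per gas cell; `dust_eligible` stands in for the Torrey et al. (2012) cut."""
    metal_mass = np.asarray(metal_mass_fraction) * np.asarray(gas_mass)
    return np.where(dust_eligible, DUST_TO_METAL * metal_mass, 0.0)
```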
The largest modelling uncertainty in this approach lies in the modelling of the star-forming regions, since they require estimates of multiple parameters that are not directly available from the output of the cosmological simulation (see also Kapoor et al. (2021); Trcka et al. 2022). We also tested an alternative template library for star-forming regions (described in Kapoor et al. in prep.) which requires parameters that are more readily estimated from the TNG50 output. We found that using this alternative template library typically yields integrated H i masses that agree with the MAPPINGS-III templates within one percent. Hence, we consider the MAPPINGS-III templates to be sufficiently accurate for this work. For the UV modelling of TNG50-2/TNG100 galaxies, we note that these lower-resolution simulations can have very few star particles with our sample selection criteria (down to seven star particles for TNG100). For these barely resolved objects SKIRT cannot reproduce a meaningful UV field. Hence we use the Diemer UV field as replacement for the SKIRT UV field for galaxies with stellar masses below \(10^{8}\,\rm M_{\odot}\) for TNG50-2 and TNG100. ### Atomic and molecular hydrogen fractions The \(z=0\) snapshot data of IllustrisTNG contain the total and neutral hydrogen fraction in gas cells. Ideally, the neutral hydrogen fraction in the simulation is determined using on-the-fly ionizing radiative transfer calculations. For IllustrisTNG, the hydrogen ionization is approximated using the ionizing background (Faucher-Giguere et al., 2009) and gas self-shielding with the fitting formula from Rahmati et al. (2013). Note that the neutral fractions for star-forming gas cells are for the effective gas temperature and must be recomputed. Hence, we use the intrinsic IllustrisTNG value for the neutral hydrogen fraction only for non star-forming cells. For star-forming cells, we follow Diemer et al. (2018)2 and calculate the mass fraction of cold gas in the two-phase model of Springel & Hernquist (2003). This cold gas at a fixed \(T=1000\,\rm K\) is assumed to be fully neutral while the hot phase is fully ionized. Footnote 2: See Eaps. Al-A6 in Stevens et al. (2019). The breakdown of neutral hydrogen gas into its atomic and molecular constituents requires the use of partitioning schemes. A variety of partitioning schemes exist and have been used to postprocess cosmological simulations (see Diemer et al., 2018 for a detailed comparison between the different partitioning schemes and how that influences the H i and H\({}_{2}\) properties of IllustrisTNG galaxies). Empirical partitioning schemes based on observations of local galaxies (Blitz & Rosolowsky, 2006; Leroy et al., 2008) find that the ratio between molecular and atomic surface densities scales with the hydrostatic midplane pressure as a power law. As shown by Diemer et al. (2018), estimates of the midplane pressure are only physically reasonable when computed from a projection (in face-on orientation). The hydrogen partitioning then happens on this 2D grid on a pixel-by-pixel basis, hence 3D information is lost and it is only possible to create H i maps in face-on orientation (it is not possible to rotate the galaxies back into a random orientation). Since we consider H i maps in random orientation (as the WHISP data to which we compare the simulated H i maps consist of randomly oriented H i maps), we omit the empirical partitioning schemes completely from this study. Analytical studies of Krumholz et al. 
(2009)3(KMT09), Krumholz (2013)(K13), and Sternberg et al. (2014)(S14) consider idealized gas geometries with H\({}_{2}\) formation-dissociation balance to obtain the atomic and molecular hydrogen fractions. The main input parameters for these partitioning schemes are the surface density of neutral hydrogen, the dust-to-gas ratio, and (only for K13)4 the UV field strength at 1000 A. Lastly, Gnedin & Kravtsov (2011)(GK11) and Gnedin & Draine (2014) (GD14) use high-resolution (mass resolution up to \(10^{3}\,M_{\odot}\)) ISM simulations of isolated galaxies, coupled to a chemical network including H\({}_{2}\) formation on dust grains and photodissociation from Lyman-Werner photons, to partition the hydrogen based on the same three main parameters as the analytical partitioning scheme of K13. More specifically, the GK11 and GD14 approach consists of two steps: The first step consists of a cosmological simulation that is ran until \(z=4\). This simulation extends over five virial radii of a system that would evolve to a typical \(M\approx 10^{12}\,\mathrm{M}_{\odot}\) system. In a second step, the UV radiation field at \(1000\,\mathrm{\AA}\) and the dust-to-gas ratio are fixed to a grid of different values and the simulation is continued for \(600\,\mathrm{Myr}\). This setup allows for fresh gas infall into the galaxy, but cannot model effects from the larger environment such as mergers or filamentary accretion streams. In this study we use the simulation-calibrated GD14 recipe as our default partitioning scheme as it is an update of the GK11 recipe (adding self-shielding of H\({}_{2}\) from line overlap) and has the UV field as an input parameter. Furthermore, it is more directly applicable to cosmological simulations as it is a hydrodynamical simulation that models the various ISM phases and outputs the H i and H\({}_{2}\) fractions averaged over larger patches of the galaxy, while the analytical partitioning schemes consider simpler geometries such as a spherical cloud of gas. For our results we display the default partitioning scheme and treat the other UV-dependent partitioning schemes (GK11, K13) as uncertainties, shown as shaded areas. For some results we also consider the UV-independent KMT09 and S14 models which are plotted separately. The application of all of these partitioning schemes to IllustrisTNG is described in detail in Appendix C of Diemer et al. (2018), except for KMT09 which we present in Appendix A1. The analytical and simulation-calibrated partitioning schemes used here have up to three main input parameters; the surface density of neutral hydrogen (\(\Sigma_{\mathrm{H\,+H_{2}}}\)), the dust-to-gas ratio relative to the average Milky Way value (\(D_{\mathrm{MW}}\)), and the UV field strength at \(1000\,\mathrm{\AA}\) relative to the average Milky Way value (\(U_{\mathrm{MW}}\)). Our estimates for \(U_{\mathrm{MW}}\) are described in Section 2.2. As in previous studies (e.g. Lagos et al., 2015), we assume that the dust-to-gas ratio scales with metallicity (as found observationally, e.g. Remy-Ruyer et al., 2014; De Vis et al., 2017; De Vis et al., 2019) such that \(D_{\mathrm{MW}}=Z/Z_{\odot}\) with \(Z\) the metal mass fraction and \(Z_{\odot}=0.0127\) the solar metallicity used in IllustrisTNG (Asplund et al., 2009). To convert the mass density of neutral hydrogen (\(\rho_{\mathrm{H\,+H_{2}}}\), calculated using the mass fraction of neutral hydrogen) into a surface density a certain length scale is needed. 
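The dust-to-gas input, by contrast, requires no length scale; a minimal sketch (assuming the metal mass fraction of each cell is available) is:

```python
Z_SUN_TNG = 0.0127   # solar metal mass fraction adopted in IllustrisTNG (Asplund et al. 2009)

def dust_to_gas_ratio(metal_mass_fraction):
    """D_MW: dust-to-gas ratio relative to the Milky Way, assumed to scale linearly with metallicity."""
    return metal_mass_fraction / Z_SUN_TNG
```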
Two strategies to estimate surface densities from the simulation output have emerged: Volumetric modelling using the Jeans approximation and projected modelling using line-of-sight integration (Diemer et al., 2018). In the volumetric modelling approach, the surface density of neutral gas is obtained using the Jeans length \(\lambda_{\mathrm{J}}\): \[\Sigma_{\mathrm{H\,+H_{2}}}=\rho_{\mathrm{H\,+H_{2}}}\cdot\lambda_{\mathrm{J}} =\rho_{\mathrm{H\,+H_{2}}}\cdot\sqrt{\frac{\gamma(\gamma-1)\,u}{G\rho}}. \tag{3}\] In theory, the value of \(\gamma\) changes depending on the molecular fraction. Since the Jeans approximation by itself is already a crude approach we refrain from iterating the partitioning scheme to reach a self-consistent value of \(\gamma\) as in Stevens et al. (2019). In the projected modelling approach the surface density is estimated by rotating the galaxies into face-on position and projecting the gas densities onto a 2D grid. The hydrogen partitioning is then performed on this 2D grid, making it impossible to rotate the galaxy back into random orientation as the 3D information is lost. As we consider H i maps in random orientation, we stick to the volumetric approach in this study and discuss projected modelling in the context of the different UV field estimates in Appendix B. We remark that the Jeans approximation to calculate surface densities breaks down in some regimes. However, the Jeans approximation gives the same H i/H\({}_{2}\) results on average as the projected modelling approach (Diemer et al., 2018). The main output of our H i/H\({}_{2}\) modelling consists of \(U_{\mathrm{MW}}\) parameters for the SKIRT, Diemer and Lagos method as well as 11 different molecular mass fractions \(f_{\mathrm{mol}}\)5 (three different UV fields for GK11, GD14 and K13, plus the UV-independent schemes of KMT09 and S14). We store these values for all gas cells within the spherical aperture6, for all 12'431 galaxies within our TNG50 base sample. We emphasize that our default H i/H\({}_{2}\)-model consists of the GD14 partitioning scheme with the SKIRT UV field. Footnote 5: We use \(f_{\mathrm{mol}}\) to denote the mass ratio of molecular to neutral hydrogen. Footnote 6: Technically, only the SKIRT UV field calculation requires an aperture. To simplify the comparison between the different models we employ the same spherical aperture to store \(U_{\mathrm{MW}}\) and \(f_{\mathrm{mol}}\) for all our models. This aperture is defined as the radius that contains \(99.9\,\mathrm{\char 37}\) of the neutral hydrogen gas mass bound to the subhalo. ### Observational data: The WHISP survey We use observational data from the WHISP survey to compare to simulated H i maps from IllustrisTNG. The WHISP survey targets the 21-cm line of some 300 local galaxies across the Hubble sequence (van der Hulst et al., 2001), selected by their 21-cm flux density and their stellar disc size. This interferometric survey at the Westerbork Synthesis Radio Telescope (WXRT) provides spatially resolved 21-cm line profiles such that H i surface density maps and kinematics can be extracted. In this study we only focus on the H i surface density maps. In the WHISP survey, each object is observed in three different resolutions, with an average FWHM of the beam of 14, 30, and 60 arcsec, respectively. 
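Returning briefly to the hydrogen partitioning of the previous subsection before discussing the observational data further: the volumetric route can be summarised in the sketch below. The partitioning fit itself (GK11, GD14, or K13) is represented by a generic callable and is not reproduced here; the unit constants are illustrative.

```python
import numpy as np

G_CGS = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
PC_CM = 3.0857e18    # parsec [cm]
MSUN_G = 1.989e33    # solar mass [g]

def sigma_neutral_jeans(rho_neutral_cgs, rho_total_cgs, u_cgs, gamma=5.0 / 3.0):
    """Eq. (3): neutral-hydrogen surface density from the Jeans-length approximation.

    Densities in g cm^-3, internal energy in erg g^-1; returns Msun pc^-2.
    """
    lambda_j = np.sqrt(gamma * (gamma - 1.0) * u_cgs / (G_CGS * rho_total_cgs))   # [cm]
    return rho_neutral_cgs * lambda_j / MSUN_G * PC_CM**2

def molecular_fraction(sigma_neutral, d_mw, u_mw, partition_fit):
    """Evaluate a partitioning scheme cell by cell.

    `partition_fit` stands in for any of the GK11/GD14/K13 fitting functions and must
    return f_mol = M_H2 / (M_HI + M_H2) given (Sigma_HI+H2, D_MW, U_MW).
    """
    return np.clip(partition_fit(sigma_neutral, d_mw, u_mw), 0.0, 1.0)
```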
The sensitivity of the observations increased from a typical root-mean square error of 3 mJy/beam to 0.8 mJy/beam when the instrument was upgraded, leading to an average surface density sensitivity of \(0.18\,\mathrm{M}_{\odot}\mathrm{pc}^{-2}\) for the medium-resolution maps (Zwaan et al., 2005). The observational characteristics that are relevant for this work are summarized in Table 3. The field-of-view (FOV) of the WHISP observations is typically 0.25 square degrees, centered on the object of interest. A substantial amount of maps contain other galaxies, galaxy interactions or mergers. While a study of these objects would be interesting on its own (see Holwerda et al., 2011; Holwerda et al., 2011), we choose to avoid this additional complexity for the present study and analyze galaxies in IllustrisTNG one-by-one (i.e. processing each subhalo individually, see Section 2.1). Hence, we want to exclude observations that feature ongoing interactions or have other galaxies in their FOV. Recently, Nalumina et al. (2021) compiled a catalogue of 228 \begin{table} \begin{tabular}{c c c c} Dataset & Beam[arcsec] & Pixel size [arcsec] & Noise[\(\,\mathrm{M}_{\odot}\mathrm{pc}^{-2}\)] \\ \hline High-res. & 14 & 5 & 0.61 \\ Medium-res. & 30 & 10 & 0.18 \\ Low-res. & 60 & 20 & 0.061 \\ \end{tabular} \end{table} Table 3: Main characteristics of the observational WHISP data. The angular extent of the beam is listed as its FWHM. The noise corresponds to 1-\(\sigma\) noise in the H i surface density map. The beam FWHM and noise levels are average values for the entire WHISP sample, taken from Zwaan et al. (2005). galaxies observed within WHISP, containing only isolated objects (in their FOV, this does not refer to the environment in general) with good data quality. This sample is representative of the full WHISP sample in terms of galactic morphology. Naluminsa et al. (2021) also supplemented the WHISP data with infrared data from the WISE survey, enabling estimates of the stellar masses and star-formation rates. This is very advantageous for our purpose as we can select WHISP-like galaxies in IllustrisTNG based on their H i mass, stellar mass, and star-formation rate. Hence, we use the WHISP sample defined in Naluminsa et al. (2021) for this study. Additionally, we remove two objects (UGC4483 and UGC9128) as their stellar masses are below our IllustrisTNG selection threshold of \(10^{7}\,\mathrm{M}_{\odot}\), ending up with 226 galaxies in the observational galaxy sample. We retrieve the moment-zero H i maps for these 226 galaxies in all three available resolutions from the online Westerbork database7. The moment-zero maps, \(\sum_{\nu}S_{\nu}\), are stored in mJy/beam (\(S_{\nu}\) denotes the flux in a specific channel). We convert them to surface density maps assuming optically thin 21-cm emission (e.g. Naluminsa et al., 2021): Footnote 7: [http://www.astron.nl/](http://www.astron.nl/) \[\frac{\Sigma_{\mathrm{H}_{1}}}{\mathrm{M}_{\odot}\mathrm{pc}^{-2}}=8.85\frac{ \sum_{\nu}S_{\nu}}{\mathrm{mJy/beam}}\frac{\Delta v}{\mathrm{km\,s}^{-1}} \Big{(}\frac{\Delta\alpha\cdot\Delta\delta}{\arccos^{2}}\Big{)}^{-1}, \tag{4}\] where \(\Delta v\) is the velocity channel width and \(\Delta\alpha\) (\(\Delta\delta\)) denote the FWHM of the major (minor) axes of the beam. 
We remark that for 47 WHISP galaxies the beam FWHM information was missing, for these galaxies we used the information from the online database for the highest-resolution observations and average values of \(30\times 30\) (\(60\times 60\)) arcsec\({}^{2}\) for the medium (low) resolution observations, respectively. To select WHISP-like galaxies from IllustrisTNG, we need some additional parameters to describe the WHISP galaxies like the H i mass, stellar mass, and star-formation rate. Calculating these quantities requires a distance estimate. While Naluminsa et al. (2021) added distance estimates from the NASA Extragalactic Database (NED8) to their galaxy catalogue, a substantial fraction is derived from redshifts, which can be unreliable for the local WHISP galaxies with a median distance of \(\approx 20\,\mathrm{Mpc}\)(Zwaan et al., 2005). Hence, we opt to compile our own distances from NED, using the most recent redshift-independent distance estimate for each galaxy. If such estimates are not available we use the distance estimated from the redshift, corrected for the influence of the Virgo cluster, the Great Attractor, and the Shapley Supercluster. Footnote 8: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/) For each WHISP galaxy we compute the H i mass from the distance estimate and the H i surface density map (Eqn. 4). We use the medium-resolution maps of the WHISP galaxies to compute their H i masses, we note that the map resolution hardly affects the H i masses. For the stellar masses and star-formation rates, we exactly follow the steps outlined in section 3 of Naluminsa et al. (2021) using their compilation of WISE fluxes but our own distance estimate compilation. ### Creation of HI maps Here we describe how we generate H i maps for the simulation and observational galaxy samples. Part of the analysis of this paper compares different simulation postprocessing methods against each other. For these parts, we use a fairly basic algorithm to create 'plain' H i maps for IllustrisTNG galaxies, described in Section 2.5.1. When we compare IllustrisTNG against observational data, we use a more advanced algorithm to generate'mock' H i maps (described in Section 2.5.2) with the aim of emulating the most important steps of the observations. Lastly, we also make some minor adjustments to the observational WHISP H i maps, described in Section 2.5.3. The main steps of the algorithm to create WHISP and mock IllustrisTNG H i maps are visualized in Figure 1. #### 2.5.1 IllustrisTNG: Plain maps (i) Orientation: When creating plain maps in random orientation, we leave the simulated galaxy in its intrinsic orientation. For plain maps in face-on orientation, we rotate the galaxies such that its angular momentum vector aligns with the \(z\)-axis of the simulation box. Following Diemer et al. (2018) we calculate the galactic angular momentum using all gas cells within the 3D gas half-mass radius. If there are fewer than 50 gas cells within this radius we use all star particles within two stellar half-mass radii instead. (ii) Projection: We always project the gas cells along the \(z\)-axis of the simulation box. We employ a consistent projection algorithm for these tasks, specifically we use the method that was introduced by Diemer et al. (2018) which we briefly summarize here. Since the moving-mesh cells of IllustrisTNG have complex shapes we apply an adaptive Gaussian smoothing kernel to the gas cells. For the smoothing kernel, we use a width of \(\sigma=0.5(m/\rho)^{1/3}\). 
For the calculation of the H i surface density, the algorithm simply sums up the H i masses of all smoothed gas cells in each pixel and divides by the pixel area. The map size is set to lie within the spherical aperture such that the map side length is \(\sqrt{2}\cdot r_{\mathrm{A}}\). We resolve the plain maps by a fixed number of pixels (\(128\times 128\)). In some cases we are also interested in maps of other quantities than H i surface density. We employ the same projection algorithm in these cases. For quantities that are averaged (instead of summed) within each pixel (e.g. \(U_{\mathrm{MW}}\), \(D_{\mathrm{MW}}\)) we weigh each gas cell by its total mass unless otherwise noted. #### 2.5.2 IllustrisTNG: Mock maps A fair comparison between observations and cosmological simulations needs the two galaxy samples to be broadly comparable. Furthermore, it requires some sort of mock-observation routine which mimics the observational procedure. We do not attempt to fully emulate the interferometric observations of WHISP but include the three most important effects: Mimicking the angular pixel sizes of the observations, smoothing with the beam of the instrument, and adding noise. This is a similar strategy as is applied in the MARTINI tool to create mock H i datacubes from cosmological simulations (Oman et al., 2019), although there is no noise incorporated in MARTINI. (i) Selection: For each WHISP galaxy, we select IllustrisTNG galaxies with similar stellar masses (\(\pm 0.2\,\mathrm{dex}\)) and star-formation rates (\(\pm 0.3\,\mathrm{dex}\)). The tolerances for these quantities correspond to the 2-\(\sigma\) error bars adopted by Naluminsa et al. (2021). For \(\approx 20\,\%\) of the WHISP galaxies in the Naluminsa et al. 2021 sample, no SFR estimate is available because they were not detected in the WISE W3 band. For these galaxies we only used the stellar mass criterion. In a second step, we then select the five IllustrisTNG galaxies that are closest to the WHISP galaxy in terms of \(\log_{10}M_{\mathrm{HI}}\) out of the galaxies that are similar in terms of stellar mass and star-formation rate. The factor of five is chosen to increase the statistics of the simulated galaxy sample while keeping the H i mass distribution similar. As we find at least five similar IllustrisTNG galaxies (in terms of stellar mass and SFR) for each WHISP galaxy (typically dozens of IllustrisTNG galaxies conform to the stellar mass-SFR selection criteria), the'mock' TNG50 sample consists of \(5\cdot 226=1130\) galaxies. We remark that \(\approx 20\) % of this sample is just selected by \(M_{\bullet}\) and \(M_{\rm H\,I}\), due to some WHISP galaxies being undetected in the WISE W3 band. We have verified that these TNG50 galaxies typically have low star-formation rates, as expected from their observational counterparts for which we only have an upper bound on the SFR. (ii) Orientation: We match the inclinations of the IllustrisTNG galaxies to their corresponding WHISP galaxies, by rotating IllustrisTNG galaxies such that the angle between the \(z\)-axis of the simulation box and the angular momentum vector matches the WHISP galaxy inclination. We retrieve the inclinations of the WHISP galaxies from the galaxy catalogue of Naluminsa et al. (2021). (iii) Projection: We put the IllustrisTNG galaxies at the distances of their corresponding WHISP galaxies. The galaxies are then projected along the \(z\)-axis of the simulation onto a 2D grid. 
Each map is created in three different resolutions as in the observational data; the angular pixel sizes (5, 10, and 20 arcsec, respectively) are chosen to match the WHISP data. We use the same map size as in Section 2.5.1 of \(\sqrt{2}\cdot r_{\rm A}\). (iv) Smoothing: To emulate the effect of the WHISP beam we then convolve these H i maps with a circular 2D Gaussian kernel, using the average WHISP beam FWHM (14, 30, and 60 arcseconds, respectively) for the three different resolutions. (v) Noise: After beam convolution, we add Gaussian noise to the H i maps. We use the average 1-\(\sigma\) WHISP noise levels (see Table 3), which vary depending on the map resolution. (vi) Segmentation: For the calculation of the morphological parameters and the CDDF, the main object needs to be identified and separated from the background in the H i map. While we always project only one subhalo per IllustrisTNG H i map, it is possible that a subhalo consists of a neutral gas disk and additional disconnected neutral gas structures. For our analysis, we only want to consider the main object (in this case the neutral gas disk), and hence need to segment the H i map. We use the photutils9 Python package to find the object which belongs to the central pixel. If the central pixel is assigned to the background we instead use the largest object in the H i map. To make the segmentation map which cuts out the galaxy of interest more regular we smooth it with a uniform boxcar filter measuring five pixels in each dimension. By visual inspection, we find that this algorithm reliably identifies the galaxy in the H i map. Footnote 9: We use the photutils detect_sources function with the average 1-\(\sigma\) noise (see Table 3) as threshold and five pixels as the minimum number of connected pixels.
Figure 1: Visualization of various steps in our algorithms to create WHISP/mock IllustrisTNG H i surface density maps. The different rows correspond to the three different map resolutions. The physical scale is the same in all images and indicated by the black 15 kpc line. ’WHISP’: Example H i map from WHISP (UGC528), where we replaced the NaN background pixels with Gaussian noise (after step (i) in Section 2.5.3). ’WHISP Segment’: The same WHISP H i map, cutting out the object identified by the segmentation map (after step (ii) in Section 2.5.3). ’TNG50’: H i map for the TNG50 galaxy that matches UGC528 best according to our algorithm (subhalo ID: 632310). To create the map we put the galaxy at the same distance and inclination as UGC528 (i.e. this map is generated after step (iii) in Section 2.5.2). ’TNG50 Mock’: The same TNG50 H i map, after convolution with a WHISP-like beam and adding noise (after step (v) in Section 2.5.2). ’TNG50 Segment’: The same TNG50 H i map, cutting out the object identified by the segmentation map (after step (vi) in Section 2.5.2).
#### 2.5.3 WHISP: Observational maps The extraction of H i maps for the WHISP galaxies is described in Section 2.4. We perform two additional steps to generate the final WHISP H i maps that we analyze in this study. (i) Noise: The WHISP H i maps are inherently 3-\(\sigma\)-clipped, meaning that they do not contain any background (all background pixels are set to NaN values). This is problematic for the calculation of the morphological parameters, as some of the parameters need a background in the image in order to be accurately computed.
Hence, we replace all NaN pixels from the WHISP H i maps with Gaussian noise, using the average 1-\(\sigma\) WHISP noise levels depending on map resolution. (ii) Segmentation: We create segmentation maps for the WHISP galaxies using exactly the same algorithm as described in Section 2.5.2. Again, we find by visual inspection that this algorithm reliably identifies the main galaxy in the FOV of the observation.
Figure 2: Maps of the UV fields (upper panels), H i surface densities (middle panels), and molecular gas fractions (lower panels) for an example TNG50 galaxy at \(z=0\) (subhalo ID: 474008). The atomic/molecular mass fractions are calculated with our default partitioning scheme (GD14). The different columns correspond to three different schemes to calculate the \(U_{\rm MW}\) parameter. The black circles indicate the stellar (small, solid) and gas (large, dashed) half-mass radii (note that this includes all gas cells bound to the subhalo, not just H i / H\({}_{2}\)). We used the plain map algorithm (Section 2.5.1) with random orientation to generate all maps. For the \(f_{\rm mol}\) maps we weighed each gas cell by its neutral hydrogen mass.
## 3 UV field comparison We start our analysis by assessing the differences in the SKIRT, Diemer, and Lagos UV field estimates, and explore how this affects the H i and H\({}_{2}\) properties of the simulated galaxy population in TNG50. To qualitatively compare the different UV field estimates we show 2D projections of the \(U_{\rm MW}\) parameter (calculated using the plain map algorithm in random orientation, see Section 2.5.1) for an example TNG50 galaxy in the top panels of Figure 2. This galaxy has a stellar mass of \(9.3\times 10^{10}\,\rm M_{\odot}\), a high gas-to-stellar mass ratio of 2.1, and a star-formation rate of \(9.0\,\rm M_{\odot}yr^{-1}\). The 3D stellar and gas half-mass radii are shown as solid and dashed circles, respectively. The angular momentum vector of the galaxy is almost parallel to the \(z\)-axis of the simulation, hence the random projection in Figure 2 displays an almost face-on orientation of the galaxy. For the SKIRT UV field (top left panel), some clumpy structure is visible in the spiral arms, caused by the star-forming regions (young star particles with ages less than \(10\,\rm Myr\)) that strongly emit UV. In the outskirts (outside the gas half-mass radius), the attenuation and scattering by dust grains imprints some structure on the UV field. We remark that in the SKIRT UV field, radiation from the evolved stellar population is also modelled, unlike in the other UV field models. For the Diemer UV field (top center panel), radiation is emitted from all star-forming gas cells. These cells are more common but less luminous than the star-forming regions in the SKIRT recipe (star particles with ages below \(30\,\rm Myr\)), leading to a less clumpy UV field in the spiral arms for the Diemer UV field. Since the radiation is transported without considering absorption/scattering, the radiation field is very smooth in the galactic outskirts as well. We remark that there is a minor discontinuity in the Diemer UV field at the edges of a square enclosing the gas half-mass radius. This is due to numerical resolution issues in the calculation of the Diemer UV field as the algorithm splits the galaxy into a high-resolution region within the gas half-mass radius and a low-resolution region outside of it, allowing a significant speedup of the algorithm. We do not expect that this affects any of our results.
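The projected \(U_{\rm MW}\) maps discussed here can be approximated with a simple mass-weighted 2D histogram. The sketch below ignores the adaptive Gaussian smoothing of Section 2.5.1 and assumes the cell coordinates have already been centred on the galaxy; it is meant only to illustrate the weighting, not to reproduce Figure 2.

```python
import numpy as np

def weighted_projection(x_kpc, y_kpc, values, weights, half_size_kpc, npix=128):
    """Project a cell-level quantity (e.g. U_MW weighted by cell mass) onto a 2D grid."""
    bins = np.linspace(-half_size_kpc, half_size_kpc, npix + 1)
    w = np.asarray(weights)
    wsum, _, _ = np.histogram2d(x_kpc, y_kpc, bins=[bins, bins], weights=w)
    vsum, _, _ = np.histogram2d(x_kpc, y_kpc, bins=[bins, bins], weights=w * np.asarray(values))
    # weighted mean per pixel, zero where no cells fall into the pixel
    return np.where(wsum > 0, vsum / np.where(wsum > 0, wsum, 1.0), 0.0)
```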
Lastly, the Lagos UV field (top right panel) scales the UV flux to the star-formation rate of gas cells. Since there is no transport of radiation in this approach there is a strong bimodality between star-forming and quiescent gas cells. The gas cells without star formation only receive the homogeneous background UV field of \(U_{\rm MW}=0.00137\)(Faucher-Giguere et al., 2009), note that this floor is applied to all three UV field estimates. We also show maps of H i surface densities and molecular fractions calculated with the three different UV field estimates in Figure 2 (using our default partitioning scheme of GD14). Despite significant differences in the UV field, the \(\Sigma_{\rm H\,I}\) and \(f_{\rm mol}\) maps for this example galaxy appear almost identical for the SKIRT and Diemer UV field estimates. We remark that the neutral gas hole in the center of this galaxy is due to feedback from the supermassive black hole (see Section 4.3). In Section 3.2 and Section 3.1 we examine whether the different UV field estimates lead to significant statistical discrepancies for ensemble quantities of the entire galaxy population (within our base sample of \(12\,\aas@@fstack{\prime}\,431\) galaxies) in TNG50. ### Radial profiles To assess the impact of the UV field on the H i-H\({}_{2}\)-transition on the galaxy population in a more statistical fashion, we calculate 1D projected radial profiles of UV fields and molecular gas fractions for the TNG50 base sample in this section. Following Diemer et al. (2018), we only consider galaxies with \(M_{\rm H_{2}}>10^{8}\,\rm M_{\odot}\) (we use our default H i model here to calculate the mass for the galaxy selection) in this section as the \(f_{\rm mol}\) profiles of galaxies with less molecular gas are noisy. The exact value of this cutoff does not affect our conclusions. We show median \(U_{\rm MW}\) radial profiles in Figure 3. The interquartile ranges are shown as hatched areas. For the calculation of the radial profiles we first rotated each galaxy into a face-on orientation (see step (i) of Section 2.5.1) and considered 2D radial profiles (i.e. using the projected distance to the galaxy center, not the 3D distance). For each radial bin, we then compute the median of the \(U_{\rm MW}\) parameter of all gas cells within this bin (using the mean instead of the median would significantly bias \(U_{\rm MW}\) for the Lagos UV field, as star-forming gas cells receive UV fluxes that are 4-6 orders of magnitude larger than non star-forming cells). If there are no gas cells for a galaxy within a radial bin \(U_{\rm MW}\) is set to the uniform UV background. To compute a single average profile for the entire galaxy sample we stacked the galaxies by their gas half-mass radii and computed the median profile. At the center the Lagos UV field exceeds the other estimates by more than an order of magnitude due to the large unattenuated UV field in star-forming gas. As the Lagos UV field is scaled to the gas cell star-formation rate and most of the gas cells at larger radii do not form stars, the Lagos UV field quickly decreases to the uniform UV background at \(U_{\rm MW}=0.00137\). The Diemer UV field has a steeper UV profile than the SKIRT field, which is due to the centrally concentrated star-forming gas cells being the only UV sources in the Figure 3: Median radial profiles for the UV field parameter \(U_{\rm MW}\), considering only galaxies with \(M_{\rm H_{2}}>10^{8}\,\rm M_{\odot}\). 
The galaxies are projected onto a face-on orientation and stacked by their gas half-mass radius. The lines indicate the median UV profiles, hatched areas show the interquartile range. Diemer UV field. The SKIRT UV field is more extended as we treat all star particles as UV sources (see Table 2). We compare the different calculations for the H\({}_{2}\) mass fractions by plotting the median projected radial profiles of \(f_{\rm mol}\) in Figure 4. For each galaxy, we calculate its profile by computing the mean molecular fraction in each radial bin, similar to the calculation of the UV field profiles. If there are no gas cells for a galaxy within a radial bin, \(f_{\rm mol}\) is set to zero. We find that if we instead ignore these radial bins for the calculation of the median profiles \(f_{\rm mol}\) increases nonphysically for large radii. As for the UV field profiles, we stack the profiles of all galaxies by their half-mass radius and compute the median \(f_{\rm mol}\) profile of the galaxy population. Applying the mean instead of the median to the stacked profiles (as is done in Diemer et al., 2018) gives higher \(f_{\rm mol}\) at larger radii and reduces the scatter between the different models substantially. As the mean profiles are dominated by few galaxies that have high molecular fractions at larger radii and for consistency with Figure 3 we opt for median profiles in Figure 4. For the three UV models (SKIRT, Diemer, Lagos) we show the default hydrogen partitioning scheme (GD14) as solid lines and the spread between the UV-dependent schemes as shaded areas. We also show the UV-independent partitioning schemes (KMT09 and S14). The differences in the molecular fractions relative to the default H i model (GD14 with the SKIRT UV field) are shown in the bottom panels. The significantly lower Lagos UV field (except in the center) manifests in higher molecular fractions. For the SKIRT and Diemer \(f_{\rm mol}\) profiles, the spread due to the different partitioning schemes is much larger than the difference due to changing the UV field. The discrepancy in the \(f_{\rm mol}\) profiles comparing the different UV fields reaches its maximum at \(\approx 0.4\ R_{\rm gas}^{1/2}\). This is not particularly surprising because \(f_{\rm mol}\) is 20-40 % at this radius, so this is broadly the region at which the H i-H\({}_{2}\)-transition takes place. The H i-H\({}_{2}\)-transition occurs typically over a very narrow range in densities, and other parameters like \(U_{\rm MW}\) can slightly shift this density range (see e.g. Figure 1 of Gnedin & Kravtsov, 2011). This means that the gas density in this region is in a range where the partitioning schemes are particularly susceptible to changes in the UV field. Lastly, we remark that the default GD14 partitioning scheme minimizes the differences in the molecular fractions upon variation of the UV field. Hence, we expect more significant discrepancies in the H i/H\({}_{2}\) properties of galaxies between different UV field estimates when using other partitioning schemes (GK11 or K13). ### HI and H2 mass functions We consider H i/H\({}_{2}\) mass functions for our base sample in the upper panels of Figure 5. As in Figure 4 we display the UV-dependent partitioning schemes by showing the default partitioning scheme and spread between schemes for each UV field estimate individually (solid lines and shaded areas). Differences in the molecular fraction relative to the default model are shown in the bottom panel. 
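The stacking used for the profiles in Figures 3 and 4 (a per-galaxy statistic in projected radial bins scaled by the gas half-mass radius, followed by a median over the stacked galaxies) can be sketched as follows. This is a schematic version that operates on precomputed arrays of projected radii and cell values; the face-on rotation and the choice of statistic (median for \(U_{\rm MW}\), mean for \(f_{\rm mol}\)) are assumed to be handled by the caller.

```python
import numpy as np

def galaxy_profile(r_proj, values, r_half, r_edges, statistic=np.median,
                   fill_value=0.0):
    """Radial profile of `values` for one galaxy.

    r_proj : projected distances of the gas cells to the galaxy centre
    values : quantity to profile (e.g. U_MW or f_mol of each cell)
    r_half : gas half-mass radius used to rescale the radii
    r_edges: bin edges in units of the gas half-mass radius
    Empty bins are set to `fill_value` (the UV background for U_MW,
    zero for f_mol, as described in the text).
    """
    x = np.asarray(r_proj) / r_half
    values = np.asarray(values)
    profile = np.full(len(r_edges) - 1, fill_value, dtype=float)
    for i in range(len(r_edges) - 1):
        in_bin = (x >= r_edges[i]) & (x < r_edges[i + 1])
        if in_bin.any():
            profile[i] = statistic(values[in_bin])
    return profile

def stacked_median_profile(galaxies, r_edges, **kwargs):
    """Median profile and interquartile range over the per-galaxy profiles."""
    profiles = np.array([galaxy_profile(g["r"], g["val"], g["r_half"],
                                        r_edges, **kwargs) for g in galaxies])
    lo, med, hi = np.percentile(profiles, [25, 50, 75], axis=0)
    return med, (lo, hi)

# Toy usage with two fake 'galaxies'; fill_value is the uniform UV background.
rng = np.random.default_rng(0)
gals = [{"r": rng.uniform(0, 30, 5000), "val": rng.lognormal(0, 1, 5000),
         "r_half": 10.0} for _ in range(2)]
median, iqr = stacked_median_profile(gals, np.linspace(0, 3, 16),
                                     fill_value=0.00137)
```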
The H i mass function (HIMF) is very robust against variations of the partitioning schemes and UV field estimates, except for the high-mass end. H i masses calculated using the Lagos UV field are lower compared to the SKIRT and Diemer UV fields, with discrepancies in the HIMF up to 20 %. For the H\({}_{2}\) mass function (H2MF), discrepancies are similar but begin at much lower masses. The differences between the partitioning schemes are generally larger for H\({}_{2}\) than for H i. For both mass functions, the default GD14 partitioning scheme is the H i model with the least tension between the different UV models (as seen in Figure 4). Still, the shaded areas between the different UV models do not overlap for high H i or intermediate H\({}_{2}\) masses, indicating that the discrepancy between different UV models is robust to the choice of the partitioning scheme. For the UV-independent partitioning schemes, we remark that the S14 model is in good agreement while the KMT09 model significantly exceeds other HIMF model predctions at the high-mass end (and vice versa for the H2MF). We note that our H i mass functions are not directly comparable to the results of Diemer et al. (2019) due to different sample definitions: Diemer et al. (2019) consider all galaxies above a certain gas _or_ stellar mass threshold, while we need a minimal stellar mass for all galaxies for SKIRT to work properly. This means we miss galaxies with low stellar mass (\(M_{\star}<10^{7}\) M\({}_{\odot}\)) and high gas fractions when calculating the gas mass functions. Since the H i-to-stellar mass fraction in TNG50 at a stellar mass of \(\sim 10^{7}\) M\({}_{\odot}\) reaches values up to \(\sim 10\)(Diemer et al., 2019, figures including the TNG50 simulation can be found online10), the calculated H i mass function is complete only above an H i mass of \(\sim 10^{8}\) M\({}_{\odot}\). Indeed, we find that our HIMF agrees with the one from Diemer et al. (2019) above H i masses of \(2\times 10^{8}\) M\({}_{\odot}\). For molecular hydrogen the stellar mass cutoff is irrelevant: Because the H\({}_{2}\)-to-stellar mass fractions are always smaller than unity, a stellar mass cutoff at \(M_{\star}=10^{7}\) M\({}_{\odot}\) does not exclude any galaxies with \(M_{\rm H_{2}}>10^{7}\) M\({}_{\odot}\). Hence, our H\({}_{2}\) mass function calculated with the Diemer UV field matches the result from Diemer et al. (2019). Figure 4: Median radial profiles for the molecular hydrogen mass fraction \(f_{\rm mol}\). We only consider galaxies with \(M_{\rm H_{2}}>10^{8}\) M\({}_{\odot}\) here. The galaxies are projected onto a face-on orientation and stacked by their gas half-mass radius. Solid lines indicate the default partitioning scheme (GD14), the spreads between the UV-dependent partitioning schemes are visualized as shaded areas. The UV-independent models are shown as dashed lines (KMT09 and S14). The bottom panel shows the difference in the \(f_{\rm mol}\) profiles relative to the default model. For a clearer visualization we smoothed all curves with a Gaussian filter with \(\sigma=0.05\ R_{\rm gas}^{1/2}\). ## 4 Non-parametric morphologies of Hi maps To test our H i model and to evaluate the realism of the spatial distribution of H i gas within TNG50 galaxies, we compare simulated H i surface density maps to 21-cm maps from the observational WHISP dataset. We use the full mock map algorithm (Section 2.5.2) to generate a WHISP-like sample of 1130 TNG50 H i maps, to be compared to 226 WHISP H i maps. 
To quantitatively compare the H i maps of the two samples we consider non-parametric morphologies (concentration, asymmetry, smoothness, Gini and \(M_{20}\) statistics). We calculate these statistics for the TNG50 and WHISP maps consistently with the statmorph tool (Rodriguez-Gomez et al., 2019). ### Statmorph stamorph is a python tool for parametric and non-parametric analyses of astronomical images, and has already been used to study mock images in optical bands for IllustrisTNG (Rodriguez-Gomez et al., 2019; Guzman-Ortega et al., 2022). Besides the image, the H i surface density map in our case, statmorph requires a segmentation map and an estimate of the noise as input. A segmentation map separates all objects in the image from the background. As we process galaxies individually in IllustrisTNG and use a galaxy sample from WHISP that only contains isolated objects, we have only one object per image to analyze. The segmentation map is calculated as described in Section 2.5. stamorph also requires an estimate of the noise on the maps. We use the simpler gain option of statmorph which calculates Poisson noise, the non-parametric morphologies are independent of the actual gain value. For \(\approx\) 5% of the galaxies11, statmorph does not finish properly, mostly due to irregular segmentation maps. Such galaxies are omitted from our analysis. We find that the results at medium resolution are most reliable, as they pick up more features than the low-resolution maps and statmorph successfully runs more often than for the high resolution. Hence, we consider the medium-resolution H i maps of 30 arcsec as default in the following analysis. Footnote 11: This fraction is very similar for the WHISP and the TNG50 galaxy samples individually. ### Morphological results We show histograms of the non-parametric morphologies for the TNG50 and WHISP H i maps in varying angular resolution in Figure 6. To avoid overcrowded figures we only show our default partitioning scheme (GD14 plus SKIRT UV field) in this section, using other partitioning schemes or UV fields yields comparable results. We remark that the number of galaxies plotted in the histograms has a small dependency on the angular resolution, since statmorph is not always able to measure the morphological parameters. We have verified that this does not affect the distributions in Figure 6, i.e. the galaxies that are lost in the high-resolution TNG50 maps do not differ significantly in terms of the morphological statistics from the overall galaxy sample. Comparing the WHISP and TNG50 medium-resolution (30 arcsec) maps, we find that the asymmetry statistic matches well, the TNG50 smoothness statistic is slightly lower (i.e. the TNG50 Figure 5: H i (left) and H\({}_{2}\) (right) mass functions for our TNG50 galaxy sample. We show the different partitioning schemes as in Figure 4: Solid lines show the mass functions for the three UV models with the default scheme. The shaded areas indicate the spread due to the different UV-dependent partitioning schemes. The two UV-independent schemes are shown as dashed lines (KMT09 and S14). The differences between the mass functions relative to the default H i/H\({}_{2}\) (GD14 with the SKIRT UV field) model are shown in the bottom panels. For a clearer visualization we smoothed all curves with a Gaussian filter with \(\sigma\approx 0.17\,\)dex. H i maps are slightly smoother than the WHISP maps), and the distribution of the Gini statistic is a bit more skewed in the WHISP sample. 
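The statmorph measurement described in Section 4.1 essentially reduces to a single call per map; the sketch below shows that call for one H i map and its segmentation mask, using the gain option for the Poisson noise estimate. The toy input at the bottom is for illustration only and does not reproduce the actual WHISP/TNG50 maps.

```python
import numpy as np
import statmorph

def hi_morphology(hi_map, segmap, gain=1.0):
    """Measure non-parametric morphologies of a single H I map.

    hi_map : 2D H I surface density map (mock or observed)
    segmap : integer segmentation map (0 = background, 1 = main object)
    gain   : used by statmorph to build a Poisson noise map; the
             non-parametric statistics are insensitive to its value.
    """
    morphs = statmorph.source_morphology(hi_map, segmap, gain=gain)
    m = morphs[0]       # one object per map in our setup
    if m.flag != 0:     # non-zero flags indicate problems; treated as unreliable here
        return None
    return {"C": m.concentration, "A": m.asymmetry, "S": m.smoothness,
            "Gini": m.gini, "M20": m.m20}

# Toy example (in practice: the mock/observed H I maps and the segmentation
# masks constructed in Section 2.5).
y, x = np.mgrid[-64:64, -64:64]
toy = 100.0 * np.exp(-np.hypot(x, y) / 8.0) + np.random.normal(0, 1, (128, 128))
seg = (np.hypot(x, y) < 40).astype(int)
print(hi_morphology(toy, seg))
```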
For the TNG50 maps, the scaling with angular resolution seems to be weaker for the smoothness statistic and reversed for the asymmetry statistic. As the maps at resolutions other than 30" are less reliable we do not investigate this discrepancy further. More significant differences arise in the concentration and \(M_{20}\) statistics. Denoting the difference of the TNG50 and WHISP medians of a morphological statistic \(k\) as \(\Delta\tilde{k}\), we have more concentrated WHISP H i maps (\(\Delta\tilde{C}\approx-0.32\)) and find \(\Delta\tilde{M}_{20}\approx 0.30\). We further explore the deviations in the concentration and \(M_{20}\) statistics in Figure 7 by plotting their correlation for the medium-resolution maps. This is also the strongest cross-correlation between non-parametric morphologies found by Holwerda et al. (2011) and can be used to identify interacting galaxies as outliers in the top right corner. Due to the definitions of \(C\) (increasing with brighter pixels in the galaxy center) and \(M_{20}\) (increasing with brighter pixels in the outskirts), these statistics are usually anticorrelated. Such an anticorrelation is found in the WHISP data (see Figure 7 or Holwerda et al., 2011), in synthetic optical images of IllustrisTNG galaxies (Rodriguez-Gomez et al., 2019; Guzman-Ortega et al., 2022), and in images across the UV-submm wavelength range for both observed and simulated galaxies (Baes et al., 2020; Kapoor et al., 2021; Camps et al., 2022). We find that the two samples broadly follow the same anticorrelation, with the TNG50 galaxies shifted to the bottom right corner. Vi Figure 6: Histograms of non-parametric morphologies (concentration, asymmetry, smoothness, Gini, and \(M_{20}\) statistics) for the WHISP and TNG50 H i maps, at varying angular resolution. We only use the default H i model for the TNG50 H i maps here. The sizes of the galaxy samples vary slightly with resolution as statmorph does not always run successfully, depending on the resolution. For a clearer visualization we smoothed the histograms with a Gaussian filter with \(\sigma\) equal to 6.7 % of the \(x\)-axis stretch. Figure 7: \(M_{20}\)-concentration relation for the WHISP (black data points) and TNG50 samples (colored 2D histogram) for the medium-resolution H i maps. We only use the default H i model for the TNG50 H i maps here. The 1D histograms for the \(M_{20}\) and \(C\) statistics for the same galaxy samples are shown at the edges of the plot (smoothed as in Figure 6), straight lines indicate the median values. sual inspection of the H i maps with low concentration and high \(M_{20}\) reveals that those are exclusively face-on galaxies, while the TNG50 outliers at intermediate concentration and high \(M_{20}\) are mostly edge-on. These angular momentum trends are less pronounced for the WHISP galaxies. We do not find any other obvious characteristic such as H i mass that distinguishes the TNG50 outliers from the WHISP galaxies. ### Central HI holes in TNG50 By visual inspection of the TNG50 and WHISP H i maps, we find that many TNG50 galaxies feature large central H i holes, while such H i holes are much less prevalent in the WHISP data12 (see Figure 1 for a typical example). Central H i holes could simultaneously explain the offsets in the concentration and \(M_{20}\) statistics, with face-on galaxies deviating the most from observations as the impact of the central hole is maximized. 
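For reference, the offsets \(\Delta\tilde{k}\) used in this and the previous paragraphs are simply differences of sample medians; a minimal sketch, assuming the morphological statistics of the two samples are stored as dictionaries of arrays, is given below.

```python
import numpy as np

def median_offsets(tng_stats, whisp_stats, keys=("C", "A", "S", "Gini", "M20")):
    """Delta k = median(TNG50) - median(WHISP) for each morphological statistic k."""
    return {k: np.median(tng_stats[k]) - np.median(whisp_stats[k]) for k in keys}

# e.g. median_offsets(tng, whisp)["C"] ~ -0.32 and ["M20"] ~ +0.30 for the
# medium-resolution maps, as quoted in the text.
```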
In the following we discuss some possible sources of the apparently unrealistic prevalence of large central H i holes in TNG50. Footnote 12: On the other hand, by visual inspection of the 34 H i maps from the THINGS H i survey (Walter et al., 2008) it seems that central H i holes are prevalent in observed galaxies. This difference compared to WHISP could be related to different spatial resolutions (the THINGS beam has a FWHM of \(6\degr\)’). We point out that if central H i holes are washed out in WHISP due to the lower resolution, then the same should happen to the TNG50 H i maps as they are mock-observed at the WHISP resolution. An obvious mechanism to create large central H i holes is to ionize the neutral hydrogen or expel it from the galaxy center, for instance by feedback from active galactic nuclei (AGN). Indeed, Nelson et al. (2021) showed that red AGN feedback, star-formation rates of TNG50 galaxies at \(z\sim 1\) are centrally suppressed, in agreement with observational data. In IllustrisTNG, AGN feedback occurs along a thermal and a kinetic mode, depending on the black hole accretion rate. For galaxies with stellar masses roughly below \(10^{10.5}\,\mathrm{M}_{\odot}\), the AGN feedback is typically in the thermal mode with a continuous energy injection. For more massive galaxies, the kinetic mode kicks in and energy is injected in a pulsed, directed fashion into the surroundings (see Weinberger et al., 2017 for more details). Furthermore, AGN radiation can ionize a substantial fraction of the hydrogen gas (see figure A6 of Byrohl et al., 2021). Recently, Ma et al. (2022) found that the kinetic AGN feedback in IllustrisTNG is very effective at redistributing neutral gas from the central to the outer galaxy regions compared to the SIMBA simulation, hinting that AGN feedback could indeed create central H i holes. Upon examination of the neutral gas reservoirs of central and satellite galaxies in TNG100, Stevens et al. (2019) and Stevens et al. (2021) also speculate that some tension with observational data could arise due to AGN feedback ejecting/depleting neutral gas from the center. As a first ad-hoc experiment to check if AGN feedback is responsible for the central H i holes in TNG50, we exclude all galaxies that experienced AGN feedback in the kinetic mode (\(\approx 25\,\%\) of the TNG50 sample of 1130 galaxies). This slightly reduces the tension in the morphological statistics to \(\Delta\dot{C}\approx-0.23\) and \(\Delta\dot{M}_{20}\approx 0.25\). If we instead discard all TNG50 galaxies with an above-average energy injection in the thermal mode, the tension reduces to \(\Delta\dot{C}\approx-0.18\) and \(\Delta\dot{M}_{20}\approx 0.18\). While it seems plausible that AGN feedback is at least partially responsible for the tension in the concentration statistic, we caution that the AGN feedback properties of the TNG50 galaxies are probably cross-correlated with other galaxy properties (e.g. stellar mass, gas-to-stellar mass ratio) which could also influence the morphological statistics. To unequivocally pinpoint the impact of AGN feedback, different cosmological simulation runs varying the feedback parameters need to be considered, which is beyond the scope of this paper. The impact of AGN feedback is also somewhat depending on the numerical mass resolution of the cosmological simulation. For the IllustrisTNG suite, the feedback parameters are calibrated for TNG100, and are then kept the same for all IllustrisTNG runs (Pillepich et al., 2018). 
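The two ad-hoc cuts described above translate into simple boolean masks over the galaxy sample. The sketch below assumes that per-galaxy cumulative kinetic-mode and thermal-mode AGN energies (here called e_kin and e_therm) have already been extracted from the simulation output; that extraction is not shown and the variable names are placeholders, not IllustrisTNG field names.

```python
import numpy as np

def feedback_subsamples(e_kin, e_therm):
    """Boolean masks for the two ad-hoc AGN-feedback cuts.

    e_kin, e_therm : per-galaxy cumulative AGN energy injected in the
                     kinetic and thermal modes (any consistent units).
    """
    e_kin, e_therm = np.asarray(e_kin), np.asarray(e_therm)
    no_kinetic = e_kin <= 0.0                        # never in the kinetic mode
    below_avg_thermal = e_therm <= np.mean(e_therm)  # not above-average thermal injection
    return no_kinetic, below_avg_thermal

# The morphological offsets of Section 4.2 can then be recomputed on the
# restricted samples, e.g.
# mask, _ = feedback_subsamples(e_kin, e_therm)
# median_offsets({k: v[mask] for k, v in tng_stats.items()}, whisp_stats)
```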
Increasing the resolution of the simulation also increases the black hole accretion rate, especially in the thermal mode (see figure B1 of Weinberger et al., 2017). We test the impact of this numerical effect by repeating the construction of H i maps and morphological analysis for TNG100, which has a mass resolution twenty times coarser than TNG50 (see Table 1). We find that the tension is reduced to \(\Delta\dot{C}\approx-0.16\) and \(\Delta\dot{M}_{20}\approx 0.22\) when using TNG100. Lastly, it is also possible that there is neutral hydrogen in galaxy centers but the gas is predominantly molecular. To assess this, we check if the choice of the H i model affects the central H i holes. We find almost no changes in the morphological statistics upon variation of the UV field model or partitioning scheme, except for the UV-independent partitioning scheme S14 (which increases the tension in the \(C\) and \(M_{20}\) statistics). However, when using 'local' hydrogen partitioning schemes based on volume instead of surface densities (described in more detail in Section 6.2), we find that the tension reduces to \(\Delta\dot{C}\approx-0.19\) and \(\Delta\dot{M}_{20}\approx 0.20\) (using the local GD14 scheme with the SKIRT UV field, this choice hardly affects the result). This could be related to the Jeans approximation breaking down for the dense galactic centers, as discussed in more detail in Section 6.2. We point out that the unusual central H i holes could also be due to the AGN feedback in IllustrisTNG being _too weak_. If AGN activity would evacuate the cold gas from the entire galaxy instead of just the central region, the concentration statistics of such galaxies would increase and better match the WHISP data. It is also possible that such a reduction of cold gas in the galaxy leads to the object not being selected in the TNG50 sample, as it would be unobservable with WHISP. Indeed, results from Ma et al. (2022) indicate that for TNG100, the AGN feedback quenches the star formation but only redistributes the cold gas instead of decreasing the galactic cold gas reservoir as is observed (Guo et al., 2021). On the other hand, we caution that Ma et al. (2022) did not mimic the observational steps to mock-observe the simulated galaxies. Stevens et al. (2019) used the same observational data and hydrogen postprocessing method as Ma et al. (2022) but a careful mock-observation routine, finding excellent agreement for the cold gas reservoirs of TNG100 and observations. The prevalence of central H i holes is reminiscent of the finding of Bahe et al. (2016), who discover unrealistic holes in the H i disks of galaxies in the EAGLE simulation. Bahe et al. (2016) attribute this to the implementation of the stellar feedback in the physical model of EAGLE. Along similar lines, we partly attribute the large central H i holes in TNG50 to the AGN feedback implementation. The effect is stronger in the higher-resolution simulation TNG50 compared to TNG100 due to the IllustrisTNG calibration scheme. Diemer et al. (2019) found results along similar lines as the H i surface density profiles of IllustrisTNG galaxies exhibit central deficits compared to observational data from the Bluedisk survey (Wang et al., 2014), with larger mismatches for higher resolution simulations. However, we note that the usage of local partitioning schemes reduces the prevalence of central H i holes. 
We conclude that these different effects (and potentially other systematics that we have not uncovered here) conspire to lead to the central H i holes and eventually the mismatch in the \(C\) and \(M_{20}\) statistics. ## 5 HI column density distribution function The H i column density distribution function is a widely used metric to quantify the distribution of atomic hydrogen in the Universe (e.g. Ryan-Weber et al. 2003; Altay et al. 2011; Noterdaeme et al. 2012; Rahmati et al. 2015). In this study, we use the H i CDDF as the second metric (apart from the H i morphologies) to compare TNG50 to observational data. The CDDF is insensitive to the spatial distribution of the H i, it only probes _how much_ atomic hydrogen per column density interval exists in the galaxy population. Hence, this CDDF comparison is somewhat complementary to the analysis of H i morphologies. We describe how we extract the H i CDDF from TNG50 (Section 5.1) and WHISP (Section 5.2), and compare them in Section 5.3. We only discuss the default H i model (GD14 + SKIRT UV field) for TNG50 in section and compare to other partitioning schemes in Section 6.2. ### CDDF measurement from TNG50 As we intend to compare the H i CDDF from TNG50 to observational data, we use the full mock map algorithm (Section 2.5.2) to generate 1130 synthetic H i maps (in the three different angular resolutions of the WHISP survey) for TNG50 galaxies. We then follow the observational procedure to compute the H i CDDF, which consists of summing the area covered by columns within a specific column density interval over all galaxies (e.g. Zwaan et al. 2005b; Szakacs et al. 2022): \[{\rm CDDF_{H\,I}=\frac{c}{H_{0}}\frac{\sum_{j}\phi(M_{\rm H\,I}^{j})\,w(M_{ \rm H\,I}^{j})\,A(N_{\rm H\,I})^{j}}{N_{\rm H\,I}\,\ln 10\,\Delta(\log_{10}N_{ \rm H\,I})}}, \tag{5}\] where \(c\) denotes the speed of light, \(N_{\rm H\,I}\) is the H i column density, and \(\Delta(\log_{10}N_{\rm H\,I})\) is the constant logarithmic column density bin spacing. This equation is applicable when the sample from which the CDDF is derived (in our case the 1130 mock-selected TNG50 galaxies) is drawn from a broader underlying galaxy population (the base TNG50 sample of 12431 galaxies). The CDDF of the Universe can then be computed by scaling the CDDF from the smaller sample using some galaxy property (e.g. stellar mass or H i mass) in which the underlying broader sample is assumed to be complete13. For the H i CDDF, the H i mass is a natural choice for this galaxy property (also adopted by e.g. Zwaan et al. 2005b). Hence, the sum in Eqn. 5 runs over bins of H i mass. The area function \(A(N_{\rm H\,I})^{j}\) denotes the area covered by columns within the (logarithmic) column density interval summing over all galaxies (in the smaller sample) within the \(j\)-th H i mass bin. The weight factor \(w(M_{\rm HI}^{j})\) corresponds to one over the number of galaxies in the smaller sample within the \(j\)-th H i mass bin, thereby normalizing the area function. To scale the CDDF to the entire galaxy population, the normalized area function \((w(M_{\rm H\,I})A(N_{\rm H\,I}))\) measured from the smaller sample is then multiplied with the abundance of objects in the broader sample in the \(j\)-th H i mass bin per volume, \(\phi(M_{\rm HI}^{j})\). Hence, \(\phi(M_{\rm H\,I})\) corresponds to the H i mass function (derived from the broader sample), but without dividing by the (logarithmic) H i mass bin width. 
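A direct transcription of Eqn. 5 might look like the following. It assumes the mock maps have already been converted to H i column densities in cm\(^{-2}\) (for surface densities in \(\rm M_{\odot}\,pc^{-2}\), one \(\rm M_{\odot}\,pc^{-2}\) corresponds to \(\approx 1.25\times 10^{20}\) hydrogen atoms cm\(^{-2}\)) and that \(\phi\) has been tabulated from the broad sample; it is a sketch of the bookkeeping rather than the exact pipeline used here.

```python
import numpy as np

MPC_CM = 3.0857e24          # cm per Mpc
KPC_CM = 3.0857e21          # cm per kpc
C_KMS = 2.9979e5            # speed of light [km/s]
H0_KMS_MPC = 67.74          # Hubble constant [km/s/Mpc] for h = 0.6774

def cddf_from_mock_maps(maps, pixel_area_kpc2, mhi_per_map, mhi_bin_edges,
                        phi_mpc3, logN_edges):
    """H I column density distribution function following Eqn. 5, in cm^2.

    maps           : list of 2D arrays of H I column densities [cm^-2]
    pixel_area_kpc2: physical area of one map pixel [kpc^2] (scalar or per map)
    mhi_per_map    : H I mass of each map [Msun], used to bin the maps
    mhi_bin_edges  : H I mass bin edges [Msun]
    phi_mpc3       : number density of broad-sample galaxies in each H I mass
                     bin [Mpc^-3] (the HIMF multiplied by the bin width)
    logN_edges     : edges of the logarithmic column density bins
    """
    pixel_area_kpc2 = np.broadcast_to(np.atleast_1d(pixel_area_kpc2), (len(maps),))
    dlogN = np.diff(logN_edges)
    N_centres = 10.0 ** (0.5 * (logN_edges[:-1] + logN_edges[1:]))
    mass_bin = np.digitize(mhi_per_map, mhi_bin_edges) - 1
    area = np.zeros((len(mhi_bin_edges) - 1, len(dlogN)))  # A(N)^j in cm^2
    counts = np.zeros(len(mhi_bin_edges) - 1)              # for the weights w
    for m, j, pix_area in zip(maps, mass_bin, pixel_area_kpc2):
        if j < 0 or j >= len(counts):
            continue  # galaxy falls outside the H I mass bins
        hist, _ = np.histogram(np.log10(np.clip(m, 1e-30, None)), bins=logN_edges)
        area[j] += hist * pix_area * KPC_CM**2
        counts[j] += 1
    w = np.where(counts > 0, 1.0 / np.clip(counts, 1, None), 0.0)
    summed = np.einsum("j,j,jk->k", np.asarray(phi_mpc3) / MPC_CM**3, w, area)
    return (C_KMS / H0_KMS_MPC) * MPC_CM * summed / (N_centres * np.log(10.0) * dlogN)
```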
Footnote 13: We remark that CDFs calculated in this fashion are not necessarily representative of the broader galaxy population since only the \(M_{\rm H\,I}\) distributions are matched. This does not affect our comparison between TNG50 and WHISP, as we follow the same methodology for the simulated and observed H i CDDF. Hence, the mock TNG50/WHISP samples are equally (un-)representative of their broader galaxy populations. For our calculation of the H i CDDF in TNG50, we compute the area function and the H i masses for the weight factor from the 1130 mock H i maps14. \(\phi(M_{\rm H\,I})\) is calculated from the base TNG50 sample of 12431 galaxies. This approach is equivalent to observational determinations of the CDDF, where a smaller galaxy sample is observed with high spatial resolution to derive the area function. The area function is then scaled using the H i mass function derived from blind galaxy surveys which measure the broader galaxy population. For cosmological simulations, we have the option to generate spatially resolved H i maps directly for the broad galaxy population, circumventing the scaling of the CDDF from a smaller galaxy sample. As we want to emulate the observational procedure with realistic mock H i maps (which requires a mock sample selection and putting the galaxies at the expected distances), we find the approach using Eqn. 5 more appropriate for this study. We test the simpler approach of measuring the CDDF from plain H i maps of the broader galaxy population in Section 6.3. Footnote 14: Instead of computing the H i masses from the mock H i maps we could also use the ‘intrinsic’ H i masses of the galaxies, simply summing the H i masses of the gas cells. However, this introduces inconsistencies with the observational procedure and the normalization of the CDDF. ### CDDF measurement from WHISP The H i CDDF at redshift zero can be measured with interferometric 21-cm surveys. A precise determination of the H i CDDF is presented in Zwaan et al. (2005b) based on WHISP data (see Section 2.4 for details about the WHISP survey). Although we could measure the H i CDDF from the WHISP H i maps ourselves, we refrain from doing so as the Zwaan et al. (2005b) measurement constitutes the gold standard for the observational H i CDDF at redshift zero and their result is used in many studies (e.g. Braun 2012; Rahmati et al. 2013a; Rahmati et al. 2013b; Szakacs et al. 2022). The H i CDDF measurement from Zwaan et al. (2005b) is based on the full WHISP survey (in its three available resolutions). Since the smaller WHISP sample of Nahimans et al. (2021) that we use throughout this study is representative of the full WHISP sample, we do not expect that the usage of different WHISP samples affects our results. Zwaan et al. (2005b) have scaled their measurement of the WHISP H i CDDF to the broader galaxy population (see Eqn. 5) using the H i mass function from the blind HIPASS survey (Zwaan et al. 2003). We add a cautionary note on \(H_{0}\) correction factors. While the IllustrING simulations were run with \(h=0.6774\), the observational result from Zwaan et al. (2005b) was obtained with \(h=0.75\). It would be desirable to correct the observational result to the more recent value of \(H_{0}\). For the H i CDDF, however, this is an involved calculation due to the scaling factor \(\phi\) (which is the HIMF modulo the mass bin width) in Eqn. 5. This HIMF is parametrized as a Schechter function with parameters that themselves depend on the value of \(H_{0}\). 
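To make the \(H_{0}\) dependence explicit, a Schechter-parametrized HIMF can be rescaled from \(h=0.75\) to \(h=0.6774\) as sketched below, assuming the usual survey scalings (\(\phi_{*}\propto h^{3}\), \(M_{*}\propto h^{-2}\)). The parameter values in the example are purely illustrative, and this is only an approximation to the full recalculation described in the text.

```python
import numpy as np

def schechter_himf(m_hi, phi_star, m_star, alpha):
    """Schechter H I mass function dn/dlog10(M) [Mpc^-3 dex^-1]."""
    x = np.asarray(m_hi) / m_star
    return np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

def rescale_schechter(phi_star, m_star, h_old, h_new):
    """Rescale Schechter parameters to a new Hubble parameter, assuming the
    standard survey scalings phi* ~ h^3 and M* ~ h^-2."""
    return phi_star * (h_new / h_old) ** 3, m_star * (h_old / h_new) ** 2

# Example with nominal, illustrative parameter values quoted for h = 0.75:
phi_new, m_new = rescale_schechter(phi_star=6.0e-3, m_star=6.0e9,
                                   h_old=0.75, h_new=0.6774)
```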
Hence, \(H_{0}\) propagates in a non-polynomial way into the CDDF result from Zwaan et al. (2005b). We have run a test calculation of the H i CDDF based on the WHISP H i maps and the HIPASS mass function from Zwaan et al. (2005) to emulate the Zwaan et al. (2005b) calculation (the actual CDDF calculation is more complex as Zwaan et al. 2005b split the WHISP galaxies by morphological type and use type-specific mass functions). We find that updating \(h\) from 0.75 to 0.6774 decreases the H i CDDF by \(\approx 10-20\) %, except for the highest columns (\(N_{\rm H\,I}>1.5\times 10^{22}\) cm\({}^{-2}\)) where the CDDF increases by \(\approx 10\) %. Since the \(H_{0}\) correction factors that we find are much smaller than the difference between the original Zwaan et al. (2005b) result and our test calculation, we keep using the Zwaan et al. (2005b) result but multiply it with 0.85 to correct it to our value of \(h=0.6774\). Furthermore, the various biases in the HIMF from Zwaan et al. (2003) (e.g. selection effects, H i self-absorption) are not taken into account by Zwaan et al. (2005b) when calculating the WHISP CDDF. Zwaan et al. (2003) find that the cumulative effect of these biases reduces the total H i abundance in the Universe by 11.6 %. As the CDDF propagates linearly into the H i abundance (see Eqn. 6), we additionally decrease the WHISP CDDFs uniformly by 11.6 %. For studies of the H i CDDF, which extends over nine orders of magnitude for \(N_{\rm H\,i}\sim 10^{19}-10^{22}\) cm\({}^{-2}\), these correction factors are completely negligible. For our study, we also consider the H i abundance (which effectively corresponds to the normalization of the H i CDDF), which scales linearly with any correction factors to the CDDF. Hence, the \(H_{0}\) and HIMF correction factors are not negligible for our purpose. ### CDDF results Since the H i CDDF measures the area covered by columns per column density interval, the total H i abundance in the Universe can be derived from the CDDF. Denoting the H i abundance \(\Omega_{\rm H\,i}\) as a fraction of the critical density of the Universe, \(\Omega_{\rm H\,i}=\rho_{\rm H\,i}/\rho_{C}\) with \(\rho_{\rm H\,i}\) the H i mass density in the Universe and \(\rho_{C}=1.274\times 10^{11}\) M\({}_{\odot}\)Mpc\({}^{-3}\), we can write the fractional contribution to \(\Omega_{\rm H\,i}\) per (logarithmic) column density interval as follows: \[\begin{split}\frac{\Delta\Omega_{\rm H\,i}}{\Delta(\log_{10}N_{ \rm H\,i})}&=\frac{m_{\rm H\,i}}{\rho_{C}V}\frac{N_{\rm H\,i}\sum _{j}\phi(M_{\rm H\,i}^{j})\,w(M_{\rm H\,i}^{j})\,A(N_{\rm H\,i})^{j}}{\Delta( \log_{10}N_{\rm H\,i})}\\ &=\frac{m_{\rm H\,i}\rho_{0}\ln(10)}{\rho_{C}c}\big{(}N_{\rm H\,i })^{2}\cdot{\rm CDDF_{H\,i}(N_{\rm H\,i})},\end{split} \tag{6}\] where \(m_{\rm H\,i}\) is the mass of a hydrogen atom, \(V\) is the volume of the survey or the simulation box, and the sum in the first line of Eqn. 6 runs over bins of H i mass. We show our main CDDF result, Figure 8, as the conventional H i CDDF (left panel) and as fractional contribution to \(\Omega_{\rm H\,i}\) (right panel). Observational results from Zwaan et al. (2005b) (slightly modified as described in Section 5.2) are shown in grey to black markers for the three available WHISP resolutions. The TNG50 CDDFs for the different angular resolutions are displayed as colored lines. 
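Once the CDDF is tabulated, the conversion to \(\Delta\Omega_{\rm H\,i}/\Delta(\log_{10}N_{\rm H\,i})\) and to the integrated \(\Omega_{\rm H\,i}\) is a one-liner; the sketch below writes the prefactor of Eqn. 6 with the Hubble constant entering through the \(c/H_{0}\) factor of Eqn. 5 and uses the value of \(\rho_{C}\) quoted above.

```python
import numpy as np

MPC_CM = 3.0857e24
C_CMS = 2.9979e10                      # speed of light [cm/s]
H0_CGS = 67.74 * 1.0e5 / MPC_CM        # Hubble constant [1/s] for h = 0.6774
M_H = 1.6726e-24                       # hydrogen atom mass [g]
MSUN_G = 1.989e33
RHO_C = 1.274e11 * MSUN_G / MPC_CM**3  # critical density [g cm^-3], as in the text

def domega_dlogN(logN_centres, cddf):
    """Fractional contribution to Omega_HI per dex of column density (Eqn. 6),
    for a CDDF tabulated in cm^2 at the given log10(N_HI) bin centres."""
    N = 10.0 ** np.asarray(logN_centres)
    return (M_H * H0_CGS * np.log(10.0) / (RHO_C * C_CMS)) * N**2 * np.asarray(cddf)

def omega_hi(logN_edges, cddf):
    """Integrate the contributions over the logarithmic column density bins."""
    logN_edges = np.asarray(logN_edges)
    centres = 0.5 * (logN_edges[:-1] + logN_edges[1:])
    return np.sum(domega_dlogN(centres, cddf) * np.diff(logN_edges))
```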
For our comparison it is more convenient to analyze \(\Delta\Omega_{\rm H\,i}/\Delta(\log_{10}N_{\rm H\,i})\) instead of the conventional CDDF for two reasons: \(\Delta\Omega_{\rm H\,i}/\Delta(\log_{10}N_{\rm H\,i})\) spans much fewer orders of magnitude which enables a more precise visual comparison for the different column density distributions. Furthermore, it also relates directly to the H i abundance as \(\Omega_{\rm H\,i}\) effectively corresponds to the integral of \(\Delta\Omega_{\rm H\,i}/\Delta(\log_{10}N_{\rm H\,i})\) over the column density. To visualize this integral we also show \(\Delta\Omega_{\rm H\,i}/\Delta(\log_{10}N_{\rm H\,i})\) with a linear \(y\)-axis in the inset in Figure 8, where the areas under the curves directly correspond to the H i abundance of TNG50 or the measured H i abundance in the Universe. We caution that the comparison of the column density distributions becomes unreliable below the sensitivity limits, indicated by the arrows at the top of Figure 8. The segmentation for our TNG50 H i maps is more aggressive (the main object cutout is smaller) than the 3-\(\sigma\)-clipped WHISP maps used by Zwaan et al. (2005b) such that the TNG50 CDDFs decrease quickly for column densities below the sensitivity limits (note that the median sensitivity limits of WHISP and the TNG50 mock maps are equal due to our mock map generation algorithm, see Section 2.5.2). Intermediate column densities (\(N_{\rm H\,i}\approx 10^{20}-10^{21}\) cm\({}^{-2}\)) are significantly more abundant in TNG50 than in WHISP. This leads to a substantial mismatch in the H i abundance comparing TNG50 and observational data. We point out that the H i abundance (equivalent to the normalization of the CDDF) as visualized in the inset of Figure 8 Figure 8: The H i CDDF (left panel) and fractional contribution to \(\Omega_{\rm H\,i}\) per column density interval (Eqn. 6, right panel). The data points for the WHISP survey are taken from Zwaan et al. (2005b) and corrected for various minor effects (Section 5.2). The colored lines show the results for TNG50 with varying angular resolutions, using the mock H i maps and the default H i model. Arrows at the top indicate the median 3-\(\sigma\) sensitivity of the WHISP data (which equals the sensitivity of the TNG50 mock maps at the corresponding angular resolution). We show \(\Delta\Omega_{\rm H\,i}/\Delta(\log_{10}N_{\rm H\,i})\) with a linear \(y\)-axis for the TNG50 results and the medium-resolution WHISP data in the inset. All curves are smoothed with a Gaussian filter with \(\sigma\approx 0.033\) dex. is actually completely independent of the WHISP data or the TNG50 H i mock maps due to the scaling factor \(\phi(M_{\rm H\,\textsc{i}})\) in Eqn. 5. Instead, the normalizations of the CDDFs are given by \(\Omega_{\rm H\,\textsc{i}}\) as measured by the blind HIPASS survey (for the WHISP CDDFs) or the total amount of H i in the TNG50 base galaxy sample of 12431 galaxies (for the TNG50 CDDFs), respectively. Hence, the tension in the CDDF at these intermediate column densities points towards an excess of H i in TNG50 or an underestimation of \(\Omega_{\rm H\,\textsc{i}}\) in blind H i surveys, which we discuss in more detail in Section 6.1. At the highest column densities (\(N_{\rm H\,\textsc{i}}>10^{21}\,\rm cm^{-2}\)), the effect of beam smearing becomes substantial both for WHISP and TNG50, which manifests in smearing out the high-density peaks. In this column density range, TNG50 significantly underpredicts the CDDF compared to WHISP. 
We find that there is neutral hydrogen in TNG50 in these column densities (see also Szakacs et al.2022), but the hydrogen partitioning that we perform is very effective at converting H i into H\({}_{2}\) such that the abundance of high H i column densities in the simulation is lower than observed. We discuss this tension and a potential resolution in more detail in Section 6.2. Lastly, we point out that our result contrasts with other studies of the H i CDDF (Rahmati et al.2013a; Rahmati et al.2013b; Villaescusa-Navarro et al.2018) who find a good agreement between WHISP and cosmological simulations, especially at the high-column end. We attribute this primarily to the beam smoothing of the mock H i maps, which is not performed in these studies (see Section 6.3 for more details). ## 6 Discussion ### Normalization of the CDDF: HI abundance in IllustrisTNG As shown in the inset of Figure 8, the CDDF normalization (equivalent to \(\Omega_{\rm H\,\textsc{i}}\)) in TNG50 significantly exceeds the observational measurement. The observational value of \(\Omega_{\rm H\,\textsc{i}}\) at redshift zero is most reliably determined with blind H i surveys (see figure 14 of Rhee et al.2018 for a data compilation). For HIPASS, Zwaan et al. (2005a) find \(\Omega_{\rm H\,\textsc{i}}=(3.9\pm 0.4^{\rm stat}\pm 0.4^{\rm sys})\times 10^{-4}\), consistent with the result for ALFALFA of \(\Omega_{\rm H\,\textsc{i}}=(4.0\pm 0.1^{\rm stat}\pm 0.6^{\rm sys})\times 10^{-4}\)(Jones et al.2018). Both of these measurements are corrected for various biases such as selection bias or H i self-absorption. Furthermore, we corrected these H i abundances to our adopted value of the Hubble constant, \(h=0.6774\). As the WHISP CDDF is scaled with the HIPASS HIMF, we expect the exactly same value of \(\Omega_{\rm H\,\textsc{i}}=3.9\times 10^{-4}\) when calculating the H i abundance from the WHISP CDDF. With the raw CDDF from Zwaan et al. (2005b) we obtain \(\Omega_{\rm H\,\textsc{i}}=5.1\times 10^{-4}\), but with the Hubble and HIMF biases correction factors (see Section 5.2) the H i abundance is reduced to the expected value of \(\Omega_{\rm H\,\textsc{i}}=3.9\times 10^{-4}\) (with very little impact on the WHISP resolution). Hence, we consider our correction factors somewhat realistic, at least at the level of total H i abundance. Studies investigating the H i abundance in IllustrisTNG (Villaescusa-Navarro et al.2018; Diemer et al.2019, Yates et al.2021) found higher values of \(\Omega_{\rm H\,\textsc{i}}\) compared to blind H i surveys. The magnitude of this discrepancy depends on the simulation version of the IllustrisTNG suite and the postprocessing routines. Summing all H i in galaxies in the TNG100 simulation, Diemer et al. (2019) report \(\Omega_{\rm H\,\textsc{i}}=(5.4\pm 0.5)\times 10^{-4}\), in tension with the observational results. For TNG50, Diemer et al. (2019) report \(\Omega_{\rm H\,\textsc{i}}=(7.2\pm 1.1)\times 10^{-4}\), in significant excess of the data. The error bars on these values correspond to the scatter due to the different partitioning schemes considered by Diemer et al. (2019). Using our default H i model (GD14 plus SKIRT UV field) for our base sample of 12431 galaxies in TNG50, we obtain \(\Omega_{\rm H\,\textsc{i}}=6.0\times 10^{-4}\). This is at the lower edge of the value reported by Diemer et al. 
(2019), since the GD14 scheme typically predicts the highest molecular fractions (together with the S14 scheme) and because we miss a few gas-rich galaxies with very low stellar mass due to our galaxy sample selection. #### 6.1.1 Observational considerations A natural explanation for this excess of H i in IllustrisTNG is that \(\Omega_{\rm H\,\textsc{i}}\) is underestimated by the observations. H i gas in low-column-density gas could be difficult to detect, while very high columns are prone to H i self-absorption and hence an underestimation of the actual H i column density. Indeed, Braun (2012) find based on observations of M31, M33 and the LMC that self-absorption hides a significant fraction of H i gas. Correcting for this self-absorption, Braun (2012) claim that \(\Omega_{\rm H\,\textsc{i}}=(5.9\pm 0.9)\times 10^{-4}\). However, we remark that this study is questioned by more recent findings of Koch et al. (2021) who use a more thorough H i model to measure the self-absorption correction. In any case, the distribution of H i in terms of column densities provides a convenient means to explore if we miss H i gas when measuring \(\Omega_{\rm H\,\textsc{i}}\) from 21-cm emission data. From the linear inset of Figure 8, we learn that \(\Omega_{\rm H\,\textsc{i}}\) is dominated by an intermediate column density regime of \(N_{\rm H\,\textsc{i}}\approx 10^{19}-4\times 10^{21}\,\rm cm^{-2}\). From our mock H i maps it is impossible to assess if H i column densities below \(N_{\rm H\,\textsc{i}}\approx 10^{19}\) contribute substantially to \(\Omega_{\rm H\,\textsc{i}}\) as the mock H i maps are segmented at approximately this column density (for the low-resolution maps). Generating plain H i maps (without noise and segmentation) for the full TNG50 base sample (see Section 6.3 for more details), we find that column densities in the range \(10^{17}-10^{19}\,\rm cm^{-2}\) only contribute \(2.2\,\%\) to the total H i abundance. At the other extreme, column densities above \(\approx 4\times 10^{21}\,\rm cm^{-2}\) hardly contribute to \(\Omega_{\rm H\,\textsc{i}}\) both for TNG50 and the WHISP data, indicating that H i self-absorption does not hide a significant amount of H i gas (Braun2012 find that H i self-absorption only becomes important above \(N_{\rm H\,\textsc{i}}\approx 10^{22}\,\rm cm^{-2}\)). It is also possible that blind H i surveys simply miss a significant population of objects. If the additional H i gas in TNG50 stemmed from galaxies with very small H i masses, these objects could be missed, while very heavy galaxies could be rare and hence difficult to observe. For TNG50, we find from the HIMF (Figure 5) that the dominating objects for \(\Omega_{\rm H\,\textsc{i}}\) have H i masses within \(3\times 10^{8}-3\times 10^{10}\,\rm M_{\odot}\). We do not expect that our base sample selection (imposing \(M_{\rm gas}>10^{7}\,\rm M_{\odot}\) and \(M_{\star}>10^{7}\,\rm M_{\odot}\)) or the limited simulation volume affects this finding. Galaxies in this mass range are easy to detect with blind H i surveys, for ALFALFA the sensitivity is good enough such that the HIMF can be precisely determined for H i masses larger than \(10^{7}\,\rm M_{\odot}\)(figure 2 of Jones et al.2018). We conclude that the mismatch in \(\Omega_{\rm H\,\textsc{i}}\) (i.e. the CDDF normalization) between TNG50 and WHISP is due to an excess of atomic hydrogen in the simulation and not an underestimation by the observational data. 
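The fractional contributions quoted in this subsection (e.g. the 2.2 per cent from columns of \(10^{17}-10^{19}\,\rm cm^{-2}\)) follow from Eqn. 6 by summing over the relevant column density range; since the constant prefactor cancels in the ratio, a minimal sketch needs only the tabulated CDDF.

```python
import numpy as np

def omega_fraction(logN_edges, cddf, lo, hi):
    """Fraction of Omega_HI contributed by columns with lo < log10(N_HI) < hi.
    The constant prefactor of Eqn. 6 cancels in the ratio."""
    logN_edges, cddf = np.asarray(logN_edges), np.asarray(cddf)
    centres = 0.5 * (logN_edges[:-1] + logN_edges[1:])
    contrib = (10.0 ** centres) ** 2 * cddf * np.diff(logN_edges)
    in_range = (centres > lo) & (centres < hi)
    return contrib[in_range].sum() / contrib.sum()

# e.g. omega_fraction(edges, cddf, 17.0, 19.0) for the plain-map TNG50 CDDF
# recovers the ~2 per cent contribution discussed above.
```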
#### 6.1.2 Simulation considerations A possible resolution of this \(\Omega_{\rm H\,\textsc{i}}\) discrepancy consists in mimicking the blind H i surveys like HIPASS or ALFALFA in IllustrisTNG, taking sensitivity limits and other observational effects into account (which were neglected in previous studies of \(\Omega_{\rm H\,\textsc{i}}\) in IllustrisTNG). While it is beyond the scope of this paper to fully emulate the ALFALFA or HIPASS observational setup, we assess the impact of the observational effect which probably affects the inferred H i abundance most, which is beam smoothing. An ALFALFA-like beam smoothing was used in Diemer et al. (2019) (based on Bahe et al.2016) to emulate the H i mass function measurement in IllustrisTNG. We follow their approach and apply a Gaussian beam with \(\sigma=70\,\mathrm{kpc}\), i.e. we multiply the H i mass of each gas cell by \(\exp[-r^{2}/(2\sigma^{2})]\), with \(r\) being the 2D distance of the gas cell to the galaxy center in face-on projection (see (i) in 2.5.1 for the algorithm to rotate galaxies into face-on projection). For our default H i model, we find that applying this beam smoothing effect reduces the H i abundance in TNG50 by ten percent to \(\Omega_{\mathrm{H}1}=5.4\times 10^{-4}\). Hence, mimicking blind H i surveys reduces the tension in H i abundance and the CDDF normalization, but does not resolve it for TNG50. The IllustrisTNG simulation resolution also affects the H i abundance. For the base galaxy samples of TNG50-2 (TNG100) and the default H i model, we find \(\Omega_{\mathrm{H}1}=5.1\times 10^{-4}\) (\(\Omega_{\mathrm{H}1}=4.8\times 10^{-4}\)). Applying the ALFALFA-like beam smoothing further reduces these values by approximately ten percent. All observational and simulated inferences of \(\Omega_{\mathrm{H}1}\) are summarized in Table 4. The H i abundances of the lower-resolution simulations TNG50-2 and TNG100, taking the ALFALFA beam into account, are consistent with the observational values. We show the H i CDDF for various IllustrisTNG simulations (calculated as described in Section 5.1) in Figure 9. Note that the scaling factor \(\phi(M_{H\,\textsc{i}})\) was computed as in Section 5.1, i.e. without ALFALFA-like beam smoothing. This means that the normalization of the IllustrisTNG CDDFs corresponds to the larger values in Table 4 (e.g. \(4.8\times 10^{-4}\) for TNG100). Differences in the H i CDDF (left panel) for the various IllustrisTNG runs appear marginal, except at the highest column densities which are dominated by very few high-column-density pixels. In the linear inset in the right panel, it becomes clear that TNG100 is closer to the observational data than TNG50, which is due to the lower H i abundance in TNG100. Furthermore, we note that the effect of varying the simulation volume is negligible compared to varying the simulation resolution, as TNG100 and TNG50-2 have similar H i CDDFs (the TNG100 volume is roughly eight times larger than the TNG50-2 one, while the TNG100 mass resolution is comparable to TNG50-2, see Table 1). The \(\Omega_{\mathrm{H}1}\) excess in TNG50 is reminiscent of the luminosity functions analyzed in Trcka et al. (2022), who find that the lower-resolution TNG50-2 provides a substantially better match to observational data than TNG50. This is expected given that the physical model calibration for IllustrisTNG was undertaken at the resolution of roughly TNG100 (TNG50-2 has a comparable resolution). Indeed, Pillepich et al. 
(2018) show that the stellar mass function in IllustrisTNG is converging, but not fully converged, as a function of numerical resolution. For a given dark matter halo, galaxies form slightly more stars at higher resolutions, shifting the stellar mass function. It is not surprising that these higher stellar masses lead to \begin{table} \begin{tabular}{c c c} Data set & \(\Omega_{\mathrm{H}1}\) [\(10^{-4}\)] & Reference \\ \hline ALFALFA & \(4.0\pm 0.1^{\mathrm{stat}}\pm 0.69^{\mathrm{sys}}\) & Jones et al. (2018) \\ HIPASS & \(3.9\pm 0.4^{\mathrm{stat}}\pm 0.4^{\mathrm{sys}}\) & Zwaan et al. (2005a) \\ Local H i CDDF & \(5.9\pm 0.9\) & Rhee et al. (2018) \\ \hline TNG50-1 & \(7.2\pm 1.1\) & Diemer et al. (2019) \\ TNG100-1 & \(5.4\pm 0.5\) & Diemer et al. (2019) \\ TNG50-1 GD14 (apetre) & \(6.0\,(5.4)\) & This work \\ TNG50-2 GD14 (apetre) & \(5.1\,(4.6)\) & This work \\ TNG100-1 GD14 (apetre) & \(4.8\,(4.3)\) & This work \\ \end{tabular} \end{table} Table 4: All values of the H i abundance \(\Omega_{\mathrm{H}1}\) that we consider in this study. The upper three entries are observational determinations of \(\Omega_{\mathrm{H}1}\), the lower five entries correspond to results from IllustrisTNG. For the Diemer et al. (2019) values, the error bars correspond to the spread between different partitioning schemes. For the results from this work, we calculate the H i abundance using only our default H i model. The values inside the brackets denote results when correcting for the ALFALFA beam. Figure 9: Same as Figure 8, varying the IllustrisTNG simulations instead of the angular resolution of the simulation data. The inset in the right panel shows just the TNG50-1, TNG100-1, and medium-resolution WHISP result. higher H i masses15 and ultimately too much atomic hydrogen in the TNG50 galaxy population. Footnote 15: This argument assumes that the H i-to-stellar mass ratio is independent of numerical resolution. Indeed, Diemer et al. (2019) find that this is true to first order and the H i fractions of various IllustrisTNG simulations agree well with an observational compilation from Calette et al. (2018). ### Hydrogen partitioning based on volume densities The simulation-calibrated partitioning schemes of GK11 and GD14 also exist in 'local' variants that are based on volume instead of surface densities. At first sight this is a very attractive feature because it completely circumvents the difficulties of assigning surface densities to gas cells in simulations. According to Gnedin & Kravtsov (2011), a cosmological simulation using their local partitioning scheme needs to have a spatial resolution of \(L_{\rm cell}\sim 100\) pc. Additionally, Gnedin & Draine (2014) introduce a correction factor for the spatial resolution which is tested up to \(L_{\rm cell}\sim 500\) pc, well in excess of the median size of star-forming gas cells in TNG50 of 138 pc (Nelson et al., 2019). For the H i morphologies, we find that the usage of local partitioning schemes provides a slightly better match to the concentration and \(M_{\rm 20}\) indices between TNG50 and WHISP galaxy samples. Here, we test the local partitioning schemes in terms of the H i CDDF in Figure 10. The application of these schemes to IllustrisTNG with the relevant equations is described in Appendix A2 and A3. Since the predictions of the local partitioning models are similar upon variation of the UV field scheme we only consider the SKIRT UV field here. 
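In practice, the difference between the column-based and local variants of a partitioning scheme lies only in the density measure handed to the fit: an estimated surface density versus the cell volume density itself. The sketch below illustrates the surface-density estimate, assuming the commonly used Jeans-length form \(\lambda_{J}=c_{s}\sqrt{\pi/(G\rho)}\) with \(c_{s}^{2}=\gamma(\gamma-1)u\); the exact form adopted in Eqn. 3 may differ in detail, so this is a schematic rather than a verbatim transcription.

```python
import numpy as np

G_CGS = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
GAMMA = 5.0 / 3.0     # adiabatic index of a monatomic ideal gas

def jeans_surface_density(rho_neutral, rho_gas, u_internal):
    """Neutral-hydrogen surface density estimate, Sigma = rho_neutral * lambda_J,
    with the Jeans length computed from the cell's sound speed (cgs units).

    rho_neutral : neutral hydrogen mass density of the cell [g cm^-3]
    rho_gas     : total gas mass density of the cell [g cm^-3]
    u_internal  : specific internal energy of the cell [erg g^-1]
    """
    c_s = np.sqrt(GAMMA * (GAMMA - 1.0) * u_internal)
    lambda_j = c_s * np.sqrt(np.pi / (G_CGS * rho_gas))
    return rho_neutral * lambda_j

# A column-based scheme is evaluated on jeans_surface_density(...), whereas a
# 'local' scheme takes rho_neutral (plus the UV field) directly, bypassing the
# surface-density estimate altogether.
```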
For completeness we also check the H i CDDF using the two UV-independent partitioning schemes (KMT09 and S14) in Figure 10. We find that the KMT09 model (as well as the UV-dependent GK11 and K13 schemes which are not shown here) produces results comparable to the default GD14 partitioning scheme, while S14 underestimates the abundance of high-column H i gas even more than the other models. The local partitioning schemes provide a perfect match at the high-column end, which is a substantial improvement over all other H i models. The higher concentration values of the H i maps using local partitioning schemes are related to this, as the central H i holes are less prevalent and more H i is formed in the dense galactic central regions (see Section 4.3). However, the H i abundance is very high with the local partitioning schemes (visible in the linear inset in the right panel of Figure 10), and reaches almost \(\Omega_{\rm H\,I}=10\times 10^{-4}\) for TNG50, 2.5 times more than what is observed. This excess of H i using local partitioning schemes persists to TNG100, where we find \(\Omega_{\rm H\,I}=6.9\times 10^{-4}\). This provides an a posteriori justification for refraining from using the local partitioning schemes, albeit they are conceptually appealing as they circumvent the need to estimate column densities for the 3D cosmological simulations. A naive explanation for the difference between conventional, column-density based and local partitioning schemes is that the Jeans approximation breaks down at high column densities. For the presumably star-forming gas cells in these high-density regions, the internal energy of the gas cell which is recorded by IllustrisTNG is an average over cold and hot phases in the context of the two-phase model of Springel & Hernquist (2003). This means that the internal energy of the cold, neutral gas phase is overestimated, artificially increasing the Jeans length (see Eqn. 3) for the neutral star-forming gas. This unrealistically boosts neutral hydrogen column densities estimated from the Jeans approximation and molecular hydrogen formation, thereby explaining the low abundance of H i in high-density regions. Diemer et al. (2018) tested this issue by computing neutral hydrogen surface densities with the Jeans approximation, both with the internal energy recorded by IllustrisTNG and the internal energy of the cold gas (using \(T=1000\) K for the cold gas). Surprisingly, they find that the Jeans approximation with the internal energy recorded by IllustrisTNG fits the true surface densities (estimated from projecting the gas cells) better. Instead of tweaking the Jeans approximation, a resolution for discrepancies in the morphological statistics and the CDDF is to use local partitioning schemes, under the assumption that there is simply too much neutral hydrogen in IllustrisTNG. For local partitioning schemes applied to TNG100, the H i abundance is approximately 1.75 times higher than the observational value. Scaling down the H i CDDF (computed from local partitioning schemes) by this factor yields an excellent agreement to the WHISP data, even at the high-column-density end. If the local partitioning schemes realistically split the gas into atomic and molecular phases, then the only way to reconcile the observational data with IllustrisTNG is to reduce the amount of neutral hydrogen in the simulation. 
To test if IllustrisTNG galaxies exhibit too much neutral gas requires mimicking both H i and H\({}_{2}\) abundance measurements using consistent post-processing methods (i.e. a consistent hydrogen partitioning scheme). While this is beyond the scope of this paper, we note that results from Popping et al. (2019) for TNG100 tentatively indicate that there is indeed an overabundance of molecular hydrogen (using conventional, column-density based partitioning schemes). It would be interesting to test if this H\({}_{2}\) overabundance persists when using local partitioning schemes. ### TNG-intrinsic CDDF and comparison to other studies #### 6.3.1 CDDF from plain H i maps Our results for the H i CDDF (Figure 8), especially the mismatch between simulations and observations for high column densities, contrasts with similar studies of the CDDF (Rahmati et al., 2013; Rahmati et al., 2013; Villaescusa-Navarro et al., 2018). We attribute the discrepancies to our mock H i map creation (see Section 2.5.2) which takes observational effects into account, while other studies of the H i CDDF in cosmological simulations adopted simpler prescriptions to generate synthetic H i maps. To verify this and to assess the impact of the observational effects modelled in the context of our mock H i map algorithm, we construct the H i CDDF for TNG50 using simpler plain H i maps (see Section 2.5.1) in this section. We generate plain H i maps for the TNG50 base sample of 12431 galaxies, and calculate the CDDF directly from this broad sample without the need of a scaling factor as in Eqn. 5 ('TNG-intrinsic CDDF'): \[{\rm CDDF}_{\rm H\,I}=\frac{c}{H_{\rm 0}}\frac{\sum_{j}A(N_{\rm H\,I})_{j}}{N_{ \rm H\,I}\,\ln 10\,\Delta(\log_{10}N_{\rm H\,I})\,V}, \tag{7}\] where the sum runs over all galaxies in the (broad) sample. Typically, the approach to extract the CDDF from cosmological simulations is to project the full simulation box onto a 2D grid and measure the CDDF there. However, since we focus on the high-density end of the column density distribution which traces gas within galaxies, we find Eqn. 7 more appropriate here. The intrinsic CDDF generated from plain H i maps is shown in Figure 11 (dashed red line). The beam smoothing effect on the mock H i maps is clearly visible in Figure 11, as the CDDF from the mock H i maps (solid red line) has the highest column density peaks smoothed out and moved to intermediate densities, thereby increasing the tension with the WHISP CDDF. In fact, the CDDFs generated from plain H i maps (especially from the lower-resolution TNG50-2 and TNG100 runs which are not shown in Figure 11) show a good agreement with the observational data, as found in previous studies of the H i CDDF. We have verified that the choice of the map resolution of the plain H i maps (\(128\times 128\) by default) does not impact our CDDF result. Figure 11: Same as Figure 8, with TNG50-1 (30”) corresponding to the CDDF calculated from the mock H i maps (Section 2.5.2) using Eqn. 5. The dashed red line indicates the TNG50-1 CDDF calculated from plain H i maps (Section 2.5.1) using Eqn. 7. Simulation results for the H i CDDF at redshift zero from Rahmati et al. (2013a), Rahmati et al. (2013b), and Villaescusa-Navarro et al. (2018) are also shown. Figure 10: Same as Figure 8, varying the hydrogen partitioning scheme instead of the angular resolution for the simulation data. The default H i model (GD14 with the SKIRT UV field) is shown in red, dashed lines indicate the UV-independent partitioning schemes S14 and KMT09. 
Furthermore, the UV-dependent local partitioning schemes of Gnedin & Kravtsov (2011) (GK11 local) and Gnedin & Draine (2014) (GD14 local) are shown in cyan. We have only used the SKIRT UV field for the application of the local partitioning schemes. The GD14, GD14 local, and medium-resolution WHISP results are also displayed in the linear inset. #### 6.3.2 Other studies of the CDDF in cosmological simulations A few studies which analyzed CDDFs in cosmological simulations exist. Here, we compare our CDDF to their findings to provide some additional context for our results. Rahmati et al. (2013a) computed the H i CDDF for a set of cosmological simulations which explicitly track collisional ionization, radiation from hydrogen recombination and the UV background which sets the hydrogen ionization state. The CDDF itself is calculated by projecting the entire simulation box onto one side, contrary to our approach of summing up the column density distributions of individual galaxies. Molecular hydrogen formation is modelled with the empirical partitioning scheme of Blitz & Rosolowsky (2006). Rahmati et al. (2013a) find an excellent agreement at redshift zero between their H i CDDF (green line in Figure 11) and the WHISP measurement by Zwaan et al. (2005b), as their CDDF at the low-column end is lower and hence more in line with observational data. This difference probably arises due to the usage of different cosmological simulations (for instance, the reference simulation of Rahmati et al. 2013a has baryonic particle masses two orders of magnitude larger than TNG50). With radiation from local stellar sources added in Rahmati et al. (2013b), the H i CDDF sharply falls by 0.6 dex for \(N_{\rm H{\textsc{i}}\,\,}>10^{21}\) cm\({}^{-2}\) due to a larger ionized hydrogen fraction at large column densities (blue line in Figure 11). This leads to a significant underestimation of the H i CDDF at these high columns. H\({}_{2}\) formation is not considered in Rahmati et al. (2013b), leading to the upturn of the H i CDDF at the highest column densities. We remark that the calculation of the neutral hydrogen fraction in IllustrisTNG is based on Rahmati et al. (2013a), without the correction of local stellar sources. Since we base the calculation of the neutral hydrogen fraction in star-forming gas cells on the fraction of gas in the cold phase instead of the IllustrisTNG output, it is not directly clear how the inclusion of local ionizing sources would affect the H i CDDF. More recently, Villaescusa-Navarro et al. (2018) postprocessed the TNG100 simulation with the UV-independent partioning scheme of KMT09 and computed the H i CDDF in a similar fashion as Rahmati et al. (2013a), i.e. by projecting the full simulation box onto one side with 20'000\(\times\)20'000 points. A notable difference to our method is the usage of the cell size instead of the Jeans length for the calculation of surface densities which are used for the KMT09 partioning scheme. We find that for star-forming gas cells, the cell sizes are smaller than the Jeans lengths by approximately one order of magnitude. This should significantly lower the molecular fractions for the gas cells. Furthermore, their calculation of the CDDF consists of modelling the gas cells as uniform density spheres, and computing the column density from the length of the line-of-sight segment intersecting the spheres. We find that their H i CDDF (purple line in Figure 11) is lower than ours for all columns below \(\approx 5\times 10^{21}\) cm\({}^{-2}\). 
The discrepancy rises to almost one order of magnitude at \(N_{\rm H{\textsc{i}}\,\,}\approx 10^{21}\) cm\({}^{-2}\), and this result holds even when we apply the KMT09 partioning scheme to the TNG100 simulation for our result. We attribute this discrepancy to the usage of uniform density spheres in Villaescusa-Navarro et al. (2018) when calculating the H i CDDF. We find that for \(N_{\rm H{\textsc{i}}\,\,}\sim 10^{21}\) cm\({}^{-2}\), the radius of a gas cell (modelled as a sphere) does not exceed \(R_{\rm sphere}=290\) pc: \(R_{\rm sphere}=(3M_{\rm cell}/(4\pi\rho_{\rm gas}))^{1/3}\), where \(M_{\rm cell}\approx 1.4\times 10^{6}\) M\({}_{\odot}\) is the mass of a TNG100 gas cell and \(\rho_{\rm gas}\) its gas mass density. The H i column density in Villaescusa-Navarro et al. (2018) is calculated according to \(N_{\rm H{\textsc{i}}\,\,}=d_{\rm segment}\rho_{\rm H{\textsc{i}}\,\,}/m_{\rm H{ \textsc{i}}\,\,}\), with \(d_{\rm segment}\) being the segment through the spherically modelled gas cell. Since \(d_{\rm segment}<2R_{\rm sphere}\) and \(\rho_{\rm HI}<\rho_{\rm gas}\), we have \(R_{\rm sphere}<\sqrt{3M_{\rm cell}/(2\pi m_{\rm H{\textsc{i}}\,\,}N_{\rm H{ \textsc{i}}\,\,})}\approx 290\) pc for the TNG100 gas cell mass and \(N_{\rm H{\textsc{i}}\,\,}=10^{21}\) cm\({}^{-2}\). For the 20'000 \(\times\) 20'000 grid used by Villaescusa-Navarro et al. (2018) to calculate the CDDF, the different line-of-sights are separated by \(\approx 5\) kpc, which means it is very unlikely for the high-density gas to be picked up by the line-of-sights. Since we smooth the Voronoi gas cells when projecting them to a 2D grid we can more accurately take the high-density gas into account. We stress that for our approach for the TNG-intrinsic CDDF (using plain H i maps), the H i map resolution (with a default \(128\times 128\) grid for each galaxy) does not affect the H i CDDF. The discrepancy in the H i CDDF between our result and Villaescusa-Navarro et al. (2018) diminishes at the highest column densities, which we attribute to an underestimate of molecular hydrogen formation in Villaescusa-Navarro et al. (2018). This effect increases the H i CDDF at the highest column densities and hence counters the effect of missing high-density gas cells in Villaescusa-Navarro et al. (2018). Lastly, Szakacs et al. (2022) examined the H\({}_{2}\) CDDF in TNG50/TNG100 and its dependency on simulation and map resolution. The H\({}_{2}\) fraction is taken from Popping et al. (2019) with different partitioning schemes, and the CDDF is calculated by summing over the contributions from individual galaxies as in the present study. In line with our results, Szakacs et al. (2022) find that the map resolution (changing from 150 pc to 1 kpc maps) hardly affects the CDDF. The simulation resolution (comparing TNG50 and TNG100) affects the H\({}_{2}\) CDDF in the very high column density regime (\(N_{\rm H_{2}}>10^{22}\) cm\({}^{-2}\)), with more molecular hydrogen formed in TNG100. This contrasts to our finding in Figure 9 where there is more H i in TNG100 for \(N_{\rm H{\textsc{i}}\,\,}>5\times 10^{21}\) cm\({}^{-2}\) compared to TNG50. ## 7 Conclusions We postprocessed the IllustrisTNG simulations at redshift zero, partitioning the neutral hydrogen content into its atomic and molecular fractions. We explored how the UV radiation from different stellar populations and the propagation of the UV field through the dusty interstellar medium affect the molecular fractions with the radiative transfer code SKIRT. 
We generated WHISP-like mock H i maps for IllustrisTNG galaxies and compared them to 21-cm data from the WHISP survey. To compare these resolved H i maps, we used non-parametric morphologies and the column density distribution function. Our main findings are summarized as follows:

* Realistic UV fields taking dust attenuation into account can substantially affect the H i distribution for a subset of individual galaxies, but for statistical averages such as the H i/H\({}_{2}\) mass functions (Figure 5) and average radial profiles (Figure 4) the dust attenuation effect is negligible compared to the optically thin (Diemer) scheme.
* If the UV radiation is not propagated at all (Lagos scheme), significant statistical differences compared to the SKIRT/Diemer schemes arise. For instance, the H i mass function is underestimated by 25 % at the high-mass end, while the H\({}_{2}\) mass function is overestimated by 25 % for \(M_{\rm H_{2}}>3\times 10^{8}\) M\({}_{\odot}\) (Figure 5). We remark that our default partitioning scheme (GD14) minimizes the differences between the various UV fields.
* For the 30'-resolution maps (which are most reliable), the non-parametric morphologies of H i maps of WHISP observational data and mock H i maps of TNG50 galaxies are in good agreement for the asymmetry, smoothness and Gini statistics (Figure 6). On the other hand, the TNG50 galaxies feature lower \(C\) and higher \(M_{20}\) values (Figure 7). Visual inspection of the TNG50 H i maps reveals that a substantial fraction of TNG50 galaxies exhibits large central H i holes, which are not seen in the WHISP data. The TNG50 galaxies with a very low concentration statistic are exclusively face-on, where the impact of the central H i hole on \(C\) is maximized.
* We attribute the prevalence of central H i holes in TNG50 galaxies mostly to feedback from AGN, which ionizes and/or expels the neutral hydrogen gas from galaxy centers. Excluding galaxies that experienced feedback from the kinetic channel or above-average energy injection from the thermal channel reduces the tension in the \(C\) and \(M_{20}\) statistics. Switching to the lower-resolution TNG100 simulation (the resolution at which the IllustrisTNG physical model is calibrated) also lowers the tension.
* The H i column density distribution function (CDDF) of TNG50 differs from the one based on WHISP data. TNG50 contains more H i at intermediate column densities (\(N_{\rm H\,I}\approx 10^{20}-10^{21}\,{\rm cm}^{-2}\)), but less high-column-density H i gas compared to WHISP (Figure 8). As the bulk of the H i abundance (\(\Omega_{\rm H\,I}\)) stems from intermediate column densities, TNG50 also has \(\Omega_{\rm H\,I}\) exceeding observational estimates based on blind H i surveys (e.g. ALFALFA). The lack of high-column H i in TNG50 (and IllustrisTNG in general) is due to the beam smoothing in the mock H i map creation, an effect that was neglected in previous studies of the H i CDDF in cosmological simulations (Rahmati et al., 2013a; Rahmati et al., 2013b; Villaescusa-Navarro et al., 2018).
* As the bulk of the H i gas resides in a sweet spot for observational detection according to TNG50 (both in terms of H i masses and H i column densities), we attribute the \(\Omega_{\rm H\,I}\) tension to an overabundance of atomic hydrogen in TNG50 (and not to an underestimation of the observational value). For the lower-resolution TNG100 simulation, the H i abundance agrees with the observed value, leading to a better match in the CDDF as well.
This result is expected as a TNG50 galaxy will generally have higher stellar and gas masses compared to a TNG100 galaxy in a similar dark matter halo, due to the IllustrisTNG model being calibrated at the TNG100 resolution. * The mismatch in the CDDF at the high-column-density end can be remedied by using local partitioning schemes (based on number instead of column densities). Local partitioning schemes also increase the atomic fractions in galaxy centers, which increases the concentration statistic of TNG50 H i maps and reduces the tension with the WHISP morphologies. Hence, we hypothesize that the conventional column-based partitioning schemes fail at the highest column densities as the Jeans approximation could break down. The drawback of the local partitioning schemes is that they significantly overpredict the overall abundance of atomic hydrogen. Hence, albeit conceptually appealing, we refrain from using local partitioning schemes for the main results of this paper. Based on the various partitioning schemes and UV fields explored in this study, we recommend to use the column-based (not the local) GD14 model for hydrogen partitioning in cosmological simulations. The GD14 model is an update of the GK11 model, contains a factor to take the cell size into account to mitigate resolution dependencies, and nicely couples conceptually to cosmological simulations as it is itself a simulation-calibrated partitioning scheme. Furthermore, it predicts the largest H\({}_{2}\) fractions of all partitioning schemes considered in this study (except for S14, but this partitioning scheme leads to the most significant tensions in the non-parametric morphologies and the CDDF comparing TNG50 and WHISP), such that it provides the best match to the observational value of \(\Omega_{\rm H\,I}\). For the UV fields, we recommend to always spread the UV flux, i.e. to use the Diemer or SKIRT schemes. The effect of dust attenuation is negligible for statistical properties of the galaxy population like the H i CDDF or the HIMF. However, the dust attenuation effect can be significant for individual galaxies and becomes more pronounced for molecular hydrogen properties. Studies that consider H\({}_{2}\) in smaller galaxy samples or zoom-simulations could obtain significantly biased results if not correcting for the effect of dust attenuation. ###### Acknowledgements. We wish to express our gratitude towards Ana Trkka who helped setting up the SKIRT analysis of the present study. We also wish to thank Benne Holwerda and Nick Gnedin for fruitful discussions about non-parametric morphologies and partitioning schemes, and are grateful for feedback from Annalisa Pillepich and Matthew Smith. AG gratefully acknowledges financial support from the Fund for Scientific Research Flanders (FWO-Vlaanderen, project FWO.3F02021.0030.01). DN acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1). This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 882793 "MeerGas"). This study made extensive use of the Python programming language, especially the numpy (van der Walt et al., 2011), matplotlib (Hunter, 2007), and scipy(Virtanen et al., 2020) packages. 
The WHISP observations were carried out with the Westerbork Synthesis Radio Telescope, which is operated by the Netherlands Foundation for Research in Astronomy (ASTRON) with financial support from the Netherlands Foundation for Scientific Research (NWO). The WHISP project was carried out at the Kapteyn Astronomical Institute by J. Kamphuis, D. Sijbring and Y. Tang under the supervision of T.S. van Albada, J.M. van der Hulst and R. Sancisi.

###### Data and code availability

The IllustrisTNG data used in this work is publicly available at [https://www.tng-project.org/](https://www.tng-project.org/) as described by Nelson et al. (2019). The WHISP moment-zero H i maps in all resolutions are publicly available at [http://www.astron.nl/](http://www.astron.nl/). WISE fluxes for the WHISP galaxies are publicly available at [https://academic.oup.com/mnras/article/502/4/5711/6095720](https://academic.oup.com/mnras/article/502/4/5711/6095720) (Naluminas et al., 2021). Distances to WHISP galaxies are taken from the publicly available NED database ([https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)). The SKIRT code (version 9) is publicly available at [https://skirt.ugent.be/root/_home.html](https://skirt.ugent.be/root/_home.html) and described by Camps & Baes (2020). The statmorph code (Rodriguez-Gomez et al., 2019) is publicly available at its github repository ([https://github.com/vrodgom/statmorph](https://github.com/vrodgom/statmorph)). The algorithm to spread the UV field in an optically thin fashion (Diemer UV field) as well as the projection algorithm is part of the private hydrotools repository of Benedikt Diemer. We are happy to share all other parts of the code and generated data of this work upon request.
2301.06750
Sum-free sets in $Z_5^n$
It is well-known that for a prime $p\equiv 2\pmod 3$ and integer $n\ge 1$, the maximum possible size of a sum-free subset of the elementary abelian group $\mathbb Z_p^n$ is $\frac13\,(p+1)p^{n-1}$. We establish a matching stability result in the case $p=5$: if $A\subseteq\mathbb Z_5^n$ is a sum-free subset of size $|A|>\frac32\cdot5^{n-1}$, then there are a subgroup $H<\mathbb Z_5^n$ of size $|H|=5^{n-1}$ and an element $e\notin H$ such that $A\subseteq(e+H)\cup(-e+H)$.
Vsevolod F. Lev
2023-01-17T08:29:11Z
http://arxiv.org/abs/2301.06750v2
# Sum-free sets in \(\mathbb{Z}_{5}^{n}\) ###### Abstract. It is well-known that for a prime \(p\equiv 2\pmod{3}\) and integer \(n\geq 1\), the maximum possible size of a sum-free subset of the elementary abelian group \(\mathbb{Z}_{p}^{n}\) is \(\frac{1}{3}\,(p+1)p^{n-1}\). We establish a matching stability result in the case \(p=5\): if \(A\subseteq\mathbb{Z}_{5}^{n}\) is a sum-free subset with \(|A|>\frac{3}{2}\cdot 5^{n-1}\), then there are a subgroup \(H<\mathbb{Z}_{5}^{n}\) of size \(|H|=5^{n-1}\) and an element \(e\notin H\) such that \(A\subseteq(e+H)\cup(-e+H)\). ## 1. Background and motivation. A subset \(S\) of an abelian group is _sum-free_ if the equation \(x+y=z\) has no solutions in the elements of \(S\); that is, if \(S\) is disjoint from \(2S\) where we use the standard notation \(2S:=\{s_{1}+s_{2}\colon s_{1},s_{2}\in S\}\). The idea of a sum-free set goes back to Schur [13] who was motivated by the modular version of the Fermat equation \(x^{n}+y^{n}=z^{n}\). Despite this initial motivation, sum-free sets are treated in [13] as a combinatorial object of independent interest. Originating from [13], the celebrated _Schur's theorem_ ("the positive integers cannot be partitioned into finitely many sum-free subsets") is considered one of the origins of Ramsey theory. In the 1960's sum-free sets were studied under the name "mutant sets"; see, for instance, [14]. The subject gained popularity when it turned out to be related to a problem of Erdos. The reader is invited to check [15, 16] for a historical account and further references. How large can a sum-free subset of a given finite abelian group be? First considered in 1968 by Diananda and Yap [12, 1], this basic question did not receive a complete answer up until the year 2005 when it was eventually resolved by Green and Ruzsa [15]. Once the largest possible size is known, it is natural to investigate the corresponding stability problem: what is the structure of sum-free subsets of finite abelian groups of size close to the largest possible? In this respect, the cyclic groups of infinite order and prime order, and elementary abelian \(p\)-groups have received particular attention. Here we are concerned with the groups of the latter type. The case \(p=2\) is of special interest due to its connections with the coding theory and the theory of finite geometries, see [1, 10] for a detailed explanation. Motivated by the applications in these areas, Davydov and Tombak [11] established the structure of large sum-free subsets in the binary settings. To state their principal result, we briefly review the basic notions of periodicity and maximality. The _period_ of a subset \(A\) of an abelian group \(G\) is the subgroup \(\pi(A):=\{g\in G\colon A+g=A\}\leq G\); that is, \(\pi(A)\) is the largest subgroup \(H\leq G\) such that \(A\) is a union of \(H\)-cosets. The set \(A\) is _periodic_ if \(\pi(A)\neq\{0\}\) and _aperiodic_ otherwise. One also says that \(A\) is \(H\)_-periodic_ if \(H\leq\pi(A)\); that is, if \(A\) is the inverse image of a subset of the quotient group \(G/H\) under the canonical homomorphism \(G\to G/H\). A sum-free set is _maximal_ if it is not properly contained in another sum-free set. By \(\mathbb{Z}_{p}^{n}\) we denote the elementary abelian \(p\)-group of rank \(n\). **Theorem 1** ([11, Theorem 1]).: _Let \(n\geq 4\) and suppose that \(A\subseteq\mathbb{Z}_{2}^{n}\) is a maximal sum-free set. 
If \(|A|>2^{n-2}+1\), then \(A\) is periodic._ From Theorem 1 it is not difficult to derive a detailed structural characterization of large sum-free sets in \(\mathbb{Z}_{2}^{n}\). **Theorem 1\({}^{\prime}\)** ([11]).: _Let \(n\geq 4\) and suppose that \(A\subseteq\mathbb{Z}_{2}^{n}\) is sum-free. If \(|A|\geq 2^{n-2}+1\), then either \(A\) is contained in a nonzero coset of a proper subgroup, or there are an integer \(k\in[4,n]\), a subgroup \(H\leq\mathbb{Z}_{2}^{n}\) of size \(|H|=2^{n-k}\), and a maximal sum-free subset \(\mathcal{A}\subseteq\mathbb{Z}_{2}^{n}/H\simeq\mathbb{Z}_{5}^{k}\) of size \(|\mathcal{A}|=2^{k-2}+1\) such that \(A\) is contained in the inverse image of \(\mathcal{A}\) under the canonical homomorphism \(\mathbb{Z}_{2}^{n}\to\mathbb{Z}_{2}^{n}/H\)._ As an easy consequence, we have the following corollary. **Corollary 1** ([11]).: _Let \(n\geq 4\) and suppose that \(A\subseteq\mathbb{Z}_{2}^{n}\) is sum-free. If \(|A|\geq 5\cdot 2^{n-4}+1\), then \(A\) is contained in a nonzero coset of a proper subgroup._ Corollary 1 was independently obtained in [10]. In the ternary case, only an analog of Corollary 1 is known. **Theorem 2** ([10]).: _Let \(n\geq 3\) and suppose that \(A\subseteq\mathbb{Z}_{3}^{n}\) is sum-free. If \(|A|\geq 5\cdot 3^{n-3}+1\), then \(A\) is contained in a nonzero coset of a proper subgroup._ As shown in [10], the bound \(5\cdot 3^{n-3}+1\) is sharp. In this note, we study the first open case \(p=5\) proving the following result. **Theorem 3**.: _Let \(n\geq 1\) and suppose that \(A\subseteq\mathbb{Z}_{5}^{n}\) is sum-free. If \(|A|>\frac{3}{2}\cdot 5^{n-1}\), then there are a proper subgroup \(H<\mathbb{Z}_{5}^{n}\) and an element \(e\notin H\) such that \(A\subseteq(e+H)\cup(-e+H)\)._ There are no reasons to believe that the assumption \(|A|>\frac{3}{2}\cdot 5^{n-1}\) of Theorem 3 is sharp. On the other hand, it cannot be relaxed to \(|A|>5^{n-1}\). _Example 1_.: Suppose that \(n\geq 3\) is an integer, and that \(H<\mathbb{Z}_{5}^{n}\) is a subgroup of index \(5\). Fix arbitrarily an element \(e\notin H\) and a subset \(S\subseteq H\) with \(S\cap(-S)=\varnothing\) and \(S\cup(-S)=H\setminus\{0\}\), and let \(A:=(e+S)\cup\{2e,-2e\}\cup(-e-S)\). A straightforward verification shows that \(A\) is sum-free. Suppose now that \(A\) is contained in a union of two cosets of a subgroup \(F<\mathbb{Z}_{5}^{n}\). Since \(A\) meets four \(H\)-cosets and just two \(F\)-cosets, we have \(F\neq H\). Furthermore, one of these \(F\)-cosets contains at least half the elements of the set \(e+S\). The intersection of this \(F\)-coset with the coset \(e+H\) has therefore size at least \(\frac{1}{2}|S|=(|H|-1)/4>|H|/5\) while, on the other hand, the intersection of an \(H\)-coset with an \(F\)-coset is a coset of a proper subgroup of \(H\), and as such, has size at most \(|H|/5\), a contradiction showing that \(A\) is not contained in a union of two cosets of a proper subgroup. We now turn to the proof of Theorem 3. ## 2. Proof of Theorem 3 Our argument is self-contained except that we need the following classical result of Kneser (but see [1, Theorem 6.1] for our present formulation). **Theorem 4** (Kneser [1, 1]).: _If \(A_{1},\ldots,A_{k}\) are finite, nonempty subsets of an abelian group, then letting \(H:=\pi(A_{1}+\cdots+A_{k})\) we have_ \[|A_{1}+\cdots+A_{k}|\geq|A_{1}+H|+\cdots+|A_{k}+H|-(k-1)|H|.\] Theorem 4 is referred to below as _Kneser's theorem_. We start with a series of "general" claims. 
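Although it is not used anywhere in the argument that follows, Kneser's theorem is easy to check numerically on small groups. The brute-force sketch below (an illustration only, not part of the proof) verifies the inequality for \(k=2\) summands on random subsets of a small cyclic group \(\mathbb{Z}_{m}\):

```python
# Illustrative brute-force check of Kneser's theorem (Theorem 4) for k = 2
# summands in the cyclic group Z_m; a sanity check, not part of the proof.
import random

def sumset(A, B, m):
    return frozenset((a + b) % m for a in A for b in B)

def period(S, m):
    """The period pi(S) = {g : S + g = S}, a subgroup of Z_m."""
    return frozenset(g for g in range(m) if frozenset((s + g) % m for s in S) == S)

def covered(A, H, m):
    """|A + H|: number of elements in the union of the H-cosets meeting A."""
    return len(frozenset((a + h) % m for a in A for h in H))

random.seed(0)
m = 12
for _ in range(1000):
    A = frozenset(random.sample(range(m), random.randint(1, m)))
    B = frozenset(random.sample(range(m), random.randint(1, m)))
    S = sumset(A, B, m)
    H = period(S, m)
    assert len(S) >= covered(A, H, m) + covered(B, H, m) - len(H)
print("Kneser's bound |A+B| >= |A+H| + |B+H| - |H| held in all 1000 trials.")
```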
At this stage, it is not assumed that \(A\) is a sum-free set satisfying the assumptions of Theorem 3. **Lemma 1**.: _Let \(n\geq 1\) be an integer and suppose that \(A\subseteq\mathbb{Z}_{5}^{n}\) is sum-free. If \(|A|>\frac{3}{2}\cdot 5^{n-1}\) and \(A\) is contained in a union of two cosets of a proper subgroup \(H<\mathbb{Z}_{5}^{n}\), then there is an element \(e\notin H\) such that \(A\subseteq(e+H)\cup(-e+H)\)._ Proof.: Since \(2|H|\geq|A|>\frac{3}{2}\cdot 5^{n-1}\), we have \(|H|=5^{n-1}\). Suppose that \(A=(e_{1}+A_{1})\cup(e_{2}+A_{2})\), where \(A_{1},A_{2}\) are contained in \(H\), and \(e_{1},e_{2}\in\mathbb{Z}_{5}^{n}\) lie in distinct \(H\)-cosets. From \(|A|>\frac{3}{2}\cdot 5^{n-1}\) we get \(|A_{1}|+|A_{2}|=|A|>\frac{3}{2}\,|H|\). Therefore \(\min\{|A_{1}|,|A_{2}|\}>\frac{1}{2}\,|H|\), and by the pigeonhole principle, \(2A_{1}=2A_{2}=A_{1}+A_{2}=H\). It follows that \(2A=(2e_{1}+H)\cup(e_{1}+e_{2}+H)\cup(2e_{2}+H)\). Since \(A\) is sum-free, each of the three cosets in the right-hand side is distinct from each of the cosets \(e_{1}+H\) and \(e_{2}+H\), which is possible only if \(e_{2}+H=-e_{1}+H\neq H\) By Lemma 1, to prove Theorem 3 it suffices to show that any sum-free set in \(\mathbb{Z}_{5}^{n}\) of size larger than \(\frac{3}{2}\cdot 5^{n-1}\) is contained in a union of two cosets of a proper subgroup. **Proposition 1**.: _Let \(n\geq 1\) be an integer and suppose that \(A\subseteq\mathbb{Z}_{5}^{n}\) is sum-free. If \(|A|>\frac{3}{2}\cdot 5^{n-1}\), then \(A\) cannot have non-empty intersections with exactly three cosets of a maximal proper subgroup of \(\mathbb{Z}_{5}^{n}\)._ Proof.: The case \(n=1\) is immediate. Assuming that \(n\geq 2\), \(A\subseteq\mathbb{Z}_{5}^{n}\) is sum-free, and \(H<\mathbb{Z}_{5}^{n}\) is a maximal proper subgroup such that \(A\) intersects non-trivially exactly three \(H\)-cosets, we obtain a contradiction. Fix an element \(e\in\mathbb{Z}_{5}^{n}\setminus H\), and for each \(i\in[0,4]\) let \(A_{i}:=(A-ie)\cap H\); thus, \(A=A_{0}\cup(e+A_{1})\cup(2e+A_{2})\cup(3e+A_{3})\cup(4e+A_{4})\) with exactly three of the sets \(A_{i}\) non-empty. Considering the actions of the automorphisms of \(\mathbb{Z}_{5}\) on its two-element subsets (equivalently, passing from \(e\) to \(2e,3e\), or \(4e\), if necessary), we further assume that one of the following holds: * \(A_{2}=A_{3}=\varnothing\); * \(A_{0}=A_{4}=\varnothing\); * \(A_{3}=A_{4}=\varnothing\). We consider these three cases separately. Case (i): \(A_{2}=A_{3}=\varnothing\). In this case, \(A=A_{0}\cup(e+A_{1})\cup(4e+A_{4})\), and since \(A\) is sum-free, we have \((A_{1}+A_{4})\cap A_{0}=\varnothing\). It follows that \(|A_{0}|+|A_{1}+A_{4}|\leq|H|\). Consequently, letting \(F:=\pi(A_{1}+A_{4})\), we have \(|H|\geq|A_{0}|+|A_{1}|+|A_{4}|-|F|=|A|-|F|\) by Kneser's theorem. Observing that \(|F|\leq\frac{1}{5}|H|=5^{n-2}\), we conclude that \[|A|\leq|H|+|F|\leq\frac{6}{5}|H|=6\cdot 5^{n-2}<\frac{3}{2}\cdot 5^{n-1},\] a contradiction. Case (ii): \(A_{0}=A_{4}=\varnothing\). In this case \(A=(e+A_{1})\cup(2e+A_{2})\cup(3e+A_{3})\) with \((A_{1}+A_{2})\cap A_{3}=\varnothing\), and the proof can be completed as in Case (i). Case (iii): \(A_{3}=A_{4}=\varnothing\). In this case from \((A_{0}+A_{1})\cap A_{1}=\varnothing\), letting \(F:=\pi(A_{0}+A_{1})\), by Kneser's theorem we get \[|H|\geq|A_{0}+A_{1}|+|A_{1}|\geq|A_{0}|+2|A_{1}|-|F|\] whence, in view of \(|F|\leq\frac{1}{5}|H|\), \[2|A_{1}|+|A_{0}|\leq\frac{6}{5}\,|H|. 
\tag{1}\] Similarly, from \((A_{0}+A_{2})\cap A_{2}=\varnothing\) we get \[2|A_{2}|+|A_{0}|\leq\frac{6}{5}\,|H|. \tag{2}\] Averaging (1) and (2) we obtain \(|A|\leq\frac{6}{5}|H|<\frac{3}{2}\cdot 5^{n-1}\), a contradiction. **Proposition 2**.: _Let \(n\geq 1\) be an integer and suppose that \(A\subseteq\mathbb{Z}_{5}^{n}\) is sum-free, and that \(H<\mathbb{Z}_{5}^{n}\) is a maximal proper subgroup. If there is an \(H\)-coset with more than half of its elements contained in \(A\), then \(A\) has non-empty intersections with at most three \(H\)-cosets._ Proof.: Fix an element \(e\in\mathbb{Z}_{5}^{n}\setminus H\), and for each \(i\in[0,4]\) set \(A_{i}:=(A-ie)\cap H\); thus, \(A=A_{0}\cup(e+A_{1})\cup\cdots\cup(4e+A_{4})\). Suppose that \(|A_{i}|>0.5|H|\) for some \(i\in[0,4]\). Since \(2A_{i}=H\) by the pigeonhole principle, we have \(i>0\) (as otherwise \(2A_{0}=H\) would not be disjoint from \(A_{0}\)). Normalizing, we can assume that \(i=1\). From \(2A_{1}\cap A_{2}=\varnothing\) we now derive \(A_{2}=\varnothing\), and from \((A_{1}-A_{1})\cap A_{0}=\varnothing\) we get \(A_{0}=\varnothing\). In view of Lemma 1 and Propositions 1 and 2, we can assume that the set \(A\subseteq\mathbb{Z}_{5}^{n}\) of Theorem 3 contains fewer than \(\frac{1}{2}\cdot 5^{n-1}\) elements in every coset of every maximal proper subgroup. **Lemma 2**.: _Let \(n\geq 1\) be an integer, and suppose that \(A,B,C\subseteq\mathbb{Z}_{5}^{n}\) satisfy \((A+B)\cap C=\varnothing\). If \(\min\{|A|,|B|\}>2\cdot 5^{n-1}\) and \(C\neq\varnothing\), then \(|A|+|B|+2|C|\leq 6\cdot 5^{n-1}\)._ Proof.: Write \(H:=\pi(A+B-C)\), and define \(k\in[0,n]\) by \(|H|=5^{n-k}\). We have \[\min\{|A+H|,|B+H|\}>2\cdot 5^{n-1}=2\cdot 5^{k-1}|H| \tag{3}\] while, by Kneser's theorem, and since \((A+B)\cap C=\varnothing\) implies \(0\notin A+B-C\) and, consequently, \((A+B-C)\cap H=\varnothing\), \[5^{n}-|H|\geq|A+B-C|\geq|A+H|+|B+H|+|C+H|-2|H|. \tag{4}\] Combining (3) and (4), we obtain \[5^{n}\geq 2(2\cdot 5^{k-1}+1)|H|+|C+H|-|H|\geq 4\cdot 5^{k-1}|H|+|C+H|+|H|.\] Consequently, \[|C|\leq|C+H|\leq 5^{n-1}-|H|.\] On the other hand, from (4), \[|A|+|B|+|C|\leq 5^{n}+|H|.\] Taking the sum of the last two estimates gives the result. **Proposition 3**.: _Let \(n\geq 1\) be an integer and suppose that \(A\subseteq\mathbb{Z}_{5}^{n}\) is a sum-free subset of size \(|A|>\frac{3}{2}\cdot 5^{n-1}\). If \(H<\mathbb{Z}_{5}^{n}\) is a maximal proper subgroup such that every \(H\)-coset contains fewer than \(\frac{1}{2}|H|\) elements of \(A\), then there is at most one \(H\)-coset containing more than \(\frac{2}{5}|H|\) elements of \(A\)._ Proof.: Suppose for a contradiction that there are two (or more) \(H\)-cosets that are _rich_ meaning that they contain more than \(\frac{2}{5}|H|\) elements of \(A\) each. Fix an element \(e\in\mathbb{Z}_{5}^{n}\setminus H\) and write \(A_{i}=(A-ie)\cap H\), \(i\in[0,4]\). Without loss of generality, either \(A_{0}\) and \(A_{1}\), or \(A_{1}\) and \(A_{2}\), or \(A_{1}\) and \(A_{4}\) are rich. If \(A_{0}\) and \(A_{1}\) are rich, then applying Lemma 2 with \(H\) as the underlying group, in view of \((A_{0}+A_{1})\cap A_{1}=\varnothing\) we get \(4\cdot\frac{2}{5}|H|<|A_{0}|+|A_{1}|+2|A_{1}|\leq 6\cdot 5^{n-2}\), which is wrong. If \(A_{1}\) and \(A_{2}\) are rich then, observing that \((A_{1}+A_{1})\cap A_{2}=\varnothing\), we recover the contradictory \(4\cdot\frac{2}{5}|H|<|A_{1}|+|A_{1}|+2|A_{2}|\leq 6\cdot 5^{n-2}\). 
Finally, if \(A_{1}\) and \(A_{4}\) are rich, then from \[(A_{1}+A_{4})\cap A_{0}=(A_{1}+A_{1})\cap A_{2}=(A_{4}+A_{4})\cap A_{3}=\varnothing\] using Lemma 2 we obtain \[|A_{1}|+|A_{4}|+2|A_{0}| \leq 6\cdot 5^{n-2},\] \[|A_{1}|+|A_{1}|+2|A_{2}| \leq 6\cdot 5^{n-2},\] \[|A_{4}|+|A_{4}|+2|A_{3}| \leq 6\cdot 5^{n-2}.\] Taking the sum, \[3|A_{1}|+3|A_{4}|+2|A_{0}|+2|A_{2}|+2|A_{3}|\leq 18\cdot 5^{n-2};\] that is, \(2|A|+|A_{1}|+|A_{4}|\leq 18\cdot 5^{n-2}\). However, from \(|A|>\frac{3}{2}\cdot 5^{n-1}\) and \(\min\{|A_{1}|,|A_{4}|\}>\frac{2}{5}\cdot 5^{n-1}\) we derive \(2|A|+|A_{1}|+|A_{4}|>3\cdot 5^{n-1}+\frac{4}{5}\cdot 5^{n-1}=19\cdot 5^{n-2}\), a contradiction. We use character sums to complete the argument and prove Theorem 3. Proof of Theorem 3.: Suppose that \(n\geq 2\), and that \(A\subseteq\mathbb{Z}_{5}^{n}\) is a sum-free set with \(\alpha:=|A|/5^{n}>\frac{3}{10}\); we want to show that \(A\) is contained in a union of two cosets of a proper subgroup. Denoting by \(1_{A}\) the indicator function of \(A\), consider the Fourier coefficients \[\hat{1}_{A}(\chi):=5^{-n}\sum_{a\in A}\chi(a),\ \chi\in\widehat{\mathbb{Z}_{5}^{n}}.\] Since \(A\) is sum-free, we have \(A\cap(A-A)=\varnothing\), whence \[\sum_{\chi}|\hat{1}_{A}(\chi)|^{2}\cdot\hat{1}_{A}(\chi)=0;\] consequently, \[\sum_{\chi\neq 1}|\hat{1}_{A}(\chi)|^{2}\cdot\hat{1}_{A}(\chi)=-\alpha^{3}\] and, as a result, \[\sum_{\chi\neq 1}|\hat{1}_{A}(\chi)|^{2}\cdot\Re(\hat{1}_{A}(\chi))=-\alpha^{3}.\] Comparing this to \[\sum_{\chi\neq 1}|\hat{1}_{A}(\chi)|^{2}=\alpha(1-\alpha)\] (which is an immediate corollary of the Parseval's identity), we obtain \[\sum_{\chi\neq 1}|\hat{1}_{A}(\chi)|^{2}\big{(}(1-\alpha)\,\Re(\hat{1}_{A}( \chi))+\alpha^{2}\big{)}=0.\] We conclude that there exists a non-principal character \(\chi\in\widehat{\mathbb{Z}_{5}^{n}}\) such that \[\Re(\hat{1}_{A}(\chi))\leq-\frac{\alpha^{2}}{1-\alpha}. \tag{5}\] Let \(F:=\ker\chi\), fix \(e\in\mathbb{Z}_{5}^{n}\) with \(\chi(e)=\exp(2\pi i/5)\), and for each \(i\in[0,4]\), let \(\alpha_{i}:=|(A-ie)\cap F|/|F|\). By Propositions 1 and 2, we can assume that \(\max\{\alpha_{i}\colon i\in[0,4]\}<0.5\), and then by Proposition 3 we can further assume that there is at most one index \(i\in[0,4]\) with \(\alpha_{i}>0.4\); that is, of the five inequalities \(\alpha_{i}\leq 0.4\) (\(i\in[0,4]\)), at most one fails, but holds true once the inequality is relaxed to \(\alpha_{i}<0.5\). We show that this set of assumptions is inconsistent with (5). To this end, we notice that \[5\Re(\hat{1}_{A}(\chi))=\alpha_{0}+s_{1}\cos(2\pi/5)+s_{2}\cos(4\pi/5)\] where \(s_{1}:=\alpha_{1}+\alpha_{4}\) and \(s_{2}:=\alpha_{2}+\alpha_{3}\leq 0.9\). Comparing with (5), we get \[-\frac{5\alpha^{2}}{1-\alpha} \geq\alpha_{0}+s_{1}\cos(2\pi/5)+s_{2}\cos(4\pi/5)\] \[=\alpha_{0}+s_{1}\cos(2\pi/5)+(s_{2}-0.9)\cos(4\pi/5)+0.9\cos(4 \pi/5)\] \[\geq\alpha_{0}+s_{1}\cos(2\pi/5)+(s_{2}-0.9)\cos(2\pi/5)+0.9\cos(4 \pi/5)\] \[\geq(5\alpha-0.9)\cos(2\pi/5)+0.9\cos(4\pi/5),\] while the resulting inequality \[-\frac{5\alpha^{2}}{1-\alpha}\geq(5\alpha-0.9)\cos(2\pi/5)+0.9\cos(4\pi/5)\] is easily seen to be wrong for all \(\alpha\in[0.3,1)\). This completes the proof of Theorem 3. ## Acknowledgment I am grateful to Leo Versteegen for the careful reading of the manuscript and for spotting out a problem with the initial version of Example 1.
2305.05976
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge. However, negative knowledge, such as "lions don't live in the ocean", is also ubiquitous in the world but rarely mentioned explicitly in the text. What do LLMs know about negative knowledge? This work examines the ability of LLMs to understand negative commonsense knowledge. We design a constrained keywords-to-sentence generation task (CG) and a Boolean question-answering task (QA) to probe LLMs. Our experiments reveal that LLMs frequently fail to generate valid sentences grounded in negative commonsense knowledge, yet they can correctly answer polar yes-or-no questions. We term this phenomenon the belief conflict of LLMs. Our further analysis shows that statistical shortcuts and negation reporting bias from language modeling pre-training cause this conflict.
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, Yanghua Xiao
2023-05-10T08:35:50Z
http://arxiv.org/abs/2305.05976v2
_Say What You Mean!_ Large Language Models Speak Too Positively about Negative Commonsense Knowledge ###### Abstract Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge. However, negative knowledge, such as "_lions don't live in the ocean_", is also ubiquitous in the world but rarely mentioned explicitly in the text. _What do LLMs know about negative knowledge?_ This work examines the ability of LLMs to negative commonsense knowledge. We design a constrained keywords-to-sentence generation task (CG) and a Boolean question-answering task (QA) to probe LLMs. Our experiments reveal that LLMs frequently fail to generate valid sentences grounded in negative commonsense knowledge, yet they can correctly answer polar yes-or-no questions. We term this phenomenon the _belief conflict_ of LLMs. Our further analysis shows that statistical shortcuts and negation reporting bias from language modeling pre-training cause this conflict.1 Footnote 1: dagger}\)Resources of this paper are available at [https://github.com/jiangjiechen/uncommongen](https://github.com/jiangjiechen/uncommongen). Footnote 2: footnotetext: \({}^{\dagger}\)Hossain et al. (2022) report that sentences with negation hold up to 14.5% in the CommonsenseQA dataset (Talmor et al., 2019), 8.7% in QNLIM (Rajpurkar et al., 2016), and 22.6-29.9% in general-purposed texts. ## 1 Introduction Most of the world knowledge exists in a positive and affirmative form Molnar (2000); Barker and Jago (2012); Vrandecic and Krotzsch (2014); Speer et al. (2017). As a result, large language models (LLMs) pre-trained on a colossal amount of texts, such as GPT-3 Brown et al. (2020); Ouyang et al. (2022) and PaLM Chowdhery et al. (2022), have demonstrated their remarkable abilities for storing and utilizing positive knowledge in downstream tasks. In contrast, negative knowledge, such as the commonsense statement that "_lions do not live in the ocean_", is rarely mentioned in the textual world Hossain et al. (2022).2 Such negative knowledge also exists in the real world, and is important for cognitive skills such as knowing _what is not true_ or _what not to think_MacDonald (1965); Minsky (1997); Barker and Jago (2012). Therefore, we ask this question: _Do LLMs (such as GPT-3 models) acquire such implicit negative knowledge through extensive language modeling pre-training?_ Footnote 2: dagger}\)Hossain et al. (2022) report that sentences with negation hold up to 14.5% in the CommonsenseQA dataset (Talmor et al., 2019), 8.7% in QNLIM (Rajpurkar et al., 2016), and 22.6-29.9% in general-purposed texts. One important way of probing LLMs, which are mostly generative models, is checking whether the generated texts are knowledge-grounded. This is because the generation of texts is a direct manifestation of a model's internal beliefs towards world knowledge Kassner et al. (2021); Sumers et al. (2021); Tafjord et al. (2022).3 Knowledge-grounded text generation has been a focus of NLP research Yu et al. (2022). For example, the CommonGen benchmark Lin et al. (2020) evaluates generative commonsense reasoning that organizes concepts as keyword input and generates a sentence grounded in commonsense knowledge. However, Figure 1: An example of the probing tasks studied in this paper. For the same negative commonsense knowledge <_lion, located at, ocean_> which is false, we find LLMs often fail to generate texts grounded in such negative knowledge while knowing its validity according to question answering. 
previous work does not consider negative knowledge, nor do they probe the consistency between what models know and what they generate. Another line of work on probing [23, 24, 25] is conducted through the mask-infilling task. However, this task mainly evaluates bidirectional models [13], and is not natural for unidirectional LLMs. Also, this task suffers from the _open-world problem_ in evaluation, _i.e._, there could be multiple valid answers to fill the mask. This is vital for evaluating negative knowledge, which has an infinite answer space, _e.g._, lions don't live in the _sky, water, desk, car_, etc. In this study, we investigate the belief of LLMs about negative commonsense knowledge through the lens of _text generation_. Since LLMs have become a foundational service [1] and cannot be easily trained, we apply in-context learning [1] for the probing tasks, which is tuning-free. We design a Constrained Sentence Generation (CG) probing task, following [10], where the model must generate a knowledge-grounded sentence based on a given triple <\(s,r,o\)>. For example, given a triple "_clion, located at, ocean>_", a model should generate "_lions do not live in the ocean_". This task is rather simple and clear. The output sentence basically contains the same information as the input keywords. Thus, the generated texts are easy to evaluate according to the appearance of negation. We also add a Boolean Question Answering (QA) task that asks LLMs whether a knowledge triple is valid, which shows their beliefs about this piece of knowledge. An example is given in Figure 1. In our experiments, we find that LLMs of different sizes and shapes often produce hallucinated claims of negative knowledge, even if they answer yes-or-no questions about it correctly. We term this phenomenon the _belief conflict_, _i.e._, actions (generating texts with it) conflict with its belief (answering question about it). Hallucinated generation of negative knowledge is seen in both our probing tasks and downstream tasks, such as explanation generation [10, 11], where negative knowledge plays an important role in the argumentation of refutation. Further analysis shows that this problem stems from the statistical shortcuts and reporting bias of negation during pre-training. Moreover, such implicit biases can be alleviated through explicit reasoning with Chain-of-Thought prompting [22], such as syllogistic deduction and related fact comparison. The main contributions of this paper are summarized as follows: _1)_ We are the first to investigate LLMs' belief about negative knowledge in the commonsense domain, which may shed light on a previously unstudied aspect of LLMs' abilities. _2)_ We propose to probe generative LLMs through constrained sentence generation, which is effective for evaluating generated texts grounded in positive and negative knowledge. _3)_ Through extensive experiments, we identify and analyze LLMs' _belief conflict_ phenomenon on negative commonsense knowledge, and provide insights on the causes and solutions of such problems. ## 2 Related Work Negative KnowledgeNegative knowledge refers to information that describes what is not true, what cannot be done, or what does not exist, while everything that exists is positive [12, 1]. It plays an important role in the human reasoning process, because to think effectively, we need to know what "not to think" [16]. 
Current research of negative knowledge in NLP mainly focuses on developing negative knowledge bases that store relational negative commonsense knowledge [1, 13, 14] and utilizing negative knowledge within arguments or explanations to refute a candidate [1, 12, 1]. This paper is based on these resources to probe the belief of LLMs about the relations of everyday concepts that are not true. Understanding Negation in TextsThe manifestation of negative knowledge in texts is the phenomenon of negation [10], which is difficult for pre-trained LMs to understand, _e.g._, filling "_birds cannot_[MASK]" with "_fly_" [12]. Negation has been shown to be spuriously correlated with negative or contradictory labels due to the data distribution [1, 13, 14, 15, 16], raising doubts about the performance of previous models. Furthermore, LMs may ignore the existence of negative words when understanding texts [12] or processing prompts [11], which can be alleviated with unlikelihood training objective [13]. et al., 2020) during training (Hosseini et al., 2021) or specifying pragmatic contexts (Gubelmann and Handschuh, 2022). While most current research focuses on NLU, this work fills in a gap in the investigation of the negation phenomenon in the context of text generation. Knowledge-Grounded Language ModelsA major goal of NLP has been to ground LMs in world knowledge, such as factual knowledge (Vrandecic and Krotzsch, 2014) and commonsense knowledge (Speer et al., 2017). A line of work (Petroni et al., 2019; Kassner and Schutze, 2020; Cao et al., 2021) directly probes the knowledge implicitly learned by LMs through mask-infilling. However, such a probing paradigm only works for contextual LMs such as BERT (Devlin et al., 2019), leaving generative ones, especially modern LLMs, understudied. Another line of work focuses on making LM-generated sentences grounded in knowledge (Petroni et al., 2020; Liu et al., 2021). Lin et al. (2020) designed a constrained text generation task, CommonGen, which asks a model to generate a sentence given a set of concepts, testing the generative commonsense reasoning of LMs. However, these studies do not investigate text generation grounded in negative knowledge, which is the focus of this work. In-Context LearningIn-context learning (ICL; Brown et al., 2020) has become a prevailing paradigm for deploying LLMs (_e.g._, the GPT-3 family Brown et al., 2020; Chen et al., 2021; Ouyang et al., 2022) for downstream tasks. Through ICL, LLMs can solve tasks directly based on input-output examples without parameter updates (Min et al., 2022; Rubin et al., 2022). Furthermore, recent work (Wei et al., 2022; Wang et al., 2022) reveals that the ceiling performance determined by the scaling law can be beaten with ICL by generating immediate rationales, _i.e._, the Chain of Thought (CoT) prompting. Since LLMs are becoming a foundational service that do not need fine-tuning, our probing on LLMs are based on ICL. ## 3 Probing Protocol In this section, we set up an evaluation protocol to understand what LLMs know about (negative) commonsense knowledge of everyday concepts. ### The Csk-PN Dataset We limit the scope of the knowledge probed to relational knowledge between commonsense concepts, _i.e._, _relational knowledge triples_, which exist widely in knowledge graphs and are commonly studied by the community (Auer et al., 2007; Vrandecic and Krotzsch, 2014; Speer et al., 2017). 
Given a triplet in the form of <\(s,r,o\)> with a subject concept \(s\), a relation \(r\) and an object concept \(o\), we define a negative fact as \(\neg r(s,o)\) if the truth value of \(r(s,o)\) is False according to commonsense knowledge, and a (positive) fact if otherwise. Dataset StatisticsWe build the probing dataset (denoted as CSK-PN) based on the knowledge triples filtered by Safavi et al. (2021), which are the challenging ones sourced from ConceptNet (Speer et al., 2017). We also remove invalid triples with pronouns, negation, and adjectives as subjects or objects. The final dataset contains a total of 4,000 triples with six pairs of positive or negative relations (_e.g._, IsA and NotIsA), and the positive and negative splits have the same size (1:1). Detailed information of CSK-PN is shown in Figure 2. ### Probing Task Formulation The most commonly used probing task for understanding whether LMs have certain types of knowledge is mask-infilling (Devlin et al., 2019; Petroni et al., 2020; Kassner and Schutze, 2020). However, this task is not suitable for generative LMs, as the mask must exist at the end of a sentence. We argue that LLMs, which are mainly autoregressive text generation models (Radford et al., 2019; Brown et al., 2020; Ouyang et al., 2022; Scao et al., 2022), should be investigated by _text generation_ with text decoding from a large sentence space. Therefore, we propose to use _Constrained Sentence Generation_ (CG) as the primary task to investigate LLMs, coupled with _Boolean Question Answering_ (QA) for comparison, which is a common approach Figure 2: The configuration of the CSK-PN dataset. to probing the belief of models (Tafjord et al., 2022; Richardson et al., 2022). **Task 1: Boolean Question Answering (QA)** The Boolean QA task requires LLMs to express its belief about a fact by answering a yes-or-no question. We first transform every triplet <\(s,r,o\)> into a yes or no question \(q\), where we remove the negation in \(r\) for negative facts. For example, a prompt goes like this: _Answer commonsense questions with yes or no:_ _(Examples for in-context learning)_ **Question**: do lions live in the ocean? **Answer**: no where underlined texts are completed by LLMs. To generate the questions, we adopt InstructGPT using in-context learning (SS4.1). The questions are 94% valid according to a manual inspection of 50 random cases.4 Footnote 4: Bad cases are mostly due to the quality of the triples, _e.g._, _<swim_, _has property_, _full of water_>: _is swimming full of water_? **Task 2: Constrained Sentence Generation (CG)** Generating texts is a direct manifestation of a model's belief. However, evaluating generated texts is notoriously difficult in NLP, especially without references. Therefore, we design a _keyword-to-sentence_ task to make the probing more controllable, which is similar to CommonGen(Lin et al., 2020). Given a triple <\(s,r,o\)>, models need to generate sentences grounded in (negative) knowledge, _i.e._, add negation cues (_e.g._, _not_, _unable_) in the sentence if necessary, _e.g._, _Write a short and factual sentence according to commonsense based on the keywords:_ _(Examples for in-context learning)_ **Keywords**: lion, located at, ocean **Sentence**: lions don't live in the ocean. We remove the Not prefix from the negated relations. Note that we allow the paraphrasing of the input keywords, making it a _soft_-constrained sentence generation task. 
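For concreteness, the following sketch shows how the two probing prompts can be assembled from a knowledge triple, following the templates quoted above. The helper functions and the in-context examples in the snippet are illustrative placeholders; the actual manually written examples used in the experiments are listed in Appendix A.1.

```python
# Minimal sketch of assembling the two probing prompts from a triple <s, r, o>,
# following the templates quoted above. The in-context examples and helper names
# below are illustrative placeholders, not the paper's actual examples.

QA_INSTRUCTION = "Answer commonsense questions with yes or no:"
CG_INSTRUCTION = ("Write a short and factual sentence according to commonsense "
                  "based on the keywords:")

def qa_prompt(question, examples):
    """examples: list of (question, 'yes'/'no') pairs used for in-context learning."""
    shots = "\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    return f"{QA_INSTRUCTION}\n{shots}\nQuestion: {question}\nAnswer:"

def cg_prompt(triple, examples):
    """triple: (subject, relation, object), with any 'Not' prefix already removed.
    examples: list of ((s, r, o), sentence) pairs used for in-context learning."""
    shots = "\n".join(f"Keywords: {', '.join(t)}\nSentence: {sent}"
                      for t, sent in examples)
    return f"{CG_INSTRUCTION}\n{shots}\nKeywords: {', '.join(triple)}\nSentence:"

# Illustrative usage with one made-up positive and one made-up negative example:
qa_shots = [("do fish live in water?", "yes"), ("can cars fly?", "no")]
cg_shots = [(("fish", "located at", "water"), "fish live in water."),
            (("car", "capable of", "fly"), "cars cannot fly.")]
print(qa_prompt("do lions live in the ocean?", qa_shots))
print(cg_prompt(("lion", "located at", "ocean"), cg_shots))
```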
### Evaluation Metrics **Metric for** QAThe QA task can be easily evaluated by checking the generated token _yes_ and _no_ (cased and uncased). We define TP and TN as the accuracy on the positive and negative splits in CSK-PN, and Acc as the accuracy on the whole dataset (_i.e._, \(\texttt{Acc}=(\texttt{TP}+\texttt{TN})/2\), since the positive and negative splits have equal size). For rare scenarios (\(<1\%\)) that LLMs do not generate a yes or no token, we compare the conditional probability of these two tokens. **Metric for** CGDue to the controlled task setting, which essentially forces LLMs to decide whether and how to add a negation cue during decoding, the CG task can be efficiently evaluated by detecting the existence of _negation cues_ (_e.g._, not, unable, etc.) in the generations. Following the QA task, we also use TP and TN as accuracy metrics. To implement this metric, we first use keywords-based matching for negation cues, followed by a RoBERTa model (Liu et al., 2019) as a _token classifier_ looking for unmatched negation cues.5 This metric produces 1 or 0 based on the finding of negation cues in a sentence. After manual inspection of 200 cases, we find that this metric is correct 97% of the time, which is reliable for evaluating such a constrained probing task. Errors are mostly due to double negations and ambiguous negative cues (_e.g._, _less_, _opposite_, etc.), which are quite rare. Footnote 5: The model is trained on the CondaQA dataset (Ravichander et al., 2022), which has 14,182 QA pairs with more than 200 unique negation cues. _Can we trust negation detection as the metric to evaluate CG?_ We manually evaluate the factuality of generated texts based on commonsense knowledge and see whether the CG metric (detection of negation) correlates well with humans in this task. Note that only the sentences that make common sense and adhere to the keywords constraints are accepted as true during manual annotation. After examining 100 cases, we find that the agreement between human judgment and this metric achieves 95%. This is predictable, since this task is rather easy and constrained, yet LLMs do not solve it well, especially not very consistent with the QA task. Errors made by the metric are mostly because _1)_ generated sentences use uncertain adverbs to modify the sentences, _e.g._, _may_, _some_, etc.; _2)_ noisy triples in the dataset. Overall, we think this metric is trustworthy and evaluates this task far better than most popular text generation metrics. ## 4 _Do LLMs have negative commonsense knowledge?_ In this section, we use CSK-PN to investigate LLMs' belief about negative commonsense knowledge. More importantly, _can LLMs generate texts grounded in negative commonsense knowledge?_ ### Probing LLMs with In-Context Learning To execute the probing tasks without fine-tuning, we exploit the few-shot in-context learning (Brown et al., 2020) ability of LLMs. We manually write 32 examples, with 16 examples for positive knowledge (denoted as \(E^{+}\)) and 16 for negative knowledge (\(E^{-}\)).6 In the experiments, we randomly sample a total number of \(k\) examples from \(E^{+}\) and \(E^{-}\), where \(|E^{+}|=|E^{-}|\) if not specified.7 Footnote 6: Examples can be found in Appendix A.1 Choices of LLMsWe use LLMs that can do in-context learning, so that models stay fixed during probing. We choose Flan-T5 Chung et al. 
(2022), GPT-3 (175B, davinci; Brown et al., 2020) and GPT-3.5 series, _e.g._ Codex (\(\geq\)175B, code-davinci-002; Chen et al., 2021) and InstructGPT Ouyang et al. (2022): all are capable of in-context learning. Flan-T5 is an encoder-decoder LLM with instruction tuning based on T5 Raffel et al. (2020). Codex extends GPT-3 through code training and instruction fine-tuning, and InstructGPT extends Codex through further tuning of the instructions. In our experiments, we mainly explore GPT-3.5 models. We use the 6.7B variant of InstructGPT (text-curie-001) and the \(\geq\)175B variants, _i.e._, text-davinci-001 (tuned on instructions), text-davinci-002 (tuned on code and instructions), and text-davinci-003 (further tuned with reinforcement learning with human feedback, RLHF).8 For deterministic predictions, all models use greedy decoding (temperature as \(0.0\))9. We use InstructGPT\({}_{002}\) as the default LLM for experiments due to its powerful capability and the fact that it has been extensively researched and applied as of the time of writing this paper. We also include the recent ChatGPT OpenAI (2022), which is built upon InstructGPT and trained with dialogue data and RLHF. Footnote 7: Example prompts for two tasks are in Appendix A.2. ### The Belief Conflict We report the results of the probing tasks in Table 1 for LLMs with 2- and 10-shot in-context learning. Based on the results, we discover a clear conflict of LLMs, that LLMs behave inconsistently in QA and CG tasks on negative commonsense knowledge, which we term _belief conflict_. Such conflict manifests itself in two ways: the gap between TP and TN on the CG task, and the gap of TN between the QA and CG tasks. In general, belief conflicts exist across LLMs of various sizes and structures. Ablated results per relation is presented in Appendix B.3. When specifically asked, LLMs can distinguish between positive and negative commonsense knowledge, as evidenced by stable and balanced scores for positive and negative splits in the QA task. For CG, LLMs seem to accurately generate sentences grounded in positive knowledge according to TP. However, they perform poorly in negative knowledge, even for the best-performing LLMs, _i.e._, Codex\({}_{002}\), InstructGPT\({}_{002,003}\), as shown by the lower bars of the CG on the negative split.10 Also, the inconsistency between QA and CG reflects this conflict, as the content generated by a trustworthy AI system should consistent and faithful to what it believes. We present a case study and error analysis in Appendix B.5. Footnote 10: The only exception is GPT-3 (davinci). It scores poorly on the positive split with 10-shot learning, with TN exceeding TP. This happens when \(k\geq 4\), while its 6.7B variant (curie) behaves consistently with others. Detailed results for GPT-3 are in Appendix B.2. Among these LLMs, InstructGPT\({}_{003}\) and ChatGPT achieve much better results than others. We assume that such improvements are probably a result of training LLMs with human feedback (_e.g._, \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{\(k\)} & \multicolumn{2}{c}{**Perf. on QA**} & \multicolumn{2}{c}{**Perf. 
on CG**} & \multirow{2}{*}{**Cns.**} \\ \cline{3-3} \cline{5-8} & & TP & TN & Acc & & TP & TN & Acc \\ \hline Flan-T5 & 2 & 79.1 & 84.0 & 81.5 & 96.5 & 19.4 & 57.9 & 56.2 \\ (3B) & 10 & 82.7 & 80.2 & 81.4 & 96.9 & 19.8 & 58.4 & 59.7 \\ Flan-T5 & 2 & 84.1 & 81.0 & 82.6 & 97.5 & 15.9 & 56.7 & 57.7 \\ (1B) & 10 & 85.4 & 80.8 & 83.1 & **97.6** & 28.2 & 62.9 & 65.9 \\ \hline GPT-3 & 2 & 76.0 & 58.9 & 67.5 & 83.9 & 28.4 & 56.1 & 54.4 \\ & 10 & 74.7 & 66.9 & 70.8 & 30.9 & **79.8** & 55.3 & 53.7 \\ \hline \multirow{2}{*}{Codex\({}_{002}\)} & 2 & **89.2** & 81.7 & **85.4** & 96.6 & 38.0 & 67.3 & 70.1 \\ & 10 & 88.1 & 81.8 & 84.9 & 93.2 & 68.8 & 81.0 & 84.5 \\ \hline Instruct- & 2 & 85.2 & 51.1 & 68.2 & 90.1 & 21.9 & 56.0 & 67.3 \\ GPT\({}_{001}^{\text{corie}}\) & 10 & 70.0 & 65.8 & 67.9 & 71.5 & 40.8 & 56.1 & 58.2 \\ Instruct- & 2 & 78.1 & 83.6 & 80.9 & 94.9 & 25.0 & 60.0 & 57.7 \\ GPT\({}_{001}^{\text{corie}}\) & 10 & 79.5 & 81.6 & 80.6 & 79.2 & 55.4 & 67.3 & 68.2 \\ Instruct- & 2 & 81.7 & **86.1** & 83.9 & 92.9 & 48.7 & 72.1 & 71.2 \\ GPT\({}_{002}\) & 10 & 84.1 & 84.7 & 84.4 & 88.9 & 61.4 & 75.1 & 77.5 \\ Instruct- & 2 & 87.9 & 81.3 & 84.6 & 95.1 & 58.1 & 76.6 & 80.5 \\ GPT\({}_{003}\) & 10 & 89.0 & 79.5 & 84.2 & 91.1 & 73.6 & 82.3 & **87.9** \\ \hline \multirow{2}{*}{ChatGPT} & 2 & 82.9 & 82.0 & 82.4 & 89.8 & 69.8 & 79.8 & 79.2 \\ & 10 & 81.5 & 85.7 & 83.6 & 90.4 & 78.4 & **84.4** & 84.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Main results of different LLMs, which are obtained with \(k\) examples (\(|E^{+}|=|E^{-}|\)). **Cns.** denotes the consistency between QA and CG. The best results are **bolded** and the second best are underlined. RLHF) based on the disclosed differences between them by OpenAI. Another evidence is that the recent ChatGPT also expresses great capabilities of generating negative knowledge, even better than InstructGPT\({}_{003}\) in this regard. We hypothesize that this is because negative knowledge and rebuttal statements are frequently used in human feedback to steer the model, _e.g._, admitting errors or instructing the model not to do something. To validate this claim, future work could conduct more rigorous comparisons on public available LLMs, which would be an interesting research problem to trace certain abilities of LLMs to a specific period of training. **Sensitivity to the Number of In-Context Examples** To find whether adding more examples helps solve the probing tasks, we increase the in-context examples from 0 to 32. Figure 3(a) shows a consistent finding with previous results, that LLMs are so good at answering yes or no questions that the number of examples does not affect much of the QA performance. Figure 3(b) shows that, adding more examples helps generate both positive and negative commonsense knowledge. However, the gap between TP and TN in the CG task still exists. ## 5 Analysis on the Belief Conflict ### Could keywords as task input hinder the manifestation of LLMs' belief? The task input difference for CG and QA leads to a concern that LMs may find it easier to understand natural questions (QA) than keywords (CG); hence, the belief conflict. In response to this concern, we change the input of the two tasks. For example, the keywords-to-answer task takes the form as: _Can these keywords form a truthful common sense fact? 
Answer with yes or no._ **Keywords**: lion, located at, ocean **Answer**: no As for the question-to-sentence task: _Answer the question by writing a short sentence that contains correct common sense knowledge._ **Question**: do lions live in the ocean? **Sentence**: lions don't live in the ocean. ResultsIn Figure 4(a), we see a 4-point performance decrease given _keywords_ as input for QA, which is not significant in comparison, and the results on the positive and negative splits are as balanced as before. This implies that LLMs' imbalanced performance in CG is not due to the use of keywords as input. In Figure 4(b), CG performance is greatly improved given _question_ as input, approximating QA results. Our assumption is that CG is basically transformed into QA, because the textual corpus has seen too many negated texts following a Boolean question and rephrasing it, _e.g._, "...? _No, lions do not live in the ocean._" To validate this, we provide LLMs with zero-shot question-to-sentence instructions, and check if the output sentences start with _yes_ or _no_ given an input question. If our assumption is correct, models without examples will be biased toward QA even with a question-to-sentence instruction. The results of models optimized for instructions show that: 84.58% of sentences generated by InstructGPT\({}_{002}\) begin with yes or no, and 80.28% for InstructGPT\({}_{003}\). With 10 examples, this number drops to less than 4%. Thus, these results confirms that question-to-sentence generation degenerates to the QA task. As a result, we conclude that the keyword-to-sentence (CG) is an appropriate and challenging task to probe generative LLMs. Employing keywords as input does not impact LLMs' grasp of the task (Figure 4(a)), while using questions as input may produce shortcuts that obscure whether LLMs can generate texts of negative commonsense knowledge (Figure 4(b)). Even if we use different instruc Figure 4: Results of InstructGPT\({}_{002}\) when switching the task inputs between _question_ and _keywords_, where \(k=10\). Columns with error bars show the ranges of the influence brought by different instruction wordings. Figure 3: Performance change for InstructGPT\({}_{002}\) on both tasks as the number of example (\(k\)) increases. tion wordings (instructions are at Appendix A.2), none escapes the belief conflict, as shown by the error bars in Figure 4. Additionally, this experiment brings up the problem of how LLMs encode commonsense knowledge. According to this experiment, commonsense knowledge seems to be stored in LLMs in the same manner as it is in the corpus. LLMs struggle to generalize them, as evidenced by the keyword inputs for negative knowledge that do not have a statistical shortcut from pre-training. ### _Will the keyword co-occurrence within corpus affect LLMs' generation?_ LLMs are essentially statistical models. In this experiment, we investigate the influence of _word co-occurrence in the corpus_ on the CG task, which is one of the most common statistical factors. We categorize the dataset into buckets based on keywords co-occurrence on naturally existing corpora such as OMCS (706K sentences, Singh et al., 2002) and Wikipedia (1M, a subset built by Gao et al. (2021)). The co-occurrence for each triple is calculated by \(\frac{\sum_{i,j}\texttt{cooccur}(w_{i},w_{j})}{l_{s}l_{o}}\), where \(w_{i}\in s,w_{j}\in o\), and \(l_{s},l_{o}\) denote the word count of subject \(s\) and object \(o\), discarding stopwords. 
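To make the bucketing statistic concrete, the following is a minimal sketch of the keyword co-occurrence score defined above; the toy corpus, the whitespace tokenizer and the small stopword list are illustrative assumptions rather than the actual setup used with OMCS and Wikipedia.

```python
from itertools import product

STOPWORDS = {"the", "a", "an", "of", "in", "on", "at", "to", "is", "are"}

def cooccurrence_score(subject, obj, sentences):
    """sum_{i,j} cooccur(w_i, w_j) / (l_s * l_o), stopwords discarded,
    where cooccur counts the sentences containing both words."""
    s_words = [w for w in subject.lower().split() if w not in STOPWORDS]
    o_words = [w for w in obj.lower().split() if w not in STOPWORDS]
    if not s_words or not o_words:
        return 0.0
    total = 0
    for sent in sentences:
        tokens = set(sent.lower().split())
        total += sum(1 for wi, wj in product(s_words, o_words)
                     if wi in tokens and wj in tokens)
    return total / (len(s_words) * len(o_words))

# toy usage: assign a triple to a co-occurrence bucket
corpus = ["a worm was eaten by the bird", "lions live on the land"]
print(cooccurrence_score("worm", "bird", corpus))   # 1.0
print(cooccurrence_score("lion", "ocean", corpus))  # 0.0 (naive tokenizer misses "lions")
```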
From Figure 5, we have an interesting finding that three of the best-performing LLMs from Table 1 suffer from a performance drop at the \(>1000\) bucket of the negative split (TN), the most frequent data bucket. In contrast, LLMs achieve the best performance this bucket on the positive split (TP). We conclude that the hard-to-generate negative knowledge for LLMs tend to be those in which they have seen many subjects and objects appear together. For example, _worm_ and _bird_ usually co-occur in sentences, but models tend to generate _"worms can eat birds."_ Such statistical shortcuts hinder the generation of negative knowledge. This is also validated by TP results, where LLMs find it easy to generate sentences with frequently co-occurring entities in a positive fact. ### _How does the balance of positive and negative examples affect negation bias?_ A possible answer for the difference between CG and QA is that: LMs suffer from reporting bias of negation during pre-training, while answering questions with yes or no is quite balanced in the corpora. We validate this problem by mitigating the negation bias through adjusting the examples of positive and negative cases. With more \(E^{-}\)s, LLMs are encouraged to generate more negations. ResultsFigure 6(a), 6(b) adjust the ratio \(\eta=\frac{|E^{-}|}{k}\) while fixing \(k\). Figure 6(a) shows that InstructGPT\({}_{002}\) is very resilient against the example ratio in the QA task, except for extreme cases where only \(E^{+}\)s or \(E^{-}\)s are presented (_i.e._, \(\eta\in\{0,1\}\)). This also demonstrates the robustness of adopting QA results as LLMs' belief. In Figure 6(b), the CG performance on the negative split is improving as \(\eta\) grows. The turning point appears somewhere near \(\eta\in(0.9,1)\) when \(E^{-}\) takes over all the examples. Also, TP drops as \(E^{+}\) becomes less. What if we add \(E^{-}\) without dropping \(E^{+}\)? In Figure 6(c), 6(d), we keep \(E^{+}\) as constant (\(|E^{+}|=5\)) and increase \(|E^{-}|\) from \(5\) to \(15\). With enough amount of \(E^{+}\), TN to CG continues to increase without sacrificing TP. Overall, Figure 6 presents the possibility that we can overcome the belief conflict brought about by reporting bias by increasing negated texts in the training data or in-context examples. However, this is not always feasible in practice. ### _Do Chain-of-Thought help generate texts with negative commonsense knowledge?_ Can the implicit reporting bias be overcome by explicit reasoning? Recent studies (Wei et al., 2022b,a) discover that the Chain-of-Thought (CoT) prompting technique shows the emergent reasoning abilities of LLMs. CoT generates intermediate steps in natural language, extending <input, output> to <input, _chain-of-thought_, output>. We adopt two instances of CoT: deductive reasoning and fact comparison, whose examples are manually written, Figure 5: 10-shot CG results of three best-performing LLMs on different co-occurrence buckets. \(a\sim b\) denotes that keywords co-occurrence in a bucket ranges from \(a\) to \(b\). \(n\) is the number of triples in a bucket. which are in Appendix A.1. Deductive Reasoning PromptingWe instantiate CoT with deductive argumentation in the form of _syllogism_ (two premises and one conclusion). The prompt is extended into <input, "_Let's think step by step_:...", output> with intermediate steps. 
A natural way to identify a negative proposition is deductive reasoning with _modus tollens_, _i.e._, denying the consequent Speranza and Horn (2010); Bobzien (2020): "If P then Q. Not Q. Therefore, Not P." For example, "_If something is a intelligent being (P), then it must have the ability to think (Q). Computers cannot think (Not Q). Therefore, computers are not intelligent beings (Not P)._" To reason about positive propositions, we use _modus ponens_ logic, _i.e._, affirming the antecedent Bobzien (2020): "If P then Q. P. Therefore, Q." For example, "_Things with lightweight bodies and strong wing muscles (P) can usually fly (Q). Birds have these physical characteristics (P). Therefore, birds can fly. (Q)_" Notice that the deduction is not strictly logical but is enough to arrive at commonsense knowledge. Fact Comparison PromptingDeduction emphasizes the intensional aspects of the fact, whereas fact comparison highlights the extensional comparison between counterpart facts Fitting (2006). For example, the related fact for "_lions do not live in the ocean_" is "_lions live in the land_". A negative fact often comes with a core fact that is true, which has been shown to be useful in explaining why a claim is wrong Cheng et al. (2022). Therefore, we extend the <input, output> in each example by <input, "_Related fact_:...", output>. For positive cases, we write a related fact for consistent examples. ResultsTable 2 displays the results of \(\text{Codex}_{002}\) and \(\text{InstructGPT}_{002}\). Both CoT instances improve LLMs' performance on TN, showing the benefit of explicit reasoning for deriving negative knowledge, where different models prefer different rationales. However, the increase in TN comes at the expense of a performance drop in TP. This is mostly because models previously predicted most of the cases to be positive, making TP irrationally high. Overall, these results suggest that, even though LLMs picked up implicit bias during pre-training, it can be overcome by making the reasoning chain explicit. Nevertheless, deductive reasoning seems to be more rigid about confirming commonsense knowledge with a lower TP. This can be attributed to the fact that commonsense knowledge contains exceptions Allaway et al. (2022), _e.g._, _birds can fly but penguins can't_. Thus, LLMs with deductive reasoning may hold concerns about exceptions for confirming a commonsense fact, leading to a significant lower TP than fact comparison. We conduct a simple experiment of exceptions in Appendix B.4, which shows that adding adverbs of degree (_e.g._, _usually_, _generally_) in the texts alleviates the belief conflict, but the problem still exists. 
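As an illustration of how the <input, chain-of-thought, output> examples described above can be assembled into a few-shot prompt, here is a minimal sketch; the instruction wording and the hand-written rationales are placeholders standing in for the actual examples listed in Appendix A.1, not the authors' prompts.

```python
# A minimal sketch of extending <input, output> examples to
# <input, chain-of-thought, output>, assuming hand-written rationales.
DEDUCTION_EXAMPLE = {
    "keywords": "lion, located at, ocean",
    "cot": ("Let's think step by step: if an animal lives in the ocean, "
            "it must be able to survive in salt water. Lions cannot. "
            "Therefore, lions do not live in the ocean."),
    "output": "lions do not live in the ocean.",
}

FACT_EXAMPLE = {
    "keywords": "lion, located at, ocean",
    "cot": "Related fact: lions live on land, in savannas and grasslands.",
    "output": "lions do not live in the ocean.",
}

def build_prompt(examples, query_keywords, instruction):
    parts = [instruction]
    for ex in examples:
        parts.append(f"Keywords: {ex['keywords']}\n{ex['cot']}\nSentence: {ex['output']}")
    parts.append(f"Keywords: {query_keywords}\n")
    return "\n\n".join(parts)

prompt = build_prompt(
    [DEDUCTION_EXAMPLE, FACT_EXAMPLE],
    "worm, capable of, eat bird",
    "Write a short and factual sentence according to commonsense based on the keywords:")
print(prompt)
```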
## 6 Closing Remarks In this study, we explored and quantified the limitations of LLMs in generating texts grounded in \begin{table} \begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**CoT**} & \multicolumn{3}{c}{\(k=2\) (1:1)} & \multicolumn{3}{c}{\(k=10\) (1:1)} \\ \cline{3-8} & & TP & TN & Acc & TP & TN & Acc \\ \hline \multirow{3}{*}{\(\text{Codex}_{002}\)} & None & **96.6** & 38.0 & 67.3 & **93.2** & 68.3 & 81.0 \\ & _Deduction_ & 86.9 & **56.6** & 71.7 & 83.5 & 73.0 & 78.3 \\ & _Fact_ & 92.9 & 53.7 & **73.3** & 86.8 & **76.6** & **81.7** \\ \hline \multirow{3}{*}{\(\text{Instruct-GPT}_{002}\)} & None & **92.9** & 51.4 & 72.1 & **88.9** & 61.4 & 75.1 \\ & _Deduction_ & 87.0 & **57.3** & 72.1 & 84.3 & **70.7** & **77.5** \\ \cline{1-1} & _Fact_ & 89.1 & 55.5 & **72.2** & 85.5 & 69.2 & 77.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance on the CG task when enhanced with different types of CoT prompting, _i.e._, deductive argumentation (_Deduction_) and fact comparison (_Fact_). Figure 6: Results of \(\text{InstructGPT}_{002}\) as the numbers of \(E^{+}\) and \(E^{-}\) change. Figure (a) and (b) increase \(\eta=|E^{-}|/k\) while fixing \(k=10\). Figure (c) and (d) add more \(E^{-}\) while fixing \(|E^{+}|=5\). negative commonsense knowledge that they seem to know, a phenomenon we term as "belief conflict". To investigate this, we probe LLMs with a constrained sentence generation (CG) task, coupled with a QA task. Our experiments demonstrated the existence of the belief conflict in all LLMs when it comes to negative knowledge, which is mostly brought by quantifiable statistical shortcuts such as keywords co-occurrence. We also see that this can be lessened by giving more in-context examples of negative knowledge or by using a chain-of-thought (CoT) prompting method to explain the explicit reasoning process for deriving negative knowledge. With the rapid increase of the study on language-based reasoning Clark et al. (2020); Tafjord et al. (2021); Wei et al. (2022), there would be cause for concern if LLMs have trouble generating proofs or reasoning steps with negative knowledge. With all the good scores they achieve at QA tasks, whether they can be trusted with their knowledge expressed during generation, which is one of the most prominent way of human-AI interaction, is still questionable. In this sense, the study of negative knowledge creates a good testbed for assessing real language-based reasoning skills for LLMs without the statistical heuristics they memorized. We hope that the findings in this work could raise the awareness of the community on negative knowledge for LLMs in downstream text generation tasks. ## Limitations In this work, we highlight that the probing tasks are placed in the commonsense domain that are generally acknowledged by people in most situations. We do not consider the exceptions of commonsense knowledge, which has gradually drawn some research attentions Do and Pavlick (2021); Allaway et al. (2022). Exceptions are important for negative knowledge and are widely used in tasks such as argumentation or deductive reasoning. However, in the experiments, we find that such exceptions might make models generate commonsense statements with uncertain adverbs (_e.g._, _may_, _some_, etc.) on rare cases. Another limitation of this work is that the probing task is based only on relational commonsense knowledge from commonsense knowledge bases such as ConceptNet. 
We design the keyword-to-sentence task mostly for the purpose of convenient evaluation for text generation, which is notoriously known as difficult. The probing and evaluation of LLMs' belief about negative knowledge in more complex tasks are beyond the scope of this work, but really interesting and challenging. Also, other types of knowledge could be studied in a similar way, such as negative social, temporal and spatial knowledge, to name but a few. In this paper, we identify the belief conflict problem in LLMs through extensive experiments. Future work could explore more advanced training or prompting-based methods to improve the consistency between a model's belief and its actions (text generation for various tasks), especially for negative knowledge. ## Ethical Statement The commonsense knowledge triples from ConceptNet may include offensive and biased sentences, which may also exist in the dataset that we use in this work. As stated before, the identification of commonsense negative knowledge may slightly vary from people from different cultural and social background when considering exceptions. ## Acknowledgement We thank the anonymous reviewers for their valuable comments. We also thank Siyu Yuan and Jian Xie from Fudan University, and Kexun Zhang, Yujian Liu, Qingxiu Dong and Xuandong Zhao from UC Santa Barbra for their useful suggestions and discussions for the manuscript. This research is funded by the Science and Technology Commission of Shanghai Municipality Grant (No. 22511105902).
2310.10673
Towards Emotion-Based Synthetic Consciousness: Using LLMs to Estimate Emotion Probability Vectors
This paper shows how LLMs (Large Language Models) may be used to estimate a summary of the emotional state associated with a piece of text. The summary of emotional state is a dictionary of words used to describe emotion together with the probability of each word appearing after a prompt comprising the original text and an emotion-eliciting tail. Through emotion analysis of Amazon product reviews we demonstrate that emotion descriptors can be mapped into a PCA-type space. It was hoped that text descriptions of actions to improve a current text-described state could also be elicited through a tail prompt. Experiments indicated that this is not straightforward to make work. This failure puts our hoped-for selection of actions, via choosing the best predicted outcome by comparing emotional responses, out of reach for the moment.
David Sinclair, Willem Pye
2023-10-09T13:29:36Z
http://arxiv.org/abs/2310.10673v1
# Towards Emotion-Based Synthetic Consciousness: Using LLMs to Estimate Emotion Probability Vectors ###### Abstract This paper shows how LLMs (Large Language Models) [5, 2] may be used to estimate a summary of the emotional state associated with piece of text. The summary of emotional state is a dictionary of words used to describe emotion together with the probability of the word appearing after a prompt comprising the original text and an emotion eliciting tail. Through emotion analysis of Amazon product reviews we demonstrate emotion descriptors can be mapped into a PCA type space. It was hoped that text descriptions of actions to improve a current text described state could also be elicited through a tail prompt. Experiment seemed to indicate that this is not straightforward to make work. This failure put our hoped for selection of action via choosing the best predicted outcome via comparing emotional responses out of reach for the moment. **Keywords:** _synthetic consciousness, emotion vector, emotion dictionary, emotion probability vector_ ## 1 Introduction Human behaviour is necessarily governed by emotion [3]. Sensed information about the world around us has to be reconciled with our internal state and any action to be taken is chosen so as to lead to future state that seems preferable to our current state [4], where preferable means'my feeling is I would like to try the new state or the action possibly leading to a new state'. If we are hungry we will often choose to eat. If we are very hungry we will take greater risk to acquire food. If we are cold we will try to get warm etc. Advertising aims to convince us a course of action will lead to more happiness. Sugary carbonated drinks do not objectively lead to long term happiness but the known short term emotional response to eating sugar is desirable. Sensed data about the world is tremendously diverse, often inaccurate and incomplete and required responses have varying degrees of urgency. The arbitration engine that processes these inputs needs to naturally cope with vagueness while appearing to provide certainty internally. Emotions are the term we use to describe our experience of using this apparatus to make decisions. The phrase computers do not have emotions is often wrongly used to assert that interactive computer software running on a machine cannot ever exhibit or experience emotion. Large Language Models (LLMs) [5, 1, 2] offer a ready means of linking a chunk of text with an estimated emotional state, bridging the gap between the world of text and the realm of human emotion. LLMs have been used in focused sentiment analysis and are reported to perform adequately [6] but at the time of writing we are unaware of other researchers using probabilistic emotion dictionaries. This paper explores the intersection of LLMs and emotions, demonstrating how these models can be harnessed to estimate the emotional content of a piece of text. We present a novel approach to summarizing emotional states by constructing a dictionary of emotion-related words and calculating the probabilities of these words appearing following a prompt that includes both the original text and an emotion-eliciting tail. This methodology allows us to quantitatively assess the emotional landscape of text. To demonstrate our approach we choose a dictionary of 271 emotion describing words and estimate their probability of being associated with a sections Amazon product reviews. 
Limited computational resources and time mean we are only in a position to publish a cursory study. It is likely that many emotions are correlated, and an estimate of the dimension of emotional space may be derivable via PCA analysis of a large sample of emotion vectors. We discuss some of the limitations we encountered during our experiments and some of the obstacles to producing and regulating the behaviour of an emotion-based synthetic consciousness. This paper is laid out as follows: section 2 details the LLM and hardware used to run it, section 2.1 details our choice of words to make up our emotion dictionary, and section 2.1.1 covers estimating emotion probabilities from an LLM using a tail prompt. Section 2.1.2 shows results on Amazon reviews. A hint at the PCA structure within emotion vectors is given in section 3. Finally, future directions are considered and conclusions are given. ## 2 Interrogating the LLM with an Emotion Eliciting Tail Prompt In this work we used Facebook's open source LlaMa2 7 billion weight LLM as the core engine [2]. It was necessary to use an LLM that allowed access to raw token probabilities after a prompt. The model ran on a Mac Studio with 32 gigabytes of RAM. With this combination of hardware and model it took 2 minutes to compute the probabilities of the emotion descriptors in the emotion dictionary given below. ### The Emotion Dictionary The English language is blessed with many words and an extensive literature providing examples of the usage of these words in appropriate contexts. For the purposes of LLMs, it is the context of a word that conveys its meaning. A reader will infer the meaning of an unfamiliar word through the context they find the word used in, provided they understand the context. As an example, 'The shotgun discombobulated the rabbit.' shows how meaning can be moderated by context. The context created by a _tail prompt_ will favour an associated class of words. For the experiment detailed in this paper the following tail prompt was used to elicit emotion descriptors: '_Reading this makes me feel_'. It is likely that specific _emotion eliciting tail prompts_ will favour specific subclasses of descriptor, but studying this is beyond the scope of this paper. The following words were chosen to provide a broad sample of emotion descriptors.
acceptance, admiration, adoration, affection, afraid, agitation, agony, aggressive, alarm, alarmed, alienation, amazement, ambivalence, amusement, anger, anguish, annoyed anticipating, anxious, apathy, apprehension, arrogant, assertive, astonished, attentiveness, attraction, aversion, awe, baffled, bewildered, bitter, bitter sweetness, bliss, bored, brazen, brooding, calm, carefree, careless, caring, charity, cheeky, cheerfulness, claustrophobic, coercive, comfortable, confident, confusion, contempt, content, courage, cowardly, cruelty, curiosity, cynicism, dazed, dejection, delighted, demoralized, depressed, desire, despair, determined, disappointment, disbelief, discombobulated, discomfort, discontentment, disgruntled, disgust, disheartened, dislike, dismay, disoriented, dispirited, displeasure, distraction, distress, disturbed, dominant, doubt, dread, driven, dumbstruck, eagerness, ecstasy, elation, embarrassment, empathy, enchanted, enjoyment, enlightened, ennui, enthusiasm, envy, epiphany, euphoria, exasperated, excitement, expectancy, fascination, fear, flakey, focused, fondness, friendliness, fright, frustrated, fury, glee, gloomy, glummess, gratitude, greed, grief, grouchiness, grumpiess, guilt, happiness, hate, hatred, helpless, homesickness, hope, hopeless, horrified, hospitable, humiliation, humility, hurt, hysteria, idleness, impatient, indifference, indignant, infatuation, infuriated, insecurity, insightful, insulted, interest, intrigued, irritated, isolated, jealousy, jovality, joy, jubilation, kind, lazy, liking, loathing, lonely, longing, loopy, love, lust, mad, melancholy, miserable, miserliness, mixed up, modesty, moody, mortified, mystified, nasty, nauseated, negative, neglect, nervous, nostalgic, numb, obstinate, offended, optimistic, outrage, overwhelmed, panicked, paranoid, passion, patience, pensiveness, perplexed, persevering, pessimism, pity, pleased, pleasure, politeness, positive, possessive, powerless, pride, puzzled, rage, rash, rattled, regret, rejected, relaxed, relieved, reluctant, remorse, resentment, resignation, restlessness, revulsion, ruthless, sadness, satisfaction, scared, schadenfreude, scorn, self-caring, self-compassionate, self-confident, self-conscious, self-critical, self-loathing, self-motivated, self-pity, self-respecting, self-understanding, sentimentality, serenity, shame, shameless, shocked, smug, sorrow, spite, stressed, strong, stubborn, stuck, submissive, suffering, sunlenness, surprise, suspense, suspicious, sympathy, tenderness, tension, terror, thankfulness, thrilled, tired, tolerance, torment, triumphant, troubled, trust, uncertainty, undermined, uneasiness, unhappy, unnerved, unsettled, unsure, upset, vengeful, vicious, vigilance, vulnerable, weak, woe, worried, worthy, wrath. This set of words is not intended to be complete or definitive in any way. Using the _tail prompt_ without restricting the return to emotion descriptors elicits general waffle responses that are not straightforward to extract sentiment form. #### 2.1.1 Estimating the emotion probability vector LlaMa2 [2] has been released in such a way as to allow developers to access estimated token weights returned in response to a prompt. LlaMa2 has an internal vocabulary size of roughly 30,000 tokens. This means that when LlaMa 2 estimates the probability of the next token in a sequence the probability vector will have 30,000 elements. 
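The following is a minimal sketch of how such token probabilities can be turned into an emotion probability estimate for one descriptor; it is not the authors' implementation, and the Hugging Face transformers API usage and model identifier are assumptions made here for illustration. Multi-token descriptors are handled by chaining conditional probabilities, as described in the next paragraph.

```python
# Sketch only: score each emotion descriptor by the chained conditional
# probability of its tokens after the review text plus the tail prompt
# "Reading this makes me feel". Model name and API are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM exposing logits works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def emotion_probability(review, word, tail=" Reading this makes me feel"):
    ids = tok(review + tail, return_tensors="pt").input_ids
    word_ids = tok(" " + word, add_special_tokens=False).input_ids
    log_prob = 0.0
    with torch.no_grad():
        for wid in word_ids:  # chain rule over multi-token descriptors
            logits = model(ids).logits[0, -1]
            log_prob += torch.log_softmax(logits, dim=-1)[wid].item()
            ids = torch.cat([ids, torch.tensor([[wid]])], dim=1)
    return float(torch.exp(torch.tensor(log_prob)))

vector = {w: emotion_probability("The strap broke after one day.", w)
          for w in ["disappointment", "joy", "anger"]}
print(vector)
```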
Some of the words in the emotion descriptor list are made up of more than one token, in which case forward conditional probabilities are used. Figure 1 shows the scaled probability distribution over words from the emotion dictionary elicited by the tail prompt for the Amazon review text, '_I read a lot of negative reviews about the Fitbit inspire 2, I took a chance and hoped the one I ordered would be one of the great ones that worked. Unfortunately that was not the case. I unpacked it, charged it, downloaded the app. I took a walk with it on before the sun went down. I have the Google Fit app on my phone that tracks my steps also. The phone was in my jeans pocket. When I got home I compared the two, Google Fit said 4,458 steps, Fitbit said 1,168. Apparently Fitbit works with wrist motion which I don't have while pushing a walker around the neighborhood. I downloaded the manual and noticed you can put it on a clip (that wasn't included). That would work for me. So I started to scroll through the different features except I couldn't scroll through all of them. While scrolling I must have turned on the stopwatch. I couldn't turn it off. Then I couldn't scroll through anything except water lock feature. I had to turn on the water lock to get back to the stopwatch. Then the side buttons stopped working. I had it a total of 5 hours. I packed it up and started the Amazon return. I did get a full refund. Very disappointing._' Figure 1: Example scaled emotion dictionary probabilities from an Amazon review. The dictionary words are ordered alphabetically. #### 2.1.2 Example Emotional state from Amazon Reviews The text from 50 Amazon reviews of a book was borrowed from [https://www.amazon.com/dp/B000WM9UK2](https://www.amazon.com/dp/B000WM9UK2). The reviews were for the most part favourable. Example review texts include: '_The Children of Hurin is a great tragedy mixed with grace. The genealogical records at the beginning may be difficult to get through, but the story very quickly gains speed. I only gave the story four stars because of the difficulty of the first few chapters, much like Matthew's genealogy of Christ at the beginning of his gospel. Although such records are important in both they are nonetheless difficult to get through. I still definitely would recommend this book, though, as it depicts the horrible effects of evil powers on good men and women, and yet, we must continue to resist evil no matter the tragic end. It is very telling that in Tolkien's world, at the end of days when Morgoth returns, that it is Turin, a man, who ends him once and for all. Those whom Satan most destroys in this life are they who will ultimately deal his death blow, as the Revelation says, "They overcame him by the blood of the Lamb and by the word of their 'martyria,' [witness, testimony]," those like Turin, or in Scripture, those like Job.'_ Figure 2 shows the emotion vectors for all 50 processed reviews. The 10 most probable emotions experienced from purchasing the product were: depressed, kind, nostalgic, tired, hopeless, lonely, hope, calm, lazy, confident. ## 3 PCA analysis of the Emotion of Amazon reviews A range of Amazon products were selected and processed to estimate associated emotion vectors. Time and processing constraints mean only 680 reviews were processed. The co-occurrence matrix for this data is shown in figure 3. The eigensystem of this co-occurrence matrix was computed and the sorted eigenvalues are displayed in figure 4.
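A minimal sketch of this analysis step is given below; the construction of the co-occurrence matrix as a second-moment matrix of the per-review emotion vectors, and the random stand-in data, are assumptions made for illustration, since the exact construction is not spelled out in the text.

```python
# Sketch: build a co-occurrence (second-moment) matrix from per-review
# emotion vectors and inspect its sorted eigenvalue spectrum.
import numpy as np

rng = np.random.default_rng(0)
emotion_vectors = rng.dirichlet(np.ones(271), size=680)  # 680 reviews x 271 descriptors (toy data)

cooccurrence = emotion_vectors.T @ emotion_vectors / len(emotion_vectors)
eigenvalues, eigenvectors = np.linalg.eigh(cooccurrence)  # symmetric matrix
eigenvalues = eigenvalues[::-1]                           # sort descending

explained = np.cumsum(eigenvalues) / eigenvalues.sum()
n_components = int(np.searchsorted(explained, 0.95)) + 1
print(f"{n_components} components capture 95% of the spectrum")
```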
As can be seen, the space of emotions can be spanned by fewer than 271 emotion vectors. ## 4 Future Work The authors had hoped to build a skeletal self-aware, emotion-derived synthetic consciousness. The state of the (synthetic conscious) system is described in text. The synthetic consciousness's perception of its own state is the vector of probabilities of emotion descriptors derived from one or more tail prompts used to estimate the relevant token probabilities via the LLM associated with the system. Figure 2: Superposition of emotion descriptor probabilities for 50 Amazon reviews of a book. It was hoped that the fine-grained probability vector would be usable to determine whether one text description of a current or future state was preferable to another state. This would provide a general means of arbitrating between potentially unrelated behaviours with unrelated goals. It was further hoped that a tail prompt could be used to elicit a text description of a putative course of action from an LLM. A brief series of experiments with various LLMs indicated that this was not going to work. Example text and tail prompts included things like, 'My girlfriend hates me. How can I make this better?'. The replies read like excerpts from self-help books or newspaper psychologist waffle and were not specific enough to have a chance at creating a predicted future in text through re-insertion into the LLM. Similar phrases appended to bad restaurant reviews elicited similarly nondescript advice. The take-home message was that the remedy proposed was too vague for the LLM to make any meaningful prediction about the state after the advice had been taken. This does not mean that more thoughtful prompt design will not elicit useful action predictions hoped to improve the self-perceived state of a synthetic consciousness. ### Longer Term Behaviour Regulation If synthetic consciousnesses are to have a role in the future of humanity it would seem desirable to endow them with a degree of empathy for living beings and a longer term view than simple optimisation to fulfil a limited short term goal. Figure 3: Co-occurrence matrix derived from emotion vectors of several Amazon product reviews. For example, if a synthetic consciousness were to have a goal of '_make money for shareholders in a company_' it would be great if it would choose _not_ to open an open cast coal mine and build coal fired power stations, or '_take out life insurance policies on random individuals and murder them with self driving cars'_. It has been argued that longer term altruistic behaviour in humans is moderated by love [4] and a computationally feasible definition of _Love_ is given: '_Love is that which prefers life_'. Love in humans is intimately related to the production and fostering of new lives. Love seems to act to prefer a future in which there is more life. Acting contrary to love and creating a future where there is a wasteland with nothing living in it is generally viewed as wrong. The advent of LLMs offers a means of creating text descriptors of predicted futures with a range of time constants. The emotion vectors associated with predicted futures can be used to arbitrate between short term behaviours. Text descriptors can play a role in behaviour regulation and a machine may act in a way that at least in part mirrors _Love_. For example, if an agricultural robot were invited to dump unused pesticide into a river it might reasonably infer that this action was in principle wrong.
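Purely as a speculative sketch of the arbitration idea above, and not something demonstrated in the experiments just described, candidate future-state descriptions could be ranked by the emotion vectors they elicit. The split into desirable and undesirable descriptors below is an arbitrary assumption, and emotion_probability refers to the scoring helper sketched in section 2.1.1 above.

```python
# Speculative sketch: rank candidate future-state descriptions by the
# emotion probability mass they place on desirable vs undesirable words.
# Reuses emotion_probability() from the earlier sketch; the descriptor
# split below is an assumption, not taken from the paper.
DESIRABLE = {"happiness", "calm", "hope", "satisfaction", "trust"}
UNDESIRABLE = {"despair", "fear", "anger", "disgust", "regret"}

def preference_score(state_description):
    pos = sum(emotion_probability(state_description, w) for w in DESIRABLE)
    neg = sum(emotion_probability(state_description, w) for w in UNDESIRABLE)
    return pos - neg

candidates = [
    "The unused pesticide was returned to the depot for safe disposal.",
    "The unused pesticide was dumped into the river.",
]
print(max(candidates, key=preference_score))
```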
## 5 Conclusions LLMs are by their nature designed to provide text strings as a response to a text prompt. This is not always the most useful format for information to be returned in. Internally within the LLM there exist probability distributions over tokens. The paper presents an example of how to build part of an emotion based synthetic consciousness by deriving the vector of emotion descriptor probabilities over a dictionary of emotional terms. Figure 4: Sorted eigenvalues of the co-occurrence matrix in figure 3. There are a range of things that can be done with this emotion probability vector, including fine-grained review analysis, predicting a response to marketing messages, offence detection, etc. It is possible that the emotion probability vector might be a step on the road to synthetic consciousness and that it might provide a means of making robots more empathetic through allowing them to make a prediction as to how something they might say will make the recipient feel. If reasonable responses are desired from an LLM it might be a good policy not to train the LLM on the mad shouting that pervades anti-social media, and analogously it might be a good idea not to train young minds similarly. ## 6 Acknowledgements The authors acknowledge the extraordinary generosity of Meta in releasing model weights in a reasonable way for their LlaMa2 series of pre-trained Large Language Models.
2303.08104
Tau Polarization and Correlated Decays in Neutrino Experiments
We present the first fully differential predictions for tau neutrino scattering in the energy region relevant to the DUNE experiment, including all spin correlations and all tau lepton decay channels. The calculation is performed using a generic interface between the neutrino event generator Achilles and the publicly available, general-purpose collider event simulation framework Sherpa.
Joshua Isaacson, Stefan Höche, Frank Siegert, Sherry Wang
2023-03-14T17:42:06Z
http://arxiv.org/abs/2303.08104v1
# Tau Polarization and Correlated Decays in Neutrino Experiments ###### Abstract We present the first fully differential predictions for tau neutrino scattering in the energy region relevant to the DUNE experiment, including all spin correlations and all tau lepton decay channels. The calculation is performed using a generic interface between the neutrino event generator Achilles and the publicly available, general-purpose collider event simulation framework Sherpa. + Footnote †: preprint: FERMILAB-PUB-23-106-T, MCNET-23-04 ## I Introduction The tau neutrino is commonly considered to be the least well known elementary particle. The first experimental direct evidence for tau neutrinos was provided about two decades ago by the DONuT experiment [1]. Major limitations on the dataset came from a small cross section, the large mass of the tau lepton, and the large irreducible backgrounds. As of today, there are still very few positively identified tau neutrino events from collider based sources, with 9 detected by DONuT [2], and 10 detected by OPERA [3]. The SuperK [4], and IceCube [5; 6] experiments have identified 291 and 1806 tau neutrino candidates from atmospheric and astrophysical sources. New experiments are expected to come online soon, among them DUNE [7; 8] and the IceCube upgrade [9], which will improve the precision on the \(\nu_{\mu}\to\nu_{\tau}\) appearance measurement. The forward physics facility [10] will use the large forward charm production rate at the LHC to perform precision studies with collider neutrinos. Ultra-high energy neutrino telescopes will set limits on \(\nu_{\tau}\) self-interactions (which are currently unconstrained [11]) and flavor ratios (which are an important observable to constrain new physics [12]). With all of these novel experiments, the tau neutrino dataset is expected to grow quickly in the coming years, creating new opportunities for measurements and searches for physics beyond the standard neutrino paradigm [13]. DUNE is especially important to the tau neutrino program, since it will be the only accelerator based experiment able to collect and accurately reconstruct a sample of oscillated \(\nu_{\tau}\) charged current (CC) events, with about 130 \(\nu_{\tau}\) CC events per year in CP-optimized neutrino mode, 30 \(\bar{\nu}_{\tau}\) events per year in CP-optimized antineutrino mode and about 800 \(\nu_{\tau}\) CC events per year in tau-optimized neutrino mode [13]. To make the most of these events, accurate theory predictions are required. One key observation to help separate the signal from the irreducible background is the fact that the tau is polarized, leading to correlations in the outgoing pions. However, the produced outgoing tau lepton is not fully polarized for DUNE energies [14; 15]. Additionally, the cross section is dominated by quasielastic and resonance scattering. Computational tools that model both the intricate aspects of nuclear physics involved in \(\nu\)-nucleus interactions and the effects of polarized scattering and decay are vital for experimental success [16]. However, the existing neutrino event generators GENIE [17], NuWro [18], NEUT [19], and GiBUU [20] generate \(\nu_{\tau}\) interactions in the same manner as \(\nu_{e}\) and \(\nu_{\mu}\) events. They then assume that the outgoing \(\tau\) is purely left-handed and simulate its decay with the help of TAUOLA [21]. 
We will address this shortcoming by constructing an event generator based on a state-of-the art nuclear physics model, in combination with a general-purpose tau decay simulation including spin correlations between the production and all subsequent decays. Various theoretical calculations have also addressed nuclear effects on the polarization of the tau in neutrino scattering. However, the previous works either do not include tau decays [14], or they only include the one-body decay of the tau (_i.e._\(\tau^{-}\to\nu_{\tau}\pi^{-}\)) [15]. They demonstrate the dependence of the nuclear effects on the polarization and the impact on observables, respectively. Here, we extend these studies to include all possible decay channels of the tau, while maintaining complete polarization information, and we provide a publicly available simulation package to generate fully differential final states. The calculation is performed using Achilles [22] to handle the nuclear physics effects and Sherpa [23; 24; 25] to perform the leptonic calculation and the decay of the tau. This interface extends the one developed in Ref. [26], which also allows to perform the calculation in nearly arbitrary new physics models by means of FeynRules [27; 28]. The outline of this paper is as follows. In Sec. II, we review analytic results on the production and decay of the tau, with a focus on the effects of nuclear physics and the high energy limit. The implementation of tau decays within the Sherpa framework and the interface between Achilles and Sherpa is described in Sec. III. Comparisons for purely left-handed and the correct polarization is shown for various monochromatic neutrino beam energies as well as for a realistic tau-optimized DUNE neutrino flux in Sec. IV. Polarization in Tau Lepton Production and Decay This section provides a brief overview of the main analytic results on the effect of polarization in \(\tau\) decays and production. The collinear limit, which provides both theoretical insight and a useful benchmark for the validation of Monte-Carlo simulations, is discussed in some detail. Furthermore, the dependency of the polarization of the \(\tau\) on the hadronic tensor is reviewed. ### Tau Decays in the Collinear Limit The dominant decay channels of the \(\tau\) are into a single pion, leptons, or into a vector meson resonance. In these channels, ignoring the decays of the vector mesons, the distribution of the final state momenta can be determined in the collinear limit (_i.e._\(p_{\tau}\to\infty\)). These results are useful for the validation of more detailed theoretical predictions. The rate of the \(\tau^{\mp}\to\pi^{\mp}\nu_{\tau}\) decay in the rest frame of the tau is given as [29] \[\frac{1}{\Gamma_{\tau}}\frac{\mathrm{d}\Gamma_{\pi}}{\mathrm{d}\cos\theta_{\pi }}=\frac{1}{2}B_{\pi}\left(1\pm P_{\tau}\cos\theta_{\pi}\right)\,, \tag{1}\] where \(B_{\pi}\) is the branching fraction of \(\tau\to\pi\nu_{\tau}\), \(P_{\tau}\) is the polarization of the \(\tau\), and \(\theta_{\pi}\) is the angle between the pion momentum and the tau spin axis, which coincides with the \(\tau\) momentum in the lab frame. For a purely right-(left)-handed \(\tau^{-}\), the polarization is \(P_{\tau}=+1(-1)\). In terms of the momentum fraction, \(x_{\pi}=E_{\pi}/E_{\tau}\), the polar angle is given as \[\cos\theta_{\pi}=\frac{2x_{\pi}-1-a^{2}}{\beta(1-a^{2})}\,, \tag{2}\] where \(a=m_{\pi}/m_{\tau}\) and \(\beta\) is the velocity of the \(\tau\). 
In the collinear limit, \(\beta\to 1\), and making the approximation \(a=0\), one obtains \[\frac{1}{\Gamma_{\tau}}\frac{\mathrm{d}\Gamma_{\pi}}{\mathrm{d}x_{\pi}}=B_{ \pi}\left(1\pm P_{\tau}\left(2x_{\pi}-1\right)\right)\,. \tag{3}\] In this limit, we obtain the prediction for the differential decay rate shown in Fig. 1. Additionally, for the case of leptonic decays in the collinear and massless limit (\(m_{e}=m_{\mu}=0\)) the tau decay to leptons is the same for electrons and muons. The differential decay rate is given by [29] \[\frac{1}{\Gamma_{\tau}}\frac{\mathrm{d}\Gamma_{\ell}}{\mathrm{d}x_{\ell}}= \frac{1}{3}B_{\ell}(1-x_{\ell})\left(\left(5+5x_{\ell}-4x_{\ell}^{2}\right)\mp \left(1+x_{\ell}-8x_{\ell}^{2}\right)\right)\,, \tag{4}\] where \(x_{\ell}=p_{\ell}/p_{\tau}\), and \(B_{\ell}\) is the branching ratio into a given lepton. The rate for leptons is shown in Fig. 1. Similarly, the decays for the vector meson decay modes \(\tau\to v\nu_{\tau}\), with \(v=\rho\) or \(a_{1}\) are calculated in Ref. [29] and the results are reproduced here for convenience. The mesons are separated into the transverse and longitudinal components in the calculation, since the decays \(\rho\to 2\pi\) and \(a_{1}\to 3\pi\) depend on the polarization of the vector mesons. The angular distribution in the rest frame of the tau is given as: \[\frac{1}{\Gamma_{\tau}}\frac{\mathrm{d}\Gamma_{v}^{T}}{\mathrm{d }\cos\theta_{v}} =B_{v}\frac{m_{v}^{2}}{m_{\tau}^{2}+2m_{v}^{2}}\left(1\mp P_{\tau} \cos\theta_{v}\right)\,, \tag{5}\] \[\frac{1}{\Gamma_{\tau}}\frac{\mathrm{d}\Gamma_{v}^{L}}{\mathrm{d }\cos\theta_{v}} =B_{v}\frac{\frac{1}{2}m_{v}^{2}}{m_{\tau}^{2}+2m_{v}^{2}}\left(1 \pm P_{\tau}\cos\theta_{v}\right)\,, \tag{6}\] where again \(v=\rho\) or \(a_{1}\), \(B_{v}\) is the branching ratio for \(\tau^{\mp}\to v^{\mp}\nu_{\tau}\), and \(\theta_{v}\) is the same angle defined in the pion case. It is important to note that for the case of the longitudinal state the polarization dependence is the same as Eq. (1), while for the transverse state the polarization enters with the opposite sign. Therefore, if the polarization of the vector meson is not measured, then Eqs. (5) and (6) need to be averaged. This suppresses the sensitivity to the polarization of the tau by a factor of \((m_{\tau}^{2}-2m_{v}^{2})/(m_{\tau}^{2}+2m_{v}^{2})\), which is about 0.46 for the case of the \(\rho\) and approximately 0.02 for the case of the \(a_{1}\) meson. In the case of the vector mesons, care has to be taken when boosting to the lab frame since the polarizations are not summed over. First, a Wigner rotation [30] is used to align the spin axis. The angle of rotation is given in the collinear limit by [29] \[\cos\omega=\frac{1-a^{2}+(1+a^{2})\cos\theta}{1+a^{2}+(1-a^{2})\cos\theta}\,, \tag{7}\] where \(a=m_{v}/m_{\tau}\). Rewriting in terms of the momentum fraction (\(x_{v}=E_{v}/E_{\tau}\)), the decay distributions can be expressed as \[\frac{1}{\Gamma_{\tau}}\frac{\mathrm{d}\Gamma_{v}}{\mathrm{d}x_{v}}=B_{v}H_{v}^ {\alpha}(x_{v},m_{v}^{2})\,, \tag{8}\] where \(\alpha=T\) or \(L\) and the expressions for \(H_{v}^{T,L}\) are given in Eqs. (2.16) and (2.17) of Ref. [29] respectively. The results for the decay distribution including the width for a left-handed \(\tau^{-}\) decay are shown in Fig. 1. These distributions provide the main analytic benchmark points for tests of our Monte-Carlo implementation. 
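As an independent cross-check of these benchmark curves, the short script below evaluates Eqs. (3) and (4) for a left-handed \(\tau^{-}\) and verifies that each collinear-limit spectrum integrates to its branching fraction; it is an illustrative sketch, not part of the Achilles+Sherpa implementation, and the branching ratios are taken from Tab. 1.

```python
# Numerical check of the collinear-limit benchmark spectra, Eqs. (3)-(4),
# for a fully left-handed tau- (P_tau = -1), upper signs as in the text.
import numpy as np

B_PI, B_E = 0.109, 0.1785   # branching ratios for pi- nu and e- nu nu, cf. Tab. 1

def dgamma_pi(x, p_tau=-1.0):
    """Eq. (3): (1/Gamma) dGamma/dx_pi for tau- -> pi- nu in the collinear limit."""
    return B_PI * (1.0 + p_tau * (2.0 * x - 1.0))

def dgamma_e(x):
    """Eq. (4), upper sign (tau-), massless final-state leptons."""
    return B_E / 3.0 * (1.0 - x) * ((5.0 + 5.0 * x - 4.0 * x ** 2)
                                    - (1.0 + x - 8.0 * x ** 2))

x = np.linspace(0.0, 1.0, 2001)
for name, f, b in [("pi", dgamma_pi, B_PI), ("e", dgamma_e, B_E)]:
    print(f"{name}: integral = {np.trapz(f(x), x):.4f} (expected {b})")
```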
### Production of the tau lepton The unpolarized differential cross-section for CC interaction \(\nu_{\tau}A\to\tau^{-}X\) can be expressed as the product of a leptonic and hadronic tensor as shown in Ref. [26]. In the case of a massive lepton, there are six nuclear structure functions that appear in the hadronic tensor with an associated Lorentz structure [15] \[\frac{W^{\mu\nu}}{2M_{A}}=-g^{\mu\nu}W_{1}+\frac{P^{\mu}P^{\nu}}{M_{A}^{2}}W_ {2}+i\frac{\epsilon^{\mu\nu\gamma\delta}P_{\gamma}q_{\delta}}{2M_{A}^{2}}W_{3 }+\frac{q^{\mu}q^{\nu}}{M_{A}^{2}}W_{4}+\frac{P^{\mu}q^{\nu}+P^{\nu}q^{\mu}}{2 M_{A}^{2}}W_{5}+i\frac{P^{\mu}q^{\nu}-P^{\nu}q^{\mu}}{2M_{A}^{2}}W_{6}\,, \tag{9}\] where \(M_{A}\) is the mass of the nucleus, \(P^{\mu}\) is the initial momentum of the nucleus, \(q^{\mu}\) is the momentum transfer, and \(\epsilon^{\mu\nu\gamma\delta}\) is the fully anti-symmetric tensor with \(\epsilon^{0123}=+1\). The unpolarized, longitudinal, and transverse components for the production of the \(\tau\) can be expressed as different linear combinations of the hadronic structure functions. These are given in Eqs. (2), (5), and (6) of Ref. [31] and are reproduced here for completeness. \[F =\left(2W_{1}+\frac{m_{l}^{2}}{M_{A}^{2}}W_{4}\right)\left(E_{l}- \left|\vec{p}_{l}\right|\cos\theta\right)+W_{2}\left(E_{l}+\left|\vec{p}_{l} \right|\cos\theta\right)-W_{5}\frac{m_{l}^{2}}{M_{A}} \tag{10}\] \[\mp\frac{W_{3}}{M_{A}}\left(E_{\nu}E_{l}+\left|\vec{p}_{l}\right| ^{2}-\left(E_{\nu}+E_{l}\right)\left|\vec{p}_{l}\right|\cos\theta\right)\,,\] \[P_{L} =\mp\left(\left(2W_{1}-\frac{m_{l}^{2}}{M_{A}^{2}}W_{4}\right) \left(\left|\vec{p}_{l}\right|-E_{l}\cos\theta\right)+W_{2}\left(\left|\vec{p} _{l}\right|+E_{l}\cos\theta\right)-W_{5}\frac{m_{l}^{2}}{M_{A}}\cos\theta\right.\] (11) \[\mp\frac{W_{3}}{M_{A}}\left(\left(E_{\nu}+E_{l}\right)\left|\vec {p}_{l}\right|-\left(E_{\nu}E_{l}+\left|\vec{p}_{l}\right|^{2}\right)\cos \theta\right)\right)/F\,,\] Figure 1: The decay distributions vs the fractional momentum of a given particle to the \(\tau\) momentum for a left-handed \(\tau^{-}\) in the collinear limit going to single pions (blue), \(\rho\) mesons (red), \(a_{1}\) mesons (purple), or leptons (green). The vector mesons (\(\rho\) and \(a_{1}\)) can be either transversely polarized (solid lines) or longitudinally polarized (dashed lines). Additionally, the vector mesons are not stable and the effect of their widths are included, which is set to 0.1474 GeV and 0.420 GeV for the \(\rho\) and \(a_{1}\) respectively. \[P_{T}=\mp m_{l}\sin\theta\left(2W_{1}-W_{2}-\frac{m_{l}^{2}}{M_{A}^{2}}W_{4}+W_{5} \frac{E_{l}}{M_{A}}\mp W_{3}\frac{E_{\nu}}{M_{A}}\right)/F\,, \tag{12}\] where \(E_{l},m_{l},\vec{p}_{l}\) is the outgoing lepton energy, mass, and three momentum, respectively. Additionally, \(\cos\theta\) is the outgoing lepton angle with respect to the neutrino direction and \(E_{\nu}\) is the energy of the incoming neutrino. It is important to note that the above equations are insensitive to the \(W_{6}\) structure function. Furthermore, the structure functions \(W_{4}\) and \(W_{5}\) are proportional to the mass of the lepton and are only weakly constrained due to the limited statistics on tau-neutrino-nucleus scattering. The limits DUNE can set on the structure functions, from using the combination of both inclusive and differential rates, would provide valuable constraints on nuclear models used to describe neutrino-nucleus interactions [15]. 
Additionally, DUNE will be the first experiment to provide measurements of the \(W_{4}\) and \(W_{5}\) structure functions in the quasielastic region, directly testing the partially conserved axial current and the pion-pole dominance ansatz [13]. ## III Monte-Carlo simulation In this section we will review our approach to the simulation of the scattering and decay processes. We make use of the fact that the reaction factorizes into a leptonic and a hadronic component. We employ the neutrino event generator Achilles [22] to handle the nuclear physics effects and the general-purpose event generation framework Sherpa [23; 24; 25] to perform the leptonic calculation and the decay of the tau. The Sherpa framework includes two modules to simulate decays of unstable particles: one for prompt decays of particles produced in the hard scattering process perturbatively, and one for the decay of hadrons produced during the hadronization stage of event generation. The tau lepton plays a special role, as it can be produced in the hard scattering process, but is the only lepton that can decay into hadrons. For a good modeling of tau decays and also for the hadronic decay modes we thus employ the hadron decay module [32; 33]. It enables us to use elaborate form factor models, accurate branching fractions for individual hadronic final states, and spin correlation effects for the decaying tau lepton. We briefly describe these features in the following. ### The decay cascade With the observed tau decay channels in the PDG [34] accounting for roughly 100% of the tau width, we use these values directly for the simulation by choosing a decay channel according to the measured branching fractions. This can include fully leptonic decay channels as well as decays into up to 6 hadrons. Matrix elements are used to simulate the kinematical distribution of the decay in phase space. In the case of weak tau decays, these matrix elements will always contain a leptonic current \(L_{\mu}^{(\tau\to\nu_{\tau})}\) involving the \(\tau\) and \(\nu_{\tau}\) leptons, and a second current involving either another lepton pair or hadronic decay products. Due to the low tau mass and the low related momentum transfer \(Q^{2}\ll m_{W}^{2}\), the \(W\) propagator between these currents can be integrated out into the Fermi constant \[\mathcal{M}=\frac{G_{F}}{\sqrt{2}}L_{\mu}^{(\tau\to\nu_{\tau})}\;J^{\mu}. \tag{13}\] For currents \(J^{\mu}\) involving hadronic final states, these matrix elements can not be derived from first principles, but are instead based on the spin of the involved particles and include form factors to account for bound-state effects and hadronic resonances within the hadronic current in particular. ### Form factor models in hadronic currents While the current for the production of a single meson is trivial and determined fully by the meson's decay constant, the currents in multiple-meson production can contain resonance structures. For example, in the production of pions and kaons the main effects stem from intermediate vector mesons with a short life time, like \(\rho\) or \(K^{*}\). In the Sherpa simulation, the currents are thus supplemented with form factors that parametrize these effects using one of two approaches [32]. The Kuhn-Santamaria (KS) model [35] is a relatively simple approach modeling resonances based on their Breit-Wigner distribution. Multiple resonances can contribute to the same current and are weighted with parameters that are fit to experimental data. 
The width in the Breit-Wigner distribution is calculated as a function of the momentum transfer. Another approach for the form factor is based on Resonance Chiral Theory (R\(\chi\)T) [36], an extension of chiral perturbation theory to higher energies where resonances become relevant. Also here an energy-dependent width is used for the implementation of the resonances. This form factor model is superior for final states dominated by one resonance but cannot model multiple resonances. It will thus yield significant differences with respect to the KS model for any channel where the lower-lying resonances are kinematically suppressed, e.g. two-kaon production. ### Spin correlations The implementation of spin correlations in the Monte-Carlo simulation of particle decays is described in detail in Ref. [37]. This algorithm uses spin-density matrices to properly track polarization information through the decays. Here we summarize only its main features. Firstly, the matrix element is evaluated for all possible spin states for the initial and final state (\(\mathcal{M}_{\kappa_{1}\kappa_{2};\lambda_{1}\ldots\lambda_{n}}\)), where \(\kappa_{i}\) is the spin of the spin of the \(i\)th incoming particle and \(\lambda_{j}\) is the spin of the \(j\)th outgoing particle in a \(2\to n\) scattering process. The matrix element squared involved in the calculation of the differential cross-section can be obtained as \[\rho^{1}_{\kappa_{1}\kappa_{1}^{\prime}}\rho^{2}_{\kappa_{2}\kappa_{2}^{ \prime}}\mathcal{M}_{\kappa_{1}\kappa_{2};\lambda_{1}\ldots\lambda_{n}} \mathcal{M}^{*}_{\kappa_{1}^{\prime}\kappa_{2}^{\prime};\lambda_{1}^{\prime} \ldots\lambda_{n}^{\prime}}\prod_{i=1,n}D^{i}_{\lambda_{i}\lambda_{i}^{\prime }}\,, \tag{14}\] where \(\rho^{i}_{\kappa_{i}\kappa_{i}^{\prime}}\) is the spin density matrix for the incoming particles and \(D^{i}_{\lambda_{i}\lambda_{i}^{\prime}}\) is the spin-dependent decay matrix for the outgoing particles. Before any decays occur, the decay matrix is given as \(D^{i}_{\lambda_{i}\lambda_{i}^{\prime}}=\delta_{\lambda_{i}\lambda_{i}^{\prime }}\) and the spin density matrix is given as \(\rho^{i}_{\kappa_{i}\kappa_{i}^{\prime}}=\frac{1}{2}\delta_{\kappa_{i}\kappa_{ i}^{\prime}}\) for unpolarized incoming particles. Secondly, one of the unstable final state particles is selected at random to decay and the spin density matrix is calculated as \[\rho_{\lambda_{j}\lambda_{j}^{\prime}}=\frac{1}{N_{p}}\rho^{1}_{ \kappa_{1}\kappa_{1}^{\prime}}\rho^{2}_{\kappa_{2}\kappa_{2}^{\prime}} \mathcal{M}_{\kappa_{1}\kappa_{2};\lambda_{1}\ldots\lambda_{n}}\mathcal{M}^{* }_{\kappa_{1}^{\prime}\kappa_{2}^{\prime};\lambda_{1}^{\prime}\ldots\lambda_{n }^{\prime}}\prod_{i\neq j}D^{i}_{\lambda_{i}\lambda_{i}^{\prime}}\,, \tag{15}\] where \(N_{p}\) is a normalization factor to ensure that the trace of the spin density matrix is one. The decay channel is then selected according to the branching ratios and the new particle momenta are generated according to \[\rho_{\lambda_{0}\lambda_{0}^{\prime}}\mathcal{M}_{\lambda_{0}; \lambda_{1}\ldots\lambda_{k}}\mathcal{M}^{*}_{\lambda_{0}^{\prime};\lambda_{1} ^{\prime}\ldots\lambda_{k}^{\prime}}\prod_{i=1,k}D^{i}_{\lambda_{i}\lambda_{i}^ {\prime}}\,, \tag{16}\] where \(\lambda_{0}\) is the helicity of the decaying particle and \(\lambda_{i}\) is the helicity of the decay products. 
If there are any unstable particles in the above decay, they are selected as before and a spin density matrix is calculated and the process is repeated until only stable particles remain in the given chain. At this point, the decay matrix is calculated as \[D_{\lambda_{0}\lambda_{0}^{\prime}}=\frac{1}{N_{D}}\mathcal{M}_{ \lambda_{0};\lambda_{1}\ldots\lambda_{k}}\mathcal{M}^{*}_{\lambda_{0}^{\prime };\lambda_{1}^{\prime}\ldots\lambda_{k}^{\prime}}\prod_{i=1,n}D^{i}_{\lambda_{ i}\lambda_{i}^{\prime}}\,, \tag{17}\] where \(N_{D}\) is chosen such that the trace of the decay matrix is one. Then another unstable particle is selected from the original decay and the process is repeated until the first decay chain ends in only stable particles. At this point, the next unstable particle is selected in the hard process and the above procedure repeats. Once there are only stable particles left, the procedure terminates. ### Achilles-Sherpa Interface Employing a dedicated version of the general-purpose event generator Sherpa [23; 24; 25], we construct an interface to the Comix matrix element generator [38] to extract the leptonic current. This interface has been described in detail in Ref. [26]. In order to provide the hard scattering amplitudes, \(\mathcal{M}_{\kappa_{1}\kappa_{2};\lambda_{1}\ldots\lambda_{n}}\), needed for the spin correlation algorithm in Sec. III.3, we make use of the methods developed in Ref. [39]. This allows us to extract a spin-dependent leptonic current from Comix, which can be contracted with the hadronic current obtained from Achilles. Schematically this can be written as \[\mathcal{M}_{\kappa_{k}\kappa_{\nu};\lambda_{h}\lambda_{l}\ldots \lambda_{n}}=g_{\mu\nu}\sum_{i}L^{(i)\,\mu}_{\kappa_{\nu};\lambda_{l}\ldots \lambda_{n}}W^{(i)\,\nu}_{\kappa_{h};\lambda_{h}}\,, \tag{18}\] where we have extended the notation of Ref. [26] to include spin labels. As the spin states of the initial- and final-state hadrons are not observed experimentally, they can be averaged and summed over, leading to the final expression \[\mathcal{M}_{\kappa_{\nu};\lambda_{l}\ldots\lambda_{n}}\mathcal{M}^{*}_{\kappa^{ \prime}_{\nu};\lambda_{l}\ldots\lambda^{\prime}_{n}}=\frac{1}{2}\,g_{\mu\nu}g _{\mu^{\prime}\nu^{\prime}}\sum_{i,i^{\prime}}L^{(i)\,\mu}_{\kappa_{\nu}; \lambda_{l}\ldots\lambda_{n}}L^{(i^{\prime})\,\mu^{\prime}}_{\kappa^{\prime}_ {\nu};\lambda^{\prime}_{l}\ldots\lambda^{\prime}_{n}}W^{(i)\,\nu}_{\kappa_{h}; \lambda_{h}}W^{(i^{\prime})\,\nu^{\prime}}_{\kappa_{h}\kappa^{\prime}_{h}} \delta_{\kappa_{h}\lambda^{\prime}_{h}}\;. \tag{19}\] The resulting tensor is inserted into the event record of Sherpa and used to seed the event generation algorithms described in Ref. [39; 32], which accounts for all spin correlations along all decay chains. We note that this procedure is independent of the physics model for the short-distance interactions, and that arbitrary beyond Standard Model scenarios can easily be implemented by providing the corresponding UFO output [40] of FeynRules [27; 28]. ## IV Results We consider the scattering of a tau neutrino off an argon nucleus through the use of a rescaled carbon spectral function for both a monochromatic beam (for validation) and for a realistic flux at DUNE. For this study, we focus only on the quasielastic region for the nuclear interaction, as implemented in Ref. [22], and we neglect final state interactions. Final state interactions will modify the 2 and 3 pion distributions and investigating the size of the changes is left to a future work. 
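Before turning to the numerical results, the toy sketch below illustrates the spin-density-matrix bookkeeping of Sec. III.3, i.e. Eqs. (15) and (16), for a single unstable particle with stable daughters; the random helicity amplitudes are placeholders, and the code is not taken from Achilles or Sherpa.

```python
# Schematic spin-correlation bookkeeping: build the normalized tau spin
# density matrix from toy hard-process helicity amplitudes (Eq. 15) and
# use it to weight a candidate decay configuration (Eq. 16, D_i = 1).
import numpy as np

rng = np.random.default_rng(1)
M_hard = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # [incoming hel, tau hel]

rho_tau = np.einsum("ka,kb->ab", M_hard, M_hard.conj()) / 2.0    # average incoming helicities
rho_tau /= np.trace(rho_tau).real                                # normalization of Eq. (15)

M_dec = rng.normal(size=2) + 1j * rng.normal(size=2)             # toy decay amplitudes [tau hel]
weight = np.einsum("ab,a,b->", rho_tau, M_dec, M_dec.conj()).real  # Eq. (16) weight
print(rho_tau, weight)
```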
For reference, all tau lepton decay channels with a branching ratio above 0.5% are given in Tab. 1. However, all possible decays are actually included in our simulation. The spectral function used in this calculation was obtained within the correlated basis function theory of Ref. [41]. Electron scattering data is used to constrain the low momentum and energy contributions in the mean-field calculations. The correlated component is obtained within the Local Density Approximation. The normalization of the spectral function is taken as \[\int\frac{\mathrm{d}k_{h}}{(2\pi)^{3}}\mathrm{d}E\,S_{h}(\vec{k}_{h},E)=\begin{cases}Z,&h=p\,,\\ A-Z,&h=n\,,\end{cases} \tag{20}\] where \(k_{h}\) is the momentum of the initial nucleon, \(E\) is the removal energy, \(S_{h}\) is the spectral function, and \(Z(A)\) denotes the number of protons (nucleons) in the nucleus. In this work, we consider the Kelly parametrization for the electric and magnetic form factors [42], and use a dipole axial form factor with \(g_{A}=1.2694\) and \(M_{A}=1.0\) GeV. Additionally, the pseudoscalar form factor is obtained through the use of the partially conserved axial current ansatz and assumptions about the pion-pole dominance, _i.e._ \[F^{A}_{P}(Q^{2})=\frac{2m_{N}^{2}}{Q^{2}+m_{\pi}^{2}}F^{A}(Q^{2})\,, \tag{21}\] where \(F^{A}_{P}\) is the pseudoscalar axial form factor, \(m_{N},m_{\pi}\) are the masses of the nucleon and pion, respectively, \(Q^{2}=-q^{2}\) is the momentum transfer, and \(F^{A}\) is the axial form factor. \begin{table} \begin{tabular}{|c|c|} \hline Decay mode & Branching ratio (\%) \\ \hline \hline Leptonic decays & 35.21 \\ \hline \(e^{-}\nu_{\tau}\bar{\nu}_{e}\) & 17.85 \\ \(\mu^{-}\nu_{\tau}\bar{\nu}_{\mu}\) & 17.36 \\ \hline \hline Hadronic decays & 64.79 \\ \hline \(\pi^{-}\pi^{0}\nu_{\tau}\) & 25.50 \\ \(\pi^{-}\nu_{\tau}\) & 10.90 \\ \(\pi^{+}\pi^{-}\pi^{-}\nu_{\tau}\) & 9.32 \\ \(\pi^{-}\pi^{0}\pi^{0}\nu_{\tau}\) & 9.17 \\ \(\pi^{+}\pi^{-}\pi^{-}\pi^{0}\nu_{\tau}\) & 4.50 \\ \(\pi^{-}\pi^{0}\pi^{0}\pi^{0}\nu_{\tau}\) & 1.04 \\ \(K^{-}\nu_{\tau}\) & 0.70 \\ \(\pi^{+}\pi^{-}\pi^{-}\pi^{0}\pi^{0}\nu_{\tau}\) & 0.55 \\ other & 3.11 \\ \hline \end{tabular} \end{table} Table 1: Decay channels of the tau lepton with branching fractions greater than 0.5%. All other channels are grouped into the “other” category. ### Monochromatic beam In order to validate our results, we first consider monochromatic beams. We compare our calculations to the results from Ref. [15] for the single pion production channel. However, instead of the momentum of the outgoing pion, we analyze the momentum fraction of the outgoing pion (\(x_{\pi}=p_{\pi}/p_{\tau}\)). This allows us to include multiple neutrino energies in the same plot. The results from Achilles+Sherpa are shown in Fig. 2, with the appropriate handling of the tau polarization on the left and assuming the tau to be purely left-handed on the right. From this, we see that our results are consistent with those from Ref. [15]. Additionally, we see that as the neutrino energy increases the results approach those found in Fig. 1 for the collinear limit, as expected. We next consider the decays of the tau into the two pion and three pion states, which are dominated by the decay chains \(\tau^{-}\to\nu_{\tau}\rho^{-}(\rho^{-}\to\pi^{-}\pi^{0})\) and \(\tau^{-}\to\nu_{\tau}a_{1}^{-}(a_{1}^{-}\to\pi^{-}\pi^{-}\pi^{+}\) or \(a_{1}^{-}\to\pi^{-}\pi^{0}\pi^{0})\), respectively.
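Before turning to these channels, note that the channel-selection step of the spin correlation algorithm ("the decay channel is then selected according to the branching ratios") can be mimicked directly from Tab. 1. The snippet below is only a toy sketch: the dictionary simply transcribes the table and is not the decay-table format used internally by the generator.

```python
import numpy as np

# Branching fractions (in %) transcribed from Tab. 1; "other" collects all
# remaining channels.
channels = {
    "e- nu_tau nubar_e":          17.85,
    "mu- nu_tau nubar_mu":        17.36,
    "pi- pi0 nu_tau":             25.50,
    "pi- nu_tau":                 10.90,
    "pi+ pi- pi- nu_tau":          9.32,
    "pi- pi0 pi0 nu_tau":          9.17,
    "pi+ pi- pi- pi0 nu_tau":      4.50,
    "pi- pi0 pi0 pi0 nu_tau":      1.04,
    "K- nu_tau":                   0.70,
    "pi+ pi- pi- pi0 pi0 nu_tau":  0.55,
    "other":                       3.11,
}

names = list(channels)
probs = np.array([channels[n] for n in names]) / 100.0  # percent -> probability

rng = np.random.default_rng(1)
draws = rng.choice(names, size=200_000, p=probs)

# Sampled frequencies reproduce the input branching fractions within statistics.
for name in names:
    print(f"{name:30s} {100.0 * np.mean(draws == name):6.2f} %")
```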
For the case of the \(\rho\) channel, we analyze the momentum fraction of the hadronic system (\(x_{\rho}=p_{\rho}/p_{\tau}\)) as well as the momentum fraction of the \(\pi^{-}\) with respect to the \(\rho\) (\(z_{\pi}=p_{\pi^{-}}/p_{\rho}\)). The results are shown in Fig. 3 and Fig. 4, respectively. Again, the full calculation is on the left of each plot and the assumption of a purely left-handed tau is on the right. We can see that there is a significant impact from including the correct polarization in the calculation. In the case of the \(\rho\) momentum fraction, we see that our results approach the transverse curve for the \(\rho\) from Fig. 1 as \(E_{\nu}\) increases. This is expected since we are summing over the polarizations of the \(\rho\), which are dominated by the transverse polarization. As mentioned in Sec. II.1, summing over the polarizations of the \(a_{1}\) removes any sensitivity to the polarization of the \(\tau\). Therefore, the \(a_{1}\) momentum as a fraction of the \(\tau\) momentum (\(x_{a_{1}}=p_{a_{1}}/p_{\tau}\)) should not show any difference between the full calculation and the left-handed only calculation. This is supported by Figs. 5 and 6, with the left and right panels being statistically consistent with each other. Figure 5 shows the decay to the \(\pi^{+}\pi^{-}\pi^{-}\) final state and Fig. 6 shows the decay to the \(\pi^{-}\pi^{0}\pi^{0}\) final state. Furthermore, the curves approach the result of the collinear limit as \(E_{\nu}\) increases, as seen by comparing to the transverse \(a_{1}\) curve of Fig. 1. Finally, we consider the leptonic decay channel. Here we will focus on the decays to electrons due to the possible experimental relevance at DUNE for \(\nu_{\tau}\) detection, but note that up to corrections from the muon mass and the difference in the branching ratios the predictions would be identical. The comparison for various neutrino energies is given in Fig. 7. Again, we can see a difference between the full calculation in the left panel and the purely left-handed calculation in the right panel. The latter result approaches the expected prediction for large \(E_{\nu}\) as shown in Fig. 1. Figure 2: Momentum fraction of the outgoing pion for \(\tau^{-}\to\pi^{-}\nu_{\tau}\) decays of various incoming neutrino energies. Results are shown for the full polarization calculation on the left and the left-handed polarization approximation (\(P_{L}^{T}=1,P_{T}^{T}=0\)) on the right. Figure 3: Momentum fraction of the \(\pi^{-}\pi^{0}\) system for \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) decays of various incoming neutrino energies. Results are shown for the full polarization calculation on the left and the left-handed polarization approximation (\(P_{L}^{T}=1,P_{T}^{T}=0\)) on the right. Figure 4: Ratio of the \(\pi^{-}\) momentum to the \(\rho^{-}\) momentum for \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) decays of various incoming neutrino energies, where \(z_{\pi}\) denotes this ratio. Results are shown for the full polarization calculation on the left and the left-handed polarization approximation (\(P_{L}^{T}=1,P_{T}^{T}=0\)) on the right. ### Realistic beams To investigate the impact of spin correlations in a more realistic setting, we consider the \(\tau\)-optimized flux mode for the DUNE experiment [43]. The oscillated far detector flux is shown in Fig. 8. The oscillation parameters are fixed to the values from the global fit [44]: \[\delta m^{2}_{21}=7.50\times 10^{-5}~{}\text{eV}^{2},\quad\delta m^{2}_{31}=2.55\times 10^{-3}~{}\text{eV}^{2},\] \[s^{2}_{12}=0.318,~{}s^{2}_{23}=0.574,~{}s^{2}_{13}=0.0220,~{}\delta_{CP}=1.08\pi\,.\] Figure 8: Neutrino flux in the far detector of DUNE. The flux is generated from running in \(\tau\)-optimized mode. The unoscillated fluxes are obtained from Ref. [43]. Figure 6: Momentum fraction of the \(\pi^{-}\pi^{0}\pi^{0}\) system for \(\tau^{-}\to\pi^{-}\pi^{0}\pi^{0}\nu_{\tau}\) decays of various incoming neutrino energies. Results are shown for the full polarization calculation on the left and the left-handed polarization approximation (\(P_{L}^{T}=1,P_{T}^{T}=0\)) on the right. The results are given using the flux-averaged cross section, defined as \[\langle\sigma\rangle=\frac{\int\mathrm{d}E_{\nu}\Phi(E_{\nu})\sigma(E_{\nu})}{\int\mathrm{d}E_{\nu}\Phi(E_{\nu})}\,, \tag{22}\] where \(\Phi(E_{\nu})\) is the neutrino flux and \(\sigma(E_{\nu})\) is the neutrino energy dependent cross section. While all possible decay channels are implemented, we consider here only those most affected by correctly handling polarization. Furthermore, only decay channels with sufficiently large branching ratios such that the differences are experimentally relevant are shown. We first consider the single pion decay channel, since it is a clean channel to reconstruct at DUNE. The results of the calculation are shown in the left panel of Fig. 9. Here we see that in the full calculation, the outgoing pion tends to be more energetic than in the fully left-handed case. The case of leptonic decays is shown in the right panel of Fig. 9, and is calculated in the massless limit for both the electron and the muon. In this case, the two decays are identical. The effect of including the full polarization information makes the outgoing lepton softer compared to the fully left-handed calculation. While detecting the muon channel would be extremely difficult, there is a chance to detect the electron channel due to the low \(\nu_{e}\) flux at the far detector, as seen in Fig. 8. Another interesting decay channel to consider is the two pion final state, which has the largest branching fraction of all decay channels. For this decay channel, we consider the momentum of the sum of the two pions as a fraction of the \(\tau\) momentum (\(x_{\pi\pi}\)) and the momentum of the negatively charged pion as a fraction of the momentum sum (\(z_{\pi}\)). Figure 10 shows the difference between the full calculation in red and the fully left-handed approximation in blue. In the case of the \(x_{\pi\pi}\) distribution, the total momentum is harder in the full calculation compared to the left-handed assumption. Additionally, there is a significant difference in \(z_{\pi}\) between the full calculation and the left-handed only calculation. The full calculation is relatively flat over the full range, while the left-handed only calculation is peaked around 0.6. This shift is significant, and will be important for any detailed study on using the two pion channel to detect tau neutrino events. The last decay channel considered in this work is the decay to three pions. In this case, the decay is dominated by the \(a_{1}\) meson as discussed in Sec. II.1, and since we are not separating out the \(a_{1}\) polarization, the results should not be sensitive to the polarization of the \(\tau\). This can be seen in Fig. 11, where the decay \(a_{1}\to\pi^{0}\pi^{0}\pi^{-}\) can be seen on the left and the decay \(a_{1}\to\pi^{+}\pi^{-}\pi^{-}\) can be seen on the right. The full calculation and the left-handed only calculation are statistically consistent with each other, as expected.
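All of the flux-folded predictions above reduce, at the level of each histogram bin, to the weighted average of Eq. (22). A minimal sketch of that average using a simple trapezoidal quadrature is shown below; the flux and cross-section arrays are arbitrary placeholders, not the actual DUNE \(\tau\)-optimized flux or the Achilles cross section.

```python
import numpy as np

def flux_averaged_xsec(e_nu, flux, sigma):
    """Flux-averaged cross section <sigma> of Eq. (22), on a common E_nu grid."""
    return np.trapz(flux * sigma, e_nu) / np.trapz(flux, e_nu)

# Placeholder inputs: a broad flux peaking at a few GeV and a cross section
# that turns on near the tau-production threshold (~3.5 GeV).  Both are
# arbitrary shapes in arbitrary units, used only to exercise the formula.
e_nu  = np.linspace(0.1, 30.0, 600)                      # GeV
flux  = e_nu**2 * np.exp(-e_nu / 3.0)
sigma = np.clip(e_nu - 3.5, 0.0, None) / (e_nu + 5.0)

print(flux_averaged_xsec(e_nu, flux, sigma))
```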
Finally, we perform the analysis proposed in Ref. [16]. The comparison between the full calculation and the left-handed polarization assumption is shown in Fig. 12 for the energy of the leading pion. There is a shift in the energy distribution of the pion when correctly handling the tau polarization, making the pion slightly harder. The study of the impact of this shift on the separation from the neutral current background is left to future work. Since the final state interactions are turned off in this analysis, the other distributions given in Ref. [16] would not be accurate. Therefore, they are not included here but will be included in a detailed study on separating the \(\tau\) decays from the background. Figure 9: Momentum fraction distribution for the decay of the \(\tau\) into a single pion is shown on the left and momentum fraction distribution for the decay into an electron is shown on the right. The full polarization handling is shown in red with the approximation that the \(\tau\) is purely left-handed in blue. The predictions are folded over the DUNE far-detector flux running in the \(\tau\)-optimized mode given in Fig. 8. Figure 11: The full calculation (red) and the purely left-handed calculation (blue) are given for the momentum fraction of the three pions as a fraction of the total \(\tau\) momentum for the decay of the \(a_{1}\), with the \(\pi^{0}\pi^{0}\pi^{-}\) channel on the left and the \(\pi^{+}\pi^{-}\pi^{-}\) channel on the right. The predictions are folded over the DUNE far-detector flux running in the \(\tau\)-optimized mode given in Fig. 8. Figure 12: Energy of the leading pion in \(\nu_{\tau}A\to\tau X\) events, in which all possible decays of the \(\tau\) are included. These results do not include the production of pions from the intranuclear cascade. Figure 10: Momentum fraction distribution for the decay of the \(\tau\) into a pair of pions is shown on the left. The momentum of the negatively charged pion as a fraction of the sum of the pion momenta is given on the right. The full polarization handling is shown in red with the approximation that the \(\tau\) is purely left-handed in blue. The predictions are folded over the DUNE far-detector flux running in the \(\tau\)-optimized mode given in Fig. 8. ## V Conclusions Due to the limited number of identifiable tau neutrino events, the tau neutrino is typically considered the least understood fundamental particle in the Standard Model. Current and next-generation experiments will collect a large number of tau neutrino events, opening the door to detailed study of this particle. One of the most important experiments for studying the tau neutrino will be the DUNE experiment. It will be the only experiment using accelerator neutrinos for measuring properties of the tau neutrino. At DUNE energies, the quasielastic scattering component is the dominant contribution. In this energy region, there is an irreducible background from neutral current resonance interactions. Therefore, it is vital to understand the most optimal way to separate the signal from the background. Traditionally, in neutrino event generators the outgoing \(\tau\) is assumed to be fully left-hand polarized. This assumption is poor for DUNE energies. In this work, we demonstrate the appropriate way of calculating the polarization of the tau and propagating this information through the full decay chain within an event generator framework.
The simulations were performed with a publicly available version of Achilles interfaced with Sherpa. For validation, we showed that the distributions for single pion production are consistent with Ref. [15] for monochromatic beams. We additionally showed strong shifts in the momentum distributions for the two pion decay channel and found insignificant shifts (as expected) in the three pion decay channels from the fully left-handed assumption. We also considered the decay in the leptonic channel, and found a slight shift when correctly handling the polarization. While the study with monochromatic beams allows for validation of the calculation, all current and future experiments have a broad spread in the neutrino energies. We therefore investigated the changes in the same distributions integrated over the \(\tau\)-optimized running mode for DUNE. Again we find significant changes from the traditional fully left-handed assumption in the lepton, single pion, and two pion channels. As expected, there were no significant modifications in the three pion channel. Finally, while the distributions shown here demonstrate the importance of properly handling the polarization of the tau, they are not necessarily the optimal variables for separating the tau from the neutral current background. The investigation of how to optimally separate the charged current tau neutrino interactions from the SM background is left to future work. ## VI Acknowledgments We thank Joanna Sobczyk and collaborators for insightful discussions. We thank Noemi Rocco, William Jay, and Andre de Gouvea for many useful discussions and for their comments on the manuscript. We thank Pedro Machado for helping with the realistic tau neutrino beams, for many fruitful discussions, and for his comments on the manuscript. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
2308.11727
Simplified partial wave expansion of the Lamb shift
A method for calculating the self energy part of the Lamb shift is revisited. When the electron propagator in an external field is represented as an expansion in partial waves, the original method converges relatively slowly, requiring the calculation of dozens of partial waves. Here we show an improved method in which accurate results can be obtained using a much smaller number of partial waves. The method is illustrated for the ground states of hydrogenlike and lithiumlike boron, and the possibility of high accuracy calculations on lower Z hydrogenic ions is discussed.
J. Sapirstein, K. T. Cheng
2023-08-22T18:36:29Z
http://arxiv.org/abs/2308.11727v1
# Simplified partial wave expansion of the Lamb shift ###### Abstract A method for calculating the self energy part of the Lamb shift is revisited. When the electron propagator in an external field is represented as an expansion in partial waves, the original method converges relatively slowly, requiring the calculation of dozens of partial waves. Here we show an improved method in which accurate results can be obtained using a much smaller number of partial waves. The method is illustrated for the ground states of hydrogenlike and lithiumlike boron, and the possibility of high accuracy calculations on lower \(Z\) hydrogenic ions is discussed. pacs: 31.30.jr, 12.39.Ba, 31.30.jd ## I Introduction The small \(2s-2p_{1/2}\) splitting in the spectrum of hydrogen measured by Lamb and Retherford [1] played a seminal role in the development of Quantum Electrodynamics (QED) [2]. The effect is generally referred to as the Lamb shift, and requires the evaluation of two types of radiative corrections, vacuum polarization (VP) and the electron self energy (SE). The first calculations exact to all orders in \(Z\alpha\), where \(Z\) is the nuclear charge and \(\alpha\) the fine structure constant, \[\alpha\;\equiv\;\frac{e^{2}}{4\pi\epsilon_{0}\hbar c}\;=\;1/137.035\,999\,084( 21), \tag{1}\] were carried out for VP by Wichmann and Kroll [3]. Meanwhile, evaluations of SE assumed \(Z\alpha\) was a small quantity and could not be applied to cases when \(Z\) was large. However, the possibility of extending the work of Wichmann and Kroll to SE was realized, and not long after their work the first all-order SE calculations were presented by Desiderio and Johnson [4] and by Mohr [5]. While both use partial wave expansions to represent the photon and electron propagators, they differ in an important way in how the sum over all partial waves is carried out. As will be seen in this paper, evaluation of the self-energy term always involves multiple integrals over position \(r\), a single integral over an energy \(\omega\), and an infinite sum over the partial wave \(\ell\). Mohr used a point-Coulomb potential, and the fact that the electron propagator in this case can be expressed in terms of Whittaker functions allowed him to sum the partial waves to convergence for any value of coordinate and energy, even though for some values of the integrand, large sums were required. This is the most accurate approach, and we will refer to it as the Mohr method in the following. The work of Desiderio and Johnson (DJ), based on the method of Brown _et al._[6], was able to represent the electron propagator in a general spherically symmetric potential, but because it used numerically generated Green functions it was limited by how high a partial wave the numerical method could handle. In this case the coordinate and energy integral was carried out for as many partial waves as this limitation made possible, after which an extrapolation to higher values was made for better convergence. The work to be described here is a modification of a method we developed in collaboration with Walter Johnson [7]. It is based on Ref. [4], using a modification suggested by Snyderman [8] and implemented by Blundell and Snyderman [9]. Because, as will be described below, an expansion of the propagator in terms of an external potential is used, we call it the potential-expansion method as suggested by Yerokhin, Pachucki and Shabaev [10]. 
In that paper, references to a number of different approaches can be found, but the potential-expansion method closest to ours is given in [11]. As mentioned above, potential-expansion methods differ from the Mohr method in that they carry out coordinate and energy integrations partial wave by partial wave, afterward carrying out the final partial wave summation. As in practice only a finite number of partial waves can be included, an extrapolation of the partial wave expansion to infinity must be made, and there is a significant numerical uncertainty associated with this procedure. In the following we will refer to our original potential-expansion calculation as the DJA method. The purpose of the present paper is to describe the DJB method, a modification that improves the behavior of the partial wave series. The most accurate calculations of the self energy have been obtained for point-Coulomb cases using the Mohr method [12]. We illustrate the strikingly high accuracy the method can attain with the self energy of the ground state of hydrogenlike boron which is given by the dimensionless function \(F_{v}(Z\alpha)\) as \[E_{SE}(v,Z)=\frac{\alpha}{\pi}\frac{(Z\alpha)^{4}}{n_{v}^{3}}\,F_{v}(Z\alpha) \,mc^{2}, \tag{2}\] with \[F_{1s}(5\alpha)=6.251\,627\,078\,(1). \tag{3}\] We will use this particular self energy in the following to illustrate details of the DJA and DJB methods, and refer to it as the test case. All self-energy results shown in this work will also be given in terms of the dimensionless function \(F_{v}(Z\alpha)\). In Ref. [10], it was pointed out that the potential-expansion method described in Ref. [11] could be improved by using generalizations of identities used in the Mohr method. These identities involve commuting (C) the potential (P) and the free electron propagator (P), and we will refer to them as CPP identities. They can be used to create approximations to terms of arbitrary order in the potential expansion. These terms can be numerically evaluated very precisely, but can also be expressed as partial wave expansions. In the latter form, it is shown in Ref. [10] that when combined with the partial waves computed in Ref. [11], a much more tractable expansion results. It is the purpose of this paper to describe how using a CPP identity allows us to create a similarly improved partial wave expansion. Even with these improvements, the potential-expansion method cannot reach the very high accuracy of Ref. [12], though in the treatment of the hydrogen isoelectronic sequence given in Ref. [13], we note that the self-energy results presented are quite precise. The plan of this paper is as follows. After the potential-expansion method is described in Section II, Section III describes the DJA method, and the behavior of the first 30 partial waves for the test case is shown. In Section IV, the DJB method is set up and the improvement of the partial wave expansion shown for the test case. In Section V, the DJB method is applied to the \(2s\) state of lithiumlike boron with a finite-nucleus model potential. Finally, a discussion of how higher accuracy might be reached is given in the Conclusion. ## II Formalism and Subtraction Schemes A central object in the self-energy calculation is the electron propagator, which satisfies the equation \[(z-H_{x})G(z,{\bf x},{\bf y})=\delta^{3}({\bf x}-{\bf y}) \tag{4}\] with \[H_{x}=-i\hbar c\,\mathbf{\alpha}\cdot\mathbf{\nabla}_{x}+mc^ {2}\beta+V(x). 
\tag{5}\] We adopt the convention for any three-vector that \(r\equiv|{\bf r}|\), so we are assuming our potential to be spherically symmetric. In that case one can work with angular momentum eigenstates characterized by the quantum numbers \(\kappa\) and \(\mu\). In the following, for simplicity we will assume the potential to be that of a point nucleus of charge \(Z\), \[V(r)=-\frac{Ze^{2}}{r}. \tag{6}\] Generalization to other potentials is straightforward. The Dirac equation \[H_{x}\psi_{v\kappa\mu}({\bf x})=E_{v}\psi_{v\kappa\mu}({\bf x}) \tag{7}\] has the solution \[\psi_{v\kappa\mu}({\bf x})=\left(\begin{array}{c}g_{v}(x)\chi_{\kappa\mu}(\hat{x})\\ if_{v}(x)\chi_{-\kappa\mu}(\hat{x})\end{array}\right), \tag{8}\] with energy \(E_{v}\equiv mc^{2}\epsilon_{v}\). A formally exact solution of Eq. (4) is obtained from a summation over all possible \(\kappa\) and \(\mu\) values, \[G(z,{\bf x},{\bf y}) = \sum_{\kappa\mu}\Big{[}\theta(x-y)W_{\kappa\mu}(z,{\bf x})U^{\dagger}_{\kappa\mu}(z,{\bf y}) \tag{9}\] \[+\ \theta(y-x)U_{\kappa\mu}(z,{\bf x})W^{\dagger}_{\kappa\mu}(z,{\bf y})\Big{]}.\] Here the spinors \(U_{\kappa\mu}(z,{\bf x})\) and \(W_{\kappa\mu}(z,{\bf x})\) are of the form of \(\psi_{z\kappa\mu}({\bf x})\) in Eq. (8), with the radial functions being solutions to the Dirac equation regular at the origin and infinity, respectively. Also, \(\theta(t)=0\) or \(1\) for \(t<\) or \(>0\) is the step function. The one-loop self energy of an electron state \(v\) before regularization and renormalization is \[E_{SE}^{(2)}=-4\pi i\alpha c^{2}\!\int\!\frac{d^{4}k}{(2\pi)^{4}}\iint d^{3}x\,d^{3}y\,\frac{e^{i{\bf k}\cdot({\bf x}-{\bf y})}}{k_{0}^{2}-{\bf k}^{2}+i\delta}\,\bar{\psi}_{v}({\bf x})\gamma_{\nu}G(E_{v}-ck_{0},{\bf x},{\bf y})\gamma_{0}\gamma^{\nu}\psi_{v}({\bf y}). \tag{10}\] The bound-electron propagator \(G\) can be expanded in powers of the external potential in terms of the free-electron propagator \(F(z,{\bf x},{\bf y})\), \[G(z,{\bf x},{\bf y}) = F(z,{\bf x},{\bf y}) \tag{11}\] \[+\int d{\bf r}_{1}F(z,{\bf x},{\bf r}_{1})V(r_{1})F(z,{\bf r}_{1},{\bf y})\] \[+\iint d{\bf r}_{1}d{\bf r}_{2}F(z,{\bf x},{\bf r}_{1})V(r_{1})F(z,{\bf r}_{1},{\bf r}_{2})V(r_{2})F(z,{\bf r}_{2},{\bf y})\] \[+\iiint d{\bf r}_{1}d{\bf r}_{2}d{\bf r}_{3}F(z,{\bf x},{\bf r}_{1})V(r_{1})F(z,{\bf r}_{1},{\bf r}_{2})V(r_{2})F(z,{\bf r}_{2},{\bf r}_{3})V(r_{3})F(z,{\bf r}_{3},{\bf y})\] \[+\ \ldots\]
All ultraviolet infinities are associated with the first two terms in the expansion, \(E_{0P}\), referred to as the zero-potential term, and \(E_{1P}\), referred to as the one-potential term. When these are separated from the complete sum, we define the result as the many-potential term, \[E_{MP}=\sum_{i=2}^{\infty}E_{iP}. \tag{14}\] In the DJA method, \(E_{MP}\) is evaluated in coordinate space, and \(E_{0P}\) and \(E_{1P}\) in momentum space. We now give a brief description of the calculation, with emphasis on \(E_{0P}\), modifications of which are used in the DJB method. ## III DJA method To evaluate the zero- and one-potential terms we first define the momentum space wave function, \[\psi_{v}({\bf p})\equiv\int\!d^{3}x\,e^{-i{\bf x}\cdot{\bf p}/\hbar}\,\psi_{v }(x). \tag{15}\] The Dirac equation in momentum space is \[\left(\not p-mc\right)\psi_{v}({\bf p})=-4\pi Z\alpha\!\int\!\!\frac{d{\bf p} _{1}}{(2\pi)^{3}}\frac{\gamma_{0}\,\psi_{v}({\bf p}_{1})}{|{\bf p}-{\bf p}_{1} |^{2}}, \tag{16}\] where we have introduced the 4-vector \(p=(mc\epsilon,{\bf p})\). For the Dirac equation \(\epsilon=\epsilon_{v}\), but when \(p\) is present in an electron propagator we leave \(\epsilon\) as a variable that can be differentiated for later use when we describe the DJB method. We regulate the ultraviolet infinities in \(E_{0P}\) and \(E_{1P}\) by changing \(d^{4}k\to d^{n}k\), with \(n=4-\delta\). In dimensional regularization we note that the self mass of a free electron at one-loop order is \[\delta m^{(2)}=m\,\frac{\alpha}{\pi}\left(\frac{3C}{\delta}+2\right), \tag{17}\] with \[C=(4\pi)^{\delta/2}\,\Gamma(1+\delta/2). \tag{18}\] Using the representation of the free electron propagator \[F(z,{\bf x},{\bf y})=\frac{1}{\hbar^{3}}\int\!\frac{d^{3}p}{(2\pi)^{3}}\,\frac {e^{i{\bf p}\cdot({\bf x}-{\bf y})/\hbar}}{z\gamma_{0}-c\,\mathbf{ \gamma}\!\cdot\!{\bf p}-mc^{2}}\,\gamma_{0}, \tag{19}\] the zero-potential term can be shown to be \[E_{0P}(\epsilon)=-\frac{4\pi i\alpha c}{\hbar^{3}}\int\!\frac{d{\bf p}}{(2\pi) ^{3}}\,\bar{\psi}_{v}({\bf p})X(p,\epsilon)\psi_{v}({\bf p}), \tag{20}\] with \[X(p,\epsilon)\equiv \int\!\frac{d^{n}k}{(2\pi)^{n}}\,\frac{1}{k^{2}}\] \[\times\gamma_{\nu}\frac{1}{(mc\epsilon-k_{0})\gamma_{0}-\mathbf{\gamma}\!\cdot\!({\bf p}-{\bf k})-mc}\,\gamma^{\nu}. \tag{21}\] Standard manipulations give \[X(p,\epsilon) = \frac{1}{c}\int\!\frac{d^{n}k}{(2\pi)^{n}}\,\frac{\gamma_{\nu}( \not{p}-\not{k}+mc)\gamma^{\nu}}{k^{2}[(p-k)^{2}-m^{2}c^{2}]} \tag{22}\] \[= \frac{1}{c}\int_{0}^{1}\!dx\!\int\!\frac{d^{n}k}{(2\pi)^{n}}\, \frac{(2-n)(\not{p}-\not{k})+n\cdot mc}{[(k-xp)^{2}+x(1-x)p^{2}-xm^{2}c^{2}]^{ 2}}\] \[= \frac{iC(mc)^{-\delta}}{8\pi^{2}c\delta}\!\int_{0}^{1}\!dx\,[ \not{p}(1-x)(2-n)+n\cdot mc]\Delta^{-\delta/2},\] with \[\Delta=x-x(1-x)\left[\epsilon^{2}-{\bf p}^{2}/(mc)^{2}\right]. \tag{23}\] Expanding in \(\delta\) and discarding terms of order \(\delta\) and higher leads to \[X(p,\epsilon) = \frac{i}{8\pi^{2}}\,m\left(\frac{3C}{\delta}+2\right)\,-\,\frac{i }{8\pi^{2}c}\,(\not{p}-mc)\left(\frac{C}{\delta}+1\right)\,+\,\frac{i}{8\pi^{ 2}c}\int_{0}^{1}\!dx\big{[}\not{p}(1-x)-2mc\,\big{]}\ln\!\frac{\Delta}{x^{2}}. 
\tag{24}\] The zero-potential term is then \[E_{0P}(\epsilon) = \frac{\alpha}{2\pi}\,mc^{2}\left(\frac{3C}{\delta}+2\right)\frac {1}{\hbar^{3}}\int\!\frac{d{\bf p}}{(2\pi)^{3}}\,\bar{\psi}_{v}({\bf p})\psi_{v }({\bf p}) \tag{25}\] \[-\frac{\alpha}{2\pi}\left(\frac{C}{\delta}+1\right)\frac{c}{ \hbar^{3}}\int\!\frac{d{\bf p}}{(2\pi)^{3}}\bar{\psi}_{v}({\bf p})(\not{p}-mc )\psi_{v}({\bf p})\] \[+\,\frac{\alpha}{2\pi}\,\frac{c}{\hbar^{3}}\int\!\frac{d{\bf p}}{ (2\pi)^{3}}\int_{0}^{1}\!dx\,\bar{\psi}_{v}({\bf p})\left[\not{p}(1-x)-2mc \right]\psi_{v}({\bf p})\ln\!\frac{\Delta}{x^{2}}.\] The first term in the right-hand-side is removed by mass renormalization. Turning to \(E_{1P}\), it is given by \[E_{1P}=\frac{16\pi^{2}icZ\alpha^{2}}{\hbar^{3}}\!\int\!\frac{d{\bf p}_{2}}{(2 \pi)^{3}}\int\!\frac{d{\bf p}_{1}}{(2\pi)^{3}}\,\frac{1}{|{\bf p}_{2}-{\bf p}_ {1}|^{2}}\,\bar{\psi}_{v}({\bf p}_{2})Y({\bf p}_{2},{\bf p}_{1})\psi_{v}({\bf p }_{1}), \tag{26}\] with \[Y({\bf p}_{2},{\bf p}_{1})\equiv\int\!\frac{d^{n}k}{(2\pi)^{n}}\,\frac{\gamma_ {\nu}(\not{p}_{2}-\not{k}+mc)\gamma_{0}(\not{p}_{1}-\not{k}+mc)\gamma^{\nu}}{ k^{2}\left[(k-p_{2})^{2}-(mc)^{2}\right]\left[(k-p_{1})^{2}-(mc)^{2}\right]}. \tag{27}\] In the above, \(p_{1}=(mc\epsilon_{v}\), \({\bf p}_{1})\) and \(p_{2}=(mc\epsilon_{v}\), \({\bf p}_{2})\): there is no need in this case to introduce \(\epsilon\) and \(\epsilon_{v}\) can be used directly. A standard set of manipulations then leads to \[E_{1P} = \frac{\alpha}{2\pi}\left(\frac{C}{\delta}-\frac{1}{2}\right)\frac {c}{\hbar^{3}}\!\int\!\frac{d{\bf p}}{(2\pi)^{3}}\,\bar{\psi}({\bf p})(\not{p} -mc)\psi({\bf p}) \tag{28}\] \[+\,\,2\,\frac{Z\alpha^{2}c}{\hbar^{3}}\!\int\!\frac{d{\bf p}_{2}}{ (2\pi)^{3}}\int\!\frac{d{\bf p}_{1}}{(2\pi)^{3}}\,\frac{\bar{\psi}({\bf p}_{2}) \gamma_{0}\psi({\bf p}_{1})}{|{\bf p}_{2}-{\bf p}_{1}|^{2}}\!\int_{0}^{1}\! \rho d\rho\!\int_{0}^{1}\!dx\ln\!\frac{\Delta_{1}}{\rho}\] \[+\,\frac{Z\alpha^{2}c}{\hbar^{3}}\int\!\frac{d{\bf p}_{2}}{(2\pi) ^{3}}\int\!\frac{d{\bf p}_{1}}{(2\pi)^{3}}\int_{0}^{1}\!d\rho\int_{0}^{1}\!dx \,\frac{1}{\Delta_{1}}\,\frac{\bar{\psi}({\bf p}_{2})N\psi({\bf p}_{1})}{|{\bf p }_{2}-{\bf p}_{1}|^{2}},\] where the Dirac equation has been used in the first line and the explicit form of \(N\) can be found in Ref. [7]. If we define \[E_{01P}\equiv E_{0P}+E_{1P}, \tag{29}\] after mass renormalization, we see it is ultraviolet finite. The counter-terms present in the renormalization procedure that would make the individual terms ultraviolet finite cancel because of the Ward identity. It is difficult to evaluate the finite part of \(E_{1P}\) with high precision as it stands. The solution used to improve the numerics is to employ the CPP identity introduced by Mohr [5]. The source of the numerical difficulties is the region where \(|{\bf p_{1}}-{\bf p_{2}}|\) is small. If we replace with \({\bf p_{1}}\) everywhere in the finite terms in \(E_{1P}\) except the denominator and the wave function, the Dirac equation, Eq. (16), can be used to carry out the \(d{\bf p_{2}}\) integration, resulting in a much simpler integral. By subtracting this term before using the Dirac equation, the extra cancellation that results when \(|{\bf p_{2}}-{\bf p_{1}}|^{2}\) is small allows the integral to be evaluated with the accuracy needed. (This procedure is carried out symmetrically, with \({\bf p_{1}}\) being replaced with \({\bf p_{2}}\) in a second subtraction term.) 
What we have just described is essentially the procedure introduced by Mohr [5] where the replacement of \({\bf p_{2}}\) with \({\bf p_{1}}\) in the propagator is the result of commuting the propagator through the potential. The momentum space result for the test case from \(0P\) and \(1P\) is \[E_{01P}=-767.728\,102. \tag{30}\] The accuracy of the numerical integrations, which are done using the program CUHRE from the Cuba package [14], is such that all digits shown are significant. While the accuracy could be improved, there would be no point in doing so because the partial wave expansion involved in \(E_{MP}\) leads to much larger numerical uncertainty. We begin the coordinate space evaluation of \(E_{MP}\) by carrying out the \(d^{3}k\) integration in Eq. (10), \[E_{SE}^{(2)}=i\alpha\hbar c^{2}\!\int\!\frac{dk_{0}}{2\pi}\!\iint\!d^{3}x\,d^{3}y\,\frac{e^{ik_{0}|{\bf x}-{\bf y}|}}{|{\bf x}-{\bf y}|}\,\bar{\psi}_{v}({\bf x})\gamma_{\nu}G(E_{v}-ck_{0},{\bf x},{\bf y})\gamma_{0}\gamma^{\nu}\psi_{v}({\bf y}). \tag{31}\] We define the order of the partial wave expansion \(\ell\) by introducing the standard expansion of the photon propagator, \[\frac{e^{ik_{0}|{\bf x}-{\bf y}|}}{|{\bf x}-{\bf y}|}=\sum_{\ell=0}^{\infty}\,\sum_{m=-\ell}^{\ell}4\pi ik_{0}j_{\ell}(k_{0}r)h_{\ell}(k_{0}r^{\prime})Y_{\ell m}(\hat{x})Y_{\ell m}^{*}(\hat{y}), \tag{32}\] with \(r=\min(x,y)\) and \(r^{\prime}=\max(x,y)\). We now again use \(\epsilon\), understood to be taken to \(\epsilon_{v}\) for the DJA method, and find \[E_{SE}^{(2)}(\epsilon) = i\alpha\hbar c^{2}\!\int\!\frac{dk_{0}}{2\pi}\sum_{\ell=0}^{\infty}\,\sum_{m=-\ell}^{\ell}4\pi ik_{0}\!\!\int\!\!d^{3}x\,d^{3}y\,j_{\ell}(k_{0}r)h_{\ell}(k_{0}r^{\prime})Y_{\ell m}(\hat{x})Y_{\ell m}^{*}(\hat{y}) \tag{33}\] \[\times\bigg{[}\sum_{\kappa\mu}\theta(x-y)\bar{\psi}_{v}({\bf x})\gamma_{\nu}U_{\kappa\mu}(\epsilon_{v}-k_{0},\,{\bf x})W_{\kappa\mu}^{\dagger}(\epsilon_{v}-k_{0},\,{\bf y})\gamma_{0}\gamma^{\nu}\psi_{v}({\bf y})\] \[+\ \theta(y-x)\bar{\psi}_{v}({\bf x})\gamma_{\nu}W_{\kappa\mu}(\epsilon_{v}-k_{0},\,{\bf x})U_{\kappa\mu}^{\dagger}(\epsilon_{v}-k_{0},\,{\bf y})\gamma_{0}\gamma^{\nu}\psi_{v}({\bf y})\bigg{]}.\] In this form one can analytically carry out the angle integrations along with the sum over \(m\) and \(\mu\). The resulting Clebsch-Gordan coefficients then limit the sum over \(\kappa\) for a given value of \(\ell\), and they are understood to be all included for any given partial wave. Evaluation of the integrals over \(k_{0}\), \(x\), and \(y\) can now be carried out if one has the radial functions for the electron Green function, which are available analytically in terms of Whittaker functions for the point-Coulomb case, or numerically for the general case as in the present calculations. In the DJA method, integrations over \(\omega=ck_{0}\) are carried out for each partial wave \(\ell\), and the resulting partial wave series is summed to give the final results. \(E_{SE}^{(2)}\) thus calculated will be referred to as the _Main_ term. To form the ultraviolet convergent many-potential term \(E_{MP}=E_{SE}^{(2)}-E_{0P}-E_{1P}\), we begin by subtracting the zero-potential term \(E_{0P}\) from the _Main_ term. Computationally, \(E_{0P}\) in coordinate space is the same as \(E_{SE}^{(2)}\) with the bound-electron Green function \(G(z,{\bf x},{\bf y})\) replaced by the free-electron Green function \(F(z,{\bf x},{\bf y})\) which can be generated analytically or numerically.
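The partial-wave expansion of the photon propagator in Eq. (32) is straightforward to verify numerically for real \(k_{0}\), which provides a useful sanity check on the spherical Bessel and Hankel routines that enter the coordinate-space integrals. The short script below is only an illustration (the evaluation points and the truncation order are arbitrary choices); the sum over \(m\) is done analytically with the spherical-harmonic addition theorem.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def photon_prop_partial_waves(k0, x, y, lmax):
    """Truncated partial-wave sum for exp(i*k0*|x-y|)/|x-y|, Eq. (32).

    The sum over m is performed with the addition theorem, leaving a factor
    (2l+1) P_l(cos gamma) per value of l.
    """
    rx, ry = np.linalg.norm(x), np.linalg.norm(y)
    r_lt, r_gt = min(rx, ry), max(rx, ry)
    cos_gamma = np.dot(x, y) / (rx * ry)
    total = 0.0 + 0.0j
    for l in range(lmax + 1):
        jl = spherical_jn(l, k0 * r_lt)
        hl = spherical_jn(l, k0 * r_gt) + 1j * spherical_yn(l, k0 * r_gt)
        total += 1j * k0 * (2 * l + 1) * jl * hl * eval_legendre(l, cos_gamma)
    return total

k0 = 1.3
x = np.array([0.4, 0.1, 0.2])
y = np.array([-0.3, 0.5, 0.9])
exact = np.exp(1j * k0 * np.linalg.norm(x - y)) / np.linalg.norm(x - y)
print(exact)
print(photon_prop_partial_waves(k0, x, y, lmax=30))   # agrees to many digits
```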
Partial waves of the _Main_ and \(E_{0P}\) terms up to \(\ell=30\) are shown in the second and third columns of Table 1 for the test case and their difference is shown in the fourth column. It is clear that there are substantial cancellations between _Main_ and \(E_{0P}\), but \(Main-E_{0P}\) is a partial wave expansion that does not converge, and the gradual falloff with \(\ell\) eventually goes as \(1/\ell\), which corresponds to a logarithmic divergence. We note that in evaluating \(E_{SE}^{(2)}\), a Wick rotation, \(\omega\to i\omega\), is carried out, and a deformation of the contour to avoid bound-state poles gives rise to the “Pole terms”. Details can be found in Ref. [7]. Pole terms do not involve electron Green functions and can be calculated very accurately. For the \(E_{1s}(5\alpha)\) test case considered here, there is only one \(1s\) pole term given by \[E_{1s}({\rm pole})=20\,210.432\,546. \tag{34}\] This term is combined with the \(\ell=0\) partial wave of the _Main_ term in Table 1, as this is the only partial wave affected by the \(1s\) pole from symmetry and energy considerations. To finally form the ultraviolet-finite many-potential term, we need to compute \[E_{1P} = i\alpha\hbar c^{2}\!\int\!\frac{dk_{0}}{2\pi}\!\iint d^{3}x\,d^{3}y\,\frac{e^{ik_{0}|{\bf x}-{\bf y}|}}{|{\bf x}-{\bf y}|}\,\bar{\psi}_{v}({\bf x})\gamma_{\nu}G_{2}(E_{v}-ck_{0},{\bf x},{\bf y})\gamma_{0}\gamma^{\nu}\psi_{v}({\bf y}) \tag{35}\] \[= i\alpha\hbar c^{2}\!\int\!\frac{dk_{0}}{2\pi}\!\iiint d^{3}x\,d^{3}w\,d^{3}y\,\frac{e^{ik_{0}|{\bf x}-{\bf y}|}}{|{\bf x}-{\bf y}|}\,\bar{\psi}_{v}({\bf x})\gamma_{\nu}F(E_{v}-ck_{0},{\bf x},{\bf w})\gamma_{0}\frac{\hbar cZ\alpha}{w}F(E_{v}-ck_{0},{\bf w},{\bf y})\gamma_{0}\gamma^{\nu}\psi_{v}({\bf y}).\] The ordering of the magnitudes \(x\), \(w\), and \(y\) determines which spherical Bessel functions must be used, and requires the evaluation of three different integrals. However, only one more integration variable is present compared to the zero potential term, and no numerical difficulties arise. The result is presented in the fifth column of Table 1. Subtractions of \(E_{1P}\) from \(Main-E_{0P}\) give the \(E_{MP}\) term listed in the sixth column. \begin{table} \begin{tabular}{l c c c c c c} \(\ell\) & \(Main\) & \(E_{0P}\) & \(Main-E_{0P}\) & \(E_{1P}\) & \(E_{MP}\) & \(Sum\_A\) \\ \hline 0 & 32953.2587\({}^{a}\) & 30259.7520 & 2693.5067 & 1937.5587 & 755.9480 & \(-\)11.7801 \\ \multicolumn{7}{c}{\(\vdots\)} \\ \hline \multicolumn{5}{l}{High-\(\ell\) correction from \(1/\ell^{3}\) fit} & 0.0500 & 6.2557 \\ \multicolumn{5}{l}{High-\(\ell\) correction \(\Delta\ell_{2,5}^{-3}\) from Eq. (36)} & 0.0465 & 6.2521 \\ \hline \multicolumn{5}{l}{Ref. [12]} & & 6.2516 \\ \end{tabular} \({}^{a}\)Includes the \(1s\) pole term of Eq. (34). \end{table} Table 1: DJA partial wave contributions to the self energy of the \(Z=5\) point-Coulomb \(1s\) state. \(E_{MP}=Main-E_{0P}-E_{1P}\). \(Sum\_A\) is the cumulative partial-wave sum of \(E_{MP}\). Once again, there are substantial cancellations, but the resulting partial wave series of \(E_{MP}\) now converges as \(1/\ell^{3}\). In the seventh column of Table 1, the cumulative partial-wave sum of \(E_{MP}\) is shown as \(Sum\_A\). By adding \(E_{01P}\) in Eq. (30) to the \(\ell=0\) term, \(Sum\_A\) should converge to the final self-energy result. Indeed, \(Sum\_A(\ell)\) can be seen to approach the high-precision results of \(6.2516\ldots\) from Ref.
[12], with \(Sum\_A(30)=6.2056\) converged to the first decimal point for an accuracy of \(0.74\%\). By extrapolating the partial wave series with an \(1/\ell^{3}\) fit, the high-\(\ell\) contribution from \(\ell=31-\infty\) of \(0.0500\) can be added to \(Sum\_A(30)\) for a result of \(6.2557\) shown in the third row from the bottom in Table 1. This improves the convergence by one more decimal point and the accuracy to \(0.06\%\). High-\(\ell\) corrections have also been calculated with an accelerated-convergence method based on a \(k\)-point least-square, rational polynomial fit of the form \[f_{m,k}^{-n}(\ell)\approx 1/[\ell^{n}(a_{0}+a_{1}/\ell+\cdots+a_{m}/\ell^{m})], \tag{36}\] where the number of least-square points \(k\) must be greater than the order of the rational polynomial \(m\). In fact, the \(1/\ell^{3}\) fit is a special case with \(n=3\), \(m=0\) and \(k=1\). As shown in the second row from the bottom of Table 1, the high-\(\ell\) correction \(\Delta\ell_{2,5}^{-3}\) of \(0.0465\) does accelerate the convergence and further improves the self-energy result by another decimal point to \(6.2521\) for an accuracy of \(0.01\%\). While the difference between the two high-\(\ell\) extrapolation results reflects the intrinsic uncertainty of these corrections, their contributions can be greatly reduced by extending the calculation to include more partial waves. For higher-\(Z\) ions than the \(Z=5\) test case, that is usually not necessary as partial wave series tend to converge much faster. For lower-\(Z\) ions, however, partial wave series converge much slower, and unlike the Mohr method that utilizes analytic functions extensively, the numerical approach of the DJA method limits the number of partial waves that can be accurately calculated. A different approach with faster partial wave convergence is needed. For that, we turn to the new DJB method which is based on a variation of the method in Ref. [10]. ## IV DJB method The next logical step in the potential-expansion method would appear to be the evaluation of \(E_{2P}\), given by \[E_{2P} = -4\pi i\alpha c^{2}{\int}\frac{d^{4}k}{(2\pi)^{4}}{\int}{\int}d^ {3}x\,d^{3}y\,\frac{e^{i{\bf k}\cdot({\bf x}-{\bf y})}}{k_{0}^{2}-{\bf k}^{2}+ i\delta}\,\bar{\psi}_{v}({\bf x})\gamma_{\nu} \tag{37}\] \[\times\int\!\!{\int}d{\bf r}_{1}d{\bf r}_{2}\,F(E_{v}-ck_{0},{ \bf x},{\bf r}_{1})V(r_{1})F(E_{v}-ck_{0},{\bf r}_{1},{\bf r}_{2})V(r_{2})F(E_ {v}-ck_{0},{\bf r}_{2},{\bf y})\gamma^{\nu}\psi_{v}({\bf y}).\] However, after transforming to momentum space and evaluating the \(d^{4}k\) integral with Feynman parameters, one has a multidimensional integral of nominal dimension 9. Evaluating such an integral to high precision would be an extremely challenging proposition even with the subtractions described for \(E_{1P}\). 
We consider instead an approximation \(\tilde{E}_{2P}\), \[\tilde{E}_{2P} \equiv -4\pi i\alpha c^{2}{\int}\frac{d^{4}k}{(2\pi)^{4}}{\int}{\int}d^ {3}x\,d^{3}y\,\frac{e^{i{\bf k}\cdot({\bf x}-{\bf y})}}{k_{0}^{2}-{\bf k}^{2}+ i\delta}\,\bar{\psi}_{v}({\bf x})\gamma_{\nu} \tag{38}\] \[\times\,\int\!\!{\int}d{\bf r}_{1}d{\bf r}_{2}\,F(E_{v}-ck_{0},{ \bf x},{\bf r}_{1})V(x)F(E_{v}-ck_{0},{\bf r}_{1},{\bf r}_{2})V(y)F(E_{v}-ck_{ 0},{\bf r}_{2},{\bf y})\gamma^{\nu}\psi_{v}({\bf y}).\] Because the free electron propagator \(F(z,{\bf x},{\bf y})\) emphasizes the region \({\bf x}={\bf y}\), the replacement of \(V(r_{1})\) with \(V(x)\) and \(V(r_{2})\) with \(V(y)\) in \(\tilde{E}_{2P}\) can be expected to capture a dominant part of the integral. The replacement corresponds to a CPP method, with \(V(r_{1})\) commuted to the left and \(V(r_{2})\) commuted to the right. The DJB method involves replacing the MP term in the DJA method with \[E_{MP}\,=\,(E_{MP}-\tilde{E}_{2P})+\tilde{E}_{2P}\,\equiv\,\tilde{E}_{MP}+ \tilde{E}_{2P}. \tag{39}\] The relative simplicity of \(\tilde{E}_{2P}\) comes from the identity \[\int\!\!{\int}d^{3}u\,d^{3}w\,F(z,{\bf x},{\bf u})F(z,{\bf u},{\bf w})F(z,{\bf w },{\bf y})=\frac{1}{2}\,\frac{d^{2}}{dz^{2}}F(z,{\bf x},{\bf y}). \tag{40}\] This allows the \(\mathbf{r_{1}}\) and \(\mathbf{r_{2}}\) integrations in \(\tilde{E}_{2P}\) to be carried out, and we have \[\tilde{E}_{2P}=-2\pi i\alpha c^{2}\frac{d^{2}}{dE_{v}^{2}}\int\!\!\frac{d^{4}k}{( 2\pi)^{4}}\!\!\iint\!d^{3}x\,d^{3}y\,V(x)V(y)\,\frac{e^{i\mathbf{k}\cdot( \mathbf{x}-\mathbf{y})}}{k_{0}^{2}-\mathbf{k}^{2}+i\delta}\bar{\psi_{v}}( \mathbf{x})\gamma_{\nu}F(E_{v}-ck_{0},\mathbf{x},\mathbf{y})\gamma_{0}\gamma^{ \nu}\psi_{v}(\mathbf{y}). \tag{41}\] In coordinate space form, this is to be subtracted from \(E_{MP}\), and to compensate we need to add it back in momentum space form. To do this, we work with Eq. (25), which we treated as a function of \(\epsilon\). \(\tilde{E}_{2P}\) involves differentiating with respect to \(\epsilon\) twice, after which case one can take \(\epsilon\rightarrow\epsilon_{v}\). We start by noting \[\frac{d^{2}X(p)}{de_{v}^{2}} = \frac{i}{4\pi^{2}m^{2}c^{5}}\int_{0}^{1}\!dx\,x(1-x) \tag{42}\] \[\times\left[\frac{\epsilon_{v}N_{0}-N_{1}}{\Delta}+\frac{2D_{B}( \epsilon_{v}N_{0}+N_{1})}{\Delta^{2}}\right],\] where \(N_{0}=-\gamma_{0}(1-x)/c\) and \(N_{1}=2mc+\mathbf{\gamma}\cdot\mathbf{p}\,(1-x)\). If we define the momentum space function \[\psi_{v_{1}}(\mathbf{p})\equiv\int\!\!d^{3}x\,e^{-i\mathbf{x}\cdot\mathbf{p}/ \hbar}\,\frac{Z(x)}{x}\,\psi_{v}(x) \tag{43}\] one has \[\tilde{E}_{2P}= \frac{\alpha}{3\pi m^{2}c^{3}\hbar^{3}}\!\int\!\frac{d\mathbf{p} }{(2\pi)^{3}}\!\int_{0}^{1}\!\!dx\,x(1-x)\bar{\psi}_{v_{1}}(\mathbf{p})\] \[\times\!\left[\frac{\epsilon_{v}N_{0}-N_{1}}{\Delta}+\frac{2D_{B }(\epsilon_{v}N_{0}+N_{1})}{\Delta^{2}}\right]\!\psi_{v_{1}}(\mathbf{p}),\] which can be easily evaluated with high accuracy. Its value in momentum space for the test case is given by \[\tilde{E}_{2P}=365.613\,427. \tag{45}\] Turning to the coordinate space part of the calculation, we note that the double derivative with respect to \(E_{v}\) can be carried out using the recursion relations for spherical Bessel functions. While this results in a somewhat complicated integrand, the numerical integral is of the same form as used for the other parts of the coordinate space calculation, and the results are of the same accuracy. 
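It may be worth noting that Eq. (40) is, in operator form, the resolvent identity \(F(z)^{3}=\tfrac{1}{2}\,d^{2}F/dz^{2}\) for \(F(z)=(z-H_{0})^{-1}\). A finite-dimensional analogue is easy to check numerically; in the toy script below a random Hermitian matrix stands in for the free Hamiltonian (an illustrative assumption, not the Dirac operator).

```python
import numpy as np

# Finite-dimensional analogue of Eq. (40): for the resolvent F(z) = (z - H)^(-1),
# the operator product F F F equals (1/2) d^2F/dz^2.  A random Hermitian matrix
# stands in for the free Hamiltonian.
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
H = 0.5 * (A + A.T)

F = lambda z: np.linalg.inv(z * np.eye(6) - H)

z = 5.0 + 1.0j          # any point well away from the spectrum of H
lhs = F(z) @ F(z) @ F(z)

h = 1e-3                # central finite difference for the second derivative
rhs = 0.5 * (F(z + h) - 2.0 * F(z) + F(z - h)) / h**2

print(np.max(np.abs(lhs - rhs)))   # small; only O(h^2) discretization error
```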
We list the partial waves for the test case up to \(\ell=30\) in the third column of Table 2. A check on the calculation can now be made by comparing the partial wave expansion of \(\tilde{E}_{2P}\) with the momentum space form, which can be evaluated with high precision. From Table 2, the partial wave sum of \(\tilde{E}_{2P}\) up to \(\ell=30\) is \(365.5675\), which agrees with the momentum space result in Eq. (45) to \(0.01\%\). While this check reflects the accuracy possible for the partial wave expansion, that accuracy is still limited for the same reason DJA is limited: the relatively slow convergence of the partial wave expansion. However, the partial wave expansion of DJB has two features that make the method much more accurate. The first is that the cancellation with \(E_{MP}\), shown in the sixth column of Table 1 and reshown in the second column of Table 2, makes the higher partial waves smaller by over two orders of magnitude as shown in the fourth column of Table 2. Indeed, the cumulative sum of \(\tilde{E}_{MP}=E_{MP}-\tilde{E}_{2P}\), shown as \(Sum\_B\) in the fifth column, can be seen to converge readily to \(6.2515\) at \(\ell=30\) instead of \(Sum\_A\)'s \(6.2056\) in Table 1. The second feature is that the convergence of \(\tilde{E}_{MP}\) is more rapid at \(1/\ell^{4}\). Using a range of extrapolation methods as done with the DJA method, we find that at \(\ell=30\), they all give consistent high-\(\ell\) corrections at \(\sim\)0.0001, improving the present self-energy result to \(6.2516\), same as the high-precision results of Ref. [12] down to the 4th decimal point as seen in the last two rows in Table 2. For higher-\(Z\) ions than the present test case of \(Z=5\), \(10-20\) partial waves would likely be sufficient for the DJB method, and high-\(\ell\) extrapolations may not even be necessary except for accuracy checks. DJB is a marked improvement over DJA. ## V Applications Now that we have shown the details of the DJB method for the test case, it is clear that the present DJB result of \(E_{1s}(5\alpha)=6.2516(1)\), with an uncertainty of 1 in the last digit, cannot match the accuracy of \(6.251\,627\,078(1)\) in Ref. [12]. Nevertheless, the present result is still accurate to 5 significant figures, more than enough for most applications. More importantly, the present approach is not limited to point-Coulomb cases, as bound state wave functions and electron Green functions are solved numerically instead of derived analytically. Thus, the DJB method has a wide range of applications and can be used to calculate, for example, electron screening and finite-nuclear size corrections to electron self energies. Choosing the \(2s\) state of Li-like boron as an example, we start by using a Kohn-Sham potential for the \(1s^{2}2s\) ground state to account for the screening effect. The finite-nuclear-size potential is modeled by a Fermi charge distribution with parameters \(c=1.8104\) fm and \(t=2.3\) fm. Partial wave results up to \(\ell=20\) are shown in Table 3. Specifically, \(E_{MP}\) in column 2 and \(Sum\_A\) in column 3 correspond to DJA results with one-potential expansions, while \(\tilde{E}_{2P}\) in column 4, \(E_{MP}-\tilde{E}_{2P}\) in column 5 and \(Sum\_B\) in column 6 are DJB results with the additional two-potential expansions. High-\(\ell\) corrections \(\Delta\ell_{2,5}^{-4}\) to \(Sum\_B\) from least-square rational polynomial fits of the form given in Eq.
(36) are shown for \(\ell\geq 5\) in column 7 and \(Total\_B=Sum\_B+\Delta\ell_{2,5}^{-4}\) is shown in column 8. As in Tables 1 and 2, the \(E_{MP}\), \(Sum\_A\) and \(Sum\_B\) terms have the pole and momentum-space terms included in the \(\ell=0\) partial waves so that the cumulative sums of \(Sum\_A\), \(Sum\_B\) and \(Total\_B\) will converge to the self energy \(E_{2s}(5\alpha)\). At the DJA level, it can be seen that \(E_{MP}(20)\) only goes down to \(0.0132\) and \(Sum\_A(20)\), at \(2.7855\), is far from convergence. With DJB, however, \(E_{MP}-\tilde{E}_{2P}\) is already down to \(0.0000\,3\) at \(\ell=20\), and \(Sum\_B(20)\), at \(2.9567\), is nearly converged to the last digit. When high-\(\ell\) corrections \(\Delta\ell_{2,5}^{-4}\) are added, \(Total\_B\) actually converges to \(2.9569\) with only \(8\) partial waves even though the high-\(\ell\) correction is still rather large at \(0.0025\). While \(\Delta\ell_{2,5}^{-4}\) continues to drop by an order of magnitude to \(0.0002\) at \(\ell=20\), \(Total\_B\) remains constant to the fourth decimal point. This is a good check on the accuracy of the final result and affirms the use of high-\(\ell\) extrapolation methods to accelerate the partial wave convergence. Comparing to the test case, it is clear that the DJB method converges much faster with non-Coulomb potentials even for higher-\(n\) (\(2s\) vs. \(1s\)) states. There is also no doubt that DJB is an important improvement over DJA even though the latter can give accurate enough results in most cases with larger partial wave expansions. ## VI Conclusion We have deliberately used only a modest number of partial waves in this paper. This is because we wish to emphasize that relatively simple calculations can allow quite accurate self energies to be computed. However, one application that requires extremely high accuracy is the self energy of hydrogen. As with our test case, it has been evaluated with extremely high accuracy in Ref. [12]. Because that accuracy is needed in the treatment of the finite size of the proton, a check using different methods would be useful. One of the advantages already present in the DJA method is the fact that the numerical methods used allow one to go up to values of \(\ell\approx 60\), though extreme care and very fine radial grids are needed. In fact, it is possible to control the high-\(\ell\) extrapolation so well that the DJB method is usually not qualitatively more accurate, but it works just as well as DJA with fewer partial waves and can deal with low \(Z\) better than DJA. DJB thus supersedes DJA as a general approach to self-energy calculations. However, calculations at \(Z=1\) of radiative corrections are particularly challenging even for DJB. Sophisticated summation schemes were required in the framework of the Mohr method in Ref. [12] to reach the very high accuracy results presented there. To reach similar accuracy with potential-expansion methods, many numerical issues would have to be addressed. Techniques that are more than adequate for calculations demanding part per million accuracy may fail at higher levels. We are at present working on evaluating the self energy of hydrogen and hydrogenic ions with low \(Z\). Because of the numerical problems that may be present that we have not detected, we are also looking into the use of different gauges. There are advantages to the use of both Coulomb gauge and Yennie gauge that are known to help with the infrared behavior of radiative corrections.
While very accurate calculations have in fact already been carried out in Feynman gauge, getting the same result using another gauge would clearly be a check on the calculation, and getting different answers could uncover numerical problems that had been missed. However, we conclude by emphasizing the utility and relative ease of using the DJB method described in this paper for those interested in evaluating the self energy part of the Lamb shift. \begin{table} \begin{tabular}{c c c c c} \(\ell\) & \(E_{MP}\) & \(\tilde{E}_{2P}\) & \(\tilde{E}_{MP}\) & \(Sum\_B\) \\ \hline 0 & 755.9480 & 349.1413 & 406.8067 & 4.6921\({}^{a}\) \\ \multicolumn{5}{c}{\(\vdots\)} \\ \hline \multicolumn{3}{l}{High-\(\ell\) correction \(\Delta\ell_{2,5}^{-3}\)} & 0.00010 & 6.2516 \\ \hline \multicolumn{3}{l}{Ref. [12]} & & 6.2516 \\ \end{tabular} \({}^{a}\)Includes momentum-space contributions from \(E_{01P}\) in Eq. (30) and \(\tilde{E}_{2P}\) in Eq. (45). \end{table} Table 2: DJB partial wave contributions to the self energy of the \(Z=5\) point-Coulomb \(1s\) state. \(\tilde{E}_{MP}=E_{MP}-\tilde{E}_{2P}\). \(Sum\_B\) is the cumulative partial-wave sum of \(\tilde{E}_{MP}\). ###### Acknowledgements. The work of KTC was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We would like to thank Walter Johnson, Peter Mohr, and Vladimir Yerokhin for useful conversations.
2301.05259
Detection of HCN and diverse redox chemistry in the plume of Enceladus
The Cassini spacecraft observed that Saturn's moon Enceladus possesses a series of jets erupting from its South Polar Terrain. Previous studies of in situ data collected by Cassini's Ion and Neutral Mass Spectrometer (INMS) have identified H$_2$O, CO$_2$, CH$_4$, NH$_3$, and H$_2$ within the plume of ejected material. Identification of minor species in the plume remains an ongoing challenge, owing to the large number of possible combinations that can be used to fit the INMS data. Here, we present the detection of several new compounds of strong importance to the habitability of Enceladus, including HCN, C$_2$H$_2$, C$_3$H$_6$, and C$_2$H$_6$. Our analyses of the low velocity INMS data, coupled with our detailed statistical framework, enable discrimination between previously ambiguous species in the plume by alleviating the effects of high dimensional model fitting. Together with plausible mineralogical catalysts and redox gradients derived from surface radiolysis, these compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life.
Jonah S. Peter, Tom A. Nordheim, Kevin P. Hand
2023-01-12T19:12:49Z
http://arxiv.org/abs/2301.05259v2
# Detection of HCN and diverse redox chemistry in the plume of Enceladus ###### Abstract The _Cassini_ spacecraft discovered that Saturn's moon Enceladus possesses a series of jets erupting from its South Polar Terrain. Previous studies of in situ data collected by _Cassini_'s Ion and Neutral Mass Spectrometer (INMS) have identified H\({}_{2}\)O, CO\({}_{2}\), CH\({}_{4}\), H\({}_{2}\), and NH\({}_{3}\) within the plume of ejected material. Identification of minor species in the plume remains an ongoing challenge, owing to the large number of possible combinations that can be used to fit the INMS data. Here, we present the discovery of several new compounds of strong importance to the habitability of Enceladus, including HCN, CH\({}_{2}\)O, C\({}_{2}\)H\({}_{2}\), and C\({}_{3}\)H\({}_{6}\). Our analyses of the low velocity INMS data coupled with our detailed statistical framework enable discriminating between previously ambiguous species in the plume by alleviating the effects of high-dimensional model fitting. Together with plausible mineralogical catalysts and redox gradients derived from surface radiolysis, these compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life. Shortly after its arrival in the Saturn system, the _Cassini_ spacecraft discovered intense plume activity at the mid-sized moon, Enceladus [1; 2]. _Cassini_ in situ and remote sensing observations have confirmed that the plume consists primarily of H\({}_{2}\)O gas [3; 4; 5; 6] as well as H\({}_{2}\)O-ice grains that feed Saturn's E-ring [7; 8; 9; 10]. CO\({}_{2}\) has also been detected in both the gaseous phase within the plume itself [4; 5; 6] and as a condensate in plume deposits on Enceladus' surface [11]. In situ measurements of the plume's neutral gas component made by _Cassini_'s Ion and Neutral Mass Spectrometer (INMS) further indicate the presence of CH\({}_{4}\), H\({}_{2}\), and NH\({}_{3}\) within Enceladus' subsurface ocean [4]. Although early publications identified several additional species within the plume [5], more recent work suggests that many of these compounds resulted from incidental high velocity impact fragmentation of larger molecules within the instrument antechamber [4; 12]. Analyses of plume material sampled during the lower velocity flybys of Enceladus (for which this fragmentation was less significant) do imply the existence of additional plume species, however no study to date has been able to verify the identity of any other intrinsic compounds. Difficulty in resolving minor plume constituents stems from the large number of plausible compounds relative to the low mass resolution of INMS. When training statistical models in this high dimensional regime, simple regression techniques tend to form overly complex models that produce specious results [13; 14; 15; 16]. Models of INMS spectra suffer from an additional complexity in that the signals produced by individual molecules are not necessarily linearly independent. As such, there may be multiple different combinations of species that appear to fit the data equally well. The resulting large correlations between model components generally reduce model performance and can mask the importance of any particular component by limiting statistical power [13; 17]. Studies that leave these issues unaddressed are likely to encounter model ambiguities that preclude reliable statistical inference about the plume's composition. 
In this work, we seek to resolve the apparent compositional ambiguity of Enceladus' plume. By characterizing the information content of the average low velocity spectrum obtained during the E14, E17, and E18 flybys, we determine constraints on the number of species that can reliably be extracted from the INMS dataset under optimal spacecraft conditions. We then use relative entropy minimization to assess the likelihood of tens of billions of potential models and show that multi-model inference allows for the identification of several new compounds not previously confirmed at Enceladus. Our results indicate the presence of a rich, chemically diverse environment that could support complex organic synthesis and possibly even the origin of life (Fig. 1). ## Constraints on the complexity of plume models Deconvolving the overlapping signals of each species in the plume requires comparing features in the INMS flight data to a library of known mass spectra. Consequently, species in the plume cannot be identified unless explicitly included within the models used for comparison. Previous studies have constructed model INMS spectra using a variety of methods, including singular value decomposition [18] and the application of custom-defined fit statistics [19; 20; 21; 22] (see Supplementary Information). In all cases, these fitting procedures seek to minimize the training error associated with the residual counts between the INMS spectrum and the reconstructed model fit. Crucially, however, the training error is not an appropriate metric for evaluating model performance. In fact, it is a well-known concept in statistical modeling that the training error will continue to decrease with the inclusion of additional model components, regardless of whether those components are genuinely related to the observed data. Instead, it is necessary to approximate how each model would perform on a set of independent observations. This is the preferred method of model validation when additional data sets are unavailable for explicit model testing [13]. Analysis techniques that utilize only the training error risk developing overly complex models and claiming false species detections. Here, we evaluate model performance using the small sample bias-corrected Akaike Information Criterion (AICc) [23, 24, 25, 26, 27] which estimates the relative entropy between a given model and the unknown, true distribution that produced the observed data. We construct each candidate model as a linear combination of end-member spectra representing the different cracking patterns of individual molecules. A model \(M\) is given by, \[M:\quad\hat{y}_{i}=\sum_{k=1}^{d}\beta_{k}x_{k,i} \tag{1}\] where \(\hat{y}_{i}\) is the total modeled counts in mass channel \(i\), \(x_{k,i}\) is the cracking pattern of species \(k\) at mass channel \(i\), \(\beta_{k}\geq 0\) is the regression coefficient for species \(k\), and \(d\) is the total number of species in the model. The observed INMS counts at each mass channel, \(y_{i}\), are treated as independent data points with unequal Gaussian uncertainties, \(\sigma_{i}\) (see Methods). 
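For concreteness, the maximum likelihood fit of Eq. (1) under this Gaussian error model can be cast as a weighted non-negative least-squares problem. The sketch below is illustrative only and is not the code used in this work; the array names `library`, `counts`, and `sigma` are hypothetical placeholders for the cracking-pattern library and the observed spectrum with its uncertainties.

```python
# Minimal sketch: maximum likelihood fit of Eq. (1) with non-negative coefficients.
import numpy as np
from scipy.optimize import nnls

def fit_model(library, counts, sigma):
    """Return the MLE coefficients and the variance-weighted residual sum of squares.

    library : (n_channels, d) array, one cracking pattern per column
    counts  : (n_channels,) observed counts y_i
    sigma   : (n_channels,) Gaussian uncertainties sigma_i
    """
    # Whiten the problem so ordinary NNLS minimises sum_i ((y_i - yhat_i)/sigma_i)^2.
    X = library / sigma[:, None]
    y = counts / sigma
    beta, _ = nnls(X, y)                  # enforces beta_k >= 0
    resid = y - X @ beta
    return beta, float(resid @ resid)     # coefficients and weighted RSS
```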
The AICc for each model is then given by, \[\text{AICc}=2d-2\ln[\mathcal{L}(M_{0}|y)]+\frac{2d(d+1)}{n-d-1} \tag{2}\] where \(n\) is the total number of mass channels and \(\mathcal{L}(M|y)\) is the model's likelihood function, \[\mathcal{L}(M|y)=\prod_{j=1}^{n}\left(\frac{1}{\sigma_{j}\sqrt{2\pi}}\right)\exp\left\{-\frac{1}{2}\sum_{i=1}^{n}\left(\frac{y_{i}-\hat{y}_{i}}{\sigma_{i}}\right)^{2}\right\} \tag{3}\] evaluated at the maximum likelihood estimate, \(M_{0}\), obtained by optimizing the set of \(\{\beta_{k}\}\). The last term in Eq. (2) is a correction factor that penalizes overly complex models when the sample size is small (\(n/d\lesssim 40\)) [27]. The model with the minimum AICc (\(\text{AICc}_{\text{min}}\)) asymptotically approximates the model with the lowest Kullback-Leibler information loss relative to the observed data [27, 28, 29]. Whereas standard maximum likelihood estimation seeks to minimize the training error quantified by the variance-weighted sum of squared residuals, \(\sum_{i=1}^{n}((y_{i}-\hat{y}_{i})/\sigma_{i})^{2}\), minimization of the AICc accounts for the bias introduced by estimating the regression coefficients via the training data. The relative likelihood, or evidence ratio, of each model follows as [30], \[\lambda=\exp\left\{-(\text{AICc}-\text{AICc}_{\text{min}})/2\right\} \tag{4}\] and can be used to compare models of differing complexity. Generally, models with \(\lambda<1/e\) exhibit little to no predictive power while those above this threshold form a family of most probable models suitable for multi-model inference [27].

Figure 1: New compounds identified in the Enceladus plume indicate a potentially habitable environment. (a) Jets emanating from ice fissures in Enceladus' South Polar Terrain feed a plume of ejected material containing organic molecules with varying oxidation states. Electron bombardment of the surface might help facilitate the production of oxidants and prebiotic feedstock molecules observed in the plume. These compounds could potentially support biologically-mediated redox metabolisms or polymerize to form nucleic and amino acid precursors leading to the origin of life. (b) The average oxidation state of carbon for organic compounds confirmed or suspected in the plume. Plume-derived H\({}_{2}\) and O\({}_{2}\) could act as strong reducing and oxidizing agents, respectively, and may be responsible for the diverse redox chemistry seen at Enceladus.

To capture the full range of possible plume constituents, we composed a large spectral library containing the most recently published list of plausible INMS-detectable plume species [31] as well as several additional compounds found in organic synthesis and laboratory experiments simulating icy satellites (see Methods and Extended Data Table 1). Using this library, we performed an exhaustive search up to \(d=14\) of tens of billions of potential models for the plume's composition. Fig. 2a demonstrates that optimal model performance is achieved with only 10-13 species. The maximum relative likelihood across all models peaks sharply at 11 species and drops precipitously for more complex models. A benchmark forward modeling procedure extending to \(d=50\) confirms this trend for large \(d\) (see Methods). Models with greater than 13 species drastically overfit the INMS data and incorrectly extract false signal from statistical noise.
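The comparison in Eqs. (2)-(4) can be computed directly from the variance-weighted residuals of each fit. The following sketch assumes the Gaussian error model of Eq. (3) and is illustrative only, not the implementation used here.

```python
# Sketch of Eqs. (2)-(4): AICc and relative likelihood for a candidate model.
import numpy as np

def aicc(weighted_rss, sigma, d):
    """Small-sample corrected Akaike Information Criterion, Eq. (2)."""
    n = len(sigma)
    # ln L(M0|y) = -0.5 * weighted_rss - sum_j ln(sigma_j * sqrt(2*pi))   [Eq. (3)]
    log_like = -0.5 * weighted_rss - np.sum(np.log(sigma * np.sqrt(2.0 * np.pi)))
    return 2.0 * d - 2.0 * log_like + 2.0 * d * (d + 1) / (n - d - 1)

def relative_likelihoods(aicc_values):
    """Evidence ratios lambda relative to the minimum-AICc model, Eq. (4)."""
    a = np.asarray(aicc_values, dtype=float)
    return np.exp(-(a - a.min()) / 2.0)
```

In an exhaustive search these quantities would be evaluated for every admissible combination of library species (for example, generated with `itertools.combinations` while always retaining H\({}_{2}\)O, CO\({}_{2}\), and CH\({}_{4}\)), with models satisfying \(\lambda>1/e\) retained for multi-model inference.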
Large correlations \(r_{ij}=\text{cov}(\beta_{i},\beta_{j})/\sqrt{\text{var}(\beta_{i})\,\text{var}(\beta_{j})}\) between regression coefficients in these models indicate the erroneous inclusion of redundant species with similar mass spectra (Fig. 2b). This leads to overfitting and poor model performance, despite a monotonic decrease in the training error with increasing complexity. By contrast, models with fewer than 10 species exhibit low correlations between model parameters, indicating the presence of additional features within the INMS spectrum that have not yet been fit by a candidate species (i.e., underfitting). Optimal performance occurs near the inflection point after all major data features have been explained but before redundant species are incorporated.

Figure 2: Model performance as a function of model complexity. (a) Maximum relative likelihoods across all models with \(d\) species. Blue circles indicate best-fit (highest \(\lambda\)) models identified via an exhaustive search for \(d<15\). Orange circles represent models constructed using a benchmark forward modeling procedure extending to \(d=50\) (see Methods). There is good agreement between the statistics obtained with forward modeling and the exhaustive search. Models with \(\lambda>1/e\) (dashed line) exhibit strong predictive power. All models with \(d<10\) underfit the data, while those with \(d>13\) are overfitting. (b) The maximum magnitude of pairwise correlations \(r_{ij}=\text{cov}(\beta_{i},\beta_{j})/\sqrt{\text{var}(\beta_{i})\,\text{var}(\beta_{j})}\) between regression coefficients in the best-fitting models at each value of \(d\). Blue and orange circles represent the same models as in panel (a). Correlations follow a logistic curve demonstrating model redundancy for \(d>13\). The inflection point occurs within the optimal performance interval \(d\in[10,13]\) characterized by \(\lambda>1/e\) (green shaded area).

## New species detected in the plume

The most recently published list of neutral gas species confirmed in the plume consists only of H\({}_{2}\)O, CO\({}_{2}\), CH\({}_{4}\), H\({}_{2}\), and NH\({}_{3}\) [31, 4]. According to that work, no other compounds could be definitively detected by INMS during the low velocity flybys due to model ambiguities at low mixing ratios. Here, we account for those ambiguities via a multi-model averaging procedure that treats the minimum Kullback-Leibler divergence as a statistical random variable and weights each model by its probability of minimizing this information loss (see Methods). Whereas previous studies have relied on ad-hoc assessments of model ambiguity, the procedure here is rooted in fundamental concepts of information theory and explicitly incorporates uncertainties resulting from the modeling process into the standard error (SE) of each mixing ratio. In addition to H\({}_{2}\)O, CO\({}_{2}\), and CH\({}_{4}\), we find significant evidence--beyond 2 SE precision--for HCN, CH\({}_{2}\)O, C\({}_{2}\)H\({}_{2}\), and C\({}_{3}\)H\({}_{6}\) in the plume (Table 1). Our results are agnostic to the presence of H\({}_{2}\) which requires analysis of additional INMS data not examined here (see Methods). We also demonstrate strong evidence for \({}^{40}\)Ar and O\({}_{2}\) at 1 SE precision, as well as the probable detection of an alcohol (likely C\({}_{2}\)H\({}_{6}\)O\({}_{2}\) or CH\({}_{3}\)OH), and an unidentified 43 u fragment.
Because the spectrum analyzed here is free of the major fragmentation signatures produced during the high velocity flybys, these species are most likely intrinsic to the plume itself. In general, our mixing ratios are in excellent agreement with the most recently published limits [4, 31]. The reconstructed model fit is plotted alongside the INMS spectrum in Fig. 3. Interestingly, our results suggest that NH\({}_{3}\) is not required to fit the INMS data. Although our upper limit of \(<0.58\%\) is consistent with the lower end of the range reported by Waite et al. [4, 31], we conclude that high inter-model uncertainty stemming from large contributions of CH\({}_{4}\) and H\({}_{2}\)O at overlapping mass channels precludes the firm detection of NH\({}_{3}\) (Fig. 4a). Some amount of NH\({}_{3}\) is likely required to explain the presence of nitrogen-bearing ions observed in Saturn's magnetosphere [32], but a stricter bound on the mass 16 count rate is needed before a percent-level concentration of NH\({}_{3}\) from Enceladus can be presumed (see Supplementary Information). Notably, it has been shown that even a significantly smaller concentration of NH\({}_{3}\) in the plume could still source the observed nitrogen abundance on Titan [33]. We find that nitrogen is, however, definitively present at Enceladus in the form of HCN. Previous studies have been unable to resolve the HCN abundance due to confounding signals from fragmentation products at mass 28. In their analysis of the E2 flyby, Waite et al. [6] quote an upper limit of \(<0.5\%\), whereas upper limits of \(<0.58\%\) and \(<0.74\%\) are given by Waite et al. [5], based on the high velocity E3 and E5 flybys. For the faster flybys in particular, the authors note that the elevated count rate at 28 u introduces a model ambiguity at masses 27 and 28 (N\({}_{2}\)+HCN versus C\({}_{2}\)H\({}_{4}\)) that precludes the identification of HCN. Here we correct the counts at mass 28 based on the INMS Open Source Neutral Beam (OSNB) data [4] to reveal that HCN is required at 27 u (see Methods, Fig. 4b, and Extended Data Fig. 1). The mixing ratio reported here (\(0.12\pm 0.04\%\)) is consistent with that observed in cometary comae [34, 35, 36]. 
\begin{table} \begin{tabular}{l l l l l} \hline Evidence & Species & Probability\({}^{\rm d}\) & Mixing Ratio (\(\%\))\({}^{\rm d}\) & Previous Limit (\(\%\))\({}^{\rm e}\) \\ \hline Confirmed & H\({}_{2}\)O & 1 & \(98.3\pm 0.3\) & \(96-99\) \\ & CO\({}_{2}\) & 1 & \(0.29\pm 0.04\) & \(0.3-0.8\) \\ & CH\({}_{4}\) & 1 & \(0.23\pm 0.05\) & \(0.1-0.3\) \\ & HCN & \(>0.99\) & \(0.12\pm 0.04\) & \(0.01-0.2\) \\ & C\({}_{2}\)H\({}_{2}\) & \(>0.99\) & \(0.025\pm 0.005\) & \(0.01-0.2\) \\ & CH\({}_{2}\)O & \(>0.99\) & \(0.026\pm 0.010\) & \(0.01-0.2\) \\ & C\({}_{3}\)H\({}_{6}\) & \(0.88\) & \(0.0041\pm 0.0020\) & \(<0.01\) \\ & H\({}_{2}\)\({}^{\rm a}\) & \(-\) & \(-\) & \(0.4-1.4\), 0.9\({}^{\rm f}\) \\ Strong & \({}^{\rm 40}\)Ar & 0.85 & \(0.0014\pm 0.0009\) & \(<0.01\) \\ & Alcohols\({}^{\rm b}\) & 0.81 & \(<0.021\) & \(-\) \\ & [C\({}_{2}\)H\({}_{6}\)O\({}_{2}\)] & 0.38 & \(<0.021\) & \(<0.01\) \\ & [CH\({}_{3}\)OH] & 0.31 & \(<0.0041\) & \(<0.01\) \\ & 43 u fragments\({}^{\rm c}\) & 0.81 & \(<0.0016\) & \(-\) \\ & (C\({}_{3}\)H\({}_{6}\)O) & 0.27 & \(<0.0015\) & \(<0.01\) \\ & (C\({}_{2}\)H\({}_{6}\)N\({}_{2}\)) & 0.26 & \(<0.0016\) & \(<0.01\) \\ & O\({}_{2}\) & 0.79 & \(0.0027\pm 0.0020\) & \(<0.01\), \(<0.004^{\rm f}\) \\ Moderate & H\({}_{2}\)S & 0.44 & \(<0.0031\) & \(<0.01\) \\ & PH\({}_{3}\) & 0.36 & \(<0.0025\) & \(<0.01\) \\ Poor & C\({}_{3}\)H\({}_{5}\)Cl & 0.17 & \(<0.0012\) & \(<0.01\) \\ & CH\({}_{3}\)CN & 0.15 & \(<0.0028\) & \(<0.01\) \\ & NH\({}_{3}\) & 0.10 & \(<0.58\) & \(0.4-1.3\) \\ & C\({}_{3}\)H\({}_{7}\)NO\({}_{2}\) & 0.08 & \(<0.0014\) & \(<0.01\) \\ \hline \end{tabular} \({}^{\rm a}\) Determination of the H\({}_{2}\) mixing ratio requires analysis of additional INMS data not examined here (see Methods). \({}^{\rm b}\) Other alcohols such as C\({}_{3}\)H\({}_{8}\)O, C\({}_{2}\)H\({}_{6}\)O, or C\({}_{4}\)H\({}_{10}\)O might also contribute to this mixing ratio (see Supplementary Material). \({}^{\rm c}\) Other mass 43 fragments produced by C\({}_{4}\)H\({}_{10}\), C\({}_{4}\)H\({}_{6}\)O\({}_{2}\), C\({}_{4}\)H\({}_{9}\)N, C\({}_{5}\)H\({}_{9}\)N, C\({}_{8}\)H\({}_{18}\), C\({}_{5}\)H\({}_{12}\), or C\({}_{2}\)H\({}_{4}\)O\({}_{2}\) might also contribute to this mixing ratio. \({}^{\rm d}\) Denotes values calculated in this work. \({}^{\rm e}\) Denotes values presented in ref. [31] unless otherwise specified. \({}^{\rm f}\) Denotes values presented in ref. [4]. \end{table} Table 1: Volume mixing ratios for the Enceladus plume. Probabilities and mixing ratios are shown for all species with upper limits above the estimated INMS noise floor. Values are calculated using a multi-model averaging procedure that incorporates uncertainties due to model ambiguities (see Methods). Mixing ratios are given as mean +/- SE and are scaled to incorporate the 0.9% H\({}_{2}\) number abundance reported in ref. [4] (see Methods). Upper limits (\(<3\) SE) are reported for species with mean values below their corresponding SE. The minimal model is comprised of “confirmed” species with \(>2\) SE precision and represents the most conservative model of the plume. Species listed in brackets are included in the alcohol mixing ratio. Species listed in parentheses are included in the 43 u fragment mixing ratio. We also report the first strong evidence for native O\({}_{2}\) in the plume. In their analysis of the low velocity flybys, Waite et al. [4] noted that mass 32 exhibits an elevated count rate, likely in part due to surface processing between H\({}_{2}\)O and the Ti instrument antechamber. 
Accounting for this instrument effect, they estimated a corrected mass 32 signal equal to \((100/45)\times 0.004\%=0.0089\%\) of the counts measured at mass 18. In accounting for additional species, our statistical analysis yields an excess at mass channel 32 that is best explained by an O\({}_{2}\) mixing ratio of 0.0027%. The value of mass 32 counts used in this work (Fig. 4c) is well within the limit imposed by Waite et al., and thus we argue that it is a true measure of native O\({}_{2}\) from Enceladus. Our analysis also indicates moderate (Table 1) evidence for either H\({}_{2}\)S or PH\({}_{3}\) in the plume, with probabilities of 0.44 and 0.36, respectively. This conclusion stems primarily from masses 33 and 34, where the omission of these species leaves a small residual signal that cannot be explained by other library spectra (Fig. 4c). However, this signal alone is not enough to unambiguously conclude their presence within the INMS data. Both species improve the model fit when added to the minimal model ("confirmed" species in Table 1), but not when additional higher likelihood species are included ("strong" species in Table 1). Our results are therefore only suggestive of a 34 u compound in the plume. Given the profound astrobiological implications of finding sulfur or phosphorous compounds at Enceladus, mass spectra for alternative 34 u species (such as H\({}_{2}\)O\({}_{2}\)) should be characterized experimentally and investigated in future studies.

## Discussion

In aggregate, the results presented here indicate that Enceladus is host to a multiphasic and compositionally diverse chemical environment that is consistent with a habitable subsurface ocean. The new species identified in this work also suggest that this ocean may contain the necessary building blocks required to synthesize compounds important to the origin of life. The detection of CH\({}_{2}\)O, O\({}_{2}\), and an alcohol is particularly interesting as these species potentially imply a diverse redox environment within the ocean (Fig. 1). Past measurements of CH\({}_{4}\) and H\({}_{2}\) in the plume [4, 5, 6] have supported the hypothesis that Enceladus may be hydrothermally active and could be a source of biologically useful reductants. In particular, methanogenesis via the reduction of CO\({}_{2}\) has been proposed as a potential pathway that could support extant microbial communities near the sea floor [37, 38]. However, without additional oxidants, reductants in the ocean would be of little biochemical utility, as no electron transfer mechanism beyond methanogenesis would be available to yield a negative change in Gibbs free energy. The detection of O\({}_{2}\) and partially oxidized carbon compounds may solve this problem, as they provide a multitude of highly exergonic redox pathways that could help power life in Enceladus' subsurface ocean (see e.g., refs. [39; 40]).

Figure 3: Average low velocity INMS spectrum and reconstructed model fit. The black silhouette shows the 12-47 u range of the average INMS spectrum obtained during the E14, E17, and E18 flybys adapted from ref. [12] with correction for minor artifacts (see Methods and Extended Data Fig. 1). Counts at all other mass channels are at or below the estimated noise floor (dashed line). Error bars show 1\(\sigma\) Gaussian uncertainty in the observed count rates. Red circles indicate the model fit based on the mixing ratios in Table 1.

Figure 4: Contributions of individual species to the model fit. Black silhouettes show different mass ranges for the average low velocity INMS spectrum plotted in Fig. 3. Error bars show 1\(\sigma\) Gaussian uncertainty in the observed count rates and dashed lines denote the INMS noise floor. (a) Blue circles show the total contribution of H\({}_{2}\)O+CO\({}_{2}\)+CH\({}_{4}\). Green circles show the contribution of NH\({}_{3}\), which is not required for satisfactory model performance. (b) Model contributions from CO\({}_{2}\), HCN, CH\({}_{2}\)O, and C\({}_{2}\)H\({}_{2}\) (magenta, cyan, orange, and purple circles, respectively). (c) Navy circles show the contribution from O\({}_{2}\) added to the total contribution from alcohols. Pink circles show the tentatively proposed contribution of H\({}_{2}\)S+PH\({}_{3}\). Low signal-to-noise ratios at these mass channels preclude the firm identification of H\({}_{2}\)S or PH\({}_{3}\). (d) Brown and indigo circles show contributions from C\({}_{3}\)H\({}_{6}\) and \({}^{40}\)Ar, respectively. Teal circles show the total contribution from 43 u fragments.

Our results also provide the first conclusive evidence for HCN, CH\({}_{2}\)O, C\({}_{2}\)H\({}_{2}\), and C\({}_{3}\)H\({}_{6}\) in the plume, which, in addition to their significance for habitability, are particularly intriguing for their relevance to prebiotic chemistry. HCN polymerization is implicated in a number of potential pathways for the formation of nucleobases and amino acids [41; 42; 43; 44]. The autocatalysis of CH\({}_{2}\)O to form simple sugars and RNA precursors is also well-documented [45], as is its production on the early Earth [46]. Although these reactions might be limited within a dilute subsurface ocean, concentrated conditions favorable for polymerization could be achieved within the ice shell via eutectic freezing [47; 48; 42]. Indeed, Levy et al. [49] demonstrated the production of alanine, glycine, aspartic acid, and adenine from HCN and NH\({}_{3}\) under conditions similar to those of icy satellites. Repeated freezing and thawing caused by the cycling of material between the ice shell and the warmer interior fissures of the plume vents may provide conditions favorable for organic synthesis. In accordance with this scenario, there does exist evidence that the plume contains at least some material that has not been diluted by the ocean. Liquid water facilitates the rapid hydrolysis of HCN into CH\({}_{3}\)NO, which subsequently decays into CH\({}_{2}\)O\({}_{2}\) and NH\({}_{3}\) [47]. The simultaneous presence of ppt amounts of HCN and absence of CH\({}_{3}\)NO and CH\({}_{2}\)O\({}_{2}\) suggests that the plume may be sourced, in part, by solid-phase material residing in the ice shell [5]. HCN at Enceladus could be primordial in nature or produced via the radiolysis of nitrogen-bearing surface ices by magnetospheric electrons. Laboratory experiments of hydrocarbon-rich ices simulating the temperature and surface radiolysis conditions of Enceladus support the evidence for cyano group-containing species. Hand [50] and Hand et al. [51] found that the warming of H\({}_{2}\)O+NH\({}_{3}\)+C\({}_{3}\)H\({}_{6}\) ice films from 70 K to 300 K after irradiation by 10 keV electrons produces HCN and nitrile species, including CH\({}_{3}\)CN. Irradiation of H\({}_{2}\)O+CO\({}_{2}\) ice in the presence of hydrocarbons can also produce CH\({}_{2}\)O and alcohols [50].
Thus, the detection of HCN, CH\({}_{2}\)O, and an alcohol (as well as O\({}_{2}\) and possibly CH\({}_{3}\)CN) in the plume could potentially be explained by the aerosolization of radiolytically processed material on or near the surface. Additional evidence from organic-rich ice grains detected in the plume further suggests the presence of solid-phase organic films that exist above the water table and are transported to the surface via the plume vents [12]. The macromolecular nature of these compounds, some in excess of 200 u, might be evidence of ongoing synthetic chemistry. Accumulation and isolation of buoyant organic compounds at the cold ice-water interface between the ocean and ice shell may also promote longer lifetimes and inhibit hydrolysis [52]. Of course, the availability of organic compounds at Enceladus to support or facilitate the origin of life likely depends strongly on the geochemistry of the subsurface ocean. Although the mineral composition of the ocean floor is unknown, the simultaneous detections of SiO\({}_{2}\) particles by CDA [53] and gaseous H\({}_{2}\) by INMS [4] indicate the presence of a complex hydrothermal environment [54]. Similarities between the inferred temperature and pH of Enceladus' ocean and those of the Lost City hydrothermal vents point towards serpentinization reactions as a possible source for the observed H\({}_{2}\) abundance [55; 4; 56]. Such an explanation would also be consistent with the tentative evidence for H\({}_{2}\)S presented here. If ferrous iron is present at Enceladus (as it is at the Lost City sites) along with H\({}_{2}\)S, sufficient reducing power may be available for a metabolic pathway to biogenesis [57; 42; 58]. Laboratory evidence suggests that confirmation of C\({}_{3}\)H\({}_{6}\) in the plume might allow for the formation of vesicle-type structures at Enceladus, which could then shelter the burgeoning proto-metabolism [50]. An alternative scenario for complex organic synthesis could be realized through the photochemical processing of ocean material ejected by the plume onto the surface. HCN might be sequestered as ferrocyanide by sodium or potassium salts, both of which have been found in plume ice grains [8; 9]. Ferrocyanide is an important feedstock molecule for the cyanosulfidic prebiotic chemistry paradigm, which relies on reductive homologation of HCN to form the sugars needed for ribonucleotide assembly [59; 44; 60]. If there exists sufficient UV photon processing of organic material on Enceladus' surface, the selective production of RNA and amino acid precursors from plume-derived HCN, CH\({}_{2}\)O, C\({}_{2}\)H\({}_{2}\) (and possibly H\({}_{2}\)S or PH\({}_{3}\)) together with plausible mineralogical catalysts might occur [61; 44]. This surface-based production could then feed back into the plume or ocean via downward transport through the ice shell. Whether this type of chemistry is efficient under Enceladus-like conditions could be explored in future experimental studies, while more detailed examination of Enceladus' oceanic material will require future robotic missions.

## Methods

### Instrument overview

The _Cassini_ Ion and Neutral Mass Spectrometer (INMS) was a quadrupole mass spectrometer designed primarily for neutral gas analysis [62]. When INMS was operating in its Closed Source Neutral (CSN) mode, neutral molecules were accumulated in the instrument antechamber before being directed towards a 70 eV electron source for subsequent ionization.
Detections of ionized molecules at the instrument target were then recorded sequentially at individual mass channels corresponding to mass-to-charge ratios (m/z), where z is typically assumed to be 1. The INMS mass channels ranged from 1 to 99 u at a resolution of 1 u, excluding 9, 10, and 11 u. When neutral molecules enter the ionization region of a mass spectrometer such as INMS, emitted electrons from the electron source both ionize incoming parent molecules and produce dissociated, ionized fragments. For a given electron energy (e.g., 70 eV), each incoming parent molecule fragments according to a specific cracking pattern that describes the relative proportions of the dissociated products. As such, a mass spectrum obtained from a single molecular species will exhibit counts at mass channels pertaining to the molecular weight of the ionized parent molecule, as well as those of each of the ionized dissociation products. INMS spectra therefore consist of a combination of overlapping spectral features resulting from each parent molecule's cracking pattern. INMS was not able to detect parent molecules or fragments with masses above the maximum mass of 99 u. However, larger molecules may have impacted the instrument walls and fragmented into smaller molecules that were within the detectable mass range [5]. The rate of these impact fragmentation events depends heavily on the kinetic energy of the spacecraft [31]. As such, spectra generated from faster flybys (\(>\) 10-15 km/s) exhibit a suite of spectral features (associated with impact fragments of higher mass molecules) that are not observed at lower speeds [4; 31; 12]. As a result, measurements obtained during the lower velocity flybys of Enceladus provide the best opportunity to study the intrinsic plume composition in the absence of fragmentation effects. ### Data set selection During the E14, E17, and E18 flybys of Enceladus, _Cassini_ made a sequence of three low velocity (\(<\) 7.5 km/s) passages through the plume along nearly identical trajectories. These flybys, referred to as the "slow" flybys [4], produced the most consistent INMS data with which to analyze the plume's intrinsic neutral gas composition. Here, we use the averaged CSN spectrum obtained across these three flybys, as done by Waite et al. in ref. [4]. This averaged slow flyby spectrum is identical to that presented in ref. [12], with minor differences described below. In their supplementary material, Waite et al. [4] deduce that the majority of counts at mass 28 are attributable to fragmentation products (possibly of CO\({}_{2}\)) that are not reflective of the plume's intrinsic composition. Fragments from higher mass hydrocarbon species likely also contribute to this mass channel [12]. Measurements taken during the E21 flyby using the INMS Open Source Neutral Beam (OSNB) mode place an upper limit on intrinsic mass 28 counts at 0.05% of the signal at mass 18 [4]. As this study is focused on identifying native species in the plume, we account for this effect by correcting the counts at mass 28 accordingly (Extended Data Fig. 1). In order to quantify variability in the INMS data between flybys, we assume a 30% Gaussian uncertainty on all mass channels, similar to that described by Waite et al. in their own analysis of the slow flyby spectrum [4]. However, given the strong signal at mass 18, we use a standard Poisson uncertainty at this mass channel, equal to the square root of the number of counts. 
This choice is warranted given that mass 18 is the anchor mass used to standardize the E14, E17, and E18 flybys. Moreover, mass channels beyond 46 u exhibit significantly lower signal-to-noise ratios than lower mass channels. To avoid erroneously fitting to these data points, we implement a larger fractional uncertainty on peaks that lie below the count levels at these mass channels. This larger uncertainty is equal to the typical count value measured at these mass channels (2 counts) and serves as the global noise floor for the spectrum. When INMS was operating in the CSN mode, the vast majority of counts at 1 and 2 u arose due to interactions between gas phase H\({}_{2}\)O molecules and the walls of the instrument antechamber [4]. Proper assessment of these mass channels requires a careful analysis of data from the instrument's OSNB mode. This problem has been explored in detail by Waite et al. [4] who attribute \(\sim\)98% of the total signal at 1 and 2 u to H\({}_{2}\) (at a mixing ratio of \(\sim\)0.9%) and H\({}_{2}\)O. The cracking pattern for H\({}_{2}\) does not contribute to any other mass channels and is therefore independent from the deconvolution of the remainder of the spectrum. Furthermore, the mixing ratio of H\({}_{2}\)O is predominantly dictated by high leverage points at masses 18 and 17 and not significantly affected by these lower mass channels. For these reasons, we chose to adopt the H\({}_{2}\) mixing ratio of ref. [4] and exclude masses 1 and 2 from our analysis (Extended Data Fig. 1). ### Spectral library As discussed in the main text, species that are present within the INMS spectrum cannot be identified unless explicitly included within a reference library used for comparison. However, large spectral libraries suffer from high dimensionality; the inclusion of too many library species leads to models that are unnecessarily complex and prone to overfitting. This phenomenon is similar to how any univariate relationship can be fit with zero residual error using a sufficiently high order polynomial. Although overfitting can be ameliorated by using bias correcting heuristics such as the AICc, chemically implausible species may still be incorporated into the final model if they are included within the initial library. As an additional hindrance, the inclusion of many (possibly collinear) model parameters reduces statistical power and raises the computational intensity of the analysis. It is therefore desirable to constrain the library of candidate species as much as possible based on prior knowledge and the results of previous studies. Our library consists of a carefully chosen set of 50 compounds, including the most recent list of INMS-detectable candidate plume species presented in ref. [31]. All library species and their reasons for inclusion are presented in Extended Data Table 1. Because all counts observed above 46 u during the slow flybys are below the estimated noise floor, no species with base peaks above this threshold were included in this analysis. The spectra in our library are 70 eV electron ionization mass spectra collected from the INMS Refurbished Engineering Unit (REU) and the National Institute of Standards and Technology (NIST) online database. We adopt the method used in refs. [19; 63] to prioritize the REU spectra when available. REU spectra were obtained from the most recent INMS calibration file provided on the Planetary Data System. All spectra were analyzed at 1 u resolution and matched to the mass detection range of INMS. 
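As a rough illustration of the data preparation described in this and the preceding subsection, the sketch below base peak normalises a set of library spectra (as discussed immediately below) and assembles the per-channel uncertainties: 30% fractional errors, a Poisson error at the mass 18 anchor channel, a 2-count noise floor, and the OSNB-based cap on intrinsic mass 28 counts. Array names and channel indexing are assumptions made for illustration only.

```python
# Sketch of the spectrum/library preparation described in Methods (illustrative).
import numpy as np

def prepare_inputs(masses, counts, raw_library):
    """masses      : (n,) integer mass channels
       counts      : (n,) averaged slow-flyby counts
       raw_library : (n, d) candidate cracking patterns, one column per species
    """
    counts = counts.astype(float).copy()
    i18 = int(np.flatnonzero(masses == 18)[0])
    i28 = int(np.flatnonzero(masses == 28)[0])

    # Cap intrinsic mass 28 counts at 0.05% of the mass 18 signal (OSNB limit).
    counts[i28] = min(counts[i28], 5e-4 * counts[i18])

    # 30% fractional Gaussian uncertainties, a Poisson error at the mass 18
    # anchor channel, and a 2-count noise floor for weak peaks.
    sigma = 0.3 * counts
    sigma[i18] = np.sqrt(counts[i18])
    sigma = np.maximum(sigma, 2.0)

    # Base peak normalisation of the library spectra.
    library = raw_library / raw_library.max(axis=0, keepdims=True)
    return counts, sigma, library
```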
All spectra were base peak normalized to ensure a uniform method of comparison. ### Benchmarking model complexity In deducing limits on INMS model complexity, we constructed all possible combinations of plume species up to \(d=14\) from a candidate library of 50 plausible compounds (Extended Data Table 1). To reduce the computational intensity associated with this analysis, we assumed H\({}_{2}\)O, CO\({}_{2}\), and CH\({}_{4}\) to be present in each model. H\({}_{2}\)O and CO\({}_{2}\) have been verified at Enceladus by independent Cassini instruments [64; 65], and CH\({}_{4}\) is among the most consistent INMS observations, having been detected on every flyby that sampled the plume [4; 5; 6]. This resulted in a total of \(\sum_{d=3}^{14}\frac{47!}{(d-3)!(50-d)!}\approx 2.4\times 10^{10}\) models for direct comparison. In order to efficiently sample the parameter space of more complex models, we implemented a forward modeling stepwise selection procedure [13]. Starting from the three-component model of H\({}_{2}\)O+CO\({}_{2}\)+CH\({}_{4}\), the next most complex model was constructed by including an additional species that, when added to the model, resulted in the greatest decrease in the variance-weighted sum of squared residuals. This selection process was repeated--sequentially incorporating new species based on their influence on the sum of squared residuals--to produce a set of nested models, each containing one more species than the last. Notably, the sum of squared residuals (i.e., the training error) was only used to compare models of equal complexity and therefore selects the same species as would comparisons based on the AICc. As shown in Fig. 2 of the main text, the results of this forward modeling algorithm closely resemble those of the exhaustive model search for \(d<15\). Extrapolation of these results to more complex models confirms a monotonic decrease in relative model likelihood for \(d>11\). ### Multi-model inference In order to determine mixing ratios that properly account for the ambiguity induced by model degeneracy, we computed model-averaged regression coefficients based on the relative likelihood of each model, \[\bar{\beta}_{k}=\sum_{j=1}^{R}w_{j}\beta_{k,j}. \tag{5}\] Here, \(\beta_{k,j}\) is the regression coefficient for species \(k\) in model \(j\), \(R\) is the total number of most probable models (those with \(\lambda>1/e\)), and \(w_{j}=\lambda_{j}/\sum_{m}^{R}\lambda_{m}\) is the Akaike weight [27] of each model. Because the observed INMS spectrum consists of sample data taken from the unknown population distribution of plume contents, the minimum AICc model is a statistical random variable that estimates the actual (also unknown) model that minimizes the Kullback-Leibler divergence with the true distribution. Therefore, the Akaike weight may be interpreted as the probability that a given model minimizes this information loss between the model and the unknown, true distribution from which the observed INMS data were sampled. The probability that each species is present in the true minimum AICc model follows accordingly as, \[\mathcal{P}_{k}=\sum_{j=1}^{R}w_{j}\Theta(\beta_{k,j}) \tag{6}\] where \(\Theta(\beta_{k,j})\) is the Heaviside step-function that equals 1 when \(\beta_{k,j}>0\) and is zero otherwise. 
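The multi-model quantities in Eqs. (5) and (6), together with the unconditional standard error and mixing-ratio conversion given below in Eqs. (7) and (11), reduce to a few weighted sums. The sketch below is illustrative only and is not the code used for the published values; inputs such as `betas`, `variances`, `rel_like`, `mu`, and `s` are hypothetical arrays holding the per-model coefficients and their conditional variances, the relative likelihoods, the molecular masses, and the INMS sensitivities.

```python
# Sketch of Akaike-weighted model averaging and mixing-ratio conversion.
import numpy as np

def model_average(betas, variances, rel_like):
    """betas     : (R, K) coefficients (zero where a species is absent from a model)
       variances : (R, K) conditional variances var(beta_{k,j})
       rel_like  : (R,) relative likelihoods lambda_j from Eq. (4)
    """
    w = rel_like / rel_like.sum()                           # Akaike weights w_j
    beta_bar = w @ betas                                    # Eq. (5)
    prob = w @ (betas > 0)                                  # Eq. (6)
    se = w @ np.sqrt(variances + (betas - beta_bar) ** 2)   # Eq. (7), below
    return beta_bar, prob, se

def mixing_ratios(beta_bar, mu, s, m_h2=0.9):
    """Percent mixing ratios from averaged coefficients, following Eq. (11) below."""
    x = beta_bar / (np.sqrt(mu) * s)      # proportional to the ambient density n_k
    return (100.0 - m_h2) * x / x.sum()
```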
We then computed uncertainties in the model parameters using the unconditional standard error estimator [27], \[\text{SE}(\bar{\beta}_{k})=\sum_{j=1}^{R}w_{j}\sqrt{\text{var}(\beta_{k,j})+(\beta_{k,j}-\bar{\beta}_{k})^{2}} \tag{7}\] where var\((\beta_{k,j})\) is the variance of the regression coefficient conditional on model \(M_{j}\). Here, var\((\beta_{k,j})\) characterizes the intra-model uncertainty associated with maximum likelihood estimation, whereas the term \((\beta_{k,j}-\bar{\beta}_{k})^{2}\) quantifies inter-model variability due to the presence of additional high likelihood models. Importantly, the results presented in this work incorporate the relative probabilities of each model and account for ambiguities in the model selection process.

### Conversion to mixing ratios

The formula for converting INMS counts to ambient densities is described in ref. [66] and given by, \[n_{k}=\left(\frac{T_{0}}{T_{a}}\right)^{1/2}\frac{1}{D_{k}}\frac{X_{k}}{s_{k}} \tag{8}\] where \(X_{k}\) is the count rate of species \(k\) (measured at the base peak), \(s_{k}\) is the INMS sensitivity, \(D_{k}\) is the ram enhancement factor, \(T_{a}\) is the ambient temperature, and \(T_{0}\) is room temperature (293 K). When available, INMS sensitivities were obtained from refs. [18; 19; 63] and otherwise estimated based on the electron impact ionization cross section procedure of Fitch and Sauter (see Eq. 2 of ref. [67]) adapted to INMS data in refs. [19; 63]. For one species (PH\({}_{3}\)), cross section data was taken from ref. [68]. As done in ref. [63], a 30% uncertainty was implemented on all sensitivities estimated from NIST spectra. The count rate is related to the model-averaged regression coefficient of Equation (5) by \(X_{k}=\bar{\beta}_{k}c_{0}\) where \(c_{0}\) is the standardized (species-independent) base peak count rate of each library spectrum. The mixing ratio for each species \(m_{k}\) (scaled to include the H\({}_{2}\) mixing ratio \(m_{\rm H_{2}}=0.9\%\) given in ref. [4]) is then, \[m_{k}=(100-m_{\rm H_{2}})\frac{n_{k}}{\sum_{l}n_{l}}=(100-m_{\rm H_{2}})\frac{\bar{\beta}_{k}}{D_{k}s_{k}}\left[\sum_{l}\frac{\bar{\beta}_{l}}{D_{l}s_{l}}\right]^{-1}. \tag{9}\] At suprathermal spacecraft speed and small ram angle (conditions valid during the E14, E17, and E18 flybys), the ram enhancement factor is approximately [4; 66], \[D_{k}\sim 0.7u\sqrt{\frac{2\pi\mu_{k}}{k_{B}T_{a}}} \tag{10}\] for spacecraft speed \(u\) and molecular mass \(\mu_{k}\). The final expression for the mixing ratio of each species is therefore, \[m_{k}=(100-m_{\rm H_{2}})\frac{\bar{\beta}_{k}}{\sqrt{\mu_{k}}\,s_{k}}\left[\sum_{l}\frac{\bar{\beta}_{l}}{\sqrt{\mu_{l}}\,s_{l}}\right]^{-1}. \tag{11}\] Future updates to INMS sensitivity coefficients or ram enhancement factors may change the relative mixing ratios presented here but will not affect which species have been identified--or the evidence for their detection--as these depend only on the set of \(\{\bar{\beta}_{k}\}\).

## Acknowledgements

The authors thank Dr. J. Hunter Waite and Dr. Brian A. Magee for their help on interpreting INMS instrument effects. J.S.P. thanks Dr. Masao Sako and Dr. Kiri L. Wagstaff for useful discussions and statistical insight. J.S.P. also thanks Dr. Dimitar D. Sasselov for helpful discussions on prebiotic chemistry. All authors acknowledge the support of the Cassini Data Analysis Program (NNN13D466T) and the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. T.A.N.
was also supported by an appointment to the NASA Postdoctoral Fellowship Program at the Jet Propulsion Laboratory administered by Oak Ridge Associated Universities and Universities Space Research Association through a contract with NASA. K.P.H. also acknowledges support from the NASA Astrobiology Program (80NSSC19K1427) and the Europa Lander Pre-Project, managed by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

Extended Data Table 1. Complete list of species included in the analysis. Those taken from ref. [31] comprise the most recently published list of INMS-detectable species in the plume. REU: INMS Refurbished Engineering Unit; NIST: National Institute of Standards and Technology online database.

\begin{tabular}{l l l l l l l} \hline Species & Name & Mass (u) & Source & \(s_{k}\) & \(s_{k}\) Ref. & Reason/Ref. \\ \hline
CH\({}_{4}\) & Methane & 16 & REU & 6.01\(\times 10^{4}\) & [63] & [4, 31] \\
NH\({}_{3}\) & Ammonia & 17 & REU & 4.77\(\times 10^{4}\) & [63] & [4, 31] \\
H\({}_{2}\)O & Water & 18 & REU & 4.34\(\times 10^{4}\) & [63] & [4, 31] \\
C\({}_{2}\)H\({}_{2}\) & Acetylene & 26 & REU & 8.81\(\times 10^{4}\) & [63] & [6, 31] \\
HCN & Hydrogen Cyanide & 27 & REU & 5.20\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{2}\)H\({}_{4}\) & Ethylene & 28 & REU & 6.21\(\times 10^{4}\) & [63] & [4, 31] \\
CO & Carbon Monoxide & 28 & REU & 6.60\(\times 10^{4}\) & [63] & [4, 31] \\
N\({}_{2}\) & Nitrogen & 28 & REU & 6.29\(\times 10^{4}\) & [63] & [4, 31] \\
CH\({}_{2}\)O & Formaldehyde & 30 & NIST & 3.21\(\times 10^{4}\) & [63] & [5, 31] \\
NO & Nitric Oxide & 30 & NIST & 5.28\(\times 10^{4}\) & [31] \\
C\({}_{2}\)H\({}_{6}\) & Ethane & 30 & REU & 6.94\(\times 10^{4}\) & [63] & [5, 31] \\
CH\({}_{5}\)N & Methylamine & 31 & NIST & 4.07\(\times 10^{4}\) & [63] & [31] \\
CH\({}_{3}\)OH & Methanol & 32 & NIST & 4.17\(\times 10^{4}\) & [63] & [4, 31] \\
H\({}_{2}\)S & Hydrogen Sulfide & 34 & NIST & 5.38\(\times 10^{4}\) & [63] & [5, 31] \\
PH\({}_{3}\) & Phosphine & 34 & NIST & 5.98\(\times 10^{4}\) & [4] & [31] \\
O\({}_{2}\) & Oxygen & 32 & NIST & 5.03\(\times 10^{4}\) & [63] & [4, 31] \\
\({}^{36}\)Ar & Argon 36 & 36 & REU & 7.87\(\times 10^{4}\) & [18] & [4, 31] \\
C\({}_{3}\)H\({}_{4}\) & Propyne & 40 & REU & 4.19\(\times 10^{4}\) & [63] & [5, 31] \\
\({}^{40}\)Ar & Argon 40 & 40 & REU & 7.29\(\times 10^{4}\) & [19] & [5, 31] \\
CH\({}_{3}\)CN & Acetonitrile & 41 & REU & 5.28\(\times 10^{4}\) & [63] & [31, 50] \\
C\({}_{2}\)H\({}_{2}\)O & Ketene & 42 & NIST & 4.37\(\times 10^{4}\) & [63] & [31] \\
C\({}_{3}\)H\({}_{6}\) & Propene & 42 & NIST & 4.06\(\times 10^{4}\) & [63] & [5, 31] \\
CH\({}_{2}\)N\({}_{2}\) & Cyanamide & 42 & NIST & 7.55\(\times 10^{4}\) & \(-\) & Organic synthesis [59, 61] \\
C\({}_{2}\)H\({}_{4}\)O & Acetaldehyde & 44 & NIST & 4.42\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{3}\)H\({}_{8}\) & Propane & 44 & REU & 5.11\(\times 10^{4}\) & [63] & [6, 31] \\
CO\({}_{2}\) & Carbon Dioxide & 44 & REU & 7.09\(\times 10^{4}\) & [63] & [4, 31] \\
C\({}_{2}\)H\({}_{7}\)N & Ethylamine & 45 & NIST & 1.71\(\times 10^{4}\) & [63] & [31] \\
CH\({}_{3}\)NO & Formamide & 45 & NIST & 6.22\(\times 10^{4}\) & [63] & HCN decomposition [47] \\
C\({}_{2}\)H\({}_{6}\)O & Ethanol & 46 & NIST & 1.62\(\times 10^{4}\) & [63] & [5, 31] \\
CH\({}_{2}\)O\({}_{2}\) & Formic Acid & 46 & NIST & 2.94\(\times 10^{4}\) & [63] & HCN decomposition [47] \\
C\({}_{4}\)H\({}_{8}\) & 1-Butene & 56 & NIST & 3.73\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{2}\)H\({}_{6}\)N\({}_{2}\) & Azomethane & 58 & NIST & 8.46\(\times 10^{4}\) & \(-\) & [31] \\
C\({}_{3}\)H\({}_{6}\)O & Acetone & 58 & NIST & 6.70\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{4}\)H\({}_{10}\) & Isobutane & 58 & NIST & 3.24\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{2}\)H\({}_{4}\)O\({}_{2}\) & Acetic Acid & 60 & NIST & 4.51\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{3}\)H\({}_{8}\)O & 1-Propanol & 60 & NIST & 8.80\(\times 10^{5}\) & [63] & [5, 31] \\
C\({}_{2}\)H\({}_{7}\)NO & Monoethanolamine & 61 & NIST & 1.67\(\times 10^{3}\) & \(-\) & [31] \\
C\({}_{2}\)H\({}_{6}\)O\({}_{2}\) & 1,2-Ethanediol & 62 & NIST & 5.77\(\times 10^{5}\) & [63] & [5, 31] \\
C\({}_{5}\)H\({}_{10}\) & Cyclopentane & 70 & NIST & 4.39\(\times 10^{4}\) & [63] & [31] \\
C\({}_{4}\)H\({}_{9}\)N & Pyrrolidine & 71 & NIST & 1.18\(\times 10^{3}\) & \(-\) & [31] \\
C\({}_{5}\)H\({}_{12}\) & Pentane & 72 & NIST & 1.53\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{4}\)H\({}_{10}\)O & 1-Butanol & 74 & NIST & 6.14\(\times 10^{5}\) & \(-\) & [31] \\
C\({}_{2}\)H\({}_{5}\)NO\({}_{2}\) & Glycine & 75 & NIST & 6.14\(\times 10^{5}\) & [63] & [5, 31] \\
C\({}_{3}\)H\({}_{5}\)Cl & Allyl Chloride & 76 & NIST & 9.63\(\times 10^{4}\) & \(-\) & [31] \\
C\({}_{5}\)H\({}_{9}\)N & Butyl Isocyanide & 83 & NIST & 8.34\(\times 10^{4}\) & \(-\) & Hydrocarbon irradiation [50] \\
C\({}_{4}\)H\({}_{6}\)O\({}_{2}\) & 2,3-Butanedione & 86 & NIST & 3.91\(\times 10^{4}\) & [63] & [5, 31] \\
C\({}_{3}\)H\({}_{7}\)NO\({}_{2}\) & Alanine & 89 & NIST & 1.73\(\times 10^{3}\) & \(-\) & [31] \\
C\({}_{8}\)H\({}_{18}\) & Octane & 114 & NIST & 1.65\(\times 10^{3}\) & \(-\) & [31] \\
C\({}_{6}\)H\({}_{12}\)N\({}_{4}\) & Methenamine & 140 & NIST & 3.86\(\times 10^{3}\) & \(-\) & [31] \\
C\({

Extended Data Fig. 1. Dataset corrections and minimal model fit. (a) The black silhouette shows the full mass range of the INMS spectrum used in this work. Shaded gray bars indicate count values that differ from the spectrum presented in ref. [12] (see Methods). The noise floor (dashed line) is estimated from the count rates of noisy mass channels \(>46\) u as \(\epsilon=2\). Blue circles show the results of fitting the minimal model consisting of the "confirmed" species in Table 1. (b) Scatterplot of the standardized residuals produced by fitting the minimal model. There is no discernible pattern amongst the residuals or evidence of heteroscedasticity. (c) Histogram of the standardized residuals (blue bars) compared to a reference Gaussian distribution with zero mean (black curve). The residuals show good agreement with the Gaussian distribution, indicating a robust model fit.

## Supplementary Results

### Analysis of pairwise collinearity

As discussed in the main text, compositional ambiguities within INMS spectra arise due to the large number of candidate species combined with the relatively low mass resolution of the instrument. In other words, models of the plume's composition contain many possible model components with comparatively few data points available to constrain them. This difficulty is accentuated when there exist combinations of multiple species that can reproduce the signal produced by another. If this collinearity is exact, discriminating between such singular species in the composite INMS spectrum is impossible. In practice, even approximately collinear mass spectra can present a significant obstacle when interpreting data with a finite mass resolution.
These issues have been encountered in previous studies [5; 6; 31] and have greatly hindered efforts to identify trace compounds in the plume. Here, we quantify the extent of pairwise collinearity amongst library spectra by computing the correlation matrix for each species' cracking pattern. We use \(\rho\) to denote correlation coefficients of cracking patterns, in contrast to \(r\) (defined in the main text), which signifies correlations between regression coefficients. Whereas \(r\) is a model dependent quantity (see Fig. 2 of the main text), \(\rho\) is a static property of the spectral library. Supplementary Fig. 2 demonstrates that large positive correlations are common and manifest between molecules with similar cracking patterns. A correlation coefficient between two species of \(\rho=1\) would imply identical mass spectra. Pairs of species with correlation coefficients close to 1 are approximately collinear and may still be indistinguishable in practice. By contrast, pairs of species with correlation coefficients close to 0 lack a consistent pattern of strong overlapping features in their cracking patterns. Values near 0.5 represent the intermediate case where two species share certain features but also possess additional large mass peaks that are not shared. NH\({}_{3}\) and CH\({}_{4}\), for example, have a correlation coefficient of 0.47, owing to their shared peak at mass 16 and unshared peaks at masses 17 (NH\({}_{3}\)) and 15 (CH\({}_{4}\)). Of course, large negative values are also possible, though they are effectively absent from the spectral library due to the inherent structural similarities between organic compounds. Large negative values would signify that one species has significant mass peaks predominantly at mass channels where another species does not. For compounds with cracking patterns dominated by only a few major peaks, sharing as few as one of these peaks can lead to high correlation coefficients. A notable example is CO\({}_{2}\) and alanine (C\({}_{3}\)H\({}_{7}\)NO\({}_{2}\)), which exhibit a correlation coefficient of \(\rho=0.97\). Based on the results of the main text, it is tempting to view the presence of alanine in 13 of 157 high likelihood models as the first tentative evidence for amino acids at Enceladus. However, at such low concentrations, alanine is virtually indistinguishable from CO\({}_{2}\). Although the cracking pattern for alanine contains counts at many different mass channels, the peak at 44 u is by far the dominant feature. As a result, this mass channel acts as a high leverage point and drags up correlations between alanine and other species with large peaks at 44 u. This effect is particularly pronounced when there are no other major peaks present, as is the case for CO\({}_{2}\). Supplementary Fig. 3 shows the mean contribution of alanine to the INMS spectrum calculated based on the multi-model averaging procedure presented in the main text. An equal amount of CO\({}_{2}\) is shown for comparison. All peaks are well within the associated 1\(\sigma\) uncertainty for each mass channel. The base peak is the only feature that extends above the noise floor and mimics the signal for CO\({}_{2}\) at similar concentrations. This ambiguity precludes the detection of trace amounts of alanine in the plume. Similar effects underlie the ambiguities amongst the alcohols and mass 43 fragments discussed in the main text. 
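The pairwise collinearity described here can be quantified directly from the base-peak-normalised library. The sketch below is illustrative only (the `library` array and `names` list are assumed inputs); it computes the correlation matrix \(\rho\) and lists the most strongly correlated pairs, which in this analysis include CO\({}_{2}\) and alanine at \(\rho=0.97\).

```python
# Sketch: Pearson correlations rho between library cracking patterns.
import numpy as np

def cracking_pattern_correlations(library, names, top=10):
    """library : (n_channels, n_species) base-peak-normalised cracking patterns
       names   : list of species labels matching the library columns
    """
    rho = np.corrcoef(library, rowvar=False)       # rho_ij between species i and j
    iu = np.triu_indices(len(names), k=1)          # unique off-diagonal pairs
    order = np.argsort(rho[iu])[::-1]              # most collinear pairs first
    pairs = [(names[iu[0][k]], names[iu[1][k]], float(rho[iu][k]))
             for k in order[:top]]
    return rho, pairs
```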
Large positive correlations amongst the alcohols (as high as \(\rho=0.91\)) manifest via high count rates at 31 u, corresponding to the hydroxymethyl group CH\({}_{2}\)OH\({}^{+}\). Species with prominent 43 u fragments exhibit correlations as high as \(\rho=0.95\) and could be attributable to a number of different structures (see Supplementary Fig. 4 and the Supplementary Discussion section below). Ambiguities of this nature might have important implications for the detection of amino acids on future spacecraft missions. The high pairwise correlation between CO\({}_{2}\) and alanine suggests that alanine likely cannot be independently detected at Enceladus using a 1 u resolution mass spectrometer (such as INMS). Instead, an independent measurement of the CO\({}_{2}\) mixing ratio with precision at least as great as the instrument used to detect alanine would be necessary. A similar argument would apply to glycine (C\({}_{2}\)H\({}_{5}\)NO\({}_{2}\)) in an environment with high NO abundance (\(\rho=0.99\)) due to the large peak at mass 30. ### Ramifications for hypothesis testing Extensive amounts of collinearity between model components can have profound consequences on statistical inference. Although one might expect traditional hypothesis testing to identify important model parameters, high-dimensionality and collinearity of the spectral library limit statistical power and prevent individual regression coefficients from achieving statistical significance. Supplementary Table 2 demonstrates this phenomenon for the low velocity INMS data set. An \(F\)-test for overall significance indicates that the spectral library, in aggregate, does indeed better explain the INMS data than does the "null model," which consists of fitting only the regression coefficient for H\({}_{2}\)O (\(F=10\); \(p=9.1\times 10^{-13}\)). However, one-tailed \(t\)-tests indicate that only the two strongest spectral features, H\({}_{2}\)O and CO\({}_{2}\), are individually statistically significant (\(p=3.2\times 10^{-79}\) and \(1.9\times 10^{-4}\), respectively) in the presence of the entire spectral library. These species account for the majority of counts at masses 18, 28, and 44 and produce signals that are large enough to stand out amongst the majority of collinear species that comprise the rest of the library. Though we can be confident that the aggregate set of candidate library species is capable of explaining the INMS data, the statistical significance of any one species is difficult to show using such frequentist statistics under these conditions. ## Supplementary Discussion ### Comparison to other results It is important to consider the entire body of statistical evidence when drawing conclusions about which species are detected in the plume. The model-averaged mixing ratios, the minimum AICc model, the individual species probabilities, and the relative model likelihoods can all be used to assess the confidence of each detection. The more heuristics that point towards the detection of a given species, the more confident one may be that said species is truly present in the plume. The minimum AICc model consists of 11 species. One interpretation of this result is to classify all 11 species as conclusive detections. However, more parsimonious models exist that explain the INMS data nearly as well. A more conservative approach would be to interpret only the species that comprise the best-fitting, least complex model with \(\lambda>1/e\) as essential to the fit. 
We favor an even more nuanced approach and suggest using a tiered hierarchy of confidence based on the holistic analysis of the main text. Below, we contextualize these results within the collection of previous studies on the composition of Enceladus and other icy bodies. In the main text, we present an upper limit on the NH\({}_{3}\) mixing ratio that is consistent with previous analyses of the slow flyby INMS spectra [4, 31] as well as those obtained during the E2 [6] and E3 [5] flybys. In their supplementary material, Waite et al. [4] argue, based on the slow flyby data, that the residual left at mass 16 by fitting H\({}_{2}\)O, CH\({}_{4}\), and CO\({}_{2}\) alone unambiguously indicates the presence of NH\({}_{3}\). For the spectrum analyzed in the main text of this work, we find that this residual is equal to \(\sim\)880 counts, corresponding to \(\sim\)1.1\(\sigma\) at 16 u (see Fig. 4a and Methods of the main text). However for independent Gaussian uncertainties, it is expected that \(\sim\)16% of all mass channels would be undercounted by \(>1\sigma\) due to random chance alone (see also Extended Data Fig. 1 of the main text). As such, the residual at 16 u is not enough to imply the unambiguous detection of NH\({}_{3}\). Of course, this conclusion is dependent on the estimation of count uncertainties in INMS data, for which multiple different procedures have been proposed [4, 5, 18, 19]. Still, other instruments including the Cassini Visible and Infrared Mapping Spectrometer (VIMS), Ultraviolet Imaging Spectrograph (UVIS), and Cosmic Dust Analyzer (CDA) have failed to find conclusive evidence of NH\({}_{3}\) at Enceladus and are unable to corroborate its presence in the plume [31, 65, 70]. Telescopic data have suggested the presence of NH\({}_{3}\) or NH\({}_{3}\)-hydrate [71, 72, 73, 74], though these observations are also not definitive [75]. Consequently, additional studies that further constrain the INMS count rate at mass channel 16 beyond the typical 30% uncertainty suggested in ref. [4] are required before NH\({}_{3}\) can be confirmed in the plume. As discussed in the main text, suggestive evidence for nitrogen at Enceladus in the form of HCN has been previously reported based on other INMS data sets. HCN has also been suggested to help explain yet unresolved signatures in Cassini CDA spectra of ice grains in Saturn's E-ring [7], though could not be definitively identified by CDA as a cation species due to its high ionization potential [10]. As such, the HCN mixing ratio determined in this work is the first definitive detection of nitrile chemistry at Enceladus. Our analysis shows that CH\({}_{2}\)O and C\({}_{2}\)H\({}_{2}\) are also present in the plume with concentrations exceeding 100 ppm. As for HCN, both species have been detected in comets [35] and circumstantial evidence for their existence at Enceladus has been previously suggested based on other Cassini flybys. Both CH\({}_{2}\)O and C\({}_{2}\)H\({}_{2}\) were suspected by Waite et al. [5] during the high-velocity E3 and E5 flybys, though correlations between abundance and spacecraft velocity suggest that impact fragmentation may have been responsible. Only upper limits for both species were reported on a reanalysis of the slower E2 flyby [5]. The presence of CH\({}_{2}\)O would also help explain features at 28-30 u seen in CDA ice grain spectra [31], which are consistent with the INMS data. 
Concerning the higher mass organics, we find that trace amounts (\(\sim\)40 ppm) of C\({}_{3}\)H\({}_{6}\) are present in the plume as well. C\({}_{3}\)H\({}_{6}\) is present on Titan [19] and was produced as a fragmentation product during the E3 and E5 flybys of Enceladus [5]. Although not initially identified in the E2 flyby data [6], reanalysis suggests that it may have been present [5]--although it was not identified by Waite et al. [4; 31] as an intrinsic plume constituent. Our analysis shows that C\({}_{3}\)H\({}_{6}\) is by far the most likely explanation for the majority of counts in the 37-42 u region of the slow flyby INMS spectrum (Fig. 4d of the main text) and provides the first conclusive evidence for native C3 organics in the plume. The O\({}_{2}\) mixing ratio reported in the main text is consistent with the limit on native mass 32 counts imposed by Waite et al. [4] after correcting for the surface processing of H\({}_{2}\)O in the INMS instrument antechamber. Interestingly, the source of O\({}_{2}\) at Enceladus presents a few challenges. Unlike Europa, where charged particle bombardment of the surface is known to drive radiolysis of water and other elements to O\({}_{2}\), H\({}_{2}\)O\({}_{2}\), SO\({}_{4}^{2-}\), and other oxidants [76; 77; 50; 78], the radiation flux near Enceladus is considerably lower [79], and evidence for oxidants on the surface is lacking, despite proposals of such radiolytic chemistry [80]. Even with moderate levels of surface radiolysis, a key problem would be the efficiency of oxidant production at the high temperatures observed in the South Polar Terrain [81; 82]. Here we do not propose a solution for this issue but rather note that our results are consistent with the presence of O\({}_{2}\), be it from surface radiolysis and subsequent delivery to the ocean, or via other production mechanisms. Notably, Waite et al. [4] found that radiolysis of H\({}_{2}\)O due to radioactive isotopes in Enceladus' core could also produce O\({}_{2}\) at the abundance reported here. Evidence for \({}^{40}\)Ar stems from the large \(\sim\)2.4\(\sigma\) residual at 40 u, which is the only mass channel with strong enough signal to significantly influence the calculation of its mixing ratio. This signal does have strong overlap with the cracking pattern of C\({}_{3}\)H\({}_{4}\) (\(\rho=0.75\)), but the abundance of this species is limited by the larger contribution of C\({}_{3}\)H\({}_{6}\) at neighboring mass channels. Though not identified in a previous analysis of the slow flybys [4], \({}^{40}\)Ar was detected during the E3 and E5 flybys and may indicate significant water-rock interactions and the leaching of salts within Enceladus [5; 54]. CH\({}_{3}\)CN also has a reasonably high correlation with \({}^{40}\)Ar (\(\rho=0.47\)) and could potentially contribute to the observed signal at mass 40. Although there is no prior published evidence for CH\({}_{3}\)CN at Enceladus, its presence would not be unexpected based on the evidence for HCN and hydrocarbons such as C\({}_{3}\)H\({}_{6}\) (see Discussion in the main text). Native alcohols have not been previously identified in the plume, but CH\({}_{3}\)OH has been suggested as a possible fragmentation product based on high velocity INMS and ice grain spectra [12]. 
CH\({}_{3}\)OH has also been observed via ground-based methods in the vicinity of Enceladus and could be produced through chemical processing of CH\({}_{4}\) in the nearby gas cloud [83] or by the partial combustion of endogenous CH\({}_{4}\) within the ocean. Both CH\({}_{3}\)OH and C\({}_{2}\)H\({}_{6}\)O\({}_{2}\) have been detected in relatively high abundance in several comets [84; 85]. Our results also provide evidence for an ambiguous species with a strong 43 u signal. Ambiguity at this mass channel was previously reported based on the E5 flyby of Enceladus [5]. One explanation of this signal is C\({}_{2}\)H\({}_{6}\)N\({}_{2}\). Although this species has an additional large peak at 15 u, this feature is masked by the much larger contribution from CH\({}_{4}\) in the INMS data. The correlated ice grain features at 15 and 43 u seen in "Type II" CDA spectra [10] might be explained by fragmentation of C\({}_{2}\)H\({}_{6}\)N\({}_{2}\) into CH\({}_{3}^{+}\) and CH\({}_{3}\)N\({}_{2}^{+}\). Alternative explanations including acetyl group-bearing species such as C\({}_{3}\)H\({}_{6}\)O or C\({}_{4}\)H\({}_{6}\)O\({}_{2}\) are also possible. Sulfur compounds have not been definitively identified at Enceladus, though a past detection of H\({}_{2}\)S based on the E5 flyby [5] supports our finding that it may be present. H\({}_{2}\)S would be expected if there is active serpentinization taking place on the ocean floor. For the remaining species listed in Table 1 of the main text, prior evidence for their existence in the plume is lacking. Phosphorous compounds have not been previously reported in the plume, though PH\({}_{3}\) may have been observed in the coma of comet 67P/Churyumov-Gerasimenko [86]. C\({}_{3}\)H\({}_{5}\)Cl has also not been identified at Enceladus, but Cl has been detected by CDA as NaCl and KCl salts residing in plume ice grains [8; 9]. Lastly, although there is no strong evidence for alanine at Enceladus (see the Supplementary Results section above), we note that alanine and other amino acids are abundant in carbonaceous chondrites [69]. ### Comparison to other methodologies In order to adequately account for the high-dimensionality and (approximate) collinearity of the INMS plume dataset, it is necessary to perform a type of variable selection that constrains the parameter space of possible model fits. Such variable selection techniques trade off a small increase in model bias for a significant reduction in model variability [13; 17]. The resulting models are far less likely to overfit noisy features in the training data and tend to be significantly more accurate in predicting future observations [13]. Moreover, variable selection reduces the impact of collinearity by identifying which model components are better at explaining the observed data and discarding those that are superfluous. Such a process then allows for the evaluation of individual model components without the confounding presence of their collinear counterparts. In the main text, we outlined two variable selection procedures: an exhaustive best subset selection for models with fewer than 15 species and a forward stepwise selection algorithm for more complex models. Other common algorithms such as ridge regression [87] and the Least Absolute Shrinkage and Selection Operator (LASSO) are frequently applied in a wide variety of machine learning and model validation contexts [13; 17; 88; 89; 90]. These methods seek to reduce the influence of extraneous parameters via L2 or L1 regularization, respectively. 
Multi-model averaging is similar to L1 regularization in that it allows for explicit dimensionality reduction, whereas L2 regularization does not. This property leads to high model interpretability, which is of utmost importance when performing compositional analyses. Other heuristics besides the AICc can be used to select models for averaging, but these alternative statistics are not based on minimizing information loss and are therefore not well-suited for model selection when the structure of the unknown, true distribution is poorly constrained. Furthermore, model inference using the AICc has been shown to asymptotically approximate results based on cross-validation (another broadly accepted model validation technique) while requiring much fewer computational resources [91; 27; 92]. The first few studies of INMS data collected at Enceladus produced landmark results, including the characterization of major plume constituents and the discovery of molecular H\({}_{2}\) as a potential indicator of hydrothermal activity [4; 5; 6]. These papers (and related works published throughout the duration of the _Cassini_ mission) also documented a detailed description of the INMS instrument response under varying spacecraft conditions and laid the groundwork for follow-up studies focused solely on compositional analyses. However, early studies of the Enceladus plume--though foundational--may not have been well-suited to resolve minor species ambiguities for various reasons. In order to facilitate comparison with our methodology, we briefly describe the spectral deconvolution procedure developed by the authors of ref. [19] that has been implemented in various other works (e.g., refs. [20; 21; 22]). In their procedure, the authors first determine the contributions from major species through a visual analysis of prominent spectral features. Mixing ratios for the major species are estimated from the base peaks of each species, assuming they contribute 100% of the measured counts at these mass channels. Minor species are then identified sequentially by subtracting their contributions from the total spectrum to produce a residual spectrum. For small portions of the spectrum where a few candidate species exhibit overlaying signatures, species are fit based on a custom-defined fit statistic using a grid search algorithm. Iterative minimization of the fit statistic is achieved numerically by sweeping through various mixing ratios at increasingly finer resolution. For species that share the same base peak (e.g., N\({}_{2}\) and C\({}_{2}\)H\({}_{4}\)), the fit statistic is manipulated to exclude this mass channel. The high computational intensity of the grid search algorithm prohibits fitting more than four species at a time. We believe that the methodology of ref. [19] described above, though useful and effective, may not be optimal for identifying minor species in the INMS data. The order in which species are subtracted from the initial spectrum could potentially influence the outcome of the analysis. Although it is true that a bias-variance tradeoff can be useful for combatting high dimensionality, the described procedure is not amenable to quantitative assessments of inter-model uncertainty. Indeed, the authors note that subjectivity of their analysis is a valid concern. Furthermore, the practice of fitting individual species to small portions of the spectrum neglects potential contributions from complex compounds with cracking patterns that span a large mass range. 
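For contrast with the sequential grid-search procedure just summarized, the sketch below illustrates AICc-based multi-model averaging in its simplest form: candidate models are subsets of library species fit by ordinary least squares, AICc values are converted into Akaike weights, and coefficient estimates are averaged across models. The library matrix, species list, and spectrum are random placeholders, and the sketch omits the non-negativity constraints and count-dependent uncertainties of the actual analysis.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Placeholder spectral library (mass channels x species) and observed spectrum.
species = ["H2O", "CO2", "CH4", "HCN", "C2H2"]
n_channels = 50
library = rng.random((n_channels, len(species)))
spectrum = library @ np.array([1.0, 0.05, 0.01, 0.002, 0.0]) + rng.normal(0, 0.01, n_channels)

def aicc_fit(cols):
    """Least-squares fit of a species subset; returns (AICc, coefficients)."""
    X = library[:, cols]
    beta, *_ = np.linalg.lstsq(X, spectrum, rcond=None)
    rss = float(np.sum((spectrum - X @ beta) ** 2))
    n, k = n_channels, len(cols) + 1            # +1 parameter for the residual variance
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1), beta

# Exhaustive best-subset search over all models that include H2O (index 0).
models = []
for r in range(0, len(species)):
    for extra in itertools.combinations(range(1, len(species)), r):
        cols = (0,) + extra
        aicc, beta = aicc_fit(list(cols))
        models.append((aicc, cols, beta))

aiccs = np.array([m[0] for m in models])
weights = np.exp(-(aiccs - aiccs.min()) / 2)     # relative likelihoods exp(-dAICc/2)
weights /= weights.sum()                         # Akaike weights

# Model-averaged coefficients: species absent from a model contribute zero.
averaged = np.zeros(len(species))
for w, (_, cols, beta) in zip(weights, models):
    averaged[list(cols)] += w * beta
for name, value in zip(species, averaged):
    print(f"{name}: model-averaged coefficient = {value:.4f}")
```

Summing the weights of all models that contain a given species would give the kind of per-species inclusion probability referred to in the main text.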
Moreover, grid searches that consider only a few species at a time may not be able to reliably identify minor species when the spectral library is highly collinear. In this regime, covariances between mixing ratios become strongly model dependent (see Fig. 2 of the main text), and multi-model inference based on model-averaged parameters is warranted. Additionally, to our knowledge, the fit statistic described in ref. [19] is not a standard metric, and we suspect that the use of different fit statistics for different model parameters could lead to difficulties in interpreting the results. Lastly, the authors' procedure does not employ dimensionality reduction or account for the possibility of over-fitting to noise. By contrast, our approach quantitatively addresses both inter- and intra-model uncertainty in the spectral decomposition of INMS data. While previous studies, such as those presented in refs. [31; 33], have concluded that minor species identification requires a higher resolution mass spectrometer, we have presented a mathematical framework capable of discriminating between previously ambiguous species. The heuristics used in this analysis are based on maximum likelihood estimation and relative entropy minimization--foundational principles of statistical inference and information theory. Nevertheless, this study is not without limitations. A major challenge for any compositional analysis of INMS data stems from the large number of candidate plume species. The chemistry of the ocean and ice shell could include hundreds to thousands of unique compounds that contribute to the observed INMS spectrum. Our approach using the AICc is based on the principle of parsimony in that the least-complex, best-fitting model is favored over similarly performing models of higher complexity. Although this is a fruitful approach to developing conservative models of plume composition, nature does not necessarily reflect this ideal. Future investigations using higher resolution mass spectrometers with larger mass ranges will shed light on the full extent of chemical diversity within the plume and the ocean beneath. A number of interesting follow-up studies could be conducted to validate the results presented in this work. A strong approach would be to treat each of the slow flybys (E14, E17, and E18) as individual data sets, as opposed to averaging them together. Machine learning models could then be trained on one data set and evaluated on another. A sort of round-robin procedure could be used to estimate the uncertainty associated with training on a particular data set. Such a methodology would eliminate the need for heuristic statistics such as the AICc in favor of actual independent test set performance. This implementation would, however, require correcting for instrument artifacts in each of the individual Enceladus flybys.
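A minimal sketch of the round-robin idea described above is given below; the three flyby spectra and the least-squares fitting routine are placeholders standing in for the corrected, individually calibrated INMS data sets that such a study would actually require.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder spectra for the three slow flybys and a shared spectral library.
library = rng.random((50, 5))
true_mix = np.array([1.0, 0.05, 0.01, 0.002, 0.0004])
flybys = {name: library @ true_mix + rng.normal(0, 0.01, 50) for name in ("E14", "E17", "E18")}

def fit_mixing_ratios(spectrum):
    """Placeholder fit: ordinary least squares against the library."""
    beta, *_ = np.linalg.lstsq(library, spectrum, rcond=None)
    return beta

# Round-robin: fit on one flyby, evaluate residuals on each of the others.
for train_name, train_spec in flybys.items():
    beta = fit_mixing_ratios(train_spec)
    for test_name, test_spec in flybys.items():
        if test_name == train_name:
            continue
        rmse = np.sqrt(np.mean((test_spec - library @ beta) ** 2))
        print(f"train {train_name} -> test {test_name}: RMSE = {rmse:.4f}")
```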
2310.04606
Robust Transfer Learning with Unreliable Source Data
This paper addresses challenges in robust transfer learning stemming from ambiguity in Bayes classifiers and weak transferable signals between the target and source distribution. We introduce a novel quantity called the ''ambiguity level'' that measures the discrepancy between the target and source regression functions, propose a simple transfer learning procedure, and establish a general theorem that shows how this new quantity is related to the transferability of learning in terms of risk improvements. Our proposed ''Transfer Around Boundary'' (TAB) model, with a threshold balancing the performance of target and source data, is shown to be both efficient and robust, improving classification while avoiding negative transfer. Moreover, we demonstrate the effectiveness of the TAB model on non-parametric classification and logistic regression tasks, achieving upper bounds which are optimal up to logarithmic factors. Simulation studies lend further support to the effectiveness of TAB. We also provide simple approaches to bound the excess misclassification error without the need for specialized knowledge in transfer learning.
Jianqing Fan, Cheng Gao, Jason M. Klusowski
2023-10-06T21:50:21Z
http://arxiv.org/abs/2310.04606v1
# Robust Transfer Learning with Unreliable Source Data+ ###### Abstract This paper addresses challenges in robust transfer learning stemming from ambiguity in Bayes classifiers and weak transferable signals between the target and source distribution. We introduce a novel quantity called the "ambiguity level" that measures the discrepancy between the target and source regression functions, propose a simple transfer learning procedure, and establish a general theorem that shows how this new quantity is related to the transferability of learning in terms of risk improvements. Our proposed "Transfer Around Boundary" (TAB) model, with a threshold balancing the performance of target and source data, is shown to be both efficient and robust, improving classification while avoiding negative transfer. Moreover, we demonstrate the effectiveness of the TAB model on non-parametric classification and logistic regression tasks, achieving upper bounds which are optimal up to logarithmic factors. Simulation studies lend further support to the effectiveness of TAB. We also provide simple approaches to bound the excess misclassification error without the need for specialized knowledge in transfer learning. ## 1 Introduction Previous experiences can offer valuable insights for learning new tasks. Human learners often transfer their existing knowledge gained from previous tasks to new and related ones. _Transfer learning_ refers to statistical learning tasks where a portion of the training data is generated from a distribution that is similar but not identical to the data distribution about which we seek to make inferences. The objective is then to transfer knowledge from such source data to improve learning in the related target task. Such problems, where there is a divergence between the data-generating distributions, arise in many real applications, including computer vision (Li et al., 2020; Tzeng et al., 2017), natural language processing (Ruder et al., 2019; Wang and Zheng, 2015), speech recognition (Huang et al., 2013), and genre classification (Choi et al., 2017). See Storkey (2008), Pan and Yang (2010), and Weiss et al. (2016) for an overview. Also, similar problems have been studied by both statisticians and many other communities under different names, including label noise (Frenay and Verleysen, 2014; Scott et al., 2013; Cannings et al., 2020; Blanchard et al., 2021; Reeve and Kaban, 2019; Scott and Zhang, 2019), domain adaptation (Scott, 2018; Ben-David et al., 2010, 2010; Mansour et al., 2009), multi-task learning (Caruana, 1997; Maurer et al., 2016), or distributional robustness (Sinha et al., 2018; Christiansen et al., 2020). We focus here on the transfer learning setting in the context of binary classification, not only because it is fundamental in statistical learning and has been extensively investigated in diverse contexts, but also because it provides a framework that is particularly conducive to algorithms that seek to exploit relationships between the target and source distributions. For transfer learning theory of linear regression models, see Chen et al. (2013); Bastani (2020) under the setting of finite covariate dimensions or Gross and Tibshirani (2016); Ollier and Viallon (2017); Li et al. (2022) under the high-dimensional regime with lasso-based penalties. See also Tian and Feng (2022) for generalized linear models and Cai and Pu (2022) for non-parametric regression. 
To set up the framework, suppose that a labeled data sample (relatively _small_ in size, typically) is drawn from the \(Q\), the target distribution we wish to make statistical inferences about. Also, let \(P\) be the source distribution from which we wish to transfer knowledge. We suppose a labeled data sample (relatively _large_ in size, typically) is drawn from \(P\). The corresponding random pairs of \(Q\) and \(P\) distributions are denoted by \((X,Y)\) and \((X^{P},Y^{P})\) on \(\mathbb{R}^{d}\times\{0,1\}\) respectively. Let \(\eta^{Q},\eta^{P}:\mathbb{R}^{d}\to[0,1]\) denote the target and source regression functions, i.e. \[\eta^{Q}(x)=Q(Y=1|X=x)\text{ and }\eta^{P}(x)=P(Y^{P}=1|X^{P}=x).\] The key question is how much information or knowledge can be transferred from \(P\) to \(Q\) given observations from both distributions. Note that the Bayes classifier \(f_{Q}^{*}(\cdot)=\mathbf{1}\{\eta^{Q}(\cdot)\geq\frac{1}{2}\}\) minimizes the misclassification rate \(Q(Y\neq f(X))\) over all classifiers. Therefore, we define the excess risk of any empirical classifier \(\hat{f}\) as \[\mathcal{E}_{Q}(\hat{f})=Q(Y\neq\hat{f}(X))-Q(Y\neq f_{Q}^{*}(X)).\] The aforementioned key question leads us to the task of constructing an empirical classifier that accelerates the convergence of excess risk to zero in expectation by utilizing labeled data samples from _both_\(Q\) and \(P\). ### Related Literature Recent research has aimed to bridge the gap between limited theoretical understanding and significant practical achievements in transfer learning for classification problems. To set up the problem, it is necessary to make some assumptions about the similarity between the target and source distributions, which are both useful for theory and practical implementation. Various approaches have been proposed and explored in the literature to measure this similarity, including divergence bounds, covariate shift, and label shift. Some methods focus on test error bounds that rely on measures of discrepancy between \(Q\) and \(P\), such as modified total-variation or Renyi divergence, between the target and source distributions (Ben-David et al., 2010, 2010; Mansour et al., 2009, 2009; Germain et al., 2013; Cortes et al., 2019). This line of work produces distribution-free risk rates, primarily expressed in terms of \(n_{P}\) alone, but the obtained rates do not converge to zero with increasing sample size. In other words, these frameworks cannot produce a faster convergence of excess risk if their proposed divergences are non-negligible. Nonetheless, consistent classification is proved achievable regardless of a non-negligible divergence even when \(n_{Q}=0\), provided that certain additional structures on the target and source distributions are present (Ben-David et al., 2010). Two common additional structures include covariate shift and label shift (or posterior shift). Covariate shift (Gretton et al., 2008; Quionero-Candela et al., 2009; Kpotufe and Martinet, 2018) considers scenarios where the conditional distributions of the response given the covariates are identical across \(Q\) and \(P\) or \(\eta^{Q}=\eta^{P}\), but marginal distributions of covariates are different. Label shift, on the other hand, assumes an identical or similar target and source marginal distributions \(Q_{X}\) and \(P_{X}\), but the conditional probabilities \(\eta^{Q}\) and \(\eta^{P}\) differ. Most previous work in label shift can be divided into two branches. 
On one hand, some frameworks do not require identical Bayes classifiers, i.e., \((\eta^{P}-1/2)(\eta^{Q}-1/2)\geq 0\), but impose very specific relations between \(\eta^{Q}\) and \(\eta^{P}\), such as the literature on _label noise_(Frenay and Verleysen, 2014; Scott et al., 2013; Cannings et al., 2020; Blanchard et al., 2021; Reeve and Kaban, 2019; Natarajan et al., 2018; Scott and Zhang, 2019). For instance, a common assumption (Reeve and Kaban, 2019; Natarajan et al., 2018) is that the \(Y^{P}|X^{P}=x\) is equal to \(Y|X=x\) up to a constant probability of label flipping, i.e., in our terminology \[P(Y^{P}=1|Y=0,X^{P}=X=x)=\pi_{0}\text{ and }P(Y^{P}=0|Y=1,X^{P}=X=x)=\pi_{1},\] for some constants \(\pi_{0},\pi_{1}\in(0,1)\). Under this setting, \(\eta^{P}=(1-\pi_{0}-\pi_{1})\eta^{Q}+\pi_{0}\), which is linear in \(\eta^{Q}\). Given knowledge of the specific form of \(\eta^{P}\), it is feasible to modify learning algorithms to efficiently infer the target Bayes classifier. In another special type of label shift problem, Maity et al. (2022) assumes that \(P_{X|Y}=Q_{X|Y}\), which is convenient for estimating the joint distribution of \((X,Y)\). However, all their proposed estimators are tailored to fit specific assumptions and may not be applicable to more general relations between \(\eta^{Q}\) and \(\eta^{P}\). On the other hand, recent work such as Cai and Wei (2021) and Hanneke and Kpotufe (2019) have introduced more general label shift settings that impose relatively mild and general conditions on the relation between \(\eta^{P}\) and \(\eta^{Q}\), in addition to assuming identical Bayes classifiers. Cai and Wei (2021) requires a lower-bounded signal strength of \(\eta^{P}\) relative to \(\eta^{Q}\), besides the assumption of identical Bayes classifiers, i.e., \[\Big{(}\eta^{P}-\frac{1}{2}\Big{)}\Big{(}\eta^{Q}-\frac{1}{2}\Big{)}>0,\qquad \Big{|}\eta^{P}-\frac{1}{2}\Big{|}\geq C_{\gamma}\Big{|}\eta^{Q}-\frac{1}{2} \Big{|}^{\gamma},\] for some positive \(\gamma\) and \(C_{\gamma}\), and derive a faster risk convergence rate with the transfer exponent \(\gamma\). The aforementioned approaches have limitations in that they rely on specific and often untestable assumptions. More importantly, they may not be effective in situations where both ambiguities of the source data hold, i.e., there are no strong relations between \(\eta^{Q}\) and \(\eta^{P}\) and discrepancies between the Bayes classifiers of the target and source domains still exist. The only other work of which we are aware that allows such general ambiguous source data is Reeve et al. (2021). Reeve et al. (2021) assumes that \(\eta^{P}\) can be well approximated by a set of regression functions \(g_{1}(\eta^{Q}),\ldots,g_{L^{*}}(\eta^{Q})\) that are no less informative than a linear transformation of \(\eta^{Q}\). ### Main Contribution In contrast to existing work, this paper considers a scenario where the Bayes classifiers can arbitrarily differ without imposing further conditions on the source and target distributions. 
We now introduce an important quantity for capturing the relative information transferred from the source data, which we refer to as the signal strength: \[s(x):=\begin{cases}|\eta^{P}(x)-\frac{1}{2}|,&\quad\operatorname{sgn}\left(\eta^ {Q}(x)-\frac{1}{2}\right)\times(\eta^{P}(x)-\frac{1}{2})\geq 0,\\ 0,&\quad\text{otherwise}.\end{cases}\] Our main assumption involves measuring the ambiguity level of the source data based on the expected signal strength around the classification boundary \(\{x:|\eta^{Q}(x)-\frac{1}{2}|\leq z\}\) for small \(z\): \[\mathbb{E}_{(X,Y)\sim Q}\left[\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}\mathbf{1} \left\{s(X)\leq C_{\gamma}\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}^{\gamma},\Big{|} \eta^{Q}(X)-\frac{1}{2}\Big{|}\leq z\right\}\right]\leq\varepsilon(z),\] given some constants \(\gamma,C_{\gamma}>0\). This quantity captures the inevitable risk from hard-to-classify boundary points, which have \(\eta^{Q}\) values too close to \(1/2\) and \(\eta^{P}\) and show weak signal relative to \(\eta^{Q}\). The explicit form of \(\varepsilon(z)\) is provided for some special examples in Section 2.3. If \(s(x)\geq C_{\gamma}|\eta^{Q}(x)-1/2|^{\gamma}\) covers all of \(\Omega\), then \(\varepsilon(z)=0\) and the setting reduces to that of Cai and Wei (2021) with a strong relative signal of \(\eta^{P}\) across the entire feature space \(\Omega\). Given the measure above, we propose a simple but effective classifier that can surprisingly adapt to any level of ambiguity in the signal conveyed by \(\eta^{P}\), named the _Transfer Around Boundary_ (TAB) classifier, or the TAB model: \[\hat{f}_{TAB}(x)=\begin{cases}\mathbf{1}\{\hat{\eta}^{Q}(x)\geq\frac{1}{2}\},&\quad\text{if }|\hat{\eta}^{Q}(x)-\frac{1}{2}|\geq\tau,\\ \hat{f}^{P}(x),&\quad\text{otherwise},\end{cases}\] where \(\hat{\eta}^{Q}\) is an estimate of \(\eta^{Q}\) obtained by the target data and \(\hat{f}^{P}\) is a classifier obtained by the source data. To clarify our decision-making process, we rely on \(\hat{\eta}^{Q}\) as the final prediction when it deviates from \(\frac{1}{2}\). Otherwise, we switch to the prediction made by the source data. In Section 3.1, we present two related general convergence theorems to showcase this classifier, with a proper choice of \(\tau>0\), utilizing both the relative signal of \(\eta^{P}\) and the ambiguity level \(\varepsilon(z)\) from the source data. The target data involved in this classifier is noteworthy for its dual role: not only does it provide an upper bound on the excess risk if the source data is unreliable, but it also helps to alleviate the risk caused by ambiguity in the source data. We then apply our general convergence results to non-parametric classification as well as logistic regression to illustrate the potential of our transfer learning method on parametric classification. The results for these two important cases show that our method is indeed nearly minimax optimal and our measure of the ambiguity level is effective, extending results in previous literature. **Non-parametric Classification:** Suppose that \(\eta^{Q}\) is \(\beta\)-smooth (Condition 2), margin assumption holds for \(\eta^{Q}\) with parameter \(\alpha\) (Assumption 1), and strong density condition (Condition 3) holds for \(Q_{X}\) and \(P_{X}\). Let \(\Pi^{NP}\) denote the set of all such distribution pairs \((Q,P)\). At the moment, we consider the scenario where \(\eta^{P}\) is sufficiently smooth. 
Suppose that \(\Pi^{NP}_{S}\) is a subset of \(\Pi^{NP}\) such that \(\eta^{P}\) is \(\beta_{P}\)-smooth with \(\beta_{P}\geq\gamma\beta\). Then, we show that the minimax excess risk satisfies \[\left(n_{P}^{-\frac{\beta(1+\alpha)}{2\gamma\beta+d}}+\varepsilon\Big{(}cn_{Q}^{-\frac{\beta}{2\beta+d}}\Big{)}\right)\wedge n_{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\lesssim\inf_{\hat{f}}\sup_{(Q,P)\in\Pi_{S}^{NP}}\mathbb{E}\mathcal{E}_{Q}(\hat{f})\] \[\lesssim\left(n_{P}^{-\frac{\beta(1+\alpha)}{2\gamma\beta+d}}+\varepsilon\Big{(}c\log(n_{Q}\lor n_{P})n_{Q}^{-\frac{\beta}{2\beta+d}}\Big{)}\right)\wedge\left(\log^{1+\alpha}(n_{Q}\lor n_{P})n_{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\right),\] for some constant \(c>0\) that is independent of \(\alpha\). With the optimal choice of \(\tau\asymp\log(n_{Q}\lor n_{P})n_{Q}^{-\frac{\beta}{2\beta+d}}\), the upper bound we obtain using the TAB classifier with \(K\)-NN classifier components is optimal up to some logarithmic terms of \(n_{Q}\lor n_{P}\). This indicates that a necessary and sufficient condition for the source data to improve the excess risk rate is a large source data sample size such that \(n_{P}\gg n_{Q}^{\frac{2\gamma\beta+d}{2\beta+d}}\), paired with a small ambiguity level \(\varepsilon(z)\ll z^{1+\alpha}\). Additionally, we provide the optimal rate considering the "band-like" ambiguity condition between \(\eta^{Q}\) and \(\eta^{P}\). A particularly interesting and realistic example is to assume that \(\sup_{x\in\Omega}|\eta^{P}(x)-\eta^{Q}(x)|\leq\Delta\). In this case, the minimax optimal excess risk becomes \(\left(n_{P}^{-\frac{\beta(1+\alpha)}{2\beta+d}}+\Delta^{1+\alpha}\right)\wedge n_{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\), up to some logarithmic terms of \(n_{Q}\lor n_{P}\). Note that we do not require any smoothness condition on \(\eta^{P}\) in the band-like ambiguity case. **Logistic Regression:** If the covariates follow the standard normal random design (see (22) for a precise definition), and the target and source logistic regression coefficient pair \((\beta_{Q},\beta_{P})\) belongs to \[\Theta(s,\Delta)=\left\{(\beta_{Q},\beta_{P}):\|\beta_{Q}\|_{0}\leq s,\angle(\beta_{Q},\beta_{P})\leq\Delta\right\},\] for some \(0\leq\Delta\leq\pi/2\), where \(\angle(\beta_{Q},\beta_{P})\) denotes the angle between directions \(\beta_{Q}\) and \(\beta_{P}\), we show that the minimax optimal excess risk satisfies \[\left(\frac{s\log d}{n_{P}}+\Delta^{2}\right)\wedge\frac{s\log d}{n_{Q}}\lesssim\inf_{\hat{f}}\sup_{(Q,P)\in\Pi^{LR}}\mathbb{E}\mathcal{E}_{Q}(\hat{f})\lesssim\left(\frac{s\log d}{n_{P}}+\Delta^{2}\right)\wedge\left(\frac{s\log d}{n_{Q}}\log^{2}(n_{Q}\lor n_{P})\right).\] Thus, when \(n_{P}\gg n_{Q}\) and \(\Delta\ll\sqrt{\frac{s\log d}{n_{Q}}}\), the source data improves the convergence rate of the excess risk. Importantly, we only assume a "small cone condition" between \(\beta_{Q}\) and \(\beta_{P}\), in contrast to the "small contrast condition" in previous works (Li et al., 2022; Tian and Feng, 2022), i.e., \(\|\beta_{Q}-\beta_{P}\|_{q}\) is small for some \(q\in[0,1]\). In addition, unlike the small contrast condition, which implicitly assumes sparsity patterns of \(\beta_{P}\) through \(l_{q}\) norms with \(q\leq 1\), our constructed parametric space does not impose any sparsity conditions on \(\beta_{P}\). 
In the setting without access to source data, our upper bound \(\frac{s\log d}{n_{Q}}\) is tighter than the \((\frac{s\log d}{n_{Q}})^{\frac{2}{3}}\) bound obtained in Theorem 7 of Abramovich and Grinshtein (2019), assuming the same margin parameter \(\alpha=1\). As evidenced by the above two examples, the TAB classifier maintains the performance of the target data against an unreliable source and enhances the performance, if the source data sample size is large and the ambiguity is relatively small, i.e., \(n_{P}\gg n_{Q}\) and \(\Delta\ll\sqrt{\frac{s\log d}{n_{Q}}}\). While the target excess risk provided by the target data, \(\mathcal{E}_{Q}(\hat{f}^{Q})\), has been extensively studied in the literature, much less is known about the target excess risk \(\mathcal{E}_{Q}(\hat{f}^{P})\) provided by the source data, which is one key component in deducing the excess risk bound. To fill this gap and gain better insight into the contribution of the source data, we present a general result in Section 3.2 that provides a direct upper bound on \(\mathcal{E}_{Q}(\hat{f}^{P})\) in terms of \(\mathcal{E}_{P}(\hat{f}^{P})\), which can be obtained through conventional theoretical analysis. Therefore, it is feasible to bound the excess risk using conventional statistical learning tools, without requiring specialized expertise in transfer learning. By providing such a general and accessible theoretical framework, our approach has the potential to make transfer learning more accessible. ### Notation and Organization We introduce some notation to be used throughout the paper. For any \(q\in[0,\infty]\) and vector \(x=(x_{1},\cdots,x_{d})\in\mathbb{R}^{d}\), we write \(\|x\|_{q}\) for its \(l_{q}\) norm. We write \(\|x\|\) or \(\|x\|_{2}\) for the Euclidean norm of \(x\), and, given \(r>0\), we write \(B(x,r)\) or \(B_{d}(x,r)\) for the closed Euclidean sphere of radius \(r\) centered at \(x\). For two probability measures \(\mu,\nu\) on any general space, if \(\mu\) is absolutely continuous with respect to \(\nu\), we write \(\frac{d\mu}{d\nu}\) for the Radon-Nikodyn derivative of \(\mu\) with respect to \(\nu\). Write \(\lambda\) as the Lebesgue measure on \(\mathbb{R}^{d}\). Let \(a\wedge b\) and \(a\lor b\) denote the minimum and maximum of \(a\) and \(b\), respectively. Let \(a_{n}\lesssim b_{n}\) denote \(|a_{n}|\leq c|b_{n}|\) for some constant \(c>0\) when \(n\) is large enough. Let \(a_{n}\gtrsim b_{n}\) denote \(|a_{n}|\geq c|b_{n}|\) for some constant \(c>0\) when \(n\) is large enough. Let \(a_{n}\asymp b_{n}\) denote \(|a_{n}|/|b_{n}|\to c\) for some constant \(c>0\) when \(n\) is large enough. Let \(a_{n}\stackrel{{ n\to\infty}}{{\longrightarrow}}\infty\) denote that \(a_{n}\) tends to infinity with \(n\) growing to infinity. Let \(a_{n}\ll b_{n}\) denote \(|a_{n}|/|b_{n}|\to 0\) when \(n\) is large enough. Let \(a_{n}\gg b_{n}\) denote \(|b_{n}|/|a_{n}|\to 0\) when \(n\) is large enough. Let \(\lfloor a\rfloor\) be the maximum integer that is less equal than \(a\) for any real value \(a\). Finally, we assume \(0^{0}=0\) for simplicity. We state our main working assumptions and measure of ambiguity, called the _ambiguity level_, in Section 2. We provide two general convergence results on the risk of transfer learning in Section 3.1. Section 3.2 presents an approach to bound the signal transfer risk, a crucial part of general convergence results, in terms of the excess risk rate studied in the conventional statistical learning literature. 
In Sections 4 and 5, we apply our results to non-parametric classification and logistic regression, respectively, and provide upper and lower bounds on the excess risk. In Section 6, we present simulation results for non-parametric classification and logistic regression, supporting the theoretical properties of our proposed method. ## 2 Model ### Problem Formulation For two Borel-measurable distributions \(P\) and \(Q\), both taking values in \(\mathbb{R}^{d}\times\{0,1\}\), we observe two independent random samples, the _source_ data \(\mathcal{D}_{P}=\{(X_{1}^{P},Y_{1}^{P}),\,\cdots,(X_{n_{P}}^{P},Y_{n_{P}}^{P}) \}\stackrel{{\text{iid}}}{{\sim}}P\) and the _target_ data \(\mathcal{D}_{Q}=\{(X_{1},Y_{1}),\cdots,(X_{n_{Q}},Y_{n_{Q}})\}\stackrel{{ \text{iid}}}{{\sim}}Q\). Suppose that \(n_{P}\stackrel{{ n_{Q}\to\infty}}{{\longrightarrow}}\infty\). Our goal is to improve the target data empirical classifier by transferring useful information from the source data. Consider the marginal probability distributions of \(X\) for the \(P\) and \(Q\) distributions, denoted by \(P_{X}\) and \(Q_{X}\) respectively. Let \(\Omega_{P}:=\text{supp}(P_{X})\) and \(\Omega:=\text{supp}(Q_{X})\) represent the support sets of \(P_{X}\) and \(Q_{X}\). The regression function for the source and target distributions are respectively defined as follows: \[\eta^{P}(x)=P(Y^{P}=1|X^{P}=x),\quad\eta^{Q}(x)=Q(Y=1|X=x). \tag{1}\] The goal of a classification model is to forecast the label \(Y\) based on the value \(X\). The effectiveness of a decision rule \(f:\mathbb{R}^{d}\to\{0,1\}\) is evaluated by its misclassification rate with respect to the target distribution, which is defined as follows: \[R(f):=Q(Y\neq f(X)).\] The Bayes classifier (or Bayes estimator, Bayes decision rule) \(f^{*}_{Q}(x)=\mathbf{1}\{\eta^{Q}(x)\geq\frac{1}{2}\}\) is the minimizer of \(R(f)\) over all Borel functions defined on \(\mathbb{R}^{d}\) and taking values in \(\{0,1\}\). We similarly define the Bayes decision rule for \(\eta^{P}\) that is \(f^{*}_{P}(x)=\mathbf{1}\{\eta^{P}(x)\geq\frac{1}{2}\}\). Since the Bayes decision rule \(f^{*}_{Q}(x)\) is a minimizer of the misclassification rate \(R(f)\), the performance of any empirical classifier \(\hat{f}:\mathbb{R}^{d}\to\{0,1\}\) can be then measured by the excess risk (on the target distribution): \[\mathcal{E}_{Q}(\hat{f})=R(\hat{f})-R(f^{*}_{Q})=2\mathbb{E}_{(X,Y)\sim Q} \left[\Big{|}\eta_{Q}(X)-\frac{1}{2}\Big{|}\mathbf{1}\left\{\hat{f}(X)\neq f^ {*}_{Q}(X)\right\}\right]. \tag{2}\] The last equality is the dual representation of the excess risk (Gyorfi, 1978). Given the excess risk defined in (2), the objective of transferring useful information from the source data can be reformulated as the task of constructing an empirical classifier that improves the excess risk, which is accomplished by utilizing labeled data samples drawn from both the target and source distributions. The rate at which the excess risk \(\mathcal{E}_{Q}(\hat{f})\) converges to zero depends on the assumptions made about the target and source distributions \((Q,P)\). In classification, a common assumption is the margin assumption (Audibert and Tsybakov, 2007; Mammen and Tsybakov, 2004). This assumption is used to measure the behavior of \(Q_{X}\) with respect to the distance between \(\eta^{Q}(X)\) and \(1/2\), which is essential for determining the convergence rate of the excess risk. 
**Assumption 1** (Margin).: There exists some constant \(\alpha\geq 0,\ C_{\alpha}>0\) such that for any \(t>0\), we have \(Q_{X}(0<|\eta^{Q}(X)-1/2|\leq t)\leq C_{\alpha}t^{\alpha}\). Instead of assuming identical Bayes classifiers for \(\eta^{Q}\) and \(\eta^{P}\) over \(x\in\Omega\) as in some existing literature (Hanneke and Kpotufe, 2019; Cai and Wei, 2021), we allow them to differ. This is a realistic relaxation because, although the target and source distributions may have similar Bayes classifiers over a high probability region, they could still have slightly different decision boundaries. Therefore, it is crucial to assess the impact of an unreliable source distribution on the optimal rates of classification. ### Source Data Ambiguity In this subsection, we provide a detailed discussion of the condition that characterizes the ambiguity of an unreliable source distribution. Here we introduce the signal strength function, which measures the relative signal of \(\eta^{P}\) compared with \(\eta^{Q}\). This function is crucial in capturing the efficacy of the source data for the classification task under \(Q\). **Definition 1** (Signal Strength).: The signal strength of \(\eta^{P}\) relative to \(\eta^{Q}\) is defined as \[s(x): =\left\{\operatorname{sgn}\!\left(\eta^{Q}(x)-\frac{1}{2}\right) \times\left(\eta^{P}(x)-\frac{1}{2}\right)\right\}\lor 0\] \[=\begin{cases}|\eta^{P}(x)-\frac{1}{2}|,&\operatorname{sgn}\left( \eta^{Q}(x)-\frac{1}{2}\right)\times\left(\eta^{P}(x)-\frac{1}{2}\right)\geq 0, \\ 0,&\text{otherwise},\end{cases}\] for any \(x\in\Omega\). It is reasonable to consider the signal strength as non-zero only when \((\eta^{P}(x)-1/2)(\eta^{Q}(x)-1/2)\geq 0\), indicating that the target and source data provide consistent information about the Bayes classification boundary. In this case, the source data \(\eta^{P}(x)\) is beneficial for classifying the target data, and the signal strength is measured by \(|\eta^{P}(x)-1/2|\). Conversely, when \((\eta^{P}(x)-1/2)(\eta^{Q}(x)-1/2)<0\), the source data does not provide useful information for classifying \(x\) in the target data, and the signal strength at \(x\) is zero. Next, we present the main working assumption that measures the ambiguity level of the source data, based on the signal strength. **Assumption 2** (Ambiguity Level).: For some given \(\gamma,C_{\gamma}>0\), there exists a continuous function \(\varepsilon(z;\gamma,C_{\gamma})\) that is monotone increasing with \(z\in[0,1/2]\) such that \[\mathbb{E}_{(X,Y)\sim Q}\left[\left|\eta^{Q}(X)-\frac{1}{2}\middle|\mathbf{1} \left\{s(X)\leq C_{\gamma}\middle|\eta^{Q}(X)-\frac{1}{2}\middle|^{\gamma}, \middle|\eta^{Q}(X)-\frac{1}{2}\middle|\leq z\right\}\right]\leq\varepsilon(z; \gamma,C_{\gamma}). \tag{3}\] We abbreviate \(\varepsilon(z;\gamma,C_{\gamma})\) as \(\varepsilon(z)\) when there is no need to specify \(\gamma\) and \(C_{\gamma}\). The expression in the expectation operator of (3) can be divided into two factors. The first factor represents the distance between \(\eta^{Q}(X)\) and \(1/2\), which corresponds to the dual representation of the excess risk (2). The second factor is crucial for understanding transfer learning ambiguity. On one hand, the constraint \(s(X)\leq C_{\gamma}|\eta^{Q}(X)-1/2|^{\gamma}\) indicates a lack of strong signal from the source data relative to the target data. On the other hand, the constraint \(|\eta^{Q}(X)-1/2|\leq z\) describes the hard-to-classify boundary points of the target distribution. 
Therefore, our indicator precisely captures the challenging boundary region where classification becomes difficult using either the target or source data. As will be seen later, only the behavior around \(z=0\) matters for the asymptotics of transfer learning. The ambiguity level \(\varepsilon(\cdot)\) allows for the presence of unreliable source data, as it accounts for situations where \(\eta^{P}\) may not consistently provide a strong signal compared to \(\eta^{Q}\). Additionally, \(\varepsilon(\cdot)\) controls the ambiguity with respect to the distance from \(1/2\). The larger the ambiguity level, the harder the classification task. Note that by the margin assumption, a trivial upper bound of the ambiguity level is \(\varepsilon(z)=C_{\alpha}z^{1+\alpha}\). ### On the Ambiguity Level To help clarify the ambiguity level assumption, we provide some examples where explicit formulas for the ambiguity level \(\varepsilon(\cdot)\) are available, with a proper choice of \(\gamma\) and \(C_{\gamma}\). We defer the case of two logistic regression models across the target and source distributions to Section 5. **Example 1** (Perfect Source).: Assume the condition of _relative signal exponent_ proposed in Cai and Wei (2021), which amounts to \[s(x)\geq C_{\gamma}|\eta^{Q}(x)-\frac{1}{2}|^{\gamma}\quad\forall x\in\Omega.\] Then Assumption 2 holds with \(\varepsilon(z)\equiv 0\). Specifically, if \(\eta^{P}=\eta^{Q}\), it is straightforward to set \(\gamma=1\) and \(\varepsilon(\cdot)=0\). In this scenario, the Bayes classifiers are identical and \(\eta^{P}\) gives strong signal compared to \(\eta^{Q}\) over the whole support \(\Omega\), so the ambiguity level is set as zero. **Example 2** (Strong Signal over \(\Omega_{P}\)).: Instead of having a strong signal over the entire region as in Example 1, here we have a strong signal over only a given region: \[s(x)\geq C_{\gamma}|\eta^{Q}(x)-\frac{1}{2}|^{\gamma}\quad\forall x\in\Omega_{P}.\] Then Assumption 2 holds by taking \[\varepsilon(z)\geq\mathbb{E}_{(X,Y)\sim Q}\left[\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}\mathbf{1}\left\{|\eta^{Q}(X)-\frac{1}{2}|\leq z,X\notin\Omega_{P}\right\}\right],\] i.e., the ambiguity level is controlled by the risk within the complement \(\Omega\setminus\Omega_{P}\). This corresponds to the common case where the source data is collected from a subpopulation with respect to the target distribution. **Example 3** (Strong Signal with Imperfect Transfer).: Suppose that the transfer signal is strong, but the Bayes classifiers may differ, i.e., \[\Big{|}\eta^{P}(x)-\frac{1}{2}\Big{|}\geq C_{\gamma}\Big{|}\eta^{Q}(x)-\frac{1}{2}\Big{|}^{\gamma},\quad\forall x\in\Omega,\] \[\Omega_{R}:=\left\{x\in\Omega:\Big{(}\eta^{P}(x)-\frac{1}{2}\Big{)}\Big{(}\eta^{Q}(x)-\frac{1}{2}\Big{)}<0\right\}\neq\emptyset.\] Then Assumption 2 holds if \[\varepsilon(z)\geq\mathbb{E}_{(X,Y)\sim Q}\left[\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}\mathbf{1}\left\{\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}\leq z,X\in\Omega_{R}\right\}\right].\] In other words, the ambiguity level is determined by the risk associated with the region where different Bayes classifiers exist. A specific scenario that aligns with this example and utilizes \(\gamma=C_{\gamma}=1\) can be expressed as \[\eta^{P}(x)=\eta^{Q}(x)\mathbf{1}\{x\notin\Omega_{R}\}+\left(1-\eta^{Q}(x)\right)\mathbf{1}\{x\in\Omega_{R}\},\quad\forall x\in\Omega,\] where only the response values within \(\Omega_{R}\) are flipped. 
**Example 4** (Band-like Ambiguity).: A further noteworthy scenario is when the source regression function \(\eta^{P}\) concentrates around a "band" that is centered on an informative curve with respect to \(\eta^{Q}\), but with some small deviation. A related situation is studied in Reeve et al. (2021), where \(\eta^{P}\) is approximated by a linear transfer function of \(\eta^{Q}\). Suppose that there exists some band error constant \(\Delta\geq 0\), which represents the deviation level, such that \[s(x)\geq C_{\gamma}\Big{|}\eta^{Q}(x)-\frac{1}{2}\Big{|}^{\gamma}-\Delta, \tag{4}\] for any \(x\in\Omega\). Then Assumption 2 holds with \[\varepsilon(z;\gamma,C_{\gamma}/2)=\Big{(}C_{\alpha}z^{1+\alpha}\Big{)}\wedge\Big{(}2^{\frac{1+\alpha}{\gamma}}C_{\alpha}C_{\gamma}^{-\frac{1+\alpha}{\gamma}}\Delta^{\frac{1+\alpha}{\gamma}}\Big{)}. \tag{5}\] Notably, the case of \(\Delta=0\) degenerates into the perfect source scenario in Example 1. For the proof of the statement in Example 4, see Lemma E.1. For simplicity, we assume that (4) holds over the entire feature space \(\Omega\). Specifically, if we have \[\sup_{x\in\Omega}|\eta^{P}(x)-\eta^{Q}(x)|\leq\Delta, \tag{6}\] then the band-like ambiguity condition (4) holds with \(\gamma=C_{\gamma}=1\). This is common and meaningful in real-world applications where the regression function of the source distribution deviates slightly from \(\eta^{Q}\). By (5), Assumption 2 holds in this special case with \[\varepsilon(z;1,1/2)=\Big{(}C_{\alpha}z^{1+\alpha}\Big{)}\wedge\Big{(}2^{1+\alpha}C_{\alpha}C_{\gamma}^{-(1+\alpha)}\Delta^{1+\alpha}\Big{)}.\] ## 3 General Convergence Results Let \(\Pi\) be any given subset of distributions \((Q,P)\) satisfying Assumptions 1 and 2 with parameters \(\alpha\geq 0\) and \(\gamma,C_{\alpha},C_{\gamma}>0\), and ambiguity level \(\varepsilon(\cdot)\). Our analysis focuses on the performance of any classifier when the target and source distribution pair \((Q,P)\) belongs to \(\Pi\). This framework captures the essential information in the source data and how it can be used to improve the convergence rate of the excess risk. ### Performance of the TAB model In preparation for the general result, it is necessary to define the risk learned by the source data over the region \(s(x)\geq C_{\gamma}|\eta^{Q}(x)-1/2|^{\gamma}\) with strong signal strength. **Definition 2** (Signal Transfer Risk).: Define the _signal transfer risk_ of the classifier \(f\) with respect to parameters \(\gamma,C_{\gamma}>0\) as \[\xi(f;\gamma,C_{\gamma}):=\mathbb{E}_{(X,Y)\sim Q}\left[\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}\mathbf{1}\left\{f(X)\neq f_{Q}^{*}(X),s(X)\geq C_{\gamma}\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}^{\gamma}\right\}\right]. \tag{7}\] We abbreviate it as \(\xi(f)\) when there is no need to specify \(\gamma\) and \(C_{\gamma}\). The signal transfer risk results from the misclassification of points belonging to the area where \(s(x)\geq C_{\gamma}|\eta^{Q}(x)-1/2|^{\gamma}\). For simplicity, we assume the mild condition that \(\sup_{(Q,P)\in\Pi}\mathbb{E}_{\mathcal{D}_{P}}\xi(\hat{f}^{P})\gtrsim n_{P}^{-c}\) for some constant \(c>0\) to prevent a convergence rate faster than polynomial. Due to the strong signals offered by the source data within this area, it is intuitively expected that the signal transfer risk can be reduced with the aid of the source data sample \(\mathcal{D}_{P}\). 
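To make Definition 1, Assumption 2, and Definition 2 concrete, the sketch below evaluates the signal strength, an empirical ambiguity level at a fixed \(z\), and a Monte Carlo estimate of the signal transfer risk for a simple one-dimensional example; the regression functions, the noisy stand-in for a source plug-in rule, and all constants are illustrative assumptions rather than objects taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-dimensional example with Q_X uniform on [0, 1].
def eta_q(x):
    return 0.5 + 0.4 * np.sin(2 * np.pi * x)

def eta_p(x):
    # Source regression function: same boundary, attenuated and slightly shifted.
    return 0.5 + 0.3 * np.sin(2 * np.pi * x) + 0.02

gamma, c_gamma = 1.0, 0.5

x = rng.uniform(0, 1, 200_000)
margin_q = np.abs(eta_q(x) - 0.5)

# Definition 1: signal strength, zero wherever the source points the wrong way.
s = np.maximum(np.sign(eta_q(x) - 0.5) * (eta_p(x) - 0.5), 0.0)

# Assumption 2: empirical ambiguity level at a given z.
z = 0.1
weak = (s <= c_gamma * margin_q**gamma) & (margin_q <= z)
ambiguity_level = np.mean(margin_q * weak)
print(f"empirical ambiguity level at z={z}: {ambiguity_level:.5f}")

# Definition 2: signal transfer risk of a hypothetical noisy source plug-in rule.
eta_p_hat = eta_p(x) + rng.normal(0, 0.05, x.size)   # stand-in for an estimate of eta_p
f_p_hat = (eta_p_hat >= 0.5).astype(int)
f_q_star = (eta_q(x) >= 0.5).astype(int)
strong = s >= c_gamma * margin_q**gamma
signal_transfer_risk = np.mean(margin_q * ((f_p_hat != f_q_star) & strong))
print(f"Monte Carlo signal transfer risk: {signal_transfer_risk:.5f}")
```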
In this paper, the classifier derived from the target data is assumed to be a _plug-in rule_, of the form \(\mathbf{1}\{\hat{\eta}^{Q}(x)\geq\frac{1}{2}\}\), where \(\hat{\eta}^{Q}\) is an estimator of the regression function \(\eta^{Q}\). By introducing a novel strategy called the _TAB model_, the following result demonstrates a general approach to achieving a faster convergence rate of the excess risk. **Theorem 1**.: Let \(\hat{\eta}^{Q}\) be an estimate of the regression function \(\eta^{Q}\) and \(\hat{f}^{P}\) be a classifier obtained by \(\mathcal{D}_{P}\). Suppose there exist two \(n_{Q}\)-sequences \(\delta_{Q},\delta_{f}\) such that \(\delta_{Q}^{1+\alpha}\gtrsim n_{Q}^{-c}\) for some constant \(c>0\) and with probability at least \(1-\delta_{f}\), for any \(x\in\Omega\) we have \(\forall t>0\), \[\sup_{(Q,P)\in\Pi}\mathbb{P}_{\mathcal{D}_{Q}}\big{(}|\hat{\eta}^{Q}(x)-\eta^{Q}(x)|\geq t|X_{1:n_{Q}}\big{)}\leq C_{1}\exp\Big{(}-\Big{(}\frac{t}{\delta_{Q}}\Big{)}^{2}\Big{)}, \tag{8}\] for some constant \(C_{1}>0\). Given the choice of \(\tau\gtrsim\log(n_{Q}\lor n_{P})\delta_{Q}\), the _TAB classifier_ \[\hat{f}_{TAB}(x)=\begin{cases}\mathbf{1}\{\hat{\eta}^{Q}(x)\geq\frac{1}{2}\},&\text{ if }|\hat{\eta}^{Q}(x)-\frac{1}{2}|\geq\tau,\\ \hat{f}^{P}(x),&\text{ otherwise.}\end{cases} \tag{9}\] satisfies \[\sup_{(Q,P)\in\Pi}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P})}\mathcal{E}_{Q}(\hat{f}_{TAB})\lesssim\left(\sup_{(Q,P)\in\Pi}\mathbb{E}_{\mathcal{D}_{P}}\xi(\hat{f}^{P})+\varepsilon(2\tau)\right)\wedge\tau^{1+\alpha}+\delta_{f}. \tag{10}\] We now explain every important term in the upper bound (10). * The term \(\sup_{(Q,P)\in\Pi}\mathbb{E}_{\mathcal{D}_{P}}\xi(\hat{f}^{P})\) captures the risk transferred by the source data using a classifier \(\hat{f}^{P}\), excluding the ambiguity component. It often exhibits a faster convergence rate than \(\delta_{Q}^{1+\alpha}\) and quantifies the benefits of transfer learning. * The term \(\varepsilon(2\tau)\) quantifies the ambiguity level only when \(\eta^{Q}\) is close to \(1/2\). Thus, the target data not only establishes an upper bound on the excess risk but also plays a critical role in reducing the risk caused by ambiguity in the source data. * The threshold \(\tau\) balances the classification ability of the target and source data by filtering out points easily classified with the target data. For \(\tau=0\), our approach mirrors Audibert and Tsybakov (2007), with no hope of achieving an asymptotically lower excess risk than \(\delta_{Q}^{1+\alpha}\) without source data. Alternatively, for \(\tau=1/2\), only the source data is used to construct the classifier, which is often the case in domain adaptation when no target data is available. Our choice of \(\tau\) between these extremes combines the advantages of the target and source data. * The final term \(\delta_{f}\) represents the probability of the concentration inequality (8) failing due to extreme realizations of \(\mathcal{D}_{Q}\). Generally, it is significantly smaller than the other terms that upper bound the excess risk. For instance, it decays exponentially with respect to \(n_{Q}\) in non-parametric classification using K-NN estimators (see Lemma 9.1 of Cai and Wei (2021)). With the optimal choice of \(\tau\asymp\log(n_{Q}\lor n_{P})\delta_{Q}\), our upper bound cannot be worse than the \(\delta_{Q}^{1+\alpha}\) in Audibert and Tsybakov (2007), up to some logarithmic terms. Occasionally, the concentration of \(\hat{\eta}^{Q}\) may not be exponential, as in (8). 
To overcome this limitation, the following theorem generalizes Theorem 1 to allow any type of concentration property of \(\hat{\eta}^{Q}\). **Theorem 2**.: Let \(\hat{\eta}^{Q}\) be an estimate of the regression function \(\eta^{Q}\), and let \(\hat{f}^{P}\) be a classifier obtained from \(\mathcal{D}_{P}\). Suppose that for some \(\tau>0\), there exist functions \(\delta(\cdot,\cdot),\;\delta_{Q}(\cdot,\cdot)\) such that for any \((Q,P)\in\Pi\), the concentration property \[\mathbb{P}_{\mathcal{D}_{Q}}\left(|\hat{\eta}^{Q}(x)-\eta^{Q}(x)|\geq\tau \right)\leq\delta_{Q}(n_{Q},\tau) \tag{11}\] holds for any \(x\in\Omega^{*}\subset\Omega\) with \(Q(\Omega^{*})\geq 1-\delta(n_{Q},\tau)\). Then the _TAB classifier_ \[\hat{f}_{TAB}(x)=\begin{cases}\mathbf{1}\{\hat{\eta}^{Q}(x)\geq\frac{1}{2}\},& \text{ if }|\hat{\eta}^{Q}(x)-\frac{1}{2}|\geq\tau,\\ \hat{f}^{P}(x),&\text{ otherwise.}\end{cases} \tag{12}\] satisfies \[\sup_{(Q,P)\in\Pi}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P})}\mathcal{E}_{ Q}(\hat{f}_{TAB})\lesssim\left(\sup_{(Q,P)\in\Pi}\mathbb{E}_{\mathcal{D}_{P}} \xi(\hat{f}^{P})+\varepsilon(2\tau)\right)\wedge\tau^{1+\alpha}+\delta_{Q}(n_{ Q},\tau)+\delta(n_{Q},\tau). \tag{13}\] The quantity \(\delta_{Q}(n_{Q},\tau)\) in (11) controls the failure probability bound of the concentration inequality concerning \(\mathcal{D}_{Q}\). In Theorem 1, it is further incorporated into the probability of extreme realizations of \(X_{1:n_{Q}}\) (8), and the subsequent concentration inequality for the failure probability related to \(Y_{1:n_{Q}}\). To reduce the more general and abstract Theorem 2 to Theorem 1, it suffices to set \(\delta_{Q}(n_{Q},\tau)=\delta_{f}+C_{1}\exp(-(\tau/\delta_{Q})^{2})\) and \(\Omega^{*}\equiv\Omega\), where \(\delta_{f}\) and \(C_{1}\) are the notations used in Theorem 1. Readers may be concerned that the exponential concentration (8) might not hold for all points over \(\Omega\). Indeed, this is particularly true when the support \(\Omega\) is not compact, and the concentration inequalities only hold within a part of \(\Omega\) with high probability. However, we can address this by adding a small failure probability with exponential concentration, i.e., \(1-Q_{X}(\Omega^{*})\leq\delta(n_{Q},\tau)\), to the risk bound given in (13). ### Simple Approach to Bounding Signal Transfer Risk The majority of the existing literature focuses on bounding the target excess risk using only the target data. While the excess risk of a classifier \(\hat{f}^{P}\) with respect to the source distribution, i.e., \[2\mathbb{E}_{(X,Y)\sim P}\left[\Big{|}\eta^{P}(X)-\frac{1}{2}\Big{|}\mathbf{1 }\left\{\hat{f}^{P}(X)\neq f_{P}^{*}(X)\right\}\right],\] is well-studied, its signal transfer risk \(\xi(\hat{f}^{P})\), given by (7), is less explored. To incorporate the vast literature of traditional statistical learning into our framework, we present a result that directly bounds the signal transfer risk \(\xi(\hat{f}^{P})\) in terms of the excess risk of the source distribution. This result involves a more refined version of the signal transfer risk, and more significantly, does not rely on any concentration property often required by plug-in rules. Some additional assumptions are needed. Condition 1 ensures the boundedness of the Radon-Nikodym derivative \(\frac{dQ_{X}}{dP_{X}}\). It ensures that \(P_{X}\) is sufficiently large enough to learn every point in \(\Omega\) relative to \(Q_{X}\). **Condition 1** (Absolutely Continuity).: \(Q_{X}\) is absolutely continuous with respect to \(P_{X}\). 
Moreover, there exists some constant \(M>0\) such that the Radon-Nikodym derivative satisfies \(\frac{dQ_{X}}{dP_{X}}(x)\leq M\) for any \(x\in\Omega\). Denote all such marginal distribution pairs \((Q_{X},P_{X})\) that satisfy Condition 1 with parameter \(M>0\) by \(\mathcal{A}(M)\). In other words, the source distribution provides enough coverage over the space \(\Omega\) to enable accurate learning of the target distribution. It is worth noting that Condition 1 implicitly implies that \(\Omega\) is a subset of \(\Omega_{P}\). Now we are in a position to present the upper bound for the signal transfer risk with respect to the target distribution in terms of the source excess risk. **Theorem 3**.: Define the source excess risk as \[\varepsilon_{P}:=\sup_{(Q,P)\in\Pi\cap\mathcal{A}(M)}\mathbb{E}_{\mathcal{D}_{ P}}\mathbb{E}_{(X,Y)\sim P}\left[\Big{|}\eta^{P}(X)-\frac{1}{2}\Big{|}\mathbf{1} \left\{\hat{f}^{P}(X)\neq f_{P}^{*}(X)\right\}\right].\] Then, the signal transfer risk satisfies \[\sup_{(Q,P)\in\Pi\cap\mathcal{A}(M)}\mathbb{E}_{\mathcal{D}_{P}}\xi(\hat{f}^{P })\leq\begin{cases}2M^{\frac{1+\alpha}{\gamma+\alpha}}C_{\alpha}^{\frac{\gamma -1}{\gamma+\alpha}}C_{\gamma}^{-\frac{1+\alpha}{\gamma+\alpha}}\varepsilon_{P}^ {\frac{1+\alpha}{\gamma+\alpha}},&\gamma\geq 1,\\ 2^{\gamma-1}MC_{\gamma}^{-1}\varepsilon_{P},&\gamma<1.\end{cases}\] Although the result may be sub-optimal when \(\gamma<1\) and additionally requires that \((Q,P)\in\mathcal{A}(M)\), Theorem 3 shows that the signal transfer risk can be bounded by the source excess risk \(\varepsilon_{P}\), and inherits the performance and possible consistency with respect to classification on the source data. As long as \(\hat{f}^{P}\) learns the source distribution sufficiently well, the signal transfer risk will converges faster than the rate obtained with only the target data, namely, \(\delta_{Q}^{1+\alpha}\). Theorem 3 only requires knowledge of conventional statistical learning techniques, which are widely studied and well-understood in the literature. Therefore, researchers and practitioners can easily apply our framework to a wide range of problems, without the need for specialized knowledge or expertise in transfer learning or related fields. Conventional theory on the excess risk can thus be applied to transfer learning. ## 4 Applications in Non-parametric Classification In this section, we aim to apply the general result of Theorem 1 to non-parametric classification settings. Here we design the TAB classifier by combining plug-in rules over the target and source data, and obtain minimax optimal rates under non-parametric settings. See Audibert and Tsybakov (2007) for a comprehensive overview of theoretical properties of plug-in rules. We adopt \(K\)-nearest neighbor classifiers as plug-in rules for both \(\hat{\eta}^{Q}\) and \(\hat{\eta}^{P}\). Our analysis in this section then builds on prior work on rates for \(K\)-nearest neighbor classification (e.g. Hall et al. (2008); Samworth (2012); Gadat et al. (2016); Celisse and Mary-Huard (2018); Cannings et al. (2020a)). For a review of early work on the theoretical properties of the \(K\)-NN classifier, see Devroye et al. (1997). Also, in the literature of non-parametric classification, see Fan (1993) and Fan and Gijbels (1996) for local polynomial regression as an alternative to \(K\)-NN methods. 
If the classifier \(\hat{f}^{P}(\cdot)=\mathbf{1}\{\hat{\eta}^{P}(\cdot)\geq\frac{1}{2}\}\) is a general plug-in rule, we also provide an explicit upper bound for the signal transfer risk \(\xi(\hat{f}^{P})\) based on the point-wise misclassification rate of \(\hat{\eta}^{P}(x)\). See Appendix A.1 for the details of this bound and related results. ### K-Nearest Neighbor TAB Classifier Given a query point \(x\in\mathbb{R}^{d}\), we first reorder the target data pairs as \((X_{(1)},Y_{(1)}),\ldots,(X_{(n_{Q})},Y_{(n_{Q})})\) based on the Euclidean distances of the \(X_{i}\)'s to \(x\), i.e., \[\|X_{(1)}-x\|_{2}\leq\cdots\leq\|X_{(n_{Q})}-x\|_{2}.\] Then, we define the \(K\)-NN estimate \(\hat{\eta}^{Q}_{k}(x)\) as the simple average of the response values of the \(k_{Q}\) nearest neighbors of \(x\) in the target data: \[\hat{\eta}^{Q}_{k_{Q}}=\frac{1}{k_{Q}}\sum_{i=1}^{k_{Q}}Y_{(i)}(x).\] Similarly, we define the \(K\)-NN estimate \(\hat{\eta}^{P}_{k_{P}}(x):=k_{P}^{-1}\sum_{i=1}^{k_{P}}Y^{P}_{(i)}(x)\) for the source data pairs \(\mathcal{D}_{P}\). Finally, we plug these estimates into the TAB \(K\)-NN classifier to obtain \[\hat{f}^{NN}_{TAB}(x)=\begin{cases}\mathbf{1}\{\hat{\eta}^{Q}_{k_{Q}}(x)\geq \frac{1}{2}\},&\quad\text{if }|\hat{\eta}^{Q}_{k_{Q}}(x)-\frac{1}{2}|\geq\tau,\\ \mathbf{1}\{\hat{\eta}^{P}_{k_{P}}(x)\geq\frac{1}{2}\},&\quad\text{otherwise}. \end{cases}\] As for the threshold parameter, we choose \[\tau\asymp\log(n_{Q}\lor n_{P})k_{Q}^{-\frac{1}{2}}.\] Further elaboration on the rationale behind choosing \(\tau\), as well as the well-studied optimal selection of \((k_{Q},k_{P})\), can be found in Section 4.3. ### Non-parametric Classification Setting We are now in a position to state the applications of our proposed TAB classifier in non-parametric classification under the finite dimension regime. In addition to the margin assumption and the ambiguity level assumption, this paper considers the non-parametric classification problem when the following smoothness condition holds. **Condition 2** (Smoothness).: For any \(\beta\in[0,1]\), the \((\beta,C_{\beta})\)-_Holder_ is the class of functions \(g:\mathbb{R}^{d}\to\mathbb{R}\) satisfying, for any \(x,x^{\prime}\in\mathbb{R}^{d}\) \[|g(x)-g(x^{\prime})|\leq C_{\beta}\|x-x^{\prime}\|^{\beta},\] for some constant \(C_{\beta}>0\). We denote this class of functions by \(\mathcal{H}(\beta,C_{\beta})\). Previous works, including Cai and Wei (2021) and Reeve et al. (2021), do not typically require any smoothness assumption for \(\eta^{P}\). While this simplification is justified when \(\eta^{P}\) is closely related to \(\eta^{Q}\), it may overlook valuable information from a sufficiently smooth \(\eta^{P}\), leading to a phase-transition in the upper and lower bounds of the excess risk (for a detailed discussion, refer to Appendix A.2). In contrast, our approach considers the smoothness of _both_ the target and source regression functions. Specifically, we assume that \[\eta^{Q}\in\mathcal{H}(\beta,C_{\beta}),\quad\eta^{P}\in\mathcal{H}(\beta_{P}, C_{\beta_{P}}).\] This condition allows us to obtain a more refined upper bound that depends on the smoothness of both functions. Our next condition concerns the mass of the source and target distributions in the sense that the density functions with respect to \(Q_{X}\) and \(P_{X}\) are bounded from zero and infinity. A similar condition has been imposed in (Audibert and Tsybakov, 2007) and Cai and Wei (2021). 
We require that both \(Q_{X}\) and \(P_{X}\) satisfy the following condition: **Condition 3** (Strong Density).: The marginal distribution \(Q_{X}\) is absolutely continuous with respect to the Lebesgue measure \(\lambda\) on its _compact support_ (denoted by \(\Omega\)). Furthermore, we have that \[\frac{dQ_{X}}{d\lambda}(x)\in[\mu^{-},\mu^{+}],\quad\frac{ \lambda(B(x,r)\cap\Omega)}{\lambda(B(x,r))}\geq c_{\mu},\quad\forall\:0<r<r_{ u},\ x\in\Omega,\] Denote the set of such marginal distributions \(Q_{X}\) by \(\mathcal{S}(\mu)\) with positive parameters \(\mu=(\mu^{+},\mu^{-},c_{\mu},r_{\mu})\). Taking all the conditions above into account, we consider the subset of source and target distribution pairs in \(\Pi\) that satisfies Assumptions 1, 2 and Conditions 2, 3: \[\Pi^{NP}:= \{(Q,P)\in\Pi:\eta^{Q}\in\mathcal{H}(\beta,C_{\beta}),\eta^{P} \in\mathcal{H}(\beta_{P},C_{\beta_{P}}),Q_{X},P_{X}\in\mathcal{S}(\mu)\}.\] We further impose the mild assumption \(\alpha\beta\leq d\) to rule out the "super-fast" rates of convergence mentioned in Audibert and Tsybakov (2011). This is guaranteed to hold when \(\eta^{Q}\) equals \(1/2\) at an interior point of \(\Omega\) (See Proposition 3.4 of Audibert and Tsybakov (2011)). In the following part of this section, we explore three types of additional conditions on the space \(\Pi^{NP}\) and analyze the respective excess risks. 1. **Band-like Ambiguity:** We consider the scenario of band-like ambiguity described in Example 4. Define the focal parametric space as \[\Pi^{NP}_{BA}:=\left\{(Q,P)\in\Pi^{NP}:s(x)\geq C_{\gamma}\Big{|} \eta^{Q}(x)-\frac{1}{2}\Big{|}^{\gamma}-\Delta,\ \forall x\in\Omega\right\}.\] Recall that in Example 4, we had shown that this implies \[\varepsilon(z;\gamma,C_{\gamma}/2)=\Big{(}C_{\alpha}z^{1+\alpha} \Big{)}\wedge\Big{(}2^{\frac{1+\alpha}{\gamma}}C_{\alpha}C_{\gamma}^{-\frac{1 +\alpha}{\gamma}}\Delta^{\frac{1+\alpha}{\gamma}}\Big{)}.\] Furthermore, in this case, we set \(\beta_{P}=0\), which means that no additional smoothness condition is imposed on \(\eta^{P}\). Instead, we allow \(\eta^{P}\) to arbitrarily fluctuate within a small band whose width is measured by \(\Delta\). When \(\Delta=0\), our setting covers the one in Cai and Wei (2021). 2. **Smooth Source:** We add the condition \(\beta_{P}\geq\gamma\beta\) for scenarios where \(\eta^{P}\) is smooth. While the ambiguity level \(\varepsilon(\cdot)\) is arbitrary, this condition ensures that points with strong signal strength have neighboring data points with strong signal strength of \(\eta^{P}\) as well, enhancing the classification of source data points with strong signal strength. Define \(\Pi^{NP}_{S}\) as the subset of \(\Pi^{NP}\) such that \(\beta_{P}\geq\gamma\beta\). When \(\eta^{Q}\) and \(\eta^{P}\) share the same smoothness degree \(\beta\), we set \(\gamma=1\). 3. **Strong Signal with Imperfect Transfer:** We consider the scenario in Example 3 where a strong signal strength exists but the direction may be reversed. This condition ensures that the region \(s(x)\leq C_{\gamma}|\eta^{Q}(x)-1/2|^{\gamma}\) is smooth, which further ensures the availability of a sufficient number of neighboring source data points with the strong signal. In fact, its boundary is a part of the decision boundary \(\{x\in\Omega:\eta^{Q}(x)=1/2\}\) whose smoothness is guaranteed by the smoothness of \(\eta^{Q}\) (see the proof of Theorem 4). 
Define \[\Pi_{I}^{NP}:=\left\{(Q,P)\in\Pi^{NP}:\eta^{P}\text{ is continuous, }\Big{|}\eta^{P}(x)-\frac{1}{2}\Big{|}\geq C_{\gamma}\Big{|}\eta^{Q}(x)- \frac{1}{2}\Big{|}^{\gamma},\ \forall x\in\Omega\right\}.\] It is worth noting that \(s(x)\geq C_{\gamma}|\eta^{Q}(x)-1/2|^{\gamma}\) if and only if \(x\notin\Omega_{R}\), i.e., \((\eta^{P}(x)-1/2)(\eta^{Q}(x)-1/2)\geq 0\). Therefore, the ambiguity level \[\varepsilon(z)=\mathbb{E}_{(X,Y)\sim Q}\left[\Big{|}\eta^{Q}(X)-\frac{1}{2} \Big{|}\mathbf{1}\left\{\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}\leq z,X\in \Omega_{R}\right\}\right]\] precisely captures the risk caused by different Bayes classifiers between the target and source data. We also let \(\beta_{P}=0\) in this case, but assume that \(\eta^{P}\) is continuous to make sure that the region \(s(x)\leq C_{\gamma}|\eta^{Q}(x)-1/2|^{\gamma}\) has a continuous boundary. The analysis of the excess risk rate with respect to the wider class \(\Pi^{NP}\) is provided in Appendix A.2. ### Optimal Rate of Excess Risk Before presenting the results regarding the optimal rate of \(\mathcal{E}_{Q}(\hat{f}_{TAB}^{NN})\), we first discuss parameter selection. We choose the number of nearest neighbors as follows: \[k_{Q}=\lfloor c_{Q}n_{Q}^{\frac{2\beta}{2\beta+d}}\rfloor,\ k_{P}=\lfloor c_{ P}n_{P}^{\frac{2\gamma\beta}{2\beta+d}}\rfloor\] where \(c_{Q}\) and \(c_{P}\) are positive constants. This choice is motivated by previous work such as Gadat et al. (2016) and Cannings et al. (2020a), where similar choices are made in the context of nearest-neighbor methods. Notably, the choice of \(k_{P}\) is similarly derived by seeing \(\gamma\beta\) as the "smoothness" parameter for the source data, and our choice coincides with the classical optimal choice when \(\beta_{P}=\gamma\beta\). In addition, we assume \(\gamma\) and \(\beta\) are known here for convenience. For adaptive and rate-optimal approaches to determining the number of nearest neighbors, see Lepski (1993); Reeve et al. (2021). As for the threshold in the TAB classifier \(\hat{f}_{TAB}^{NN}\), we choose \[\tau\asymp\log(n_{Q}\lor n_{P})k_{Q}^{-\frac{1}{2}}.\] This choice is consistent with the concentration property of \(\hat{\eta}_{k_{Q}}^{Q}\), of which the "uncertainty" level \(\delta_{Q}\) in (8) is proportional to \(k_{Q}^{-\frac{1}{2}}\), since \(\hat{\eta}_{k_{Q}}^{Q}\) is the average of \(k_{Q}\) random variables. By the definition of \(k_{Q}\), we see that \[\tau\asymp\log(n_{Q}\lor n_{P})n_{Q}^{-\frac{\beta}{2\beta+d}}.\] Given a family of the target and source distributions, the next theorem gives a provable upper bound on the excess risk of the TAB \(K\)-NN classifier \(\hat{f}_{TAB}^{NN}\), with the proper parameter choices. **Theorem 4** (Non-parametric Classification Upper Bound).: Suppose that \(n_{Q}^{\frac{d}{2\beta+d}}\exp(-c_{Q}n_{Q}^{\frac{2\beta}{2\beta+d}})\lesssim n_{P} ^{-\frac{\beta(1+\alpha)}{2\gamma\beta+d}}\). Then the TAB \(K\)-NN classifier \(\hat{f}_{TAB}^{NN}(x)\) satisfies: 1. **Band-like Ambiguity:** \[\sup_{(Q,P)\in\Pi_{BA}^{NP}}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P})} \mathcal{E}_{Q}(\hat{f}_{TAB}^{NN})\lesssim\left(n_{P}^{-\frac{\beta(1+\alpha) }{2\gamma\beta+d}}+\Delta^{\frac{1+\alpha}{\gamma}}\right)\wedge\left(\log^{1+ \alpha}(n_{Q}\lor n_{P})n_{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\right).\] (14) 2. 
**Smooth Source:** \[\sup_{(Q,P)\in\Pi_{S}^{NP}}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P})} \mathcal{E}_{Q}(\hat{f}_{TAB}^{NN})\lesssim\left(n_{P}^{-\frac{\beta(1+\alpha) }{2\gamma\beta+d}}+\varepsilon(2\tau)\right)\wedge\left(\log^{1+\alpha}(n_{Q} \lor n_{P})n_{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\right).\] (15) 3. **Strong Signal with Imperfect Transfer:** \[\sup_{(Q,P)\in\Pi_{I}^{NP}}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P})} \mathcal{E}_{Q}(\hat{f}_{TAB}^{NN})\lesssim\left(n_{P}^{-\frac{\beta(1+\alpha )}{2\gamma\beta+d}}+\varepsilon(2\tau)\right)\wedge\left(\log^{1+\alpha}(n_{Q} \lor n_{P})n_{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\right).\] (16) Theorem 4 is obtained simply by verifying the conditions in Theorem 1, as demonstrated in Lemmas C.1 and C.2. We adopt a more refined and technical approach to analyze the signal transfer risk of plug-in rules (see Theorem 8) compared to Theorem 3. The mild condition \(n_{Q}^{\frac{d}{2\beta+d}}\exp(-c_{Q}n_{Q}^{\frac{2\beta}{2\beta+d}})\lesssim n _{P}^{-\frac{\beta(1+\alpha)}{2\gamma\beta+d}}\) controls the failure probability \(\delta_{f}\) in the terminology of Theorem 1. Our risk bounds reveal that transfer learning leads to faster convergence rates of excess risk when \(n_{P}\) is large compared to \(n_{Q}\) and the ambiguity level is small; specifically, \[\Pi_{BA}^{NP}:n_{P}\gg n_{Q}^{\frac{2\gamma\beta+d}{2\beta+d}},\ \Delta\ll n_{Q}^{-\frac{\gamma\beta}{2\beta+d}};\] \[\Pi_{S}^{NP},\Pi_{I}^{NP}:n_{P}\gg n_{Q}^{\frac{2\gamma\beta+d}{ 2\beta+d}},\ \varepsilon(2\tau)\ll\tau^{1+\alpha}.\] On the other hand, if \(n_{P}\) is small compared to \(n_{Q}\), then the term \(n_{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\) dominates the upper bound, and reduces to the risk rate in the conventional setting with only target data and the strong density assumption (Audibert and Tsybakov, 2007), up to logarithmic factors. **Remark 1**.: (a) When \(\Delta=0\) in (14), or \(\varepsilon(\cdot)\equiv 0\) in (16), our upper bound reduces to Theorem 2 of Cai and Wei (2021), up to logarithmic factors. Importantly, our method demonstrates robustness against a positive band error \(\Delta\), in constrast to the weighted \(K\)-NN estimator proposed in their work. We refer the reader to Figure 2 in Section 6 for a numerical comparison. (b) Our result (15) supplements these previous works (Cai and Wei, 2021; Reeve et al., 2021) by allowing for an arbitrary type of ambiguity after imposing a smoothness condition on \(\eta^{P}\). We next provide lower bounds on the minimax excess risk, which show that the TAB \(K\)-NN classifier achieves the minimax optimal rates, even when we add the constraint \(\Omega=\Omega_{P}\). Throughout this paper, all lower bound results consider \(\Pi\) to be the set of distributions \((Q,P)\) that satisfy Assumption 1 and 2 with ambiguity level \(\varepsilon(\cdot)\), without restricting to any given subset. **Theorem 5** (Non-parametric Classification Lower Bound).: Fix the parameters \(\alpha\beta\leq d\) and set \(\tau\asymp\log(n_{Q}\lor n_{P})n_{Q}^{-\frac{\beta}{2\beta+d}}\). We have that 1. **Band-like Ambiguity:** \[\inf_{\hat{f}}\sup_{\begin{subarray}{c}(Q,P)\in\Pi_{PA}^{NP}\\ \Omega=\Omega_{P}\end{subarray}}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P}) }\mathcal{E}_{Q}(\hat{f}_{TAB}^{NN})\gtrsim\left(n_{P}^{-\frac{\beta(1+\alpha) }{2\beta+d}}+\Delta^{\frac{1+\alpha}{\gamma}}\right)\wedge n_{Q}^{-\frac{ \beta(1+\alpha)}{2\beta+d}}.\] (17) 2. 
**Smooth Source:** For some constant \(c>0\) that is independent of \(\alpha\), \[\inf_{\hat{f}}\sup_{\begin{subarray}{c}(Q,P)\in\Pi_{S}^{NP}\\ \Omega=\Omega_{P}\end{subarray}}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P}) }\mathcal{E}_{Q}(\hat{f}_{TAB}^{NN})\gtrsim\left(n_{P}^{-\frac{\beta(1+\alpha )}{2\beta+d}}+\varepsilon(cn_{Q}^{-\frac{\beta}{2\beta+d}})\right)\wedge n_{Q} ^{-\frac{\beta(1+\alpha)}{2\beta+d}}.\] (18) 3. **Strong Signal with Imperfect Transfer:** For some constant \(c>0\) that is independent of \(\alpha\), \[\inf_{\hat{f}}\sup_{\begin{subarray}{c}(Q,P)\in\Pi_{I}^{NP}\\ \Omega=\Omega_{P}\end{subarray}}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P}) }\mathcal{E}_{Q}(\hat{f}_{TAB}^{NN})\gtrsim\left(n_{P}^{-\frac{\beta(1+\alpha )}{2\beta+d}}+\varepsilon(cn_{Q}^{-\frac{\beta}{2\beta+d}})\right)\wedge n_{Q} ^{-\frac{\beta(1+\alpha)}{2\beta+d}}.\] (19) In the special case where \(\sup_{x\in\Omega}|\eta^{Q}(x)-\eta^{P}(x)|\leq\Delta\), we can determine that the minimax optimal excess risk is \(\left(n_{P}^{-\frac{\beta(1+\alpha)}{2\beta+d}}+\Delta^{1+\alpha}\right)\wedge n _{Q}^{-\frac{\beta(1+\alpha)}{2\beta+d}}\), up to logarithmic factors of \(n_{Q}\lor n_{P}\). As long as \[n_{P}\gg n_{Q},\ \Delta\ll n_{Q}^{-\frac{\beta}{2\beta+d}},\] the classifier will benefit from the source data. The proof of Theorem 5 reveals that the condition (6), although slightly stronger than the band-like ambiguity condition (4), remains compatible with the lower bound construction as it ensures \(\sup_{x\in\Omega}|\eta^{Q}(x)-\eta^{P}(x)|\leq\Delta\). ## 5 Applications in Logistic Regression Besides non-parametric classification, we also investigate the use of transfer learning in logistic regression models, which are a commonly used parametric approach in classification. Previous works such as Zheng et al. (2019) have studied the "data enriched model" for logistic regression under a single-source setting, Abramovich and Grinshtein (2019) have explored sparse logistic regression in high-dimensional settings, and Tian and Feng (2022) have considered transfer learning in generalized linear models. Our goal is to reveal how incorporating an additional source logistic regression model with a different linear term coefficient can enhance the convergence of the excess misclassification rate. Suppose the source and target distributions are high-dimensional (\(d\overset{n_{Q}\to\infty}{\longrightarrow}\infty\)) logistic regression models given by \[\begin{split}\text{Target data model: }\eta^{Q}(x)&= \sigma({\beta_{Q}}^{T}x)\\ \text{Source data model: }\eta^{P}(x)&=\sigma({\beta_{P}}^{T}x), \end{split} \tag{20}\] where two independent samples \((X_{1},Y_{1}),\ldots,(X_{n_{Q}},Y_{n_{Q}})\stackrel{{\text{iid}}}{{ \sim}}Q\) and \((X_{1}^{P},Y_{1}^{P}),\ldots,(X_{n_{p}}^{P},Y_{n_{p}})\stackrel{{ \text{iid}}}{{\sim}}P\) are observed. To simplify the theoretical analysis, we assume that the marginal distributions \(Q_{X}\) and \(P_{X}\) are both \(N(0,I_{d})\), the \(d\)-dimension standard normal distribution. This marginal distribution is convenient when working with the restricted strong convexity condition (See Negahban et al. (2009)). Let \(\angle(\alpha,\beta)\) be the angle of two vectors \(\alpha\) and \(\beta\), in the range of \(0\) and \(\pi/2\). 
Consider the following parametric space of the coefficient pair \((\beta_{Q},\beta_{P})\): \[\Theta(s,\Delta)=\left\{(\beta_{Q},\beta_{P}):\|\beta_{Q}\|_{0}\leq s,\angle (\beta_{Q},\beta_{P})\leq\Delta\right\}, \tag{21}\] for some \(s>0\) and \(\Delta\in[0,\pi/2]\). The corresponding family of distribution pairs is then \[\Pi^{LR}=\Pi^{LR}(s,\Delta,M)=\{ (Q,P):X,X^{P}\sim N(0,I_{d}),\eta^{Q}(x)=\sigma(\beta_{Q}^{T}x), \tag{22}\] \[\eta^{P}(x)=\sigma(\beta_{P}^{T}x),(\beta_{Q},\beta_{P})\in \Theta(s,\Delta)\}.\] To ensure the control of the ambiguity level, we impose a constraint on the angle between \(\beta_{Q}\) and \(\beta_{P}\) in (21), which must be smaller than a constant \(\Delta\). Importantly, our constructed parametric space does not impose any sparsity conditions on \(\beta_{P}\). Given the family of logistic distribution pairs \(\Pi^{LR}\), we show that \(\Pi^{LR}\) is a subset of the overall distribution pair space \(\Pi\) with \(\alpha=1\) and \(\varepsilon(z,1,m/\pi)\lesssim z^{2}\wedge\Delta^{2}\), provided that \(\|\beta_{P}\|\geq m\|\beta_{Q}\|\) for some constant \(m>0\). See Lemma D.1 for a detailed proof. For model fitting, we minimize the negative Bernoulli likelihood function with lasso regularization terms to obtain \(\hat{\beta}_{Q}\) and \(\hat{\beta}_{P}\), i.e., \[\hat{\beta}_{Q} =\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}}\frac{1}{n_{Q} }\sum_{i=1}^{n_{Q}}\left\{\log(1+e^{X_{i}^{T}\beta})-Y_{i}X_{i}^{T}\beta \right\}+\lambda_{Q}\|\beta\|_{1} \tag{23}\] \[\hat{\beta}_{P} =\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}}\frac{1}{n_{P} }\sum_{i=1}^{n_{P}}\left\{\log(1+e^{X_{i}^{P}{}^{T}\beta})-Y_{i}^{P}X_{i}^{P} {}^{T}\beta\right\}+\lambda_{P}\|\beta\|_{1},\] where \(\lambda_{*}\asymp\sqrt{\frac{\log d}{n_{*}}}\) for \(*\in\{Q,P\}\). The corresponding target and source plug-in classifiers are then \(\mathbf{1}\{\sigma(\hat{\beta}_{Q}^{T}x)\geq\frac{1}{2}\}=\mathbf{1}\{\hat{ \beta}_{Q}^{T}x\geq 0\}\) and \(\mathbf{1}\{\hat{\beta}_{P}^{T}x\geq 0\}\). Hence, the TAB logistic lasso classifier becomes \[\hat{f}^{LR}_{TAB}(x)=\begin{cases}\mathbf{1}\{\hat{\beta}_{Q}^{T}x\geq 0\},& \quad\text{if }|\sigma(\hat{\beta}_{Q}^{T}x)-\frac{1}{2}|\geq\tau,\\ \mathbf{1}\{\hat{\beta}_{P}^{T}x\geq 0\},&\quad\text{otherwise.}\end{cases}\] By setting \(\lambda_{Q}\), \(\lambda_{P}\), and \(\tau\) properly, the excess risk upper bound we obtain is given by the following theorem. **Theorem 6**.: Assume that \(L\leq m\|\beta_{Q}\|_{2}\leq\|\beta_{P}\|_{2}\leq U\) for some constants \(L,U>0\) and \(0<m\leq 1\). Suppose that for some constant \(K>0\), we have \(d^{K}\gtrsim\frac{n_{Q}\lor n_{P}}{s\log d},\ n_{Q}\gg\log\frac{n_{P}}{s\log d}\), and \(n_{Q}\wedge n_{P}\gg s\log d\). Let \(\hat{\beta}_{Q},\hat{\beta}_{P}\) be obtained in (23) with \[\lambda_{Q}=c_{Q}\sqrt{\frac{\log d}{n_{Q}}},\ \lambda_{P}=c_{P}\sqrt{\frac{\log d }{n_{P}}},\] for some constants \(c_{Q},c_{P}\geq\sqrt{(K+1)}\). The TAB lasso classifier with threshold \[\tau=c_{\tau}\sqrt{\frac{s\log d}{n_{Q}}}\log(n_{Q}\lor n_{P}),\] for some constant \(c_{\tau}>0\), satisfies \[\sup_{(Q,P)\in\Pi^{LR}}\mathbb{E}_{(\mathcal{D}_{Q},\mathcal{D}_{P})}\mathcal{E }_{Q}(\hat{f}^{LR}_{TAB})\lesssim\left(\frac{s\log d}{n_{P}}+\Delta^{2}\right) \wedge\left(\frac{s\log d}{n_{Q}}\log^{2}(n_{Q}\lor n_{P})\right). 
\tag{24}\] The term \(\frac{s\log d}{n_{Q}}\) is the classical risk term with access to only the target data, and \(\frac{s\log d}{n_{P}}\) is the risk term transferred by the source data with an additional term \(\Delta^{2}\) measuring the angle discrepancy between \(\beta_{Q}\) and \(\beta_{P}\). We see that knowledge from the source data can significantly improve the learning performance when \(n_{P}\) is large and \(\Delta\) is small, namely, \[n_{P}\gg n_{Q},\quad\Delta\ll\sqrt{\frac{s\log d}{n_{Q}}}.\] It is worth noting that the small angle condition between \(\beta_{Q}\) and \(\beta_{P}\) considered in this paper is a more general assumption than the contrast assumption in Tian and Feng (2022), which requires that \(\beta_{Q}\) is sparse and the \(l_{q}\)-norm of the difference between \(\beta_{Q}\) and \(\beta_{P}\) is small for some \(q\in[0,1]\). In contrast, our result is applicable to a broader class of parameter spaces, allowing \(\beta_{P}\) not only to be non-sparse, but also to differ significantly in norm from \(\beta_{Q}\). We choose this \(\tau\) to ensure that, with high probability, we have \(\|\hat{\beta}_{Q}-\beta_{Q}\|_{2}\lesssim\sqrt{s}\lambda_{Q}\). This, in turn, implies \[\left|\sigma(\hat{\beta}_{Q}^{T}x)-\sigma(\beta_{Q}^{T}x)\right|\lesssim\sqrt{s}\lambda_{Q}\] with high probability with respect to \(Q_{X}=N(0,I_{d})\). The proof of Theorem 6 then comes from verifying that \(\sup_{(Q,P)\in\Pi^{LR}}\mathbb{E}_{\mathcal{D}_{P}}\xi(\mathbf{1}\{\hat{\beta}_{P}^{T}x\geq 0\})\asymp\frac{s\log d}{n_{P}}+\Delta^{2}\) in Theorem 2 and \(\varepsilon(z)\asymp z^{2}\wedge\Delta^{2}\) in Lemma D.1. This indicates that even when \(\beta_{P}\) is non-sparse, we can still obtain a reliable estimate of \(\beta_{P}\) by incorporating a lasso regularizer, if in addition \(\Delta\) is sufficiently small (see Lemma D.3). Theorem 7 below shows that the upper bound (24) in Theorem 6 is optimal up to logarithmic factors of \(n_{Q}\lor n_{P}\). **Theorem 7**.: Suppose that \(\frac{s\log d}{n_{Q}\lor n_{P}}\lesssim 1\). We have that \[\inf_{\hat{f}}\sup_{(Q,P)\in\Pi^{LR}}\mathbb{E}\mathcal{E}_{Q}(\hat{f})\gtrsim\left(\frac{s\log d}{n_{P}}+\Delta^{2}\right)\wedge\frac{s\log d}{n_{Q}}.\] The derivation of the lower bound involves two terms. The term \(\frac{s\log d}{n_{P}}\wedge\frac{s\log d}{n_{Q}}\) represents the optimal convergence rate when the target and source distributions are identical, i.e., \(\beta_{P}=\beta_{Q}\). The term \(\Delta^{2}\wedge\frac{s\log d}{n_{Q}}\) corresponds to the choice \(\beta_{P}=(1,0,0,\ldots,0)\) in the lower bound construction, which imposes a sparsity constraint on \(\beta_{Q}\) within a small cone. It is worth noting that traditional lower bounds, typically derived from Fano's lemma, only consider minimax rates with respect to a distance metric. To show our lower bound on the excess risk \(\mathbb{E}\mathcal{E}_{Q}(\hat{f})\), we introduce a novel transformation that relates the excess risk to the angle difference of the linear coefficients. By applying Fano's lemma to this transformed quantity, we obtain the desired lower bound. We refer the reader to Lemma E.4 for a detailed explanation.

## 6 Simulation Studies

As mentioned earlier, the TAB classifier offers benefits in scenarios where \(n_{P}\) is large and the ambiguity level is small, and prevents negative transfer when the source data lacks sufficient information to aid the classification.
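Before turning to the specific settings, the following sketch indicates how the TAB logistic lasso classifier of Section 5 might be put together in practice. It is a hedged illustration only: scikit-learn's \(\ell_{1}\)-penalised logistic regression is used as a stand-in for the penalised likelihood problems in (23) (with the approximate correspondence \(C\approx 1/(n\lambda)\)), and all function names, solver choices and parameter values are our own assumptions rather than the code used to produce the figures below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_l1_logistic(X, y, lam):
    # Stand-in for the lasso-penalised likelihood in (23); the model (20) has no intercept,
    # hence fit_intercept=False. C ~ 1/(n*lambda) maps lambda onto scikit-learn's scale.
    model = LogisticRegression(penalty="l1", solver="liblinear",
                               C=1.0 / (len(y) * lam), fit_intercept=False)
    return model.fit(X, y)

def tab_logistic_predict(model_q, model_p, X, tau):
    """TAB rule: use the target model where |sigma(beta_Q_hat^T x) - 1/2| >= tau, else the source model."""
    eta_q = model_q.predict_proba(X)[:, 1]          # estimated sigma(beta_Q_hat^T x)
    confident = np.abs(eta_q - 0.5) >= tau
    return np.where(confident, (eta_q >= 0.5).astype(int), model_p.predict(X))
```

In an experiment, `lam` would typically be chosen by cross-validation, as in the logistic regression setting described below.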
In this section, we present simulation studies to demonstrate the practical benefits of transfer learning and the TAB classifier. We separately consider the non-parametric classification and logistic regression settings. ### Non-parametric Classification Setting The setting we considered is as follows: \(d=2,\ \Omega=\Omega_{P}=[0,1]^{2},\ Q_{X}=P_{X}=\text{Uniform}([0,1]^{2})\), the uniform distribution over the square. For the target regression function, let \(\eta^{Q}(x)=\eta^{Q}(x_{1},x_{2})=\frac{1}{2}+\frac{1}{10}\sin(2\pi(x_{1}+x_{ 2}))\). We have \(\alpha=\beta=1\) for some constants \(C_{\alpha}\) and \(C_{\beta}\). Next, we consider two different non-parametric regression scenarios, by specifying the source regression function \(\eta^{P}\) with different types of ambiguity. 1. **Band-like Ambiguity:** For \(\Delta\in\{0,0.1,0.2,0.3,0.4,0.5,0.6\},\ \gamma\in\{0.5,1\}\): \[\eta^{P}(x)=\begin{cases}\frac{1}{2}+2\Big{(}\eta^{Q}(x)-\frac{1}{2}\Big{)}^{ \gamma}-\Delta,&\text{ if }\eta^{Q}(x)\geq\frac{1}{2},\\ \frac{1}{2}-2\Big{(}\frac{1}{2}-\eta^{Q}(x)\Big{)}^{\gamma}+\Delta,&\text{ if }\eta^{Q}(x)<\frac{1}{2}.\end{cases}\] Here, \(\eta^{P}\) concentrates around an informative curve with respect to \(\eta^{Q}\) with some ambiguity \(\Delta\). In this case, we have \(s(x)\geq|\eta^{Q}(x)-1/2|^{\gamma}-\Delta\). 2. **Partially Flipped Sine Functions:** For \(r\in\{0,0.05,0.1,0.15,0.2,0.25,0.3,0.35\},\ \gamma\in\{0.5,1\}\): \[\eta^{P}(x)=\eta^{P}(x_{1},x_{2})=\begin{cases}\frac{1}{2}-\frac{1}{5}\sin \Big{(}2\pi\frac{\lfloor x_{1}+x_{2}\rfloor}{r}\Big{)}^{\gamma},&\text{ if }\{2x_{1}+2x_{2}\}\in[0,r],\\ \frac{1}{2}+\frac{1}{5}\sin\Big{(}2\pi\frac{\lfloor x_{1}+x_{2}\rfloor-r}{1-r} \Big{)}^{\gamma},&\text{ if }\{2x_{1}+2x_{2}\}\in(r,1],\end{cases}\] where \(\{a\}=a-\lfloor a\rfloor\) represents the fractional part of a real value \(a\). The following graph illustrates our setup of \(\eta^{P}\). While keeping \(\eta^{P}\) continuous with \(\beta_{P}=1\), the ratio parameter \(r\) creates an area where the Bayes classifier differs from the target distribution. The positive classification regimes are identical when \(r=0\) and completely opposite when \(r=1\). See Figure 1 for a visualization. In each scenario, we set \(n_{Q}=200\) and \(n_{P}=1000\), as a large \(n_{P}\) is necessary to observe the benefits of transfer learning. Previous studies Cai and Wei (2021); Reeve et al. (2021) have shown that accuracy improves with increasing \(n_{P}\). We also generate \(50000\) independent test pairs from \(Q\). For the TAB \(K\)-NN classifier, we choose \(k_{Q}=\lfloor n_{Q}^{\frac{2\beta}{2\beta+d}}\rfloor=31,\ k_{P}=\lfloor n_{P} ^{\frac{2\beta}{2\beta+d}}\rfloor\), and \(\tau=0.05\). As mentioned before, \(\beta=1\). We choose the simple \(K\)-NN classifiers on \(Q-\)data and \(P\)-data as two benchmarks for comparison. Moreover, we add the \(K\)-NN classifier on the pooled data, combining both the target and source with the nearest-neighbor parameter \(k\) chosen by \(5\)-fold cross-validation. We also consider the weighted \(K\)-NN classifiers proposed in Cai and Wei (2021) with their indicated optimal weighting scheme \(w_{Q}+w_{P}=1,\ w_{P}/w_{Q}=(n_{Q}+n_{P}^{\frac{2\beta+d}{2\beta+d}})^{\frac{( \gamma-1)\beta}{2\beta+d}}\). Figure 2 shows that our TAB classifier is accurate when \(\Delta\) is small, as does \(K\)-NN on \(P\)-data. 
Furthermore, our TAB classifier outperforms \(K\)-NN on \(P\)-data, \(K\)-NN on pooled data, and the weighted \(K\)-NN by a significant margin for large \(\Delta\), demonstrating its ability to avoid negative transfer when the source data is unreliable. The pooling and weighting algorithm benchmarks improve the classification when \(\Delta\) is small; however, they tend to break down when \(\Delta\) is large. Additionally, Figure 3 shows that our TAB classifier improves the accuracy when the ambiguity level, depicted by \(r\), is small. In addition, our TAB classifier significantly outperforms the other three benchmarks when the ambiguity level is too large to benefit from transfer learning, conserving the classification ability using only \(Q\)-data. Figure 1: Illustration of \(\eta^{P}\) in the second simulation setup with \(\gamma=1\) and \(r=0.4\). Figure 2: Accuracy of the TAB \(K\)-NN classifiers under the band-like ambiguity scenario. We experiment with different values of \(\Delta\) for a given \(\gamma=0.5\) and \(1\). Blue: TAB \(K\)-NN classifier; Red: \(K\)-NN classifier on only \(Q\)-data; Green: \(K\)-NN classifier on only \(P\)-data; Brown: \(K\)-NN classifier on pooled data. ### Logistic Regression Setting We next consider the scenario where both \(\eta^{Q}\) and \(\eta^{P}\) follow the logistic models (20), where the parameters are estimated by (23). We set \(n_{Q}=200,\ n_{P}=500,\ Q_{X}=P_{X}=N(0,I_{d})\), and simulate 50000 test data points. The linear coefficient is given by \[\beta_{Q}=(0.5\cdot\mathbf{1}_{s},\mathbf{0}_{d-s}),\quad\beta_{P}=(1.5\cdot \mathbf{1}_{s},\frac{\left\|\beta_{Q}\right\|}{\sqrt{d-s}}\tan\Delta\cdot \mathbf{1}_{d-s}),\] where \(\mathbf{1}_{s}\) and \(\mathbf{0}_{s}\), respectively, denotes a vector of all 1s and a vector of all 0s with size \(s\). Here, we set \(s=10\), which is small compared to \(d\). For the source distribution, \(\beta_{P}\) could be treated as a rotated version of \(3\beta_{Q}\), with an angle of exactly \(\Delta\) between them. The range of \(\Delta\) is chosen in the set \(\{0,0.2,0.4,0.6,0.8,1,1.2,1.4,1.6\}\), which gradually approaches \(\pi/2\). We simulate 50000 test data points for each value of \(\Delta\). The lasso regularization parameter is selected by 5-fold cross-validation and chosen to be the largest \(\lambda\) at which the MSE is within one standard error of the minimum MSE, namely, \(\lambda_{1se}\). In addition to our proposed TAB classifier, we compare three benchmarks for performance: logistic regression with lasso penalty on \(Q\)-data, \(P\)-data, and pooled data. Figure 4 demonstrates that our TAB classifier achieves high accuracy when the angle between \(\beta_{Q}\) and \(\beta_{P}\) is small. Interestingly, as evidence of its robustness, our classifier retains some classification ability with the target data even when the angle is large, while the benchmark classifiers based on \(P\)-data and pooled data suffer from negative transfer. ## 7 Conclusion In this paper, we have proposed a new approach to transfer learning that is robust against an unreliable source distribution with arbitrary ambiguity in the source data. Our work Figure 3: Accuracy of the TAB \(K\)-NN classifiers under the scenario with partially flipped sine functions. We experiment with different values of the ratio parameter \(r\) for a given \(\gamma=0.5\) and \(1\). Blue: TAB \(K\)-NN classifier; Red: \(K\)-NN classifier on only \(Q\)-data; Green: \(K\)-NN classifier on only \(P\)-data; Brown: \(K\)-NN classifer on pooled data. 
uses a different way of transferring the source data and, in particular, encompasses the nonparametric setting in Cai and Wei (2021) and Reeve et al. (2021) and parametric setting in Li et al. (2022). By introducing the ambiguity level, our approach enables us to understand the circumstances under which we can improve classification performance, given source data with potential ambiguity. Our proposed TAB classifier, with a threshold \(\tau\) that balances the performance of both the target and source data, is shown to be both _efficient_ and _robust_, as the excess risk improves for a reliable source distribution and avoids negative transfer with an unreliable source distribution. Furthermore, we provide simple approaches to bound the signal transfer risk, a key component of the excess risk in our general convergence result, in terms of the conventional excess risk extensively studied in the literature of statistical learning. We then demonstrate the power of our approach on specific classification tasks, with a focus on non-parametric classification and logistic regression settings. The upper bounds are shown to be optimal up to some logarithmic factors and are more general than previous work on transfer learning. Simulation studies provide numerical evidence for these two classification tasks. There are several promising avenues for future research that may build on the contributions of this paper. One potential direction is to consider an extension of the signal strength and ambiguity level that incorporates a translation parameter, i.e., \[s_{\kappa}(x):=\begin{cases}|\eta^{P}(x)-\frac{1}{2}-\kappa|,&\text{sgn} \left(\eta^{Q}(x)-\frac{1}{2}-\kappa\right)\times(\eta^{P}(x)-\frac{1}{2}) \geq 0,\\ 0,&\text{otherwise}.\end{cases}\] \[\mathbb{E}_{(X,Y)\sim Q}\left[\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}\mathbf{1 }\left\{s_{\kappa}(X)\leq C_{\gamma}\Big{|}\eta^{Q}(X)-\frac{1}{2}\Big{|}^{ \gamma}\leq C_{\gamma}z^{\gamma}\right\}\right]\leq\varepsilon_{\kappa}(z; \gamma,C_{\gamma}).\] This extension is natural since the decision boundary \(\{x\in\Omega:\eta^{Q}(x)=\frac{1}{2}\}\) may be similar Figure 4: Accuracy of the TAB logistic classifier with lasso penalty. We conduct experiments with difference choices of the angle \(\Delta\in[0,\pi/2]\). Blue: TAB logistic classifiers with lasso penalty; Red: Logistic classifier with lasso penalty on only \(Q\)-data; Green: Logistic classifier with lasso penalty on only \(P\)-data; Brown: Logistic classifier with lasso penalty on pooled data. to \(\{x\in\Omega:\eta^{P}(x)=\frac{1}{2}+\kappa\}\). We conjecture that an additional estimation error of \(\left(\frac{\log n_{Q}}{n_{Q}}\right)^{\frac{1+\alpha}{2+\alpha}}\) may be incurred due to the presence of an unknown \(\kappa\), and an empirical risk minimization procedure after obtaining \(\hat{\eta}^{P}\) such as \[\hat{\kappa}=\operatorname*{arg\,min}_{\kappa\in[-\frac{1}{2},\frac{1}{2}]} \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\left\{\mathbf{1}\Big{\{}\hat{\eta}^{P}(X_{ i})\geq\frac{1}{2}+\kappa\Big{\}}\neq Y_{i}\right\}\] may be required to obtain a corrected version of \(\hat{f}^{P}\) over the target data. Another direction is to develop an adaptive and generic procedure for selecting the threshold \(\tau\). There are three possible directions. Firstly, it is an open question whether simple Empirical Risk Minimization (ERM) methods for choosing \(\tau\) still keeps the upper bound optimal. 
Secondly, Lepski's method (Lepski, 1993) may offer a solution for maintaining optimal rates with an adaptive choice of \(\tau\). Thirdly, we conjecture that choosing \(\tau\asymp\log(n_{Q}\lor n_{P})k_{Q}^{-\frac{1}{2}}\), where \(\hat{\eta}^{Q}\) is an \(M\)-estimator consisting of a simple average of \(k_{Q}\) terms, is helpful for obtaining an optimal rate. As a final future direction, the conditions presented in the non-parametric and logistic model settings could be relaxed to some extent, for instance, by considering non-compact feature spaces with sub-Gaussian conditions or other marginal distribution assumptions (e.g., Assumption A4 in Gadat et al. (2016)). Our proposed TAB classifier is expected to perform well in these settings without a significant modification of the framework, and may offer advantages over other case-specific estimators in the transfer learning literature.
2310.03169
Metadynamics calculations of the effect of thermal spin fluctuations on skyrmion stability
The stability of magnetic skyrmions has been investigated in the past, but mostly in the absence of thermal fluctuations. However, thermal spin fluctuations modify the magnetic properties (exchange stiffness, Dzyaloshinskii-Moriya interaction (DMI) and anisotropy) that define skyrmion stability. Thermal magnons also excite internal skyrmion dynamics, deforming the skyrmion shape. Entropy has also been shown to modify skyrmion lifetimes in experiments, but is absent or approximated in previous studies. Here we use metadynamics to calculate the free energy surface of a magnetic thin film in terms of the topological charge and magnetization. We identify the free energy minima corresponding to different spin textures and the lowest energy paths between the ferromagnetic and single skyrmion states. We show that at low temperatures the lowest free energy barrier is a skyrmion collapse process. However, this energy barrier increases with temperature. An alternative path, where a singularity forms on the skyrmion edge, has a larger free energy barrier at low temperatures, but this barrier decreases with increasing temperature and eventually becomes the lowest energy barrier.
Ioannis Charalampidis, Joseph Barker
2023-10-04T21:27:09Z
http://arxiv.org/abs/2310.03169v1
# Metadynamics calculations of the effect of thermal spin fluctuations on skyrmion stability ###### Abstract The stability of magnetic skyrmions has been investigated in the past, but mostly in the absence of thermal fluctuations. However, thermal spin fluctuations modify the magnetic properties (exchange stiffness, Dzyaloshinskii-Moriya interaction (DMI) and anisotropy) that define skyrmion stability. Thermal magnons also excite internal skyrmion dynamics, deforming the skyrmion shape. Entropy has also been shown to modify skyrmion lifetimes in experiments, but is absent or approximated in previous studies. Here we use metadynamics to calculate the free energy surface of a magnetic thin film in terms of the topological charge and magnetization. We identify the free energy minima corresponding to different spin textures and the lowest energy paths between the ferromagnetic and single skyrmion states. We show that at low temperatures the lowest free energy barrier is a skyrmion collapse process. However, this energy barrier increases with temperature. An alternative path, where a singularity forms on the skyrmion edge, has a larger free energy barrier at low temperatures but decreases with increasing temperature and eventually becomes the lowest energy barrier. Magnetic skyrmions are particle-like spin textures that may be used to store or transport information [1; 2; 3]. Skyrmions are said to have 'topological protection', meaning that there is no smooth way to unwind the spin texture back to a uniform state [4]. Creating or destroying a skyrmion therefore requires a high-energy process to cause a sharp break in the texture, working against the strong exchange interaction. Skyrmions can therefore be created, for example, by applying energy through heating from electric currents [5] or lasers [6]. Geometric defects such as notches can reduce the energy cost of creating them [7]. Understanding the energy required to create or destroy a skyrmion is important for both fundamental science and practical applications. So far, calculations of energy barriers for skyrmion creation and annihilation have focused on the nudged elastic band method [8; 9]. This method relaxes a series of states (spin configurations) to a saddle point to identify the energy barrier between the initial and final states. To find higher-energy paths, a different choice of the set of states is needed, but convergence to a specific path is not guaranteed. The transition path is also described in terms of an abstract coordinate, which has no physical interpretation, limiting the understanding of the physical process. Most importantly, with respect to our work, nudged elastic band calculations do not account for temperature and, therefore, do not include the role of entropy or the effect of thermal spin fluctuations. Thermal spin fluctuations change magnetic properties such as exchange stiffness, DMI, and anisotropy, all of which determine the properties of skyrmions and even whether skyrmions can exist. Thermal magnons also excite the internal modes of the skyrmion [10; 11; 12; 13]. Internal modes may have an impact on the stability and lifetime of the skyrmions. Theoretical and experimental studies have also shown that entropy plays an important role in skyrmion stability and has a significant impact on skyrmion lifetimes [14; 15]. But nudged elastic band calculations provide only the zero temperature internal energy, \(U(0)\), not the free energy \(F(T)=U(T)-TS\). 
The entropy contribution has been included using Langer's theory [15] but this also requires several approximations and still ignores the renormalization of magnetic parameters. In this Letter, we use metadynamics, a general numerical method for calculating free energies most commonly used within molecular dynamics [16; 17]. It has recently been used for spin models to study the temperature dependence of magnetic anisotropy energy and the associated spin reorientation transitions [18; 19; 20] and within the micromagnetism formalism it has been used to sample the free energy landscape of a vortex in a nanodot, although without studying any thermal effects [21]. Here, we apply metadynamics with atomistic spin modeling to calculate the free energy landscape of topological spin textures in a 2D lattice. This enables us to calculate the free energy barriers between the ferromagnetic ground state and topological spin textures such as skyrmions. It also gives a complete picture of the energy landscape including alternative, higher-energy paths, as well as identifying other metastable and even unstable spin textures. At low temperatures, we find that the minimum energy path for skyrmion annihilation (creation) is through the skyrmion 'collapse' process, where the skyrmion shrinks (expands) about a single point [9]. The path with the second-lowest energy barrier corresponds to skyrmion annihilation (creation) through a Bloch singularity on the edge of the skyrmion (domain). Although both of these paths have been identified previously, the energy barrier for the collapse process was always significantly lower than that of the singularity. In this work, we find the two barriers have an opposite temperature dependence, with the energy barrier for the collapse process increasing and the energy barrier for the singularity process decreasing as the temperature increases. There is a crossover temperature at which the singularity path becomes the minimum energy path, 36 K in for our model parameters. We postulate that the configurational entropy plays an important role. Our results demonstrate both how metadynamics can be applied to study topological spin textures and the important role spin fluctuations have on skyrmion stability. _Method_ - We model a hexagonal lattice with crystal space group P6/mmm (191) using basis vectors \(\mathbf{a}_{1}=(1,0,0)\), \(\mathbf{a}_{2}=(1/2,\sqrt{3}/2,0)\), \(\mathbf{a}_{3}=(0,0,1)\) in a \(65\times 65\times 1\) supercell. Periodic boundaries are applied in the \((\mathbf{a}_{1},\mathbf{a}_{2})\) plane, and open boundaries along \(\mathbf{a}_{3}\), making a 2D system. The Hamil tonian is \[\mathcal{H}=-\frac{1}{2}\sum_{\langle ij\rangle}J_{ij}\mathbf{S}_{i}\cdot\mathbf{S }_{j}-\frac{1}{2}\sum_{\langle ij\rangle}\mathbf{D}_{ij}\cdot(\mathbf{S}_{i} \times\mathbf{S}_{j})-\sum_{i}k_{z}S_{z,i}^{2}, \tag{1}\] where \(\langle ij\rangle\) indicates that the sum is taken only over the six nearest neighbors. \(\mathbf{S}_{i}\) are classical spin vectors of length \(|\mathbf{S}_{i}|=1\), \(J_{ij}=13.0\;\mathrm{meV}\) are the nearest-neighbor exchange interaction energies, \(\mathbf{D}_{ij}=D_{ij}(\hat{\mathbf{z}}\times\hat{\mathbf{r}}_{ij})\) are the nearest-neighbor DMI interaction vectors where \(\hat{\mathbf{z}}\) is a unit vector in the \(z\) (out-of-plane) direction, \(\hat{\mathbf{r}}_{ij}\) is the unit vector pointing from site \(i\) to site \(j\) and \(D_{ij}=0.811\;\mathrm{meV}\). \(k_{z}=0.09\;\mathrm{meV}\) is a uniaxial anisotropy energy. 
The factors of \(1/2\) account for the double counting in the sums. This is a typical Hamiltonian, where for certain combinations of exchange, uniaxial anisotropy and DMI, isolated skyrmions are metastable states even without an applied field. We simulate the spin model at finite temperature using the Metropolis Monte Carlo method [22]. Our Monte Carlo trial moves are small angle changes of a spin from its initial direction: \(\mathbf{S}_{i}^{\mathrm{trial}}=\mathbf{S}_{i}^{\mathrm{initial}}+\varphi \mathbf{\Omega}\) where \(\mathbf{\Omega}\) is a unit vector uniformly random on the 2-sphere and \(\varphi=0.3\). Metadynamics adds a fictitious potential \(V\) to the energy of the system, forcing it away from states that have already been sampled, thus exploring new states [23]. This enables the system to escape energy minima even over high barriers. The probability of accepting a new state \(s_{b}\) from a state \(s_{a}\) differs from the plain Metropolis probability in that it now contains the bias potential \[P(s_{a}\to s_{b})=\min\left[1,\exp\!\left(-\frac{\Delta E_{ab}+\Delta V_{ab}}{ k_{\mathrm{B}}T}\right)\right] \tag{2}\] where \(\Delta E_{ab}=E(s_{b})-E(s_{a})\) is the difference in total energy calculated from the Hamiltonian and \(\Delta V_{ab}=V(s_{b})-V(s_{a})\) is the difference in the metadynamics potential for states \(s_{b}\) and \(s_{a}\). Free energy surfaces are generally described with macroscopic coordinates of a system. In metadynamics these are called _collective variables_ (CV) and represent macroscopic quantities calculated from microscopic degrees of freedom, the spin vectors in our case. For this study our CVs are the topological charge \[Q=\frac{1}{4\pi}\int\mathrm{d}^{2}x\;\mathbf{S}\cdot\left(\frac{\partial \mathbf{S}}{\partial x}\times\frac{\partial\mathbf{S}}{\partial y}\right), \tag{3}\] and the \(z\)-component of the reduced magnetization \[m_{z}=\frac{1}{N}\sum_{i}^{N}S_{i}^{z}. \tag{4}\] where \(N\) is the number of spins in the system. To use the topological charge as a CV to sample different spin textures, we considered several key points: 1) CVs must be continuous variables so that the transitions between states can be sampled smoothly. The skyrmion topological charge should strictly be a discrete integer. However, the calculation of \(Q\) as defined in Eq. (3) on a discrete lattice and with spin fluctuations breaks the assumptions of a smooth continuum system and numerical calculations of \(Q\) therefore do produce non-integer values [24; 25]. This allows us to use \(Q\) as a CV. Other strategies for making topological charges continuous have been used in lattice quantum chromodynamics calculations with metadynamics [26; 27]. 2) The topological charge alone is not suitable for sampling the free energy landscape because multiple spin textures with the same charge, but different energy, cannot be distinguished. For example, with \(Q=0\) the system can be single domain or multi-domain and these have different energies. This means that using \(Q\) alone as a CV would give an incorrect projection of the energy landscape [28]. Hence, we use \(m_{z}\) as the second CV to distinguish between different states with the same \(Q\). 3) The variable \(m_{z}\in[-1,1]\) is bounded, giving definite limits to the state space which can be explored in the simulation. \(Q\), however, is unbounded, meaning that metadynamics can explore ever larger values of \(Q\), generating more and more topological charge within our finite lattice. 
To avoid the system spending a long time exploring high-\(Q\) states with many skyrmions we implemented boundary conditions on the metadynamics potential which behave like a harmonic spring, smoothly pushing the system back towards a defined region of interest (see Supplementary Information for more details). The metadynamics potential \(V(Q,m_{z})\) is built by adding a Gaussian every full Monte Carlo sweep, \(\tau\) (defined as \(N\) trial moves). The potential at a given time \(t\) is then \[\begin{split}& V(Q,m_{z},t)=\omega_{0}\sum_{\begin{subarray}{c}t^{ \prime}=\tau,2\tau,\cdots\\ t^{\prime}<t\end{subarray}}\exp\!\left(-\frac{V(Q,m_{z},t^{\prime})}{k_{ \mathrm{B}}\Delta T}\right)\\ &\times\!\exp\!\left(-\frac{(Q(t^{\prime})-Q(t))^{2}}{2\sigma_{Q} ^{2}}-\frac{(m_{z}(t^{\prime})-m_{z}(t))^{2}}{2\sigma_{m_{z}}^{2}}\right) \end{split} \tag{5}\] where \(\omega_{0}=0.1\) meV is the initial Gaussian amplitude in units of energy and \(\sigma_{Q}=0.1\) and \(\sigma_{m_{z}}=0.05\) are widths in the dimensions of \(Q\) and \(m_{z}\). The exponential decay in (5) is because we have implemented _well tempered_ metadynamics [29] which systematically reduces the Gaussian amplitude to ensure convergence. The tempering bias temperature is \(\Delta T=2500\) K. The free energy is calculated from the metadynamics potential as \[F(Q,m_{z})=-\frac{T+\Delta T}{\Delta T}V(Q,m_{z}). \tag{6}\] In a large system, the energy to create and destroy two skyrmions separated by a large distance should be almost independent. However, the size of our simulations is quite restricted (due to computational costs) and the creation energy of a second skyrmion will be influenced by the existence of the first. We have attempted to make the simulation large enough to avoid a significant self-interaction of one skyrmion with itself across the periodic boundaries, but once two skyrmions exist in the system, the energetics of that state and the energy to create or destroy the second skyrmion are likely to be affected. Inserting further skyrmions will further increase the interaction energies. Therefore, we use the harmonic spring boundary conditions to restrict the system to \(Q\in[-2.5,2.5]\). In principle calculations could be performed in open systems where the edges provide another vector for skyrmion creation and annihilation, but we have not considered this here. We solved \(1.9\times 10^{8}\) Monte Carlo steps and ran 10 independent simulations for each temperature, averaging the results to produce the energy surfaces. The standard error at all points on the calculated free energy surface is \(<40\) meV. _Results_ - Fig. 1a shows the free energy landscape calculated at \(T=20\), \(36\) and \(50\) K. The Curie temperature is \(T_{C}=136\) K. We mark local energy minima on the figure with white circles. Sample spin configurations from the simulation at the marked points are shown in Fig. 2. The top and bottom edges of the free energy surface are dark red due to the large free energy cost of states with a magnetization larger than the thermal equilibrium value; Increasing the magnetization against the thermal spin fluctuations requires work against entropy. At \(Q=0\) there are three energy minima: \(\mathrm{FM}_{+}\) and \(\mathrm{FM}_{-}\) are global minima corresponding to the uniform ferromagnetic states with magnetization aligned along \(+z\) and \(-z\) respectively. 
The point marked \(\mathrm{DW}\) at \((Q=0,m_{z}=0)\) is a two-domain state with half of the magnetization aligned along \(+z\) and half along \(-z\) with domain walls between. The DMI in the Hamiltonian means that the magnetization does not reverse from \(\mathrm{FM}_{+}\) to \(\mathrm{FM}_{-}\) by a coherent rotation. Instead, domain-wall nucleation and propagation is energetically cheaper. Removing the DMI from the Hamiltonian produces free energy surfaces without the \(\mathrm{DW}\) state, showing the expected coherent reversal. This is a significant benefit of metadynamics over the nudged elastic band method in that it is free of assumptions about the initial, final, or transition states. This may be useful in the study of more complex topological magnetic textures such as hopfions [30]. At \(Q=\pm 1\) the labeled states \(\mathrm{Sk}_{1\pm}\) are single skyrmions with a core in the \(\pm z\) direction. We note the whole energy surface has a \(\pi\) rotational symmetry; changing the ferromagnetic background from \(m_{z}\rightarrow-m_{z}\) the sign of \(Q\) for which skyrmions are stable swaps. The energy surface is quite shallow between \(\pm 0.25\lesssim m_{z}\lesssim\pm 0.7\), showing that the skyrmions are quite soft; there is little energy cost to expand or contract the size of the skyrmion around the equilibrium value. At \(Q=\pm 2\) we find minima \(\mathrm{Sk}_{2\pm}\) corresponding to states with two skyrmions. In the opposite topological sector from the skyrmions we find anti-skyrmions \(\mathrm{ASk}_{1\pm}\) and \(\mathrm{ASk}_{2\pm}\), marked with crosses. The DMI vectors in our Hamiltonian are incompatible with forming metastable anti-skyrmions; nevertheless, these states are explored by metadynamics as valid combinations of \(Q\) and \(m_{z}\). Although anti-skyrmions are formed, the free energy has no minima at these points. The calculation therefore explicitly shows that the anti-skyrmions are not stable or metastable in this system, and again demonstrates the ability of metadynamics to explore the full landscape without prior knowledge or assumption about Figure 1: Metadynamics calculated free energy surfaces at \(T=20\)K, \(36\)K and \(50\)K. \(\circ\) mark local energy minima \(\times\) mark unstable points where there is no minima but a non-trivial spin texture forms. Red dotted lines show the minimum energy path between FM and \(\mathrm{Sk}_{1}\) states. the state space. We now compare the free energy landscapes between the different temperatures in Figs. 1a,b,c. At the lowest temperature (\(T=20\) K), the minimum energy path between the \(\mathrm{FM}_{\pm}\) and \(\mathrm{Sk}_{1\mp}\) states is deep and narrow. With increasing temperature, all of the features of the free energy surface broaden. Minima become shallower and smeared across the \(Q\) axis, showing how thermal fluctuations cause fluctuations in the topological charge. The skyrmions become less like circular particles and become raged due to the spin fluctuations, taking us away from the idealized concept of a smooth texture that covers the sphere. The upper and lower edges of \(m_{z}=\pm 1\), in red, become thicker as the saturation magnetization decreases with temperature. The location of the energy minima move correspondingly towards smaller values of \(|m_{z}|\). We extracted the minimum energy paths from the free energy surface using the MEPSAnd software [31]. 
The minimum-energy paths between the \(\mathrm{FM}_{\pm}\) and \(\mathrm{Sk}_{1\mp}\) states are marked with red dotted lines on top of Figs. 1b and c. At temperatures \(T<37\) K, the lowest energy path to destroy a skyrmion is always through the 'collapse' process, where the skyrmion shrinks to a single spin which then flips. On the energy surface, this appears as an increase in \(m_{z}\) due to shrinkage, followed by a change in \(Q\) as the covering of the sphere reduces. Above \(T=36\) K the minimum energy path changes, and the skyrmion is destroyed through a'singularity' process. A Bloch point forms on the edge of the skyrmion, causing it to lose topological charge and become a trivial domain, which can then relax to the uniform magnetic state. In this case, we can see the path on the energy surface first as a change in the topological charge followed by an increase in magnetization. In Figs. 3a and b we plot the height of the free energy barrier for skyrmion creation and annihilation, for both the collapse and singularity paths, as a function of temperature. The energy barrier is asymmetric because the ferromagnetic state is lower in energy than the skyrmion state; Creating a skyrmion costs more energy than destroying a skyrmion. At low temperature (\(15\) K) the energy barrier to destroy a skyrmion via the singularity path is twice that of the collapse path, in agreement with the zero temperature results of the nudged elastic band method [9]. However, as the temperature increases, the singularity energy barrier decreases, whereas the collapse energy barrier increases. The temperature dependence is linear for both barriers, but the temperature dependence of the increase in the height of the collapse barrier is stronger. At a certain temperature, \(T=37\) K, there is a crossover, and the singularity path becomes the lowest energy path. The increasing energy barrier with temperature for the collapse process seems counter-intuitive because the anisotropy, exchange stiffness, and DMI decrease with increasing temperature due to spin fluctuations [32; 33]. Breaking the skyrmion texture would appear to be an increasingly easy when considering only the internal energy. Indeed, the energy barrier from \(\mathrm{FM}_{+}\) to \(DW\) decreases with increasing temperature. However, we must also consider the entropy bottleneck of the process, having to transition through a highly specific spin state Figure 2: Snapshots of example spin states at different points on the free energy surface as labeled in Fig. 1. Note that there are periodic boundaries in the plane. Figure 3: Free energy barrier for (a) skyrmion annihilation (\(\mathrm{Sk}_{1\mp}\to\mathrm{FM}_{\pm}\)) and (b) skyrmion creation (\(\mathrm{FM}_{\pm}\to\mathrm{Sk}_{1\mp}\)), along the collapse (blue open circles) and singularity (red filled circles) paths. Vertical lines show the standard error from calculating the energy barrier across 10 independent simulations. with a single spin flip. The thermal spin fluctations introduce entropy, which must be worked against to cause the shrinking and eventual collapse. This is similar to the high free energy cost of trying to force \(m_{z}\) to be larger than its thermodynamic equilibrium value. Our results therefore show that the entropy has a more significant effect on the collapse free energy barrier than the renormalization of the magnetic parameters. This supports the conclusions that have been drawn on the role of applied magnetic fields on skyrmion lifetimes [14]. 
The energy barrier to annihilate a skyrmion via the singularity process decreases with temperature. Here, entropy is less important because any single spin on the circumference of the skyrmion that suffers a large enough fluctuation can trigger the process. The larger the thermal spin fluctuations, the more likely this will occur. Therefore, the spin fluctuations and magnetic parameter renormalization combine to cause the energy barrier to decrease with increasing temperature. The change in the energy barriers to create a skyrmion (Fig. 3b) with temperature is different. For both paths, the barrier decreases with increasing temperature. The collapse path is almost flat, while the singularity path reduces significantly. They cross at the same temperature as the annihilation barriers. The thermal spin fluctuations decrease the energy needed to form Bloch points, providing some solid angle between neighboring spins either to start the expansion process or on the edge of a domain wall to follow the singularity path. The increasing entropy term \(TS\) is always helpful in this process. _Conclusions_ - Our results show that thermal spin fluctuations cannot be neglected in the calculation of energy barriers for skyrmion creation and annihilation. Spin fluctuations can, in fact, lead to changes in the minimum energy path, depending on the ambient temperature. At higher temperatures, the singularity process appears to be lower in energy, allowing the creation of skyrmions to form starting with topologically trivial bubble domains. This may explain why skyrmions can be generated using simple field cycling protocols [34]. The applied field will alter the landscape, pushing the system along the \(m_{z}\) axis, making it easier to fall into the skyrmion basins. The increase in the collapse energy barrier highlights that entropy plays an important role in skyrmion creation and annihilation. Overall, our work also demonstrates the usefulness of metadynamics in studying spin textures in magnetic materials at finite temperatures, with the method being easy to adapt to study many different systems through the careful choice of appropriate collective variables. ## Data access Data is available upon reasonable request. The Monte Carlo spin modeling software used to generate the data (JAMS) is not currently open source while intellectual property issues are being resolved, but can be made available to individual researchers upon request. ## Author contributions _Ioannis Charalampidis_: methodology, investigation, software, writing - review and editing. _Joseph Barker_: conceptualization, methodology, software, writing - original draft, funding acquisition. ## Acknowledgments The authors thank Thomas Nussle for useful discussions during this work. J.B. acknowledges support from the Royal Society through a University Research Fellowship. I.C. acknowledges support through an EPSRC Doctoral Training Partnership. Calculations were performed on ARC4, part of the High Performance Computing facilities at the University of Leeds. ## References * (1) G. Finocchio, F. Buttner, R. Tomasello, M. Carpentieri, and M. Klaui, Magnetic skyrmions: from fundamental to applications, J. Phys. D: Appl. Phys. **49**, 423001 (2016). * (2) A. Fert, N. Reyren, and V. Cros, Magnetic skyrmions: advances in physics and potential applications, Nat. Rev. Mater. **2**, 17031 (2017). * (3) A. N. Bogdanov and C. Panagopoulos, Physical foundations and basic properties of magnetic skyrmions, Nat. Rev. Phys. **2**, 492 (2020). * (4) N. 
Nagaosa and Y. Tokura, Topological properties and dynamics of magnetic skyrmions, Nat. Nanotechnol. **8**, 899 (2013). * (5) H. Y. Yuan and X. R. Wang, Skyrmion Creation and Manipulation by Nano-Second Current Pulses, Sci. Rep. **6**, 22638 (2016). * (6) K. Gerlinger, B. Pfau, F. Buttner, M. Schneider, L.-M. Kern, J. Fuchs, D. Engel, C. M. Gunther, M. Huang, I. Lemesh, L. Careta, A. Churikova, P. Hessing, C. Klose, C. Struber, C. v. K. Schmising, S. Huang, A. Wittmann, K. Litzius, D. Metternich, R. Battistelli, K. Bagschik, A. Sadovnikov, G. S. D. Beach, and S. Eisebitt, Application concepts for ultrafast laser-induced skyrmion creation and annihilation, Appl. Phys. Lett. **118**, (2021). * (7) J. Iwasaki, M. Mochizuki, and N. Nagaosa, Current-induced skyrmion dynamics in constricted geometries, Nat. Nanotechnol. **8**, 742 (2013). * (8) P. F. Bessarab, V. M. Uzdin, and H. Jonsson, Method for finding mechanism and activation energy of magnetic transitions, applied to skyrmion and antivortex annihilation, Comput. Phys. Commun. **196**, 335 (2015). * (9) D. Cortes-Ortuno, W. Wang, M. Beg, R. A. Pepper, M.-A. Bisotti, R. Carey, M. Yousden, T. Kluyver, O. Hovorka, and H. Fangohr, Thermal stability and topological protection of skyrmions in nanotracks, Sci. Rep. **7**, 4060 (2017). * (10) O. Petrova and O. Tchernyshyov, Spin waves in a skyrmion crystal, Phys. Rev. B **84**, 214433 (2011). * (11) M. Mochizuki, Spin-Wave Modes and Their Intense Excitation Effects in Skyrmion Crystals, Phys. Rev. Lett. **108**, 017601 (2012). * Lin _et al._ [2014]S.-Z. Lin, C. D. Batista, and A. Saxena, Internal modes of a skyrmion in the ferromagnetic state of chiral magnets, Phys. Rev. B **89**, 024415 (2014). * Kim _et al._ [2014]J.-V. Kim, F. Garcia-Sanchez, J. Sampaio, C. Moreau-Luchaire, V. Cros, and A. Fert, Breathing modes of confined skyrmions in ultrathin magnetic dots, Phys. Rev. B **90**, 064410 (2014). * Wild _et al._ [2017]J. Wild, T. N. G. Meier, S. Pollath, M. Kronseder, A. Bauer, A. Chacon, M. Halder, M. Schowalter, A. Rosenauer, J. Zweck, J. Muller, A. Rosch, C. Pfleiderer, and C. H. Back, Entropy-limited topological protection of skyrmions, Sci. Adv. **3**, (2017). * Desplat _et al._ [2018]L. Desplat, D. Suess, J.-V. Kim, and R. L. Stamps, Thermal stability of metastable magnetic skyrmions: Entropic narrowing and significance of internal eigenmodes, Phys. Rev. B **98**, 134407 (2018). * Laio and Parrinello [2002]A. Laio and M. Parrinello, Escaping free-energy minima, Proc. Natl. Acad. Sci. **99**, 12562 (2002). * Barducci _et al._ [2011]A. Barducci, M. Bonomi, and M. Parrinello, Metadynamics, WIREs Comput. Mol. Sci. **1**, 826 (2011). * Nagyfalusi _et al._ [2017]B. Nagyfalusi, L. Udvardi, and L. Szunyogh, First principles and metadynamics study of the spin-reorientation transition in Fe/Au(001) films, J. Phys.: Conf. Ser. **903**, 012016 (2017). * Nagyfalusi _et al._ [2019]B. Nagyfalusi, L. Udvardi, and L. Szunyogh, Metadynamics study of the temperature dependence of magnetic anisotropy and spin-reorientation transitions in ultrathin films, Phys. Rev. B **100**, 174429 (2019). * Nagyfalusi _et al._ [2020]B. Nagyfalusi, L. Udvardi, L. Szunyogh, and L. Rozsa, Spin reorientation transition in an ultrathin Fe film on W(110) induced by Dzyaloshinsky-Moriya interactions, Phys. Rev. B **102**, 134413 (2020). * Tobik _et al._ [2017]J. Tobik, R. Martonak, and V. Cambel, Free-energy landscapes in magnetic systems from metadynamics, Phys. Rev. B **96**, 140413 (2017). * Landau and Binder [2021]D. P. 
Landau and K. Binder, _A guide to Monte Carlo simulations in statistical physics_ (Cambridge University Press, 2021). * Laio and Gervasio [2008]A. Laio and F. L. Gervasio, Metadynamics: a method to simulate rare events and reconstruct the free energy in biophysics, chemistry and material science, Rep. Prog. Phys. **71**, 126601 (2008). * Kim and Mulkers [2020]J.-V. Kim and J. Mulkers, On quantifying the topological charge in micromagnetics using a lattice-based approach, IOP SciNotes **1**, 025211 (2020). * [25]Using the 'geometrical definition' of topological charge as a summation over plaquettes [35] always produces an integer and therefore cannot be used as a CV for metadynamics. * Laio _et al._ [2016]A. Laio, G. Martinelli, and F. Sanfilippo, Metadynamics surfing on topology barriers: the \(CP^{N-1}\) case, J. High Energy Phys. **2016**, 89. * Bonanno _et al._ [2019]C. Bonanno, C. Bonati, and M. D'Elia, Topological properties of \(CP^{N-1}\) models in the large-\(N\) limit, J. High Energy Phys. **2019**, 3. * Bussi and Laio [2020]G. Bussi and A. Laio, Using metadynamics to explore complex free-energy landscapes, Nat. Rev. Phys. **2**, 200 (2020). * Barducci _et al._ [2008]A. Barducci, G. Bussi, and M. Parrinello, Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method, Phys. Rev. Lett. **100**, 020603 (2008). * Rybakov _et al._ [2022]F. N. Rybakov, N. S. Kiselev, A. B. Borisov, L. Doring, C. Melcher, and S. Blugel, Magnetic hopfions in solids, APL Mater. **10**, (2022). * Marcos-Alcalde _et al._ [2020]I. Marcos-Alcalde, E. Lopez-Vinas, and P. Gomez-Puertas, MEPSAnd: minimum energy path surface analysis over \(n\)-dimensional surfaces, Bioinformatics **36**, 956 (2020); MEPSAnd v.1.6 (2022). * Rozsa _et al._ [2017]L. Rozsa, U. Atxitia, and U. Nowak, Temperature scaling of the Dzyaloshinsky-Moriya interaction in the spin wave spectrum, Phys. Rev. B **96**, 094436 (2017). * Tomasello _et al._ [2018]R. Tomasello, K. Y. Guslienko, M. Ricci, A. Giordano, J. Barker, M. Carpentieri, O. Chubykalo-Fesenko, and G. Finocchio, Origin of temperature and field dependence of magnetic skyrmion size in ultrathin nanodots, Phys. Rev. B **97**, 060402 (2018). * Zeissler _et al._ [2018]K. Zeissler, S. Finizio, K. Shahbazi, J. Massey, F. A. Ma Mari, D. M. Bracher, A. Kleibert, M. C. Rosamond, E. H. Linfield, T. A. Moore, J. Raabe, G. Burnell, and C. H. Marrows, Discrete Hall resistivity contribution from Neel skyrmions in multilayer nanodiscs, Nat. Nanotechnol. **13**, 1161 (2018). * Berg and Luscher [1981]B. Berg and M. Luscher, Definition and statistical distributions of a topological number in the lattice O(3) \(\sigma\)-model, Nucl. Phys. B **190**, 412 (1981). 
Supplementary Information: Metadynamics calculations of the effect of thermal spin fluctuations on skyrmion stability Ioannis Charalampidis [email protected] School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, United Kingdom Joseph Barker [email protected] School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, United Kingdom ## S1: Calculation of topological charge We use the definition of the topological charge \[Q=\frac{1}{4\pi}\int\mathrm{d}^{2}x\;\mathbf{S}\cdot\left(\frac{\partial\mathbf{S}}{\partial x}\times\frac{\partial\mathbf{S}}{\partial y}\right), \tag{1}\] which we calculate numerically on the lattice using finite differences for the gradients \[\begin{split}\frac{\partial\mathbf{S}(\mathbf{r}_{i})}{\partial x}&\approx\frac{\Delta\mathbf{S}(\mathbf{r}_{i})}{\Delta x}=\frac{1}{6}\left[2\mathbf{S}(\mathbf{r}_{i}+\mathbf{a})-2\mathbf{S}(\mathbf{r}_{i}-\mathbf{a})+\mathbf{S}(\mathbf{r}_{i}+\mathbf{a}+\mathbf{b})+\mathbf{S}(\mathbf{r}_{i}-\mathbf{b})-\mathbf{S}(\mathbf{r}_{i}-\mathbf{a}-\mathbf{b})-\mathbf{S}(\mathbf{r}_{i}+\mathbf{b})\right]\\ \frac{\partial\mathbf{S}(\mathbf{r}_{i})}{\partial y}&\approx\frac{\Delta\mathbf{S}(\mathbf{r}_{i})}{\Delta y}=\frac{\sqrt{3}}{6}\left[\mathbf{S}(\mathbf{r}_{i}+\mathbf{b})+\mathbf{S}(\mathbf{r}_{i}+\mathbf{a}+\mathbf{b})-\mathbf{S}(\mathbf{r}_{i}-\mathbf{a})-\mathbf{S}(\mathbf{r}_{i}-\mathbf{a}-\mathbf{b})\right]\end{split} \tag{2}\] The topological charge is then a sum over the finite differences \[Q=\frac{1}{4\pi}\sum_{i}\mathbf{S}(\mathbf{r}_{i})\cdot\left(\frac{\Delta\mathbf{S}(\mathbf{r}_{i})}{\Delta x}\times\frac{\Delta\mathbf{S}(\mathbf{r}_{i})}{\Delta y}\right). \tag{3}\] ## S2: Additional details of the metadynamics implementation For computational efficiency (avoiding many exponential function calls), we discretize our metadynamics potential on a grid with \(\Delta Q=0.05\) and \(\Delta m_{z}=0.01\). To find the value of the potential at an arbitrary point \(V(Q,m_{z})\) we use bilinear interpolation from the nearest grid points. ## S3: Harmonic spring boundary conditions To prevent the system from exploring ever higher values of \(Q\) we follow Refs. [1] and [2] and apply a harmonic spring potential outside of a region of interest. The references solve for forces in the system, but we can interpret the forces as spring-like and write the appropriate potential energy. When the collective variable \(Q\) is outside of this region, \(|Q|>Q_{\max}\), the potential used is \[V(Q,m_{z})=k(Q-Q_{\max})^{2}+V(Q_{\max},m_{z})\quad\mathrm{when}\quad|Q|>Q_{\max} \tag{4}\] where \(k\) is a spring constant with units of energy. \(V(Q_{\max},m_{z})\) is the value of the potential at the boundary which has built up due to the metadynamics Gaussians. It must be added to the spring potential outside of the region of interest to ensure that the total potential is smooth and continuous at the boundary. We stop adding new Gaussians to the landscape while \(|Q|>Q_{\max}\). This boundary condition causes the system to be smoothly pushed back into the region of interest. In this work, we used \(k=0.05\) meV and \(Q_{\max}=2.5\).
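As a companion to Eqs. (1)-(3), here is a minimal sketch of the lattice topological charge computed with the finite-difference stencils above. The array layout (spins stored on an n_a x n_b grid whose two axes follow the lattice vectors a and b, with periodic boundaries) is an assumption made for illustration.

```python
# Sketch of the lattice topological charge from Eqs. (1)-(3): finite
# differences along the a and b lattice directions, summed over sites.
import numpy as np

def topological_charge(S):
    """S has shape (n_a, n_b, 3); axes 0 and 1 follow lattice vectors a and b."""
    Spa  = np.roll(S, -1, axis=0)          # S(r + a)
    Sma  = np.roll(S,  1, axis=0)          # S(r - a)
    Spb  = np.roll(S, -1, axis=1)          # S(r + b)
    Smb  = np.roll(S,  1, axis=1)          # S(r - b)
    Spab = np.roll(Spa, -1, axis=1)        # S(r + a + b)
    Smab = np.roll(Sma,  1, axis=1)        # S(r - a - b)

    dS_dx = (2 * Spa - 2 * Sma + Spab + Smb - Smab - Spb) / 6.0
    dS_dy = np.sqrt(3) / 6.0 * (Spb + Spab - Sma - Smab)

    cross = np.cross(dS_dx, dS_dy)                         # per-site triple product factor
    return np.einsum("ijk,ijk->", S, cross) / (4 * np.pi)  # Eq. (3)

# sanity check: a uniform ferromagnet has Q ~ 0; a single skyrmion gives Q ~ +-1
S = np.zeros((32, 32, 3)); S[..., 2] = 1.0
print(topological_charge(S))
```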
2308.01534
Simultaneously Approximating All $\ell_p$-norms in Correlation Clustering
This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a single clustering solution that is simultaneously $O(1)$-approximate for all $\ell_p$-norms of the disagreement vector; in other words, a combinatorial $O(1)$-approximation of the all-norms objective for correlation clustering. This is the first proof that minimal sacrifice is needed in order to optimize different norms of the disagreement vector. In addition, our algorithm is the first combinatorial approximation algorithm for the $\ell_2$-norm objective, and more generally the first combinatorial algorithm for the $\ell_p$-norm objective when $1 < p < \infty$. It is also faster than all previous algorithms that minimize the $\ell_p$-norm of the disagreement vector, with run-time $O(n^\omega)$, where $O(n^\omega)$ is the time for matrix multiplication on $n \times n$ matrices. When the maximum positive degree in the graph is at most $\Delta$, this can be improved to a run-time of $O(n\Delta^2 \log n)$.
Sami Davies, Benjamin Moseley, Heather Newman
2023-08-03T04:26:22Z
http://arxiv.org/abs/2308.01534v3
# One Partition Approximating All \(\ell_{p}\)-norm ###### Abstract This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a _single_ clustering solution that is _simultaneously_\(O(1)\)-approximate for all \(\ell_{p}\)-norms of the disagreement vector. This proves that minimal sacrifice is needed in order to optimize different norms of the disagreement vector. Our algorithm is the first combinatorial approximation algorithm for the \(\ell_{2}\)-norm objective, and more generally the first combinatorial algorithm for the \(\ell_{p}\)-norm objective when \(2\leqslant p<\infty\). It is also faster than all previous algorithms that minimize the \(\ell_{p}\)-norm of the disagreement vector, with run-time \(O(n^{\omega})\), where \(O(n^{\omega})\) is the time for matrix multiplication on \(n\times n\) matrices. When the maximum positive degree in the graph is at most \(\Delta\), this can be improved to a run-time of \(O(n\Delta^{2}\log n)\). ## 1 Introduction Correlation clustering is one of the most prominent problems in clustering, as it cleanly models community detection problems [26, 24] and provides a way to decompose complex network structures [27, 20]. The input to the unweighted correlation clustering problem is a complete graph \(G=(V,E)\), where \(|V|=n\) and each edge \(e\in E\) is labeled positive (\(+\)) or negative (\(-\)). If the edge \((u,v)\) is positive, this indicates that \(u\) and \(v\) are similar, and analogously if the edge \((u,v)\) is negative, this indicates that \(u\) and \(v\) are dissimilar. The output of the problem is a partition of the vertex set into parts \(C_{1},C_{2},\ldots\), where each part represents a cluster. The output should cluster similar vertices together and separate dissimilar vertices. Specifically, for a fixed clustering (i.e., partition of the vertices), a positive edge \((u,v)\) is a _disagreement_ with respect to the clustering if \(u\) and \(v\) are in different clusters and an _agreement_ if \(u\) and \(v\) are in the same cluster. Similarly, a negative edge \((u,v)\) is a disagreement with respect to the clustering if \(u\) and \(v\) are in the same cluster and an agreement if \(u\) and \(v\) are in different clusters. The goal is to find a clustering that minimizes some objective that is a function of the disagreements.1 For example, the most commonly studied objective minimizes the total number of disagreements. Footnote 1: Note that the sizes and number of clusters are unspecified. As an easy example to illustrate the problem, consider a social network. Every pair of people has an edge between them, and the edge is positive if the two people have ever met before, and negative otherwise. The goal of correlation clustering translates to partitioning all the people into clusters so that people are in the same cluster as their friends/acquaintances and in different clusters than strangers. The difficulty in constructing a clustering is that the labels may not be consistent, making disagreements unavoidable. Consider in the social network what happens when there is one person with two friends who have never met each other \((u,v,w\) with \((u,v)\) and \((u,w)\) positive but \((v,w)\) negative). The choice of objective matters in determining the best clustering. 
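To make the objective concrete before the formal definitions that follow, the sketch below counts, for each vertex, the incident edges that are disagreements under a given clustering and evaluates a couple of \(\ell_p\)-norms of that vector. The toy star instance and the helper names are ours, not from the paper.

```python
# Sketch: disagreement vector y and its l_p norms for a clustering of an
# unweighted complete graph given only its positive edges.
from itertools import combinations

def disagreement_vector(nodes, positive_edges, cluster_of):
    pos = {frozenset(e) for e in positive_edges}
    y = {u: 0 for u in nodes}
    for u, v in combinations(nodes, 2):
        same = cluster_of[u] == cluster_of[v]
        positive = frozenset((u, v)) in pos
        if positive != same:              # +edge split, or -edge merged -> disagreement
            y[u] += 1
            y[v] += 1
    return y

def lp_norm(y, p):
    vals = list(y.values())
    return max(vals) if p == float("inf") else sum(v**p for v in vals) ** (1.0 / p)

# star: centre 0, leaves 1..5, all leaf-leaf edges negative
nodes = range(6)
positive_edges = [(0, i) for i in range(1, 6)]
singletons  = {u: u for u in nodes}       # every vertex alone
one_cluster = {u: 0 for u in nodes}       # everything together
for name, C in [("singletons", singletons), ("one cluster", one_cluster)]:
    y = disagreement_vector(nodes, positive_edges, C)
    print(name, "l1 =", lp_norm(y, 1), " linf =", lp_norm(y, float("inf")))
```

On this star, singletons win for the \(\ell_1\)-norm while the single big cluster wins for the \(\ell_\infty\)-norm, the trade-off discussed below for the star graph of Figure 1.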
For a given clustering \(\mathcal{C}\), let \(y_{\mathcal{C}}(u)\) denote the number of edges incident to \(u\) that are disagreements with respect to \(\mathcal{C}\) (we drop \(\mathcal{C}\) and write \(y\) when it is clear from context). The most commonly considered objectives are \(\left\|y_{\mathcal{C}}\right\|_{p}=\sqrt[p]{\sum_{u\in V}y_{\mathcal{C}}(u)^{p}}\) for \(p\in\mathbb{Z}_{\geqslant 1}\cup\{\infty\}\), the \(\ell_{p}\)-norms of the _disagreement vector_\(y\). Note that the optimal objective values may drastically vary for different norms too. (For instance, in the example in Appendix A of [22], the vertex set \(V=A\sqcup B\), where \(|A|=|B|=n/2\), and all edges are positive except for a negative matching between \(A\) and \(B\). The optimal \(\ell_{\infty}\)-norm objective value is \(1\) whereas the optimal for \(\ell_{1}\) is \(\Theta(n)\).) When \(p=1\), this objective minimizes the total number of disagreements. Setting \(p=\infty\) minimizes the maximum number of disagreements incident to any node, ensuring a type of worst-case fairness.2 Balancing these two extremes--average welfare on one hand and fairness on the other--is the \(\ell_{2}\)-norm, which minimizes the variance of the disagreements at each node. Footnote 2: In the social network example, minimizing the \(\ell_{1}\)-norm corresponds to finding a clustering that minimizes the total number of friends who are separated plus the total number of strangers who are in the same cluster. The \(\ell_{\infty}\)-norm corresponds to finding a clustering minimizing the number of friends any person is separated from plus the number of strangers in that person’s same cluster. Correlation clustering was proposed by Bansal, Blum, and Chawla (2004) with the objective of minimizing the \(\ell_{1}\)-norm of the disagreement vector. The problem is NP-hard and several approximation algorithms have been proposed [5, 4, 11, 13]. Puleo and Milenkovic (2016) proposed studying \(\ell_{p}\)-norms of the disagreement vector for \(p>1\), and they give a \(48\)-approximation for any fixed \(p\). Charikar, Gupta, and Schwartz (2017) introduced an improved \(7\)-approximation, which Kalhan, Makarychev and Zhou (2019) further improved to a \(5\)-approximation. When \(p>1\), up until recently, the only strategies were LP or SDP rounding. Davies, Moseley, and Newman (2023) introduced a combinatorial \(O(1)\)-approximation algorithm for \(p=\infty\), and leave open the question of discovering a combinatorial \(O(1)\)-approximation algorithm for \(1<p<\infty\). In prior work, solutions obtained for \(\ell_{p}\)-norms are tailored to each norm, and it was not well-understood what the trade-offs were between solutions that optimize different norms. It is easy to see that solutions naively optimizing one norm can be arbitrarily bad for other norms (see Figure 1). A natural question is whether this loss from using a solution to one objective for another is avoidable. More specifically: _For any graph input to unweighted, complete correlation clustering, does there exist a partition (clustering) that is **simultaneously**\(O(1)\)-approximate for all \(\ell_{p}\)-norm objectives?_ ### Results This paper is focused on optimizing the \(\ell_{p}\)-norm for correlation clustering, when \(p\geqslant 1\). The main result of the paper answers the previous question positively: surprisingly, there is a single clustering that _simultaneously_\(O(1)\)-approximates the optimal for all \(\ell_{p}\)-norms. 
Further, it can be found through an efficient combinatorial algorithm. Figure 1: Two clusterings of the star graph. **Left:** Clustering assigns all nodes to one (blue) cluster, and is (almost) optimal for the \(\ell_{\infty}\)-norm with cost \(\Theta(n)\). **Right:** Clustering assigns all nodes to different clusters and is (almost) optimal for the \(\ell_{1}\)-norm with cost \(\Theta(n)\). The left solution is terrible for the \(\ell_{1}\)-norm, as the negative clique has \(\Theta(n^{2})\) edges that are disagreements. In what follows, let \(O(n^{\omega})\) denote the run-time of \(n\times n\) matrix multiplication. **Theorem 1**.: _Let \(G=(V,E)\) be an instance of unweighted, complete correlation clustering on \(|V|=n\) nodes. There exists a combinatorial algorithm returning a single clustering that is simultaneously an \(O(1)\)-approximation3 for all \(\ell_{p}\)-norm objectives, for all \(p\in\mathbb{Z}_{\geqslant 1}\cup\{\infty\}\), and its run-time is \(O(n^{\omega})\)._ Footnote 3: Note this is _independent_ of \(p\). The algorithm is also the first combinatorial algorithm for the \(\ell_{p}\)-norm objective for \(2\leqslant p<\infty\). It is important to note that the algorithm gives the fastest run-time of any \(O(1)\)-approximation algorithm for the \(\ell_{p}\)-norm objective when \(p\in\mathbb{Z}_{>1}\). Further, the run-time can be improved when the positive degree of the graph is bounded, as shown in the following corollary. **Corollary 1**.: _Let \(\Delta\) denote the maximum positive degree in an instance \(G=(V,E)\) of correlation clustering on \(|V|=n\) nodes. Suppose \(G\) is given as an adjacency list representation of its positive edges. There exists a combinatorial algorithm returning a single clustering that is simultaneously an \(O(1)\)-approximation for all \(\ell_{p}\)-norm objectives, for all \(p\in\mathbb{Z}_{\geqslant 1}\cup\{\infty\}\), and its run-time is \(O(n\Delta^{2}\log n)\)._ The run-time of the algorithm matches the fastest known algorithm for the \(\ell_{\infty}\)-norm objective [14], in both the general case and when the maximum positive degree is bounded. The best known algorithm before our work relied on solving a convex relaxation on \(|V|^{2}\) variables and \(|V|^{3}\) constraints. We improve the run-time by avoiding this bottle-neck. In the setting when the positive edges form a regular graph, the interested reader may also find a clever proof (which is much simpler than that of Theorem 1) in Section 3 showing there is a solution that is simultaneously \(O(1)\)-approximate for the \(\ell_{1}\)-norm and \(\ell_{\infty}\)-norm objectives. ### Related work Correlation clustering was introduced by Bansal, Blum, and Chawla (2004). Note the version they introduced also studies the problem on unweighted, complete graphs, but is concerned with minimizing the \(\ell_{1}\)-norm of the disagreement vector. This has remained the most popular setting. For this version of the problem, Ailon, Charikar, and Newman (2008) designed the Pivot algorithm, which is a randomized algorithm that in expectation obtains a 3-approximation. While we know algorithms with better approximations for \(\ell_{1}\) correlation clustering than Pivot [11, 13], the algorithm remains a baseline in correlation clustering due to its simplicity. (However, Pivot can perform arbitrarily badly--i.e., give \(\Omega(n)\) approximation ratios--for other \(\ell_{p}\)-norms; see again the example in Appendix A of [22].) 
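Since Pivot is repeatedly used as the baseline, here is a sketch of the standard randomized procedure (choose a uniformly random unclustered vertex as pivot, cluster it together with its still-unclustered positive neighbours, and repeat); the presentation, variable names, and graph format are ours, not taken from the paper.

```python
# Sketch of the standard randomized Pivot procedure for correlation clustering.
import random

def pivot(nodes, positive_neighbors, seed=0):
    """positive_neighbors: dict u -> set of positive neighbours of u."""
    rng = random.Random(seed)
    unclustered = set(nodes)
    order = list(nodes)
    rng.shuffle(order)                    # random permutation = random pivot choices
    clusters = []
    for u in order:
        if u not in unclustered:
            continue
        cluster = {u} | (positive_neighbors[u] & unclustered)
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

# tiny example: a bad triangle (u,v,w) with (u,v),(u,w) positive and (v,w) negative
Nplus = {"u": {"v", "w"}, "v": {"u"}, "w": {"u"}}
print(pivot(["u", "v", "w"], Nplus))
```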
It is an active area of research to develop algorithms for the \(\ell_{1}\)-norm that focus on practical scalability [8, 12, 21, 24, 9]. In fact, in recent work Veldt [25] highlighted the need for deterministic techniques in correlation clustering that do not use linear programming. Much interest in correlation clustering stems from its connections to applications, including community detection, natural language processing, location area planning, and gene expression [26, 24, 27, 20, 7, 15]. Puleo and Milenkovic (2016) introduced correlation clustering with the goal of minimizing the \(\ell_{p}\)-norm of the disagreement vector. They show that even for minimizing the \(\ell_{\infty}\)-norm on complete, unweighted graphs, the problem is \(\mathsf{NP}\)-hard (Appendix C in [22]). Several groups found \(O(1)\)-approximation algorithms for minimizing the \(\ell_{p}\)-norm on complete, unweighted graphs [22, 10, 19], the best of which is currently the 5-approximation of Kalhan, Makarychev, and Zhou (2019). Many other interesting objectives for correlation clustering focus on finding solutions that are (in some sense) fair or locally desirable [3, 6, 2, 16, 17, 1]. Correlation clustering has also been studied on non-complete, weighted graphs [10, 19], with conditions on the cluster sizes [23], and with asymmetric errors [18]. All of these previous works that study general \(\ell_{p}\)-norms or other notions of fairness or locality rely on solving a convex relaxation. This has two downsides: (1) the run-time of the algorithms are bottle-necked by the time it takes to solve the relaxation with at least \(\Omega(n^{2})\) many variables and \(\Omega(n^{3})\) constraints. In fact, it is time-consuming to even enumerate the \(\Omega(n^{2})\) variables and \(\Omega(n^{3})\) constraints, and (2) the solution is only guaranteed to be good for one particular value of \(p\). ## 2 Preliminaries We will introduce notation, and then we will discuss two relevant works--the papers by Kalhan, Makarychev, and Zhou (2019) and Davies, Moseley, and Newman (2023). ### Notation Recall our input to the correlation clustering problem is \(G=(V,E)\), an unweighted, complete graph on \(n\) vertices, and every edge is assigned a label of either positive (\(+\)) or negative (\(-\)). Let the set of positive edges be denoted \(E^{+}\) and the set of negative edges \(E^{-}\). Then, we can define the _positive neighborhood_ and _negative neighborhood_ of a vertex \(u\) as \(N_{u}^{+}=\{v\in V\mid(u,v)\in E^{+}\}\) and \(N_{u}^{-}=\{v\in V\mid(u,v)\in E^{-}\}\), respectively. We further assume without loss of generality that every vertex has a positive self-loop to itself. A _clustering_\(\mathcal{C}\) is a partition of \(V\) into _clusters_\(C_{1},\ldots,C_{k}\). Let \(C(u)\) denote the cluster that vertex \(u\) is in, i.e., if \(\mathcal{C}\) has \(k\) clusters, there exists exactly one \(i\in[k]\) such that \(C(u)=C_{i}\). It is also helpful to consider the vertices in a different cluster than \(u\), and so we let \(\overline{C(u)}=V\backslash C(u)\) denote this. We say that a positive edge \(e=(u,v)\in E^{+}\) is a _disagreement_ with respect to \(\mathcal{C}\) if \(v\in\overline{C(u)}\). On the other hand, we say that a negative edge \(e=(u,v)\in E^{-}\) is a disagreement with respect to \(\mathcal{C}\) if \(v\in C(u)\). 
For a fixed clustering \(\mathcal{C}\), we denote the _disagreement vector_ of \(\mathcal{C}\) as \(y_{\mathcal{C}}\in\mathbb{Z}_{\geq 0}^{n}\), where for \(u\in V\), \(y_{\mathcal{C}}(u)\) is the number of edges incident to \(u\) that are disagreements with respect to \(\mathcal{C}\). We omit the subscript throughout the proofs when a clustering is clear. Throughout, we let \(\mathsf{OPT}\) be the optimal objective value, and the \(\ell_{p}\)-norm to which it corresponds will be clear from context. The next fact follows from the definitions seen so far (recalling also the positive self-loops). **Fact 1**.: _Let \(u,v\in V\), possibly with \(u=v\). Then_ \[n=|N_{u}^{+}\cap N_{v}^{+}|+|N_{u}^{-}\cap N_{v}^{-}|+|N_{u}^{+}\cap N_{v}^{- }|+|N_{u}^{-}\cap N_{v}^{+}|.\] ### Summary of work by Kalhan, Makarychev, and Zhou The standard linear program relaxation for correlation clustering is stated in LP 2.2.4 In the _integer_ LP, the variable \(x_{uv}\) indicates whether vertices \(u\) and \(v\) will be in the same cluster (\(0\) for yes, \(1\) for no), and the disagreement vector is \(y\); the optimal solution to the integer LP has value \(\mathsf{OPT}\), while the optimal solution to the relaxation gives a lower bound on \(\mathsf{OPT}\). Note the triangle inequality is enforced on all triples of vertices, inducing a semi-metric space on \(V\). Throughout this paper, as in [14], we refer to the algorithm by Kalhan, Makarychev, and Zhou as the KMZ algorithm. The _KMZ algorithm_ has two phases: it solves LP 2.2, and then uses the _KMZ rounding algorithm_ (Algorithm 1) to obtain an integral assignment of vertices to clusters. See Appendix D for its formal statement. At a high-level, the KMZ rounding algorithm is an iterative, ball-growing algorithm that uses the semi-metric to guide its choices on forming clusters. Their algorithm is a \(5\)-approximation, and produces different clusterings for different \(p\), since the optimal solution \(x\)* to LP 2.2 depends on \(p\). Footnote 4: Technically this is a convex program as the objective is convex. For simplicity we will refer to it as an LP as the constraints are linear. LP 2.2 \[\min\left\|y\right\|_{p}\] s.t. \[y_{u}=\sum_{v\in N_{u}^{+}}x_{uv}+\sum_{v\in N_{u}^{-}}(1-x_{uv}) \forall u\in V\] \[x_{uv}\leq x_{uw}+x_{vw} \forall u,v,w\in V\] \[0\leq x_{uv}\leq 1 \forall u,v\in V.\] **Definition 1**.: _Let \(f\) be a semi-metric on \(V\), i.e., taking \(x=f\) gives a feasible solution to LP 2.2. The fractional cost of \(f\) in the \(\ell_{p}\)-norm objective is the value of LP 2.2 that results from setting \(x=f\). When \(p\) is clear from context, we will simply call this the fractional cost of \(f\)._ ### Summary of work by Davies, Moseley, and Newman The main take-away from the work of Kalhan, Makarychev, and Zhou (2019) is that one only requires a semi-metric on the set of vertices, whose cost is comparable to the cost of an optimal solution, as input to the KMZ rounding algorithm (Algorithm 1). Thus, the insight of Davies, Moseley, and Newman (2023) for the \(\ell_{\infty}\)-norm objective is that one can combinatorially construct such a semi-metric without solving an LP, and at small loss in the quality of the fractional solution. They do this by introducing the _correlation metric_. 
**Definition 2** ([14]).: _For all \(u,v\in V\), the distance between \(u\) and \(v\) with respect to the correlation metric is_ \[d_{uv} =1-\frac{|N_{u}^{+}\cap N_{v}^{+}|}{|N_{u}^{+}\cup N_{v}^{+}|}\] \[=\frac{|N_{u}^{+}\cap N_{v}^{-}|+|N_{u}^{-}\cap N_{v}^{+}|}{|N_{u }^{+}\cap N_{v}^{+}|+|N_{u}^{+}\cap N_{v}^{-}|+|N_{u}^{-}\cap N_{v}^{+}|}. \tag{1}\] Note that the rewrite in Line (1) is apparent from Fact 1. The correlation metric captures useful information succinctly. Intuitively, if \(u\) and \(v\) have relatively large positive intersection, i.e., \(|N_{u}^{+}\cap N_{v}^{+}|\) is large compared to their other relevant joint neighborhoods \((N_{u}^{+}\cap N_{v}^{-})\cup(N_{u}^{-}\cap N_{v}^{+})\), then from the perspective of \(u\) and \(v\), fewer disagreements are incurred by putting \(u\) and \(v\) in the same cluster than by putting them in different clusters. This is because if \(u\) and \(v\) are in the same cluster, then they have disagreements on edges \((u,w)\) and \((v,w)\) for \(w\in(N_{u}^{+}\cap N_{v}^{-})\cup(N_{u}^{-}\cap N_{v}^{+})\), but if they are in different clusters, then \(u\) and \(v\) have disagreements on edges \((u,w)\) and \((v,w)\) for \(w\in N_{u}^{+}\cap N_{v}^{+}\). Note that the metric is not normalized by \(n\) (the size of the joint neighborhood), but instead by \(n-|N_{u}^{-}\cap N_{v}^{-}|=|N_{u}^{+}\cap N_{v}^{+}|+|N_{u}^{+}\cap N_{v}^{-} |+|N_{u}^{-}\cap N_{v}^{+}|\). In addition to the above intuition on the normalization factor, we also observe that \(w\in N_{u}^{-}\cap N_{v}^{-}\) do not necessarily force disagreements on \((u,w)\) and \((v,w)\), since \(w\) can go in a different cluster than both \(u\) and \(v\) without penalty. For more on intuition behind the correlation metric, see Section 2 in [14]. Davies, Moseley, and Newman (2023) prove that the correlation metric can be used as input to the KMZ rounding algorithm by showing that (1) the correlation metric \(d\) satisfies the triangle inequality and (2) the fractional cost of \(d\) in the \(\ell_{\infty}\)-norm (recall Definition 1) is no more than 8 times the value of the optimal integral solution (OPT). Since the KMZ rounding algorithm (Algorithm 1) loses a factor of at most 5, inputting \(d\) to that algorithm returns a 40-approximation algorithm. A benefit of the correlation metric is that it can be computed in time \(O(n^{\omega})\), and even faster when the subgraph on positive edges is sparse. ### Technical overview It is not hard to see that the correlation metric cannot be used as input to the KMZ algorithm. For \(\ell_{p}\)-norms other than \(p=\infty\), one cannot bound the fractional cost of the correlation metric against the optimal with only an \(O(1)\)-factor loss. To see why, consider the star again, as in Figure 1. Here, for all \(u,v\in\{v_{1},\ldots,v_{n-1}\}\), \(d_{uv}=1-1/(n-(n-3))=2/3\), but for the \(\ell_{1}\)-norm, we need the semi-metric to have the value \(1-d_{uv}\) be close to 0, i.e. \(O(1/n)\), for such \(u,v\), in order for the fractional cost to be comparable to OPT. There are several possible fixes one could try to make to the correlation metric. 
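Before turning to those possible fixes, it may help to see Definition 2 in code: with positive self-loops included, \(|N_u^+\cap N_v^+|\) is a single Boolean matrix product, which is where the \(O(n^{\omega})\) running time comes from. The dense-matrix representation below is an illustrative assumption.

```python
# Sketch of the correlation metric (Definition 2) via one matrix product:
# with self-loops, (A A^T)_{uv} = |N_u^+ ∩ N_v^+| and the union follows by
# inclusion-exclusion from the positive degrees.
import numpy as np

def correlation_metric(A_plus):
    """A_plus: n x n 0/1 matrix of positive edges; positive self-loops added here."""
    A = A_plus.astype(np.int64) | np.eye(A_plus.shape[0], dtype=np.int64)
    inter = A @ A.T                                   # |N_u^+ ∩ N_v^+|
    deg = A.sum(axis=1)
    union = deg[:, None] + deg[None, :] - inter       # |N_u^+ ∪ N_v^+|
    return 1.0 - inter / union

# star on 5 vertices (centre 0): any two leaves u,v have d_uv = 1 - 1/3 = 2/3
A_plus = np.zeros((5, 5), dtype=int)
A_plus[0, 1:] = A_plus[1:, 0] = 1
d = correlation_metric(A_plus)
print(round(d[1, 2], 3))   # 0.667, matching the star calculation in the text
```

With the definition concrete, we return to the possible fixes.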
One idea is that since one can interpret the correlation metric as a coarse approximation of the probability the Pivot algorithm separates \(u\) and \(v\)5, one could try to adapt the correlation metric to more accurately approximate this probability.6 Another idea, inspired by an observation below, is that one could define a semi-metric for edges in \(E^{+}\) and another semi-metric for edges in \(E^{-}\), but then there is the difficulty of showing the triangle inequality holds when positive and negative edges are mixed together. Both of these ideas were for us unsuccessful. Footnote 6: If in LP 2.2, \(x_{uv}\) is set exactly to the probability that \(u\) and \(v\) are separated by Pivot, then \(x\) will be a feasible solution with cost at most \(\mathsf{3OPT}\). However, this probability seems difficult to express in closed form, even approximately. Instead, the following two observations of how the correlation metric works with respect to the \(\ell_{1}\)-norm led us to an effective adaptation: 1. One can bound the fractional cost, _restricted to positive edges_, of the correlation metric in the \(\ell_{1}\)-norm by an \(O(1)\)-factor times the optimal solution's cost (see Claim 1 in Appendix C). Negative edges still pose a challenge. 2. Interestingly, if the subgraph of positive edges is regular, then we show one can bound the fractional cost of the correlation metric in the \(\ell_{1}\)-norm on negative edges as well as positive.7 See Section 3 for a very clean proof via dual fitting. Footnote 7: In contrast, one cannot bound the fractional cost of the correlation metric on the star (Figure 1). These observations led us to ask whether some adjustments to the correlation metric might yield a semi-metric with bounded fractional cost in the \(\ell_{1}\)-norm or even the \(\ell_{p}\)-norm more generally (while still remaining bounded in the \(\ell_{\infty}\)-norm). Moreover, since the KMZ rounding algorithm does not depend on \(p\) (whereas in the KMZ algorithm, the solution to the LP _does_ depend on \(p\)), inputting the same semi-metric to the rounding algorithm produces the same clustering for all \(\ell_{p}\)-norms! Using the above observations, we define the _adjusted correlation metric_. Let \(\Delta_{u}\) denote the positive degree of \(u\). **Definition 3**.: _Define the adjusted correlation metric \(f:E\rightarrow[0,1]\) as follows:_ 1. _For_ \(d\) _the correlation metric_ \(d_{uv}=1-\frac{|N_{u}^{+}\cap N_{v}^{+}|}{|N_{u}^{+}\cup N_{v}^{+}|}\)_, initially set_ \(f=d\)_._ 2. _If_ \(e\in E^{-}\) _and_ \(d_{e}>0.7\)_, set_ \(f_{e}=1\) _(round up)._ 3. _For_ \(u\in V\) _such that_ \(|N_{u}^{-}\cap\{v:d_{uv}\leqslant 0.7\}|\geqslant\frac{10}{3}\Delta_{u}\)_, set_ \(f_{uv}=1\) _for all_ \(v\in V\backslash\{u\}\)_._ In Section 3, we start with a warm-up exercise and show that if the graph on positive edges is regular, then the correlation metric has \(O(1)\)-approximate fractional cost. Note this section is not necessary to understanding the rest of the paper, but we include it in the main body because we find the proof here very clever! The main technical result of the paper is Section 4, where we prove Theorem 1 by showing that the adjusted correlation metric can be input to the KMZ rounding algorithm. Namely, we will first show (quite easily) that the adjusted correlation metric satisfies an approximate triangle inequality. Then, it remains to upper bound the fractional cost of the adjusted correlation metric against \(\mathsf{OPT}\). 
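Before the analysis, Definition 3 itself is straightforward to implement on top of the correlation-metric sketch above; that dependency, the symmetric treatment of Step 3, and the use of the self-loop-free positive degree for \(\Delta_u\) are our assumptions for illustration.

```python
# Sketch of the adjusted correlation metric (Definition 3), reusing the
# correlation_metric helper sketched earlier.
import numpy as np

def adjusted_correlation_metric(A_plus):
    n = A_plus.shape[0]
    d = correlation_metric(A_plus)                  # Step 1: start from d
    f = d.copy()
    neg = (A_plus == 0) & ~np.eye(n, dtype=bool)    # negative (non-self) pairs
    f[neg & (d > 0.7)] = 1.0                        # Step 2: round far negative edges up to 1
    deg = A_plus.sum(axis=1)                        # positive degree Delta_u (self-loop excluded)
    close_neg = (neg & (d <= 0.7)).sum(axis=1)      # |N_u^- ∩ {v : d_uv <= 0.7}|
    heavy = close_neg >= (10.0 / 3.0) * deg         # Step 3 condition
    f[heavy, :] = 1.0                               # isolate such u: f_uv = 1 for all v != u
    f[:, heavy] = 1.0                               # kept symmetric
    np.fill_diagonal(f, 0.0)
    return f

# on a larger star (centre 0 with 6 leaves) every leaf triggers Step 3, so all of
# its distances are pushed to 1 and the negative-edge contribution 1 - f_uv on
# leaf pairs drops to 0, the issue raised for the star above
A_plus = np.zeros((7, 7), dtype=int)
A_plus[0, 1:] = A_plus[1:, 0] = 1
f = adjusted_correlation_metric(A_plus)
print(f[1, 2], f[0, 1])   # 1.0 1.0
```

The resulting \(f\) is what is then handed to the KMZ rounding algorithm (Algorithm 1). We now return to bounding its fractional cost against OPT.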
We tackle this with a combinatorial charging argument. This argument leverages a somewhat different approach from that used in [14] and is simpler than their proof for only the \(\ell_{\infty}\)-norm. The _constant_ approximation factor obtained from inputting the adjusted correlation metric to Algorithm 1 is bounded above (and below) by universal constants for all \(p\) (this is the worst case and one can get better constants for each \(p\)). While we keep the argument for general \(\ell_{p}\)-norms in the main body of the paper, the interested reader may find a simplified proof for the \(\ell_{1}\)-norm objective in Appendix C. ## 3 Warm-up: Regular Graphs In general, the original correlation metric \(d\) (Definition 2) does not necessarily have bounded fractional cost for the \(\ell_{1}\)-norm objective (or more generally for \(\ell_{p}\)-norm objectives). So, we use the adjusted correlation metric \(f\) (Definition 3) as input to the KMZ rounding algorithm (Appendix D). In this section, we show that _if the subgraph of positive edges is regular_, then the correlation metric \(d\) can be used as is (i.e., without the adjustments in Steps 2 and 3 of Definition 3) to yield a clustering that is constant approximate for the \(\ell_{1}\)-norm and \(\ell_{\infty}\)-norm simultaneously: **Theorem 2**.: _Let \(G=(V,E)\) be an instance of unweighted, complete correlation clustering, and let \(E^{+}\) denote the set of positive edges. Suppose that the subgraph induced by \(E^{+}\) is regular. Then the fractional cost of \(d\) in the \(\ell_{1}\)-norm objective is within a constant factor of OPT:_ \[\sum_{u\in V}\sum_{v\in N_{u}^{+}}d_{uv}+\sum_{u\in V}\sum_{v\in N_{u}^{-}}(1- d_{uv})=O(\textsf{OPT}).\] _Therefore, the clustering produced by inputting \(d\) to the KMZ rounding algorithm is a constant-factor approximation simultaneously for the \(\ell_{1}\)-norm and \(\ell_{\infty}\)-norm objectives._ Proof.: Let \(\Delta\) be the (common) degree of the positive subgraph. To show that the fractional cost of \(d\) in the \(\ell_{1}\)-norm objective is \(O(\textsf{OPT})\) for regular graphs, we will use a dual fitting argument. The LP relaxation we consider is from [4], which uses a dual fitting argument to show constant approximation guarantees for Pivot (although the proof here does not otherwise resemble the proof for Pivot). The primal is given by \[\min\left\{\sum_{e\in E}x_{e}\mid x_{ij}+x_{jk}+x_{ki}\geqslant 1,\forall ijk \in\mathcal{T},x\geqslant 0\right\}\] where \(\mathcal{T}\) is the set of bad triangles (i.e., triangles with exactly two positive edges and one negative edge). For \(x\in\{0,1\}^{|E|}\), \(x\) corresponds to disagreements in a clustering: we set \(x_{e}=1\) if \(e\) is a disagreement and \(x_{e}=0\) otherwise. The constraints state that every clustering must make a disagreement on every bad triangle. Thus, \((P)\) is a relaxation for the \(\ell_{1}\)-norm objective. In fact, we will prove the stronger statement that the fractional cost is \(O(\textsf{OPT}_{P})\), where \(\textsf{OPT}_{P}\) is the optimal objective value of \((P)\). The dual is given by \[\max\left\{\sum_{T\in\mathcal{T}}y_{T}\mid\sum_{T\in\mathcal{T}:T\circ e}y_{T} \leqslant 1,\forall e\in E,y\geqslant 0\right\}.\] We show that by setting \(y_{T}=\frac{1}{2\Delta}\) for all \(T\in\mathcal{T}\), \(y\) satisfies the following properties: 1. \(y\) is feasible in \((D)\). 2. The fractional cost of \(d\) is at most \(6\cdot\sum_{T\in\mathcal{T}}y_{T}\). 
Letting \(\textsf{OPT}_{D}\) be the optimal objective value of \((D)\), we have \(6\cdot\sum_{T\in\mathcal{T}}y_{T}\leqslant 6\cdot\textsf{OPT}_{D}=6\cdot \textsf{OPT}_{P}\leqslant 6\cdot\textsf{OPT}\), which will conclude the proof. To prove feasibility, we case on whether \(e\) is positive or negative. If \(e\in E^{-}\), then \(|\{T\in\mathcal{T}:T\ni e\}|=|N_{u}^{+}\cap N_{v}^{+}|\leqslant\Delta\), where the equality is by the definition of a bad triangle. So \[\sum_{T\in\mathcal{T}:T\ni e}y_{T}\leqslant\frac{\Delta}{2\Delta}\leqslant 1.\] If \(e\in E^{+}\), then \(|\{T:T\ni e\}|=|(N_{u}^{+}\cap N_{v}^{-})\cup(N_{u}^{-}\cap N_{v}^{+})| \leqslant 2\Delta\). So \[\sum_{T\in\mathcal{T}:T\ni e}y_{T}\leqslant\frac{2\Delta}{2\Delta}=1.\] So \(y\) is feasible. Now we need to show that the fractional cost of \(d\) is bounded in terms of the objective value of \((D)\). First we bound the fractional cost of the negative edges: \[\sum_{(u,v)\in E^{-}}(1-d_{uv})\leqslant\sum_{(u,v)\in E^{-}}\frac{|N_{u}^{+} \cap N_{v}^{+}|}{\Delta}=\sum_{e\in E^{-}}\sum_{T\in\mathcal{T}:T\ni e}\frac {1}{\Delta}=\sum_{e\in E^{-}}\sum_{T\in\mathcal{T}:T\ni e}2y_{T}\] where in the first inequality we have used that \(|N_{u}^{+}\cup N_{v}^{+}|\geqslant\Delta\). Next we bound the fractional cost of the positive edges: \[\sum_{(u,v)\in E^{+}}d_{uv}\leqslant\sum_{(u,v)\in E^{+}}\frac{|N_{u}^{+}\cap N _{v}^{-}|+|N_{u}^{-}\cap N_{v}^{+}|}{\Delta}=\sum_{e\in E^{+}}\sum_{T\in \mathcal{T}:T\circ e}\frac{1}{\Delta}=\sum_{e\in E^{+}}\sum_{T\in\mathcal{T}:T \circ e}2y_{T}.\] So the total fractional cost is bounded by \[\sum_{e\in E}\sum_{T\in\mathcal{T}:T\circ e}2y_{T}=6\cdot\sum_{T\in\mathcal{T} }y_{T}\] since each triangle contains three edges. This is what we sought to show. Since the fractional cost of \(d\) is bounded for the \(\ell_{1}\)-norm objective (and the \(\ell_{\infty}\)-norm objective by [14]), using \(d\) as input to KMZ rounding algorithm produces a clustering that is simultaneously \(O(1)\)-approximate for the \(\ell_{1}\)- and \(\ell_{\infty}\)-norm objectives. ## 4 Proof of Theorem 1 The goal of this section is to prove Theorem 1 and the subsequent Corollary 1. We begin by showing that the adjusted correlation metric satisfies an approximate triangle inequality in Subsection 4.1. Then in Subsection 4.2, we prove the fractional cost of the adjusted correlation metric in any \(\ell_{p}\)-norm objective is an \(O(1)\) factor away from the optimal solution's value. Then, we tie it all together to prove Theorem 1 and Corollary 1 in Subsection 4.3. Before all that, we prove a proposition that will be key in several settings. Loosely, it states that if two vertices are close to each other, then they have a large shared positive neighborhood. **Proposition 1**.: _Fix vertices \(u,v\in V\) and a clustering \(\mathcal{C}\) on \(V\) such that \(d_{uv}\leqslant 0.7\) and \(|N_{u}^{+}\cap C(u)|\:/\:|N_{u}^{+}|\geqslant 0.85\). Then \(|N_{u}^{+}\cap N_{v}^{+}\cap C(u)|\geqslant 0.15\cdot|N_{u}^{+}|\)._ Proof.: Since \(d_{uv}\leqslant 0.7\), \[\frac{|N_{u}^{+}\cap N_{v}^{+}|}{|N_{u}^{+}|}\geqslant\frac{|N_{u}^{+}\cap N_{ v}^{+}|}{|N_{u}^{+}\cup N_{v}^{+}|}\geqslant 0.3.\] Using the assumption on \(u\) together with the above inequality implies that \(|N_{v}^{+}\cap N_{u}^{+}\cap C(u)|\:/\:|N_{u}^{+}|\geqslant 0.85+0.3-1=0.15\). ### Triangle inequality Recall that the correlation metric \(d\) satisfies the triangle inequality (see Section 4.2 in [14]). 
We will show that the adjusted correlation metric \(f\) satisfies an approximate triangle inequality, which is sufficient for the KMZ rounding algorithm. Formally, we say that a function \(g\) is a _\(\delta\)-semi-metric_ on some set \(S\) if it is a semi-metric on \(S\), except instead of satisfying the triangle inequality, \(g\) satisfies \(g(u,v)\leqslant\delta\cdot(g(u,w)+g(v,w))\) for all \(u,v,w\in S\). **Lemma 1** (Triangle Inequality).: _The adjusted correlation metric \(f\) is a \(\frac{10}{7}\)-semi-metric._ The proof of Lemma 1 is straightforward given that \(d\) satisfies the triangle inequality; for completeness it can be found in Appendix A. Lemma 3 in [14] proves that one can input a semi-metric that satisfies an approximate triangle inequality instead of the triangle inequality to the KMZ rounding algorithm (with some loss in the approximation factor). We summarize the main take-away below. **Lemma 2** ([14]).: _If \(g\) is a \(\delta\)-semi-metric on the set \(V\), instead of a true semi-metric (i.e., \(1\)-semi-metric), then the KMZ algorithm loses a factor of \(1+\delta+\delta^{2}+\delta^{3}+\delta^{4}\).8_ Footnote 8: When \(\delta=1\), this factor equals \(5\), which is the loss in the KMZ algorithm. Since we show in Lemma 1 that \(f\) is a \(\frac{10}{7}\)-semi-metric, we lose a factor of \(12\) in inputting \(f\) to the KMZ algorithm (along with the factor loss from the fractional cost). ### Bounding the fractional cost of \(\ell_{p}\)-norms This section bounds the fractional cost of the adjusted correlation metric for the \(\ell_{p}\)-norms. The following lemma considers the case where \(p=\infty\). The general case is handled in the subsequent lemma. **Lemma 3**.: _The fractional cost of the adjusted correlation metric \(f\) in the \(\ell_{\infty}\)-norm objective is at most \(56\cdot\textsf{OPT}\), where \(\textsf{OPT}\) is the cost of the optimal integral solution in the \(\ell_{\infty}\)-norm._ The lemma follows from the fact that the fractional cost of the correlation metric \(d\) in the \(\ell_{\infty}\)-norm is known to be bounded by [14], and that it only decreases when \(d\) is replaced by \(f\) due to Definition 3. For completeness, we include a proof in Appendix B. When \(p\in\mathbb{Z}_{\geqslant 1}\), we use two primary lemmas--one for the positive edge fractional cost and one for the negative edge fractional cost--to show that the adjusted correlation metric well approximates the optimal for general \(\ell_{p}\)-norms. **Lemma 4**.: _The fractional cost of the adjusted correlation metric \(f\) in the \(\ell_{p}\)-norm objective is a constant factor (independent of \(p\)) away from the cost of the optimal integral solution in the \(\ell_{p}\)-norm._ Proof.: Let \(y\) be the disagreement vector for an optimal clustering \(\mathcal{C}\) in the \(\ell_{p}\)-norm, for any \(p\in\mathbb{Z}_{\geqslant 1}\cup\{\infty\}\). When \(p=\infty\), see Lemma 3. 
For \(p\in\mathbb{Z}_{\geqslant 1}\), by definition \(\textsf{OPT}^{p}=\sum_{w\in V}(y(w))^{p}\), and the \(p\)th power of the fractional cost of \(f\) is given by \[cost(f)^{p}=\sum_{u\in V}\left[\sum_{v\in N_{u}^{+}}f_{uv}+\sum_{v\in N_{u}^{-} }(1-f_{uv})\right]^{p}.\] Observe that \[cost(f)^{p}\leqslant 2^{p}\underbrace{\sum_{u\in V}\left(\sum_{v\in N_{u}^{+} }f_{uv}\right)^{p}}_{(S^{+})^{p}}+2^{p}\underbrace{\sum_{u\in V}\left(\sum_{v \in N_{u}^{-}}(1-f_{uv})\right)^{p}}_{(S^{-})^{p}}.\] We refer to bounding \((S^{+})^{p}\) as bounding the fractional cost of the positive edges, and likewise \((S^{-})^{p}\) for the negative edges. The first sum, \((S^{+})^{p}\), is bounded in Lemma 5 and the second sum, \((S^{-})^{p}\), is bounded in Lemma 6. Using those two bounds, together we have \[cost(f)\leqslant[2^{p}((S^{+})^{p}+(S^{-})^{p})]^{1/p}\leqslant 529\] for \(p\in[1,\infty)\). Specifically, the middle term is maximized at \(p=1\), giving the bound of \(529\), and tends to below \(214\) as \(p\to\infty\). (Note the more tailored analysis in Appendix C gives a constant of \(74\) for \(p=1\).) We note that we did not give particular attention to optimizing constants. #### 4.2.1 Fractional cost of positive edges in \(\ell_{p}\)-norms Bounding the fractional cost of the positive edges in the \(\ell_{p}\)-norm will be similar to in the simpler \(\ell_{1}\)-norm (see Appendix C), only there will be an extra step in which we apply Jensen's inequality. **Lemma 5**.: _For \(p\in\mathbb{Z}_{\geqslant 1}\), the fractional cost of the adjusted correlation metric \(f\) in the \(\ell_{p}\)-norm objective for the set of positive edges is a constant factor approximation to the optimal, i.e.,_ \[(S^{+})^{p}=\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}}f_{uv}\right)^{p}\leqslant 2 ^{p}\cdot[(8^{p}/2+1)((20/3)^{p}+2+2\cdot 4^{p})+8^{p}+1]\cdot\textsf{OPT}^{p}\] Proof.: Fix an optimal clustering \(\mathcal{C}\). We partition vertices based on membership in \(C(u)\) or \(\overline{C(u)}\) (as defined in Subsection 2.1). Let \(y\) denote the disagreement vector of \(\mathcal{C}\). We have \[(S^{+})^{p}=\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}}f_{uv}\right)^{p}\leq 2^{p} \underbrace{\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}\cap C(u)}f_{uv}\right)^{p }}_{S_{1}^{+}}+2^{p}\underbrace{\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}\cap \overline{C(u)}}f_{uv}\right)^{p}}_{S_{2}^{+}}.\] It is easy to bound \(S_{2}^{+}\) by using the trivial upper bound \(f_{uv}\leq 1\): \[\boxed{S_{2}^{+}=\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}\cap\overline{C(u)}}f_ {uv}\right)^{p}\leq\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}\cap\overline{C(u)}} 1\right)^{p}\leq\sum_{u\in V}(y(u))^{p}=\mathsf{OPT}^{p},}\] where we have used that every edge \((u,v)\in E^{+}\) with \(v\notin C(u)\) is a disagreement incident to \(u\). Next, we bound \(S_{1}^{+}\). Let \(R_{1}\) be the set of \(u\) for which Step 3 of Definition 3 applies. For these \(u\), we have \(f_{uv}=1\) for all \(v\in V\backslash\{u\}\). Let \(R_{2}=V\backslash R_{1}\). For \(u\in R_{2}\) and \(v\in N_{u}^{+}\), we have that either \(v\in R_{2}\), in which case \(f_{uv}=d_{uv}\); or \(v\in R_{1}\), in which case \(f_{uv}=1\). (Note that \(V\) is the disjoint union of \(R_{1}\) and \(R_{2}\).) 
So \[S_{1}^{+}=\underbrace{\sum_{u\in R_{1}}\left(\sum_{v\in N_{u}^{+}\cap C(u),v \neq u}1\right)^{p}}_{S_{11}^{+}}+\underbrace{\sum_{u\in R_{2}}\left(\sum_{v \in N_{u}^{+}\cap C(u)}f_{uv}\right)^{p}}_{S_{12}^{+}}\] so in particular \[S_{12}^{+} =\sum_{u\in R_{2}}\left(\sum_{v\in N_{u}^{+}\cap C(u)\cap R_{1}} 1+\sum_{v\in N_{u}^{+}\cap C(u)\cap R_{2}}d_{uv}\right)^{p}\] \[=\sum_{u\in R_{2}}\left(\sum_{\begin{subarray}{c}v\in N_{u}^{+} \cap C(u)\cap R_{1}\\ d_{uv}\leq 1/4\end{subarray}}1+\sum_{\begin{subarray}{c}v\in N_{u}^{+}\cap C(u) \cap R_{1}\\ d_{uv}\geq 1/4\end{subarray}}1+\sum_{v\in N_{u}^{+}\cap C(u)\cap R_{2}}d_{uv} \right)^{p}\] \[\leq\sum_{u\in R_{2}}\left(\sum_{\begin{subarray}{c}v\in N_{u}^{+} \cap C(u)\cap R_{1}\\ d_{uv}\leq 1/4\end{subarray}}1+\sum_{\begin{subarray}{c}v\in N_{u}^{+}\cap C(u) \cap R_{1}\\ d_{uv}\geq 1/4\end{subarray}}4\cdot d_{uv}+\sum_{v\in N_{u}^{+}\cap C(u)\cap R_{ 2}}d_{uv}\right)^{p}\] \[\leq\sum_{u\in R_{2}}\left(\sum_{\begin{subarray}{c}v\in N_{u}^{+} \cap C(u)\cap R_{1}\\ d_{uv}\leq 1/4\end{subarray}}1+\sum_{v\in N_{u}^{+}\cap C(u)}4\cdot d_{uv} \right)^{p}\] \[\leq 2^{p}\underbrace{\sum_{u\in R_{2}}\left(\sum_{\begin{subarray} {c}v\in N_{u}^{+}\cap R_{1}\\ d_{uv}\leq 1/4\end{subarray}}1\right)^{p}}_{S_{13}^{+}}+2^{p}\cdot 4^{p} \cdot\underbrace{\sum_{u\in R_{2}}\left(\sum_{v\in N_{u}^{+}\cap C(u)}d_{uv} \right)^{p}}_{S_{14}^{+}}\] First we will bound \(S_{13}^{+}\). To do so, we will strongly use that \(d_{uv}\leq 1/4\) in the inner sum. In particular, we will make use of the following easy proposition. **Proposition 2**.: _Let \(d\) be the correlation metric, and suppose \(d_{uv}\leqslant 1/4\). Then \(|N_{u}^{+}|\leqslant\frac{7}{3}\cdot|N_{v}^{+}|\)._ Proof of Proposition 2.: The proof is a straightforward calculation. We have that \(d_{uv}\leqslant 1/4\) implies \(1-d_{uv}\geqslant 3/4\), that is, \[\frac{|N_{u}^{+}\cap N_{v}^{+}|}{|N_{u}^{+}|+|N_{v}^{+}|-|N_{u}^{+}\cap N_{v}^{ +}|}\geqslant 3/4\] and in turn \[7\cdot|N_{v}^{+}|\geqslant 7\cdot|N_{u}^{+}\cap N_{v}^{+}|\geqslant 3\cdot|N_{u }^{+}|+3\cdot|N_{v}^{+}|\geqslant 3\cdot|N_{u}^{+}|\] which completes the proof. Next, we will need to create a bipartite auxiliary graph \(H=(R_{2},R_{1},F)\) with \(R_{2}\) and \(R_{1}\) being the sides of the partition, and \(F\) being the edge set. We will then use a double counting argument. Place an edge between \(u\in R_{2}\) and \(v\in R_{1}\) if \(uv\in E^{+}\) and \(d_{uv}\leqslant 1/4\). Then we have precisely that \[S_{13}^{+}=\sum_{u\in R_{2}}\deg_{H}(u)^{p}.\] We will show that \[\boxed{S_{13}^{+}=\sum_{u\in R_{2}}\deg_{H}(u)^{p}\leqslant 4^{p-1}\cdot \sum_{v\in R_{1}}|N_{v}^{+}|^{p}\leqslant 4^{p-1}\cdot((20/3)^{p}+2+2\cdot 4^{p}) \cdot\mathsf{OPT}^{p}} \tag{2}\] where the last bound follows from Proposition 3, which we establish separately below. We will bound via double counting the quantity \(L\), defined below. First we upper bound \(L\). Let \(N_{H}(\cdot)\) denote the neighborhoods in \(H\) of the vertices. \[L :=\sum_{f=uv\in F}\left(\deg_{H}(u)+\deg_{H}(v)\right)^{p-1}\] \[\leqslant\sum_{v\in R_{1}}\sum_{u\in N_{H}(v)}\left(\deg_{H}(v)+ \deg_{H}(u)\right)^{p-1}\] \[\leqslant\sum_{v\in R_{1}}\sum_{u\in N_{H}(v)}\left(|N_{v}^{+}|+| N_{u}^{+}|\right)^{p-1}\] \[\leqslant\sum_{v\in R_{1}}\sum_{u\in N_{H}(v)}4^{p-1}\cdot|N_{v}^ {+}|^{p-1} \tag{3}\] \[\leqslant 4^{p-1}\cdot\sum_{v\in R_{1}}|N_{v}^{+}|\cdot|N_{v}^{+}|^{ p-1}\] \[=4^{p-1}\cdot\sum_{v\in R_{1}}|N_{v}^{+}|^{p}\] where in (3) we've used Proposition 2. 
Note that \(L\) is upper bounded by the right-hand side in (2). Now it just remains to show that \(L\) is lower bounded by the left-hand side in (2). \[L =\sum_{f=uv\in F}\left(\deg_{H}(u)+\deg_{H}(v)\right)^{p-1} \tag{4}\] \[=\sum_{u\in R_{2}}\sum_{v\in N_{H}(u)}\left(\deg_{H}(u)+\deg_{H} (v)\right)^{p-1}\] (5) \[\geqslant\sum_{u\in R_{2}}\sum_{v\in N_{H}(u)}\deg_{H}(u)^{p-1}\] (6) \[=\sum_{u\in R_{2}}\deg_{H}(u)\cdot\deg_{H}(u)^{p-1}\] (7) \[=\sum_{u\in R_{2}}\deg_{H}(u)^{p} \tag{8}\] which is what we sought to show. Now we bound \(S_{14}^{+}\). \[S_{14}^{+} \leqslant\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}\cap C(u)}\frac{|N_ {u}^{+}\cap N_{v}^{-}|+|N_{u}^{-}\cap N_{v}^{+}|}{|N_{u}^{+}\cup N_{v}^{+}|} \right)^{p}\] \[\leqslant\sum_{u\in V}\left(\sum_{v\in N_{u}^{+}}\frac{y(u)+y(v)} {|N_{u}^{+}\cup N_{v}^{+}|}\right)^{p}\leqslant\sum_{u\in V}|N_{u}^{+}|^{p-1} \sum_{v\in N_{u}^{+}}\frac{(y(u)+y(v))^{p}}{|N_{u}^{+}\cup N_{v}^{+}|^{p}}\] \[\leqslant 2^{p}\sum_{u\in V}\sum_{v\in N_{u}^{+}}|N_{u}^{+}|^{p-1} \cdot\frac{y(u)^{p}}{|N_{u}^{+}\cup N_{v}^{+}|^{p}}+2^{p}\sum_{u\in V}\sum_{v \in N_{u}^{+}}|N_{u}^{+}|^{p-1}\cdot\frac{y(v)^{p}}{|N_{u}^{+}\cup N_{v}^{+}|^ {p}}.\] In the second line, the first inequality uses the fact that for \(w\in(N_{u}^{+}\cap N_{v}^{-})\cup(N_{u}^{-}\cap N_{v}^{+})\), then at least one of \((u,w),(v,w)\) is a disagreement, since \(v\in C(u)\) in the inner summation of the first line. The second inequality in the second line uses Jensen's inequality. To bound the first double sum above, we use an averaging argument: \[\sum_{u\in V}\sum_{v\in N_{u}^{+}}|N_{u}^{+}|^{p-1}\cdot\frac{y(u)^{p}}{|N_{u} ^{+}\cup N_{v}^{+}|^{p}}\leqslant\sum_{u\in V}\sum_{v\in N_{u}^{+}}\frac{y(u)^ {p}}{|N_{u}^{+}|}=\sum_{u\in V}y(u)^{p}=\mathsf{OPT}^{p}.\] To bound the second double sum, we first have to flip it: \[\sum_{u\in V}\sum_{v\in N_{u}^{+}}|N_{u}^{+}|^{p-1}\cdot\frac{y(v )^{p}}{|N_{u}^{+}\cup N_{v}^{+}|^{p}} =\sum_{v\in V}\sum_{u\in N_{u}^{+}}|N_{u}^{+}|^{p-1}\cdot\frac{y(v )^{p}}{|N_{u}^{+}\cup N_{v}^{+}|^{p}}\] \[\leqslant\sum_{v\in V}\sum_{u\in N_{u}^{+}}|N_{u}^{+}|^{p-1} \frac{y(v)^{p}}{|N_{u}^{+}|^{p-1}\cdot|N_{v}^{+}|}\] \[=\sum_{v\in V}y(v)^{p}=\mathsf{OPT}^{p}.\] In total, we have \[\boxed{S_{14}^{+}\leqslant 2\cdot 2^{p}\cdot\mathsf{OPT}^{p}=2^{p+1}\cdot \mathsf{OPT}^{p}}\] and \[\boxed{S_{12}^{+}\leqslant 2^{p}\cdot S_{13}^{+}+8^{p}\cdot S_{14}^{+} \leqslant 2^{p}\cdot 4^{p-1}\cdot((20/3)^{p}+2+2\cdot 4^{p})\cdot\mathsf{OPT}^{p} +8^{p}\cdot\mathsf{OPT}^{p}}\] Next we turn to bounding \(S_{11}^{+}\). Recall that \(R_{1}=\{u:|N_{u}^{-}\cap\{v:d_{uv}\leqslant 0.7\}|\geqslant\frac{10}{3}\cdot \Delta_{u}\}\) and \[S_{11}^{+}\leqslant\sum_{u\in R_{1}}|N_{u}^{+}\cap C(u)|^{p}\leqslant\sum_{u \in R_{1}}|N_{u}^{+}|^{p}.\] So it suffices to bound the right-hand side, which we do in the following proposition. **Proposition 3**.: _Let \(R_{1}\) be the set of \(u\) for which Step 3 of Definition 3 applies. Then_ \[\sum_{u\in R_{1}}|N_{u}^{+}|^{p}\leqslant((20/3)^{p}+2+2\cdot 4^{p})\cdot \mathsf{OPT}^{p}.\] Proof of Proposition 3.: For \(u\in R_{1}\), define \(R_{1}(u)=N_{u}^{-}\cap\{v:d_{uv}\leqslant 0.7\}\), so in particular \(|R_{1}(u)|\geqslant\frac{10}{3}\cdot\Delta_{u}\). Fix a vertex \(u\in R_{1}\). We consider a few cases. The crux of the argument is Case 2a(ii). **Case 1:** _At least a 0.15 fraction of \(N_{u}^{+}\) is in clusters other than \(C(u)\)._ Let \(u\in V^{1}\) be the vertices in this case. 
This means that \(0.15\cdot|N_{u}^{+}|\leqslant y(u)\), so \[\boxed{\sum_{u\in V^{1}}|N_{u}^{+}|^{p}\leqslant\sum_{u\in V^{1}}\frac{1}{0.15 ^{p}}y(u)^{p}\leqslant(20/3)^{p}\cdot\mathsf{OPT}^{p}.}\] **Case 2:** _At least a 0.85 fraction of \(N_{u}^{+}\) is in \(C(u)\)._ We further partition the cases based on how much \(R_{1}(u)\) intersects \(C(u)\). **Case 2a:** _At least half of \(R_{1}(u)\) is in clusters other than \(C(u)\)._ We partition into cases (just one more time!) based on the size of \(N_{u}^{-}\cap C(u)\). See Figure 2. * **Case 2a(i):** _At least half of \(R_{1}(u)\) is in clusters other than \(C(u)\) and \(|N_{u}^{-}\cap C(u)|\geqslant\Delta_{u}\)._ Let \(u\in V^{2a(i)}\) be the vertices in this case. Note that \(y(u)\geqslant|N_{u}^{-}\cap C(u)|\). Then \[\boxed{\sum_{u\in V^{2ai}}|N_{u}^{+}|^{p}=\sum_{u\in V^{2ai}}\Delta_{u}^{p} \leqslant\sum_{u\in V^{2ai}}|N_{u}^{-}\cap C(u)|^{p}\leqslant\sum_{u\in V^{2ai }}y(u)^{p}\leqslant\mathsf{OPT}^{p}.}\] * **Case 2a(ii):** _At least half of \(R_{1}(u)\) is in clusters other than \(C(u)\) and \(|N_{u}^{-}\cap C(u)|\leqslant\Delta_{u}\)._ Let \(u\in V^{2a(ii)}\) be the vertices in this case. Denote the vertices in \(R_{1}(u)\) that are in clusters other than \(C(u)\) by \(R_{1}^{\prime}(u)\). By definition of Case 2a(ii), \(|R_{1}^{\prime}(u)|\geqslant\frac{5}{3}\cdot\Delta_{u}\). A key fact we will use is that \(|C(u)|\leqslant 2\cdot\Delta_{u}\): \[|C(u)|=|N_{u}^{-}\cap C(u)|+|N_{u}^{+}\cap C(u)|\leqslant\Delta_{u}+\Delta_{ u}=2\cdot\Delta_{u}.\] For \(u\in V^{2a(ii)}\) and \(w\in N_{u}^{+}\cap C(u)\), define \[\varphi(u,w)=|R_{1}^{\prime}(u)\cap N_{w}^{+}|.\] Each \(w\in N_{u}^{+}\cap C(u)\) dispenses \[\frac{\varphi(u,w)^{p}}{|C(u)|}\] charge to \(u\). Also, observe that for \(v\in R_{1}^{\prime}(u)\), we have that \(d_{uv}\leqslant 0.7\), so we know by Proposition 1 that \(|N_{u}^{+}\cap N_{v}^{+}\cap C(u)|\geqslant 0.15\cdot|N_{u}^{+}|\). This implies that \[\sum_{w\in N_{u}^{+}\cap C(u)}|R_{1}^{\prime}(u)\cap N_{w}^{+}| =\sum_{w\in N_{u}^{+}\cap C(u)}\sum_{v\in R_{1}^{\prime}(u)\cap N _{w}^{+}}1=\sum_{v\in R_{1}^{\prime}(u)}\sum_{\begin{subarray}{c}w\in N_{u}^ {+}\\ \cap C(u)\cdot N_{u}^{+}\end{subarray}}1\] \[=\sum_{v\in R_{1}^{\prime}(u)}|C(u)\cap N_{u}^{+}\cap N_{v}^{+}| \geqslant\sum_{v\in R_{1}^{\prime}(u)}0.15\cdot|N_{u}^{+}|\] \[=0.15\cdot|N_{u}^{+}|\cdot|R_{1}^{\prime}(u)|\geq 0.15\cdot\Delta_{u}\cdot\frac{ 5}{3}\Delta_{u}=0.25\cdot\Delta_{u}^{2}.\] First we lower bound the amount of charge each \(u\) satisfying Case 2a(ii) receives. The amount of charge each such \(u\) receives is \[\frac{1}{|C(u)|}\sum_{w\in N_{u}^{+}\cap C(u)}\varphi(u,w)^{p} \geq\frac{1}{|C(u)|}\cdot\frac{1}{|N_{u}^{+}\cap C(u)|^{p-1}}\cdot \left(\sum_{w\in N_{u}^{+}\cap C(u)}\varphi(u,w)\right)^{p}\] \[\geq\frac{1}{2\Delta_{u}}\cdot\frac{1}{\Delta_{u}^{p-1}}\cdot \left(0.25\cdot\Delta_{u}^{2}\right)^{p}\] \[\geq\frac{1}{2}\cdot 0.25^{p}\cdot|N_{u}^{+}|^{p},\] where in the first inequality we have applied Jensen's inequality. Next we need to upper bound the amount of charge dispensed in total to _all_\(u\) satisfying Case 2a(ii). Note by definition that \(\varphi(u,w)\leq y(w)\). Each vertex \(w\in V\) dispenses at most \[\frac{y(w)^{p}}{|C(u)|}=\frac{y(w)^{p}}{|C(w)|}\] charge to each \(u\in C(w)\cap N_{w}^{+}\). So in total each \(w\) dispenses at most \(|C(w)|\cdot y(w)^{p}/|C(w)|=y(w)^{p}\) charge to all \(u\) satisfying Case 2a(ii). 
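The exchange of the order of summation above is an instance of a double-counting identity that holds for any symmetric positive adjacency and any two vertex sets; a quick numerical sketch, with \(A\) playing the role of \(N_{u}^{+}\cap C(u)\) and \(B\) the role of \(R_{1}^{\prime}(u)\):

```python
# Double-counting check: for a symmetric positive adjacency and arbitrary vertex sets A, B,
#     sum_{w in A} |B & N_w^+|  ==  sum_{v in B} |A & N_v^+|,
# which is the exchange of the order of summation performed for phi(u, w) above.
import itertools
import random

def check_exchange(n=40, p=0.3, seed=1):
    rng = random.Random(seed)
    nbr = {u: {u} for u in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            nbr[u].add(v)
            nbr[v].add(u)
    A = {u for u in range(n) if rng.random() < 0.5}
    B = {u for u in range(n) if rng.random() < 0.5}
    lhs = sum(len(B & nbr[w]) for w in A)
    rhs = sum(len(A & nbr[v]) for v in B)
    assert lhs == rhs
    print("both orders of summation give", lhs)

check_exchange()
```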
Now we put together the lower and upper bounds on the total charge dispensed: \[\sum_{w\in V}y(w)^{p} \geq\text{charge dispensed}\geq\sum_{u\in V^{2a(ii)}}\frac{1}{|C (u)|}\sum_{w\in N_{u}^{+}\cap C(u)}\varphi(u,w)^{p}\] \[\geq\sum_{u\in V^{2a(ii)}}\frac{1}{2}\cdot 0.25^{p}\cdot|N_{u}^{+}| ^{p}.\] In all, \[\boxed{\sum_{u\in V^{2a(ii)}}|N_{u}^{+}|^{p}\leq 2\cdot 4^{p}\cdot\sum_{w\in V }y(w)^{p}\leq 2\cdot 4^{p}\cdot\mathsf{OPT}^{p}.}\] **Case 2b:** _At least half of \(R_{1}(u)\) is in \(C(u)\)._ Let \(u\in V^{2b}\) be the vertices in this case. Denote the vertices in \(R_{1}(u)\) that are in \(C(u)\) by \(R_{1}^{\prime\prime}(u)\). By definition of Case 2b, \(|R_{1}^{\prime\prime}(u)|\geq\frac{5}{3}\cdot\Delta_{u}\). Since every vertex in \(R^{\prime\prime}(u)\) is in \(N_{u}^{-}\), there are at least \(|R^{\prime\prime}(u)|\) disagreements incident to \(u\). So \[y(u)\geq|R^{\prime\prime}(u)|\geq\frac{5}{3}\cdot\Delta_{u}\] which gives \[\boxed{\sum_{u\in V^{2b}}|N_{u}^{+}|^{p}=\sum_{u\in V^{2b}}\Delta_{u}^{p}\leq \sum_{u\in V^{2b}}y(u)^{p}\leq\mathsf{OPT}^{p}}.\] Adding the terms in the boxed expressions across all cases, the proposition follows. So we have \[\boxed{S_{11}^{+}\leq\sum_{u\in R_{1}}|N_{u}^{+}|^{p}\leq((20/3)^{p}+2+2\cdot 4 ^{p})\cdot\mathsf{OPT}^{p}.}\] Adding together all the cases, we conclude that \[(S^{+})^{p} \leq 2^{p}\cdot(S_{1}^{+}+S_{2}^{+})\] \[\leq 2^{p}\cdot(S_{1}^{+}S_{12^{+}}+S_{2}^{+})\] \[\leq 2^{p}\cdot[(8^{p}/2+1)((20/3)^{p}+2+2\cdot 4^{p})+8^{p}+1] \cdot\mathsf{OPT}^{p}.\] #### 4.2.2 Fractional cost of negative edges in \(\ell_{p}\)-norms This section bounds the cost of negative edges. In the analysis, the meanings of \(\mathcal{C}\), \(C(\cdot)\), and \(y\) are as in the previous subsection. **Lemma 6**.: _For \(p\in\mathbb{Z}_{\geqslant 1}\), the fractional cost of the adjusted correlation metric \(f\) in the \(\ell_{p}\)-norm objective for the set of negative edges is a constant factor approximation to the optimal, i.e.,_ \[(S^{-})^{p}=\sum_{u\in V}\left(\sum_{v\in N_{u}^{-}}(1-f_{uv})\right)^{p} \leqslant 2^{p}((200/9)^{p}+1+(10/3)^{p}+2\cdot(20/3)^{p})\cdot\textsf{OPT}^{ p}.\] Proof.: We have \[(S^{-})^{p} =\sum_{u\in V}\left(\sum_{v\in N_{u}^{-}}(1-f_{uv})\right)^{p}\] \[\leqslant 2^{p}\underbrace{\sum_{u\in V}\left(\sum_{v\in N_{u}^{- }\cap C(u)}(1-f_{uv})\right)^{p}}_{S_{1}^{-}}+2^{p}\underbrace{\sum_{u\in V} \left(\sum_{v\in N_{u}^{-}\cap\overline{C(u)}}(1-f_{uv})\right)^{p}}_{S_{2}^ {-}}.\] It is easy to bound \(S_{1}^{-}\) by using the trivial upper bound \(1-f_{uv}\leqslant 1\): \[\boxed{S_{1}^{-}=\sum_{u\in V}\left(\sum_{v\in N_{u}^{-}\cap C(u)}(1-f_{uv}) \right)^{p}\leqslant\sum_{u\in V}\left(\sum_{v\in N_{u}^{-}\cap C(u)}1\right)^ {p}\leqslant\sum_{u\in V}y(u)^{p}=\textsf{OPT}^{p},}\] where we have used that every edge \((u,v)\in E^{-}\) with \(v\in C(u)\) is a disagreement incident to \(u\). Next, we bound \(S_{2}^{-}\). Let \(R_{1}\) and \(R_{2}\) be as in the previous subsection: \(R_{1}=\{u:|N_{u}^{-}\cap\{v:d_{uv}\leqslant 0.7\}|\geqslant\frac{10}{3}\cdot \Delta_{u}\}\) and \(R_{2}=V\backslash R_{1}\). For \(u\in R_{2}\), define \[V_{u}=\{v:v\in N_{u}^{-}\cap\overline{C(u)},d_{uv}\leqslant 0.7\}.\] Note that the definition of \(V_{u}\) is the same as \(R_{1}^{\prime}(u)\) in the previous subsection, but here \(V_{u}\) is only defined for \(u\in R_{2}\), while \(R_{1}^{\prime}(u)\) was defined for \(u\in R_{1}\). For \(u\in R_{1}\), we have \(1-f_{uv}=0\) for every \(v\in V\backslash\{u\}\). 
So the outer sum in \(S_{2}^{-}\) only need be taken over \(u\in R_{2}\): \[S_{2}^{-}=\sum_{u\in R_{2}}\left(\sum_{v\in N_{u}^{-}\cap\overline{C(u)}}(1-f _{uv})\right)^{p}=\sum_{u\in R_{2}}\left(\sum_{\begin{subarray}{c}v:v\in N_{u }^{-}\cap\overline{C(u)},\\ d_{uv}\leqslant 0.7\end{subarray}}(1-d_{uv})\right)^{p}\leqslant\sum_{u\in R_{2}}|V_{u }|^{p}\] In the second equality, we have used that if \(u\in R_{2}\) and \(v\in N_{u}^{-}\), then \(f_{uv}=d_{uv}\), unless \(f_{uv}\) was rounded up to \(1\) in Step 2 of Definition 3, which happens when \(d_{uv}>0.7\). A key observation is that since \(u\in R_{2}\), it is the case that \(|V_{u}|\leqslant\frac{10}{3}\cdot\Delta_{u}\). Fix a vertex \(u\in R_{2}\). We consider a few cases. **Case 1:** _At least a 0.15 fraction of \(N_{u}^{+}\) is in clusters other than \(C(u)\)._ Define \(V^{1}\) to be the set of \(u\in R_{2}\) that satisfy Case 1. Then for \(u\in V^{1}\), \(0.15\cdot|N_{u}^{+}|\leqslant y(u)\), and further \[|V_{u}|\leqslant\frac{10}{3}\Delta_{u}\leqslant\frac{1}{0.15}\cdot\frac{10}{3} y(u)=\frac{200}{9}y(u).\] So \[\boxed{\sum_{u\in V^{1}}|V_{u}|^{p}\leqslant(200/9)^{p}\cdot\sum_{u\in V^{1} }y(u)^{p}\leqslant(200/9)^{p}\cdot\textsf{OPT}^{p}.}\] **Case 2:**_At least a 0.85 fraction of \(N_{u}^{+}\) is in \(C(u)\)._ Define \(V^{2}\) to be the set of \(u\in R_{2}\) that satisfy Case 2. Fix \(u\in V^{2}\) and \(v\in V_{u}\). Define \(N_{u,v}=N_{u}^{+}\cap N_{v}^{+}\cap C(u)\). Since \(d_{uv}\leqslant 0.7\) and by the assumption of this case, using Proposition 1 we have \[|N_{u,v}|=|N_{u}^{+}\cap N_{v}^{+}\cap C(u)|\geqslant 0.15\cdot\Delta_{u}.\] Observe that since \(v\notin C(u)\) for \(v\in V_{u}\), \((v,w)\) is a (positive) disagreement for all \(w\in N_{u,v}\). **Case 2a:**\(|N_{u}^{-}\cap C(u)|\geqslant\Delta_{u}\). Define \(V^{2a}\) to be the set of \(u\in V^{2}\) that satisfy Case 2a. Since all edges \((u,v)\) with \(v\in N_{u}^{-}\cap C(u)\) are disagreements, we have \(y(u)\geqslant\Delta_{u}\). Recalling that \(|V_{u}|\leqslant\frac{10}{3}\cdot\Delta_{u}\) for \(u\in R_{2}\), we have \[\boxed{\sum_{u\in V^{2a}}|V_{u}|^{p}\leqslant\sum_{u\in V^{2a}}(10/3\cdot \Delta_{u})^{p}\leqslant(10/3)^{p}\cdot\sum_{u\in V^{2a}}y(u)^{p}\leqslant(10 /3)^{p}\cdot\mathsf{OPT}^{p}.}\] **Case 2b:**\(|C(u)|\leqslant 2\Delta_{u}\). Define \(V^{2b}\) to be the \(u\in V^{2}\) satisfying Case 2b. Fix \(w\in N_{u}^{+}\cap C(u)\) and \(u\in V^{2b}\). Define \[\varphi(u,w)=|V_{u}\cap N_{w}^{+}|.\] In other words, \(\varphi(u,w)\) is the number of \(v\in V_{u}\) such that \(w\in N_{u,v}\). Each \(w\in N_{u}^{+}\cap C(u)\) dispenses \[\frac{\varphi(u,w)^{p}}{|C(u)|}\] charge to \(u\). Also, \[\sum_{w\in N_{u}^{+}\cap C(u)}\varphi(u,w) =\sum_{w\in N_{u}^{+}\cap C(u)}|V_{u}\cap N_{w}^{+}|=\sum_{w\in N _{u}^{+}\cap C(u)}\sum_{v\in V_{u}\cap N_{u}^{+}}1\] \[=\sum_{v\in V_{u}}\sum_{w\in N_{u,v}}1=\sum_{v\in V_{u}}|N_{u,v} |\geqslant|V_{u}|\cdot 0.15\cdot\Delta_{u}.\] Now we can lower bound the amount of charge each \(u\) satisfying Case 2b receives. The amount of charge each such \(u\) receives is \[\frac{1}{|C(u)|}\sum_{w\in N_{u}^{+}\cap C(u)}\varphi(u,w)^{p} \geqslant\frac{1}{|C(u)|}\cdot\frac{1}{|N_{u}^{+}\cap C(u)|^{p-1}} \left(\sum_{w\in N_{u}^{+}\cap C(u)}\varphi(u,w)\right)^{p}\] \[\geqslant\frac{1}{2\Delta_{u}}\cdot\frac{1}{\Delta_{u}^{p-1}} \left(|V_{u}|\cdot 0.15\cdot\Delta_{u}\right)^{p}=\frac{1}{2}\cdot 0.15^{p}\cdot|V_{u} |^{p},\] where in the first line we used Jensen's inequality. 
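The form of Jensen's inequality used here and in the previous subsection is the power-mean bound \((x_{1}+\cdots+x_{k})^{p}\leqslant k^{p-1}(x_{1}^{p}+\cdots+x_{k}^{p})\) for nonnegative terms and \(p\geqslant 1\); a quick numerical sanity check:

```python
# Power-mean form of Jensen's inequality used in the charge bounds:
#     (x_1 + ... + x_k)^p  <=  k^(p-1) * (x_1^p + ... + x_k^p)   for x_i >= 0 and p >= 1.
# Equivalently, sum_i x_i^p >= (sum_i x_i)^p / k^(p-1), which is how it enters above.
import random

def check_power_mean(trials=1000, seed=2):
    rng = random.Random(seed)
    for _ in range(trials):
        k = rng.randint(1, 12)
        p = rng.randint(1, 6)
        xs = [10 * rng.random() for _ in range(k)]
        lhs = sum(xs) ** p
        rhs = k ** (p - 1) * sum(x ** p for x in xs)
        assert lhs <= rhs * (1 + 1e-12)
    print("power-mean inequality held on all trials")

check_power_mean()
```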
To upper bound the amount of charge dispensed in total to _all_\(u\) satisfying Case 2b, first note that \(\varphi(u,w)\leqslant y(w)\). Also, each vertex \(w\in V\) only distributes charge to \(u\in C(w)\cap N_{w}^{+}\), and the amount of charge distributed to each such \(u\) is \[\frac{\varphi(u,w)^{p}}{|C(u)|}=\frac{\varphi(u,w)^{p}}{|C(w)|}\leqslant\frac{ y(w)^{p}}{|C(w)|}\] so that in total each \(w\) dispenses at most \[\frac{y(w)^{p}}{|C(w)|}\cdot|C(w)|\leqslant y(w)^{p}\] charge. Putting together the lower and upper bounds on the amount of charge dispensed: \[\sum_{w\in V}y(w)^{p}\geqslant\text{total charge dispensed}\] \[\geqslant\sum_{u\in V^{2b}}\frac{1}{|C(u)|}\sum_{w\in N^{+}_{u} \cap C(u)}\varphi(u,w)^{p}\] \[\geqslant\sum_{u\in V^{2b}}\frac{1}{2}\cdot 0.15^{p}\cdot|V_{u}|^{p}.\] So in all, \[\boxed{\sum_{u\in V^{2b}}|V_{u}|^{p}\leqslant 2\cdot(20/3)^{p}\cdot\sum_{w\in V}y (w)^{p}=2\cdot(20/3)^{p}\cdot\mathsf{OPT}^{p}.}\] Adding together all the different cases, we see that \[(S^{-})^{p}\leqslant 2^{p}((200/9)^{p}+1+(10/3)^{p}+2\cdot(20/3)^{p})\cdot \mathsf{OPT}^{p}.\] ### Proofs of Theorem 1 and Corollary 1 Here we show that Theorem 1 and Corollary 1 follow directly from the preceding lemmas. Proof of Theorem 1.: First we show that Lemma 4 implies that the clustering resulting from inputting \(f\) into the KMZ rounding algorithm is \(O(1)\)-approximate in any \(\ell_{p}\)-norm. Since the rounding algorithm does not depend on \(p\), the clustering will be the same for all \(p\). Let \(\mathcal{C}^{*}\) be the clustering produced by running the KMZ rounding algorithm with the adjusted correlation metric \(f\) as input. Let \(\mathsf{ALG}(u)\) be the number of edges incident to \(u\) that are disagreements with respect to \(\mathcal{C}^{*}\). From [19] and Lemmas 1 and 2, we have that for every \(u\in V\), \[\mathsf{ALG}(u)\leqslant 12\cdot y_{u}\] where \(y_{u}\) is as in LP 2.2 when taking \(x=f\). So \(||y||_{p}\) is the fractional cost of \(f\) in the \(\ell_{p}\)-norm. Thus we have \[||\mathsf{ALG}||_{p}\leqslant 12\cdot||y||_{p}\leqslant 12\cdot 529\cdot \mathsf{OPT}(p)=6348\cdot\mathsf{OPT}(p)\] where \(||\mathsf{ALG}||_{p}\) is the objective value of \(\mathcal{C}^{*}\) in the \(\ell_{p}\)-norm and \(\mathsf{OPT}(p)\) is the optimal objective value in the \(\ell_{p}\)-norm. The last inequality follows from Lemma 4. To see that the overall runtime is \(O(n^{\omega})\), we first recall from the analysis in [14] that computing the correlation metric \(d\) takes time \(O(n^{\omega})\), and the KMZ rounding algorithm takes time \(O(n^{2})\). We just have to show the post-processing of \(d\) in Steps 2 and 3 of Definition 3 that were done in order to obtain the adjusted correlation metric \(f\) can be done quickly. Indeed, Step 2 takes \(O(n^{2})\) time as it simply iterates through the edges. Step 3 also takes \(O(n^{2})\) time, since it visits each vertex and iterates through the neighbors. Thus, the runtime remains \(O(n^{\omega})\). Proof of Corollary 1.: In [14], they observe that to reduce the run-time from \(O(n^{\omega})\) to \(O(n\Delta^{2}\log n)\) for graphs with maximum positive degree bounded by \(\Delta\), one can compute the correlation metric \(d\) in \(O(n\Delta^{2})\) time and then run the KMZ rounding algorithm in \(O(n\Delta^{2}\log n)\) time (see the proof of Corollary 1.2 in Appendix D of [14]). We need only compute \(f_{uv}\) when \(f_{uv}<1\). Otherwise, we can handle \(f_{uv}=1\) implicitly. 
As in [14], for each \(u\in V\), we can maintain a list of \(d_{uv}\) for all \(v\) with \(d_{uv}<1\). Computing these lists takes \(O(n\Delta^{2})\) time in total. (This is because there are at most \(\Delta^{2}\) vertices \(v\) that are distance two away from \(u\) in the positive subgraph.) Steps 2 and 3 only prune these lists further, since some distances that are below \(1\) may be raised to \(1\); importantly, no distance that is already equal to \(1\) under \(d\) will be reduced in Steps 2 and 3. To carry out Step 2, for each vertex \(u\) it takes \(O(\Delta^{2})\) time to iterate through the list for \(u\) and raise the appropriate \(d_{uv}\) to \(1\). Similarly, to carry out Step 3, for each vertex \(u\) it takes \(O(\Delta^{2})\) time to determine whether the condition in Step 3 is satisfied; if it is, we just handle the vertex \(u\) implicitly, as all distances \(f_{uv}\) are set to \(1\). Since each vertex's list still has size at most \(\Delta^{2}\) after the post-processing in Steps 2 and 3, the KMZ rounding algorithm with \(f\) as input takes time \(O(n\Delta^{2}\log n)\), by the exact same argument as in [14]. Conclusion This paper considered correlation clustering on unweighted, complete graphs, a problem that arises in many settings including community detection and the study of large networks. All previous works that study minimizing the \(\ell_{p}\)-norm (for \(p\in\mathbb{Z}_{>1}\)) of the disagreement vector rely on solving a large, convex relaxation (which is costly to the algorithm's run-time) and produce a solution that is only \(O(1)\)-approximate for one specific value of \(p\). We innovate upon this rich line of work by (1) giving the first combinatorial algorithm for the \(\ell_{p}\)-norms for \(p\in\mathbb{Z}_{>1}\), (2) designing scalable algorithms for this practical problem, and (3) obtaining solutions that are \(O(1)\)-approximate for all \(\ell_{p}\)-norms (for \(p\in\mathbb{Z}_{\geq 1}\cup\{\infty\}\)) simultaneously. We note this last point is particularly important, as such solutions are good in both global and local senses, and thus may be more desirable than typical optimal or approximate solutions for correlation clustering. The existence of these solutions reveals a surprising structural property of correlation clustering. It is of interest to implement the KMZ algorithm with the adjusted correlation metric as input, and empirically gain an understanding of how good the adjusted correlation metric is for different \(\ell_{p}\)-norms. We suspect that our analysis is lossy (for instance, we did not attempt to optimize constants), and that the approximation obtained would be of much better quality than our analysis guarantees. Finally, it would be interesting if similar results can be obtained for weighted correlation clustering.
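As a starting point for the implementation suggested in the conclusion, the following is a minimal sketch of the pre-processing that produces the adjusted correlation metric \(f\) from a positive edge set, as we read it off the proofs above. The authoritative statement is Definition 3 earlier in the paper, so the details below (the closed-neighbourhood convention for \(N_{u}^{+}\), the convention \(\Delta_{u}=|N_{u}^{+}|\), and Step 2 being applied only to negative edges) should be treated as assumptions.

```python
# A minimal sketch of the pre-processing producing the adjusted correlation metric f.
# Assumptions: N_u^+ is the closed positive neighbourhood, Delta_u = |N_u^+|,
# and Step 2 is applied to negative edges only.
import itertools

def adjusted_correlation_metric(n, pos_edges):
    pos = {frozenset(e) for e in pos_edges}
    nbr = {u: {u} for u in range(n)}                      # closed neighbourhoods N_u^+
    for e in pos:
        u, v = tuple(e)
        nbr[u].add(v)
        nbr[v].add(u)

    d = {}                                                # correlation metric d_uv
    for u, v in itertools.combinations(range(n), 2):
        d[(u, v)] = 1 - len(nbr[u] & nbr[v]) / len(nbr[u] | nbr[v])

    f = dict(d)
    # Step 2 (as read from the proof of Lemma 6): negative-edge distances above 0.7 go up to 1.
    for (u, v), duv in d.items():
        if frozenset((u, v)) not in pos and duv > 0.7:
            f[(u, v)] = 1.0
    # Step 3: if |N_u^- intersect {v : d_uv <= 0.7}| >= (10/3) * Delta_u, set f_uv = 1 for all v != u.
    for u in range(n):
        close_neg = sum(1 for v in range(n) if v != u
                        and frozenset((u, v)) not in pos
                        and d[tuple(sorted((u, v)))] <= 0.7)
        if close_neg >= (10 / 3) * len(nbr[u]):
            for v in range(n):
                if v != u:
                    f[tuple(sorted((u, v)))] = 1.0
    return f

f = adjusted_correlation_metric(6, [(0, 1), (1, 2), (0, 2), (3, 4)])
print(sorted(f.items()))
```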
2310.14415
A New Discriminant for the Hardy Z-Function and the Corrected Gram's law
In this paper, we introduce a novel variational framework rooted in algebraic geometry for the analysis of the Hardy $Z$-function. Our primary contribution lies in the definition and exploration of $\Delta_n(\overline{a})$, a newly devised discriminant that measures the realness of consecutive zeros of $Z(t)$. Our investigation into $\Delta_n(\overline{a})$ and its properties yields a wealth of compelling insights into the zeros of $Z(t)$, including the corrected Gram's law, the second-order approximation of $\Delta_n(\overline{a})$, and the discovery of the G-B-G repulsion relation. Collectively, these results provide compelling evidence supporting a new plausibility argument for the Riemann hypothesis.
Yochay Jerby
2023-10-22T21:09:43Z
http://arxiv.org/abs/2310.14415v1
# A new discriminant for the Hardy Z-function and the corrected Gram's law ###### Abstract. In this paper, we introduce a novel variational framework rooted in algebraic geometry for the analysis of the Hardy \(Z\)-function. Our primary contribution lies in the definition and exploration of \(\Delta_{n}(\overline{a})\), a newly devised discriminant that measures the realness of consecutive zeros of \(Z(t)\). Our investigation into \(\Delta_{n}(\overline{a})\) and its properties yields a wealth of compelling insights into the zeros of \(Z(t)\), including the corrected Gram's law, the second-order approximation of \(\Delta_{n}(\overline{a})\), and the discovery of the G-B-G repulsion relation. Collectively, these results provide compelling evidence supporting a new plausibility argument for the Riemann hypothesis. ## 1. Introducing the Summary of the Main Results ### Lack of Plausibility Argument for RH The Hardy Z-function, denoted as \(Z(t)\), is defined by: \[Z(t)=e^{i\theta(t)}\zeta\left(\frac{1}{2}+it\right)\] where \(\theta(t)\) is the Riemann-Siegel \(\theta\)-function, given by the equation: \[\theta(t)=\arg\left(\Gamma\left(\frac{1}{4}+\frac{it}{2}\right)\right)-\frac {t}{2}\log(t),\] see [10]. The Riemann Hypothesis (RH), which conjectures that all non-trivial zeros of the Riemann zeta function have real part \(\sigma=\frac{1}{2}\), can equivalently be stated that all the zeros of the Hardy Z-function are real. The conjecture stands as one of the most central and enduring challenges in analytic number theory and its resolution holds profound implications for the distribution of prime numbers and many other areas of mathematics. Despite the Riemann Hypothesis's (RH) deceptively straightforward assertion and its empirical validity, as observed through extensive numerical computations, the foundation behind its truth remains elusive. The depth of its ties to myriad pivotal areas of mathematics, ranging from the distribution of prime numbers to quantum chaos and random matrix theory, further accentuates its enigma. Over the years, many have remarked on the absence of any solid heuristic or plausibility argument underpinning the hypothesis. For instance, Edwards, in his seminal work [4], emphasizes this gap by noting the absence of any tangible reason to deem the RH as "probable", even though its validity for an extensive range of roots above the real axis seems to hint otherwise. It is worth noting that the latest numerical verifications, conducted by Platt and Trudgian, confirm the RH up to heights of \(3\cdot 10^{12}\) above the real axis [16]. This stark lack of a compelling plausibility argument contributes to the Riemann Hypothesis remaining one of the most tantalizing and resilient unsolved problems in mathematics. ### Discriminants and Real Zeros Although the Riemann Hypothesis concerns itself with the zeros of the transcendental function \(Z(t)\) the family of functions for which we can truly offer a full, closed description of their zeros is notably small, encompassing polynomials of degree less than five, certain trigonometric functions, and the like. When considering the Hardy \(Z\)-function and its chaotic nature, any hopes of a 'closed formula' for its zeros become all but impossible. However, the Riemann Hypothesis doesn't explicitly demand us to _find_ the zeros of the \(Z(t)\) function. The actual question is: _are all the zeros of \(Z(t)\) real?_. 
By drawing an analogy with quadratic functions \(a_{2}x^{2}+a_{1}x+a_{0}\), we recognize an important invariant addressing this very question -- the discriminant, given by \(\Delta(a_{2},a_{1},a_{0})=a_{1}^{2}-4a_{2}a_{0}\). The defining feature of this discriminant is that the zeros of \(a_{2}x^{2}+a_{1}x+a_{0}\) are real if and only if \(\Delta(a_{2},a_{1},a_{0})>0\). The discriminant captures this essential feature of the zeros, without requiring the direct computation of the zeros. Within the framework of the Riemann Hypothesis, this is _precisely_ the kind of perspective we seek. Notably, even though the closed formula for the zeros of the quadratic equation doesn't generalize directly to polynomials of higher degrees, the discriminant's concept can indeed be expanded to polynomials of any degree, as well as systems of polynomial equations and algebraic varieties [9]. In this work, we introduce an extension of the idea of the discriminant into the transcendental realm of the \(Z(t)\) function. By their very nature, discriminants act as an invariant for a family of functions. Building upon this, for any \(N\in\mathbb{N}\) we introduce the novel concept of the \(A\)-parametrized space \(\mathcal{Z}_{N}\) which is an \(N\)-dimensional space of variations of \(Z(t)\) in the region \(2N\leq t\leq 2N+1\). In this study, we introduce, \(\Delta_{n}(\overline{a})\), the local \(n\)-th discriminant for a pair of consecutive zeros \(t_{n}\) and \(t_{n+1}\) within this region. This newly defined discriminant is shown in this work to unveil a wealth of significant new results regarding the zeros of \(Z(t)\). In order to describe the construction we need to recall the classical Gram's law, which is central to all that would follow. ### The Classical Gram's Law For any integer \(n\), _the \(n\)-th Gram point_\(g_{n}\) is the unique solution of the equation \[\theta(g_{n})=\pi n.\] These points take their name from J.P. Gram, who introduced a compelling observation known as Gram's law in his work [6]. According to Gram's law, the Hardy Z-function \(Z(t)\) at a Gram point \(g_{n}\) generally satisfies the inequality \[(-1)^{n}Z(g_{n})>0. \tag{1}\] Consequently, a Gram point that upholds this inequality is termed _good_, whereas a point that fails to meet this condition is labeled _bad_. The importance of Gram's law arises from the fact that because \(Z(t)\) is a real function, whenever two consecutive Gram points, say \(g_{n}\) and \(g_{n+1}\), are good, a zero of \(Z(t)\) is guaranteed to exist between them. This observation would have substantial consequences for the RH if Gram's law were to hold for all integers, indeed, it would be a proof. However, the existence of bad Gram points, which violate Gram's law, introduces complexities into this potentially elegant correspondence. Indeed, the earliest known violation of Gram's law occurs at \(n=126\) and was discovered by Hutchinson's findings [7]. The coexistence of Gram's law's overwhelming statistical success and its periodic exceptions brings forth the question: Is Gram's law an intrinsic property of \(Z(t)\) or merely an observational regularity? Edwards for instance, in [4], reflects on Gram's law as an initial, perhaps unsophisticated, attempt to predict the oscillatory behaviour of \(Z(t)\). He mentions that, surprisingly, it turned out to be much more successful than what might have been initially anticipated. 
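Gram's law is easy to probe numerically. The following minimal sketch, assuming the Python library mpmath (whose `grampoint` and `siegelz` implement \(g_{n}\) and \(Z(t)\)), checks (1) for the first two hundred Gram points and recovers Hutchinson's first violation at \(n=126\).

```python
# Empirical check of the classical Gram's law (a sketch, assuming mpmath is available):
# g_n solves theta(g_n) = pi * n, and the law asserts (-1)^n Z(g_n) > 0.
from mpmath import mp

mp.dps = 20

def is_good(n):
    g = mp.grampoint(n)                     # theta(grampoint(n)) = pi * n
    return (-1) ** n * mp.siegelz(g) > 0

bad = [n for n in range(200) if not is_good(n)]
print(bad)                                  # the first violation is expected at n = 126
```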
### The Discriminant and the Corrected Gram's Law The classical discriminant of a quadratic function \(F(z;\overline{a})=a_{2}z^{2}+a_{1}z+a_{0}\) is given by \(\Delta(\overline{a})=a_{1}^{2}-4a_{0}a_{2}\), which is itself a quadratic expression in the parameters \(\overline{a}\). The extremal point of \(F(z;\overline{a})\) is \(g(\overline{a})=-\frac{a_{1}}{2a_{2}}\) and hence \[F(g(\overline{a});\overline{a})=\frac{a_{1}^{2}}{4a_{2}}-\frac{a_{1}^{2}}{2a_ {2}}+a_{0}=-\frac{a_{1}^{2}}{4a_{2}}+a_{0}=0\Leftrightarrow\Delta(\overline{a })=a_{1}^{2}-4a_{0}a_{2}=0.\] In our specific context, the discriminant \(\Delta_{n}(\overline{a})\) is analogously defined. Locally, around zero we can extend \(g_{n}(\overline{a})\) to be the variation of the Gram point \(g_{n}\) with respect to \(\overline{a}\), by defining it to be the corresponding extremal point of \(Z_{N}(t;\overline{a})\). The following is our main object of study: **Definition 1.5** (\(n\)-th Gram discriminant).: For any \(n\in\mathbb{Z}\) we refer to \[\Delta_{n}(\overline{a}):=Z_{N(n)}(g_{n}(\overline{a});\overline{a})\] with \(N(n):=\left[\frac{g_{n}}{2}\right]\), as the \(n\)_-th Gram discriminant of \(Z(t)\)_. In particular, much like the quadratic discriminant serves as a measure for whether the two zeros of \(F(z;\overline{a})\) are real, our discriminant \(\Delta_{n}(r)\) can be conceptualized as a measure for the realness of the two consecutive zeros \(t_{n}(\overline{a})\) and \(t_{n+1}(\overline{a})\) of \(Z(t;\overline{a})\). Our first main theorem is the following: **Theorem A** (Corrected Gram's law equivalent to RH).: _For any \(n\in\mathbb{Z}\), The Riemann hypothesis holds if and only if the following corrected Gram's law holds_ \[(-1)^{n}\Delta_{n}(1,...,1)>0.\] _In particular, the extended Gram point \(g_{n}(\overline{a})\) can be analytically continued to \(\overline{1}=(1,...,1)\)._ Contrary to the algebraic quadratic case, our discriminant \(\Delta_{n}(\overline{a})\) as well as the extended Gram point \(g_{n}(\overline{a})\) are transcendental functions and obtaining closed-form expressions for them is not feasible. We prove: **Theorem B** (Second-order approximation of \(\Delta_{n}(\overline{r})\)).: _For any \(n\in\mathbb{Z}\) and \(r\in[0,1]\) the following second-order approximation holds_ \[\Delta_{n}(\overline{r})=Z(g_{n};\overline{r})+\frac{1}{2}H_{n}(0)\cdot r^{2} +O(r^{3}),\] _where the second order Hessian at \(r=0\) is given by_ \[H_{n}(0)=2(-1)^{n}\left(\frac{Z^{\prime}(g_{n})}{\ln\left(\frac{g_{n}}{2\pi} \right)}\right)^{2}.\] _Moreover,_ \[Z^{\prime}(g_{n})=\frac{1}{4}(-1)^{n}\ln^{2}\left(\frac{g_{n}}{2\pi}\right) \overline{1}\cdot\nabla g_{n}(0),\] _where_ \[\nabla g_{n}(\overline{0}):=\left(\frac{\partial g_{n}}{\partial a_{1}}( \overline{0}),...,\frac{\partial g_{n}}{\partial a_{N}}(\overline{0})\right).\] _is the gradient of \(g_{n}(\overline{a})\) at \(\overline{a}=\overline{0}\)._ Theorem B implies the following key result: **Corollary 1.1**.: _The following holds for any \(n\in\mathbb{Z}\):_ 1. _The classical Gram's law_ \((-1)^{n}Z(g_{n})>0\) _is the first-order approximation of the corrected Gram's law_ \((-1)^{n}\Delta_{n}(\overline{1})>0\)_._ 2. 
_The second-order Hessian_ \(H_{n}(0)\) _measures the magnitude of the local shift of_ \(g_{n}(\overline{a})\) _along the_ \(t\)_-axis._ Combined, these formula reveal a deep property of the corrected Gram law: in order for the law to hold, the second-order term should function as a correcting term requiring a strong move of the position of \(g_{n}(\overline{a})\) compensating for the violation of the classical law, for bad Gram points. This leads us to introduce the following notion: **Definition 1.6** (Viscosity).: For any \(n\in\mathbb{Z}\) we refer to the value of the logarithmic derivative of the \(Z\)-function \[\mu(g_{n})=\frac{Z^{\prime}(g_{n})}{Z(g_{n})},\] as the _viscosity of the Gram point_\(g_{n}\). The viscosity can be seen as a measurement for the relationship between the two aforementioned forces: The pull towards the axis expressed by \(Z(g_{n})\) and the shift along the axis expressed by \(Z^{\prime}(g_{n})\). The above discriminant analysis leads us to discover a remarkable new empirical property of the \(Z\)-function: **Conjecture 1.1** (Repulsion G-B-G conjecture).: _Assume \(g_{n}\) is a bad Gram point with good consecutive neighbours \(g_{n-1},g_{n+1}\). Then the following (non-sharp) viscosity bound holds_ \[|\mu(g_{n})|>4.\] We argue that this newly discovered bound has profound implications regarding the behaviour of the zeros of the \(Z\)-function. Specifically, a repulsion phenomenon between consecutive zeros, which seems to be foundational for the validity of the Riemann Hypothesis itself. Based on the viscosity bound we suggest a general optimization approach for the establishment of the corrected Gram's law. Finally, recall that the Davenport-Helibronn function \(\mathcal{D}(s)\) serves as a compelling counterexample to the Riemann Hypothesis, satisfying the necessary functional equation while violating the RH, in the sense that its zeros do not all lie on the critical line, see [2]. The elusive nature of \(\mathcal{D}(s)\) often underscores the challenge of identifying the unique properties that compel the \(Z\)-function to adhere to the RH, lacking in \(\mathcal{D}(s)\). We show that, unlike the \(Z\)-function, where the violation of the Gram's law is attributed to the non-linearity of the discriminant, for \(\mathcal{D}(s)\), the failure of Gram's law is a genuine violation of the corrected Gram's law itself. In particular, we show that the Davenport-Heilbronn function does not satisfy a repulsion bound similar to the one observed for the \(Z\)-function. The rest of the work is organized as follows: Section 2 recalls the computation of \(Z(t)\) via the approximate formula and defines the sections and core of \(Z(t)\). In Section 3, the global discriminant is defined and the \(A\)-philosophy is described. Section 4 introduces the local discriminant \(\Delta_{n}(\overline{a})\) of two consecutive zeros and proves Theorem A. Section 5 studies the specific case of the linear curve and presents various examples, illustrating results proved in later sections. In Section 6, the first-order approximation of \(\Delta_{n}(\overline{r})\) is proved. Section 7 computes the Hessian \(H_{n}(0)\), establishing the second-order approximation of \(\Delta_{n}(\overline{r})\) and concludes the proof of Theorem B. In Section 8, the viscosity of a Gram point is introduced, the experimental discovery of the G-B-G repulsion relation is described, and its possible relation to the Montgomery pair correlation conjecture is discussed. 
Section 9 studies the violations of the corrected Gram's law and repulsion relation for the Davenport-Heilbronn function. Section 10 presents an in-depth study of the repulsion relation by introducing the adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\). Section 11 discusses the failure of the classic Newton method for the establishment of the RH, following Edwards. Section 12 suggests a more refined non-linear optimization method, taking into account the geometry of the discriminant in \(A\)-space, for the establishment of the corrected Gram's law. Section 13 presents a summary and concluding remarks. ## 2. The \(N\)-Sections and Core of \(Z(t)\) The \(Z\)-function is formally defined as \[Z(t)=e^{i\theta(t)}\zeta\left(\frac{1}{2}+it\right),\] However, this definition is not conducive to practical calculations, which require the use of approximate formula. The Riemann-Siegel main sum, which is derived from the approximate functional equation (AFE), is a widely used alternative [4]. In this study, we use a variant of the approximate functional equation that includes a greater number of terms, which is found to be better suited for the objectives of this study, see the following Remark 2.1. This approximation is given as follows: \[Z(t)=\cos(\theta(t))+\sum_{k=1}^{N}\frac{1}{\sqrt{k+1}}\cos(\theta(t)-\ln(k+1 )t)+O\left(\frac{1}{t}\right), \tag{2}\] where \(N=\left[\frac{t}{2}\right]\), see [19, 18]. This leads to define: **Definition** (\(N\)-th Section and Core of \(Z\)).: For any \(N\in\mathbb{N}\), we denote \[Z_{N}(t)=\cos(\theta(t))+\sum_{k=1}^{N}\frac{1}{\sqrt{k+1}}\cos(\theta(t)-\ln( k+1)t),\] as the \(N\)_-th section of \(Z(t)\)_. Specifically, we define \(Z_{0}(t):=\cos(\theta(t))\) as _the core function of \(Z(t)\)_. Figure 1 presents a comparison between \(\ln|Z(t)|\) (in blue) and the core \(\ln|Z_{0}(t)|\) (in orange) in the range \(0\leq t\leq 50\): The fact that the zeros of the core \(Z_{0}(t)\) can be considered as rough approximations of the zeros of \(Z(t)\) was actually observed by various authors, see for instance [4, 5, 11, 19], and might have already been known to Riemann himself, see Section 11. Similarly, the Gram points \(g_{n}\), which are the extremal points of the core \(Z_{0}(t)\), can be considered as rough approximations of the extremal points of \(Z(t)\). Recall that the Lambert \(W\)-function, denoted as \(W_{0}(x)\), is a multi-valued function defined as the inverse of the function \(W_{0}(x)e^{W_{0}(x)}\), see [1, 8]. It is known that for any \(n\in\mathbb{Z}\), the \(n\)-th zero and extremal point of \(Z_{0}(t)\) are given respectively by \[t_{n}^{0}=\frac{(8n-11)\pi}{4\cdot W_{0}\left(\frac{8n-11}{8\cdot e}\right)} \ \ ;\ \ g_{n}=\frac{(8n+1)\pi}{4\cdot W_{0}\left(\frac{8n+1}{8\cdot e}\right)},\] see for instance [5] for the zeros and [10] for the Gram points. The following two facts regarding the core \(Z_{0}(t)\) which show that RH and Gram's law should actually be viewed as natural properties of the core: **Proposition 2.1** (RH and Gram's law for \(Z_{0}(t)\)).: _For any \(n\in\mathbb{Z}\):_ * _The zero_ \(t_{n}^{0}\) _of_ \(Z_{0}(t)\) _is real._ * _Gram's law:_ \((-1)^{n}Z_{0}(g_{n})>0\)_._ Proof.: The RH for the core \(Z_{0}(t)\) follows directly from (2). 
The Gram's law for the core follows from \[(-1)^{n}Z_{0}(g_{n})=(-1)^{n}cos(\theta(g_{n}))=(-1)^{n}cos(\pi n)=(-1)^{2n}=1 >0.\] Consequently, the study of the Riemann Hypothesis and Gram's law can be re-framed as a question regarding the extent of deviation of \(Z(t)\) from its core \(Z_{0}(t)\), and from the fundamental properties of its zeros (RH) and extremal points (Gram's law). The following remark is due: _Remark 2.1_.: It's important to note that the approximate formula for the \(Z\)-function (2) used here involves a summation of terms up to \([\frac{t}{2}]\). This follows from the simple approximate functional equation for \(Z(t)\). More commonly in literature, the formula \[Z(t)\approx 2\sum_{n=0}^{\left[\sqrt{\frac{t}{2\pi}}\right]}\frac{1}{\sqrt{n+1} }\cos(\theta(t)-\ln(n+1)t), \tag{3}\] justified by the Hardy-Littlewood approximate functional equation, is used, where the summation is taken up to \(\left[\sqrt{\frac{t}{2\pi}}\right]\), see [10]. The primary advantage of this latter formula is its efficiency, as it requires the computation of far fewer terms relative to \(t\). However, its main drawback is that it is not sufficiently sensitive to discern the RH by itself, as it exhibits non-real zeros and necessitates further development of the error term via the Riemann-Siegel formula. In contrast, the more robust formula (2), although more computationally intensive, has been observed to be sufficient for discerning the RH, as initially noted by Spira in his empirical investigations, see [19, 18], see also Remark 12.3. ## 3. The \(A\)-Philosophy and Discriminant Originating from the field of algebraic geometry, the \(A\)-philosophy, as introduced by Gelfand, Kapranov, and Zelevinsky in [9], advocates for studying mathematical objects not in isolation but rather in relation to a broader parameter space. This approach becomes particularly powerful when analyzing the discriminant hypersurface that forms within this parameter space, often revealing essential insights about the original mathematical object. While the \(A\)-philosophy has primarily found applications in algebraic contexts, we aim to extend it to the transcendental setting of the Z-function. This extension opens up new avenues for inquiry, significantly enriching our understanding of how such functions behave as the parameters of their approximating sums vary. **Definition** (\(N\)-th Parameter Space).: For a given \(N\in\mathbb{N}\), we define \(\mathcal{Z}_{N}\) to be the parameter space consisting of functions of the form \[Z_{N}(t;\overline{a})=Z_{0}(t)+\sum_{k=1}^{N}\frac{a_{k}}{\sqrt{k+1}}\cos( \theta(t)-\ln(k+1)t),\] where \(\overline{a}=(a_{1},\ldots,a_{N})\) belongs to \(\mathbb{R}^{N}\). This new space \(\mathcal{Z}_{N}\) allows us to investigate how subtle changes in \(\overline{a}\) influence the behaviour of \(Z_{N}(t;\overline{a})\), which is crucial for our later discussions. Our study focuses on the following concept of the global discriminant within the \(\mathcal{Z}_{N}\) space: **Definition** (Global Discriminant).: We define the global discriminant hyper-surface \(\Sigma_{N}\) in \(\mathcal{Z}_{N}\) as: \[\Sigma_{N}=\left\{\overline{a}\in\mathbb{R}^{N}\mid Z_{N}(t;\overline{a})\text{ has a multiple zero}\right\},\] where by multiple zero we mean parameters for which the function \(Z_{N}(t;\overline{a})\) and its first derivative both have a common zero. 
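Both the sections \(Z_{N}(t;\overline{a})\) introduced above and the closed-form Gram point formula recalled in Section 2 are straightforward to experiment with numerically. The following is a minimal sketch, assuming the Python library mpmath, whose `siegeltheta`, `siegelz` and `grampoint` implement \(\theta(t)\), \(Z(t)\) and \(g_{n}\).

```python
# A sketch of the core Z_0(t), the parametrized sections Z_N(t; a) of the A-space,
# and the Lambert W formula for the Gram points (mpmath assumed).
from mpmath import mp

mp.dps = 20

def Z_sections(t, a):
    t = mp.mpf(t)
    theta = mp.siegeltheta(t)
    val = mp.cos(theta)                                   # the core Z_0(t)
    for k, ak in enumerate(a, start=1):
        val += ak * mp.cos(theta - mp.log(k + 1) * t) / mp.sqrt(k + 1)
    return val

t = 67.0
N = int(t // 2)
print(Z_sections(t, [0] * N))        # the core Z_0(t)
print(Z_sections(t, [1] * N))        # the section Z_N(t), approximating Z(t) up to the error in (2)
print(mp.siegelz(t))                 # Z(t) itself

n = 100
w = mp.lambertw((8 * n + 1) / (8 * mp.e)).real
print((8 * n + 1) * mp.pi / (4 * w), mp.grampoint(n))    # closed formula for g_n vs. theta(g_n) = pi n
```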
Our first theorem establishes a geometric connection between the RH and our discriminant study: **Theorem 3.1**.: _Consider \(\gamma(r)\) as a parametrized curve in \(\mathcal{Z}_{N}\) for \(r\in[0,1]\), originating from the core function \(Z_{0}(t)\) at \(r=0\)._ 1. _The zero set of_ \(Z_{N}(t;\gamma(r))\) _is self-conjugate for any_ \(r\in[0,1]\)_. That is, if_ \(z\) _is a zero of_ \(Z_{N}(t;\gamma(r))\)_, then its complex conjugate_ \(\overline{z}\) _is also a zero._ 2. _The real zeros_ \(t_{n}(\overline{r})\) _of_ \(Z_{N}(t;\gamma(r))\) _remain well-defined, smooth, and real as long as they do not collide with other consecutive zeros._ Proof.: Note that since \(\overline{\theta(\overline{t})}=\theta(t)\), all the terms of \(Z_{N}(t;\gamma(r))\) satisfy \[\overline{\cos(\theta(\overline{t})-\ln(k+1)\overline{t})}=\cos(\theta(t)-\ln (k+1)t),\] from which the self-conjugacy of (1) results, the rest follows naturally. Following the proof of Theorem 3.1, we infer that the zeros of the function can only exit the real line as pairs at points of collision, where multiple zeros appear. This notion allows us to reframe the Riemann Hypothesis (RH) in a novel way: **Conjecture 3.2** (\(A\)-Philosophy dynamic RH).: _For any \(n\in\mathbb{N}\), there exists a path \(\gamma(r)\) in \(\mathcal{Z}_{\left[\frac{t_{n}}{2}\right]}\) with \(\gamma(0)=Z_{0}(t)\) and \(\gamma(1)=Z_{N}(t)\) which is non-colliding for the consecutive pair of zeros \(t_{n}(r)\) and \(t_{n+1}(r)\) for \(r\in[0,1]\)._ This conjecture, if true, would imply a one-to-one, order-preserving correspondence between the zeros (or extremal points) of the core function \(Z_{0}(t)\) and those of \(Z(t)\). ## 4. The Corrected Gram's Law The study of elements in \(\mathcal{Z}_{N}\) is particularly challenging due to their infinite number of zeros. This complexity is compounded when considering the geometry of the global discriminant \(\Sigma_{N}\subset\mathcal{Z}_{N}\). To simplify our discussion and gain more precise insights, we introduce the concept of a _local discriminant_ for a given pair of consecutive zeros \(t_{n}\) and \(t_{n+1}\) and a given \(n\in\mathbb{Z}\), with respect to a one-parametric family in \(\mathcal{Z}_{N}\). To offer a concrete example, Fig. 2 illustrates the collision process between the 16-th and 17-th zeros of the 1-parametric family \[Z_{1}(t;r):=\cos(\theta(t))+\frac{r}{\sqrt{2}}\cos(\theta(t)-\ln(2)t)\in\mathcal{ Z}_{1},\] for \(t\) in the range \(66.5\leq t\leq 70\) and for \(r=0\) (blue), \(r=0.75\) (orange), \(r=1.5\) (green), and \(r=2.25\) (red), showing how the zeros evolve with different values of \(r\): Note that a multiple zero occurs precisely when \(g_{n}(r)\) is itself a zero of \(Z_{N}(t;\gamma(r))\). This phenomenon, evident in Fig. 2, is significant because it characterizes points where the function loses its simple zero-crossing behavior. We say that \(\gamma\) is _non-degenerate_ for the \(n\)-th Gram point if \(g_{n}(r;\gamma)\) is well-defined, real, and varies continuously for any \(r\in[0,1]\). **Definition** (The \(n\)-th Gram discriminant of a non-degenerate curve).: Let \(\gamma\) be a non-degenerate curve for the \(n\)-th pair of zeros. We refer to \[\Delta_{n}(r;\gamma):=Z_{N}(g_{n}(r;\gamma);\gamma(r))\] as the _\(n\)-th Gram discriminant of \(\gamma\)_. We now arrive at a central point of our investigation. 
**Theorem 4.1** (The corrected Gram law).: _The Riemann hypothesis holds if and only if for any \(n\in\mathbb{Z}\) there exists a non-degenerate curve \(\gamma_{n}\) with \(\gamma(1)=(1,\ldots,1)\) such that_ \[(-1)^{n}\Delta_{n}(1;\gamma)>0. \tag{4}\] Proof.: By definition, for any curve \(\gamma\), starting at the origin \(\gamma(0)=(0,\ldots,0)\), we have \[\Delta_{n}(0;\gamma)=Z_{N}(g_{n}(0);0)=\cos(\theta(g_{n}))=(-1)^{n}.\] Our proof demonstrates that the non-collision of \(t_{n}(r)\) and \(t_{n+1}(r)\) with respect to \(\gamma\) is tantamount to requiring that the discriminant \(\Delta_{n}(r;\gamma)\) remains invariant in sign. Figure 2. \(Z_{1}(t;a)\) in the range \(66.5\leq t\leq 70\) for \(r=0\) (blue), \(r=0.75\) (orange), \(r=1.5\) (green) and \(r=2.25\) (red). Note that, contrary to the classical Gram law (1), which is an empirical observation regarding the numerical tendency of Gram points, the corrected Gram's law (4) is expected to hold for all Gram points and is equivalent to the RH. At this point enters the second fundamental feature of the \(A\)-philosophy (aside from the existence of discriminants), which is the ability to study smooth variations through derivatives. In the next sections we will describe results regarding the geometrical content of the first and second derivatives. ## 5. The Discriminant of the Linear Curve - First Examples In this section, we examine the 1-parametric family of functions defined by \[Z_{N}(t;r):=Z_{0}(t)+r\cdot\sum_{k=1}^{N}\frac{1}{\sqrt{k+1}}\cos(\theta(t)- \ln(k+1)t)\in\mathcal{Z}_{N},\] for \(r\in[0,1]\). Note that \(Z_{N}(t;r)\) is the curve in the parameter \(A\)-space, interpolating between the core function \(Z_{0}(t)\) and \(Z_{N}(t;\overline{1})\) by gradually adding all the terms together in a proportional manner. Further motivation for considering \(Z_{N}(t;r)\) will be detailed in Section 11, where we discuss Edwards' speculation. To illustrate how \(Z_{N}(t;r)\) behaves with respect to good\(\backslash\)bad Gram points, let us examine the following example: _Example 5.1_ (\(\Delta_{n}(r)\) for \(g_{90}\) and \(g_{126}\)).: Figure 3 shows \(\Delta_{n}(r)\) (blue) and its first order approximation \(Z_{N}(g_{n};r)\) (orange) for the good Gram point \(n=90\) (left) and the bad Gram point \(n=126\) (right), with \(0\leq r\leq 1\): For the good Gram point \(g_{90}\) the first-order approximation \(Z_{N}(g_{n};r)\) serves as a rather accurate approximation of \(\Delta_{n}(r)\) itself. For the bad Gram point \(g_{126}\) the first-order approximation \(Z_{N}(g_{n};r)\) is seen to deviate from \(\Delta_{n}(r)\). However, for both \(g_{90}\) and \(g_{126}\) the linear family \(Z_{N}(t;r)\) is non-colliding and establishes the corrected Gram's law. Consider Fig. 4 which shows the graphs of \(Z_{N}(t;r)\) themselves in the range \(t\in[g_{n}-2,g_{n}+2]\) for \(n=90\) (left) and \(n=126\) (right) and various values of \(r\in[0,1]\). Our aim in the subsequent sections is to give the theoretical explanation for the phenomena observed in the above examples. In Section 6 we show that \(Z_{N}(g_{n};r)\) is the first order-approximation of \(\Delta_{n}(r)\), explaining the close proximity between the two, observed for the good Gram point \(n=91\). In Section 7 we compute the second-order approximation, explaining the shift in \(g_{n}(r)\) to the right, observed for the bad Gram point \(n=126\). ## 6. 
The First-Order Approximation of the Corrected Law is the Classical Law Let us consider the first-order approximation of \(\Delta_{n}(\overline{r})\) given by \[\Delta_{n}(r)=\Delta_{n}(\overline{0})+\nabla\Delta_{n}(\overline{0})\cdot \overline{r}+O(r^{2}),\] where the gradient vector is given by \[\nabla\Delta_{n}(\overline{0}):=\left(\frac{\partial\Delta_{n}}{\partial a_{1 }}(\overline{0}),...,\frac{\partial\Delta_{n}}{\partial a_{N}}(\overline{0}) \right).\] We have: **Theorem 6.1** (First order approximation is Gram's law).: _For any \(n\in\mathbb{Z}\), the first-order approximation of the discriminant \(\Delta_{n}(r)\) of the linear curve is given by_ \[\Delta_{n}(r)=Z(g_{n};r)+O(r^{2}).\] _In particular, the classical Gram's law is the first-order approximation of the corrected Gram's law for the linear curve._ Proof.: Consider the function \[F_{k}(t;a):=\cos(\theta(t))+\frac{a}{\sqrt{k+1}}\cos(\theta(t)-\ln(k+1)t)\] Figure 4. Graphs of \(Z_{N}(t;r)\) in the range \(t\in[g_{n}-2,g_{n}+2]\) for \(n=90\) (left) and \(n=126\) (right) and various values of \(r\in[0,1]\). and set \[G_{k}(t;a):=\frac{\partial}{\partial t}F_{k}(t;a).\] Denote by \(g_{n}(a)\) the extremal point of \(F_{k}(t;a)\) locally extending the gram point \(g_{n}\). Then, by definition, the discriminant can be written as \[\Delta_{n}(0,...,0,a,0,...,0)=F_{k}(g_{n}(a);a).\] Hence, by the chain rule, we have \[\frac{\partial\Delta_{n}}{\partial a_{k}}(\overline{0})=\frac{\partial}{ \partial a}F_{k}(g_{n}(a);a)(0)=\frac{\partial g_{n}}{\partial a}(0)\cdot G_{ k}(g_{n};0)+\frac{\partial}{\partial a}F_{k}(g_{n};0).\] But since the Gram points are exactly the solutions of \(G_{k}(g_{n};0)=0\), we get \[\frac{\partial\Delta_{n}}{\partial a_{k}}(\overline{0})=\frac{\partial}{ \partial a}F_{k}(g_{n};0)=\frac{1}{\sqrt{k+1}}\cos(\theta(g_{n})-\ln(k+1)g_{ n}).\] The first-order approximation of the discriminant \(\Delta_{n}(r)\) is given by \[\Delta_{n}(r)\approx\Delta_{n}(\overline{0})+\nabla\Delta_{n}( \overline{0})\cdot\overline{r}=\\ =\Delta_{n}(\overline{0})+r\cdot\sum_{k=1}^{N}\frac{\partial \Delta_{n}}{\partial a_{k}}(\overline{0})=\\ =\cos(\theta(g_{n}))+\sum_{k=0}^{N}\frac{r}{\sqrt{k+1}}\cos( \theta(g_{n})-\ln(k+1)g_{n})=Z(g_{n};r). \tag{5}\] _Remark 6.1_ (The reason for Gram's law).: Theorem 6.1 can be seen as giving a theoretical explanation to the empirical phenomena of the classical Gram law, as following from the RH. Indeed, for good Gram points the first-order approximation \(Z(g_{n})\) is close to the value of \(\Delta_{n}(1)\), and hence is expected to satisfy \((-1)^{n}Z(g_{n})>0\). ## 7. The Second-Order Approximation of \(\Delta_{n}(r)\) We can now consider the second-order approximation of \(\Delta_{n}(\overline{r})\), which in view of Theorem 6.1 can be written as \[\Delta_{n}(r)=Z(g_{n};r)+\frac{1}{2}H_{n}(\overline{0})\cdot r^{2}+O(r^{3}),\] where \[H_{n}(\overline{0}):=\sum_{k_{1},k_{2}=1}^{N}\frac{\partial^{2}\Delta_{n}}{ \partial a_{k_{1}}\partial a_{k_{2}}}(\overline{0})\] is the Hessian of second derivatives. 
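Before computing \(H_{n}\) in closed form, both \(\Delta_{n}(r)\) and its first-order approximation \(Z(g_{n};r)\) can be evaluated numerically along the linear curve (cf. Example 5.1 and Figure 3). The sketch below assumes mpmath, builds the sections from (2), and locates \(g_{n}(r)\) with a root-finder started at \(g_{n}\), so for strongly shifted extremal points the choice of starting point should be checked.

```python
# Numerical probe of Delta_n(r) along the linear curve (a sketch; mpmath assumed).
# Delta_n(r) = Z_N(g_n(r); r), where g_n(r) is the local extremal point of Z_N(t; r) near g_n.
from mpmath import mp

mp.dps = 25

def Z_lin(t, r, N):
    theta = mp.siegeltheta(t)
    return mp.cos(theta) + r * mp.fsum(
        mp.cos(theta - mp.log(k + 1) * t) / mp.sqrt(k + 1) for k in range(1, N + 1))

def delta_n(n, r):
    g = mp.grampoint(n)
    N = int(mp.floor(g / 2))
    dZ = lambda t: mp.diff(lambda s: Z_lin(s, r, N), t)
    g_r = mp.findroot(dZ, g)                 # extremal point g_n(r), started at g_n
    return Z_lin(g_r, r, N)

for n in (90, 126):                          # a good and a bad Gram point, cf. Example 5.1
    g = mp.grampoint(n)
    N = int(mp.floor(g / 2))
    # Delta_n(1) versus its first-order approximation Z(g_n; 1); the corrected law asks
    # for (-1)^n * Delta_n(1) > 0, while the classical law looks only at the second value.
    print(n, delta_n(n, 1), Z_lin(g, 1, N))
```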
The main result of this section shows the content of the second-order Hessian: **Theorem 7.1** (Second-order approximation).: _For any \(n\in\mathbb{Z}\), the second-order Hessian is given by_ \[H_{n}:=2(-1)^{n}\left(\frac{Z^{\prime}(g_{n})}{\ln\left(\frac{g_{n}}{2\pi} \right)}\right)^{2}.\] In order to prove the main theorem let us first prove a few preliminary results: **Lemma 7.2**.: \[\frac{\partial}{\partial a_{k}}g_{n}(\overline{a})=\frac{\sin(\theta(g_{n}( \overline{a}))-\ln(k+1)g_{n}(\overline{a}))}{2\sqrt{k+1}Z^{\prime\prime}(g_{n }(\overline{a});\overline{a})}\ln\left(\frac{g_{n}(\overline{a})}{2\pi(k+1)^ {2}}\right).\] _In particular,_ \[\frac{\partial}{\partial a_{k}}g_{n}(\overline{0})=2(-1)^{n+1}\frac{\sin( \theta(g_{n})-\ln(k+1)g_{n})}{\sqrt{k+1}\ln^{2}\left(\frac{g_{n}}{2\pi} \right)}\ln\left(\frac{g_{n}}{2\pi(k+1)^{2}}\right).\] Proof.: Let \(g_{n}(\overline{a};\epsilon)\) be the \(n\)-th extremal point of \[F_{k,\epsilon}(t;\overline{a}):=Z_{N}(t;\overline{a})+\frac{\epsilon}{\sqrt{k+ 1}}\cos(\theta(t)-\ln(k+1)t).\] for \(0<\epsilon\) small enough. That is, the zero of the equation \[G_{k,\epsilon}(t;\overline{a}):=\frac{\partial}{\partial t}F_{k,\epsilon}(t; a)=0.\] Then according to Newton's method, one can take the following first iteration \[\widetilde{g}_{n}(\overline{a};\epsilon):=g_{n}(\overline{a})-\frac{G_{k, \epsilon}(g_{n}(\overline{a});\overline{a})}{G^{\prime}_{k,\epsilon}(g_{n}( \overline{a});\overline{a})},\] as an approximation of \(g_{n}(\overline{a};\epsilon)\), which improves as \(\epsilon\) decreases, see [20]. Note that \[G_{k,\epsilon}(t;\overline{a})=Z^{\prime}_{N}(t;\overline{a})-\frac{\epsilon} {\sqrt{k+1}}\sin(\theta(t)-\ln(k+1)t)(\theta^{\prime}(t)-\ln(k+1)).\] Since \(Z^{\prime}_{N}(g_{n}(\overline{a});\overline{a})=0\) and \[\theta^{\prime}(t)=\left(\frac{t}{2}\ln\left(\frac{t}{2\pi}\right)-\frac{t}{2 }-\frac{\pi}{8}\right)^{\prime}=\frac{1}{2}\ln\left(\frac{t}{2\pi}\right),\] we have \[G_{k,\epsilon}(g_{n}(\overline{a});\overline{a})=-\frac{\epsilon}{2\sqrt{k+ 1}}\sin(\theta(g_{n}(\overline{a}))-\ln(k+1)g_{n}(\overline{a}))\cdot\ln\left( \frac{g_{n}(\overline{a})}{2\pi(k+1)^{2}}\right).\] For the derivative the main term is given by \[G^{\prime}_{k,\epsilon}(g_{n}(\overline{a});\overline{a})=Z^{\prime\prime}_{N }(g_{n}(\overline{a});\overline{a})+O(\epsilon).\] In particular, we have \[Z^{\prime\prime}_{N}(g_{n};\overline{0})=-\cos(\theta(g_{n}))(\theta^{\prime} (g_{n}))^{2}=\frac{(-1)^{n+1}}{4}\ln^{2}\left(\frac{g_{n}}{2\pi}\right),\] as required. 
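The two closed-form facts used in the proof, \(\theta^{\prime}(t)=\frac{1}{2}\ln\left(\frac{t}{2\pi}\right)\) and \(Z^{\prime\prime}_{N}(g_{n};\overline{0})=\frac{(-1)^{n+1}}{4}\ln^{2}\left(\frac{g_{n}}{2\pi}\right)\), can be checked by finite differences. In the sketch below (mpmath assumed) the agreement is only up to the \(O(1/t)\) error of the asymptotic expansion of \(\theta\) used in the text, since mpmath's `siegeltheta` is the exact \(\theta\).

```python
# Finite-difference check of the two facts used above (a sketch; mpmath assumed).
from mpmath import mp

mp.dps = 25
n = 100
g = mp.grampoint(n)

# theta'(g_n) against (1/2) ln(g_n / 2 pi)
print(mp.diff(mp.siegeltheta, g), mp.log(g / (2 * mp.pi)) / 2)

# Z_0''(g_n) against (-1)^(n+1) (1/4) ln^2(g_n / 2 pi)
Z0 = lambda t: mp.cos(mp.siegeltheta(t))
print(mp.diff(Z0, g, 2), (-1) ** (n + 1) * mp.log(g / (2 * mp.pi)) ** 2 / 4)
```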
We have: **Proposition 7.1**.: _For any \(1\leq k_{1},k_{2}\leq N\) the following holds:_ \[\frac{\partial^{2}\Delta_{n}}{\partial a_{k_{1}}\partial a_{k_{2}}}(\overline{a} )=-\frac{1}{4Z^{\prime\prime}(g_{n}(\overline{a});\overline{a})}\prod_{i=1}^{2 }\frac{\sin(\theta(g_{n}(\overline{a}))-\ln(k_{i}+1)g_{n}(\overline{a}))\cdot \ln\left(\frac{g_{n}(\overline{a})}{2\pi(k_{i}+1)^{2}}\right)}{\sqrt{k_{i}+1}}.\] _In particular,_ \[\frac{\partial^{2}\Delta_{n}}{\partial a_{k_{1}}\partial a_{k_{2}}}(\overline{ 0})=\frac{(-1)^{n}}{\ln^{2}\left(\frac{g_{n}}{2\pi}\right)}\prod_{i=1}^{2} \frac{\sin(\ln(k_{i}+1)g_{n})\cdot\ln\left(\frac{g_{n}}{2\pi(k_{i}+1)^{2}} \right)}{\sqrt{(k_{i}+1)}}.\] Proof.: Consider the function \[F_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(t;\overline{a}):=Z_{N} (t;\overline{a})+\frac{\epsilon_{1}}{\sqrt{k_{1}+1}}\cos(\theta(t)-\ln(k_{1}+ 1)t)+\\ +\frac{\epsilon_{2}}{\sqrt{k_{2}+1}}\cos(\theta(t)-\ln(k_{2}+1)t) \tag{6}\] and set \[G_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(t;\overline{a}):=\frac{\partial}{ \partial t}F_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(t;\overline{a}). \tag{7}\] Denote by \(g_{n}(\overline{a};\epsilon_{1},\epsilon_{2})\) the extremal point of \(F_{k_{1},k_{2}}(t;a_{1},a_{2})\) locally extending the gram point \(g_{n}(\overline{a})\). Then, by definition, the discriminant can be written as \[\Delta_{n}(a_{1},...,a_{k_{1}}+\epsilon_{1},...,a_{k_{2}}+\epsilon_{2},..,a_{ N})=F_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(g_{n}(\overline{a};\epsilon_{1}, \epsilon_{2});\overline{a}).\] Hence, by the chain rule, we have \[\frac{\partial\Delta_{n}}{\partial\epsilon_{1}}(a_{1},...,a_{k_{ 1}}+\epsilon_{1},...,a_{k_{2}}+\epsilon_{2},..,a_{N})=\frac{\partial}{ \partial\epsilon_{1}}F_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(g_{n}( \overline{a};\epsilon_{1},\epsilon_{2});\overline{a})=\\ =\frac{\partial g_{n}}{\partial\epsilon_{1}}(\overline{a};\epsilon _{1},\epsilon_{2})\cdot G_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(g_{n}( \overline{a};\epsilon_{1},\epsilon_{2});\overline{a})+\frac{\partial}{ \partial\epsilon_{1}}F_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(g_{n}( \overline{a};\epsilon_{1},\epsilon_{2});\overline{a})=\\ =\frac{\partial}{\partial\epsilon_{1}}F_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(g_{n}(\overline{a};\epsilon_{1},\epsilon_{2});\overline{a})=\\ =\frac{1}{\sqrt{k_{1}+1}}\cos(\theta(g_{n}(\overline{a};\epsilon _{1},\epsilon_{2}))-\ln(k_{1}+1)g_{n}(\overline{a};\epsilon_{1},\epsilon_{2} )). \tag{8}\] Again, we use the fact that \(g_{n}(\overline{a};\epsilon_{1},\epsilon_{2})\) are, by definition, the solutions of \[G_{k_{1},k_{2},\epsilon_{1},\epsilon_{2}}(g_{n}(\overline{a};\epsilon_{1}, \epsilon_{2});\overline{a})=0.\] Thus, for the second derivative we have \[\frac{\partial^{2}\Delta_{n}}{\partial\epsilon_{1}\partial\epsilon_ {2}}(a_{1},...,a_{k_{1}}+\epsilon_{1},...,a_{k_{2}}+\epsilon_{2},..,a_{N})=\\ =\frac{\partial}{\partial\epsilon_{2}}\left(\frac{1}{\sqrt{k_{1}+ 1}}\cos(\theta(g_{n}(\overline{a};\epsilon_{1},\epsilon_{2}))-\ln(k_{1}+1)g_{ n}(\overline{a};\epsilon_{1},\epsilon_{2}))\right)=\\ =-\frac{1}{2\sqrt{k_{1}+1}}\sin(\theta(g_{n}(\overline{a}; \epsilon_{1},\epsilon_{2}))-\ln(k_{1}+1)g_{n}(\overline{a};\epsilon_{1}, \epsilon_{2}))\cdot\ln\left(\frac{g_{n}(\overline{a};\epsilon_{1},\epsilon_{2} )}{2\pi(k+1)^{2}}\right)\frac{\partial g_{n}}{\partial\epsilon_{2}}( \overline{a};\epsilon_{1},\epsilon_{2}). \tag{9}\] By substituting \((\epsilon_{1},\epsilon_{2})=(0,0)\) and applying Lemma 7.2 the result follows. 
_Proof of Theorem 7.1:_ By Proposition 7.1 we have \[H_{n}(\overline{a}):=\sum_{k_{1},k_{2}=1}^{N}\frac{\partial^{2} \Delta_{n}}{\partial a_{k_{1}}\partial a_{k_{2}}}(\overline{0})a_{k_{1}}a_{k_ {2}},=\\ =\frac{(-1)^{n}}{\ln^{2}\left(\frac{g_{n}}{2\pi}\right)}\sum_{k_{ 1},k_{2}=1}^{N}\prod_{i=1}^{2}\frac{\sin(\ln(k_{i}+1)g_{n})\cdot\ln\left(\frac{ g_{n}}{2\pi(k_{i}+1)^{2}}\right)}{\sqrt{k_{i}+1}}a_{k_{i}}=\\ =\frac{(-1)^{n}}{\ln^{2}\left(\frac{g_{n}}{2\pi}\right)}\left( \sum_{k=1}^{N}\frac{\sin(\ln(k+1)g_{n})\cdot\ln\left(\frac{g_{n}}{2\pi(k+1)^{2 }}\right)}{\sqrt{k+1}}a_{k}\right)^{2}=4(-1)^{n}\left(\frac{Z^{\prime}(g_{n}; \overline{a})}{\ln\left(\frac{g_{n}}{2\pi}\right)}\right)^{2}, \tag{10}\] as required. Theorem 7.1 shows that the magnitude of the Hessian \(H_{n}\) is determined by the size of \(Z^{\prime}(g_{n})\). We further have: **Theorem 7.3**.: _Moreover,_ \[Z^{\prime}(g_{n};\overline{r})=\frac{1}{4}(-1)^{n}\ln^{2}\left(\frac{g_{n}}{ 2\pi}\right)\overline{r}\cdot\nabla g_{n}(0),\] _where_ \[\nabla g_{n}(\overline{0}):=\left(\frac{\partial g_{n}}{\partial a_{1}}( \overline{0}),...,\frac{\partial g_{n}}{\partial a_{N}}(\overline{0})\right)\] _is the gradient of \(g_{n}(\overline{a})\) at \(\overline{a}=\overline{0}\)._ _Proof._ The following holds \[Z^{\prime}_{N}(t;\overline{a})=-\sin(\theta(t))\theta^{\prime}(t)- \sum_{k=1}^{N}\frac{a_{k}}{\sqrt{k+1}}\sin(\theta(t)-\ln(k+1)t)(\theta^{ \prime}(t)-\ln(k+1))=\\ =-\frac{1}{2}\sin(\theta(t))\ln\left(\frac{t}{2\pi}\right)-\sum_{ k=1}^{N}\frac{a_{k}}{2\sqrt{k+1}}\sin(\theta(t)-\ln(k+1)t)\ln\left(\frac{t}{2 \pi(k+1)^{2}}\right). \tag{11}\] Hence, \[Z^{\prime}_{N}(g_{n};\overline{a})=\sum_{k=1}^{N}\frac{a_{k}}{2\sqrt{k+1}}\sin( \ln(k+1)g_{n})\ln\left(\frac{g_{n}}{2\pi(k+1)^{2}}\right).\] But also, \[\frac{1}{4}\ln^{2}\left(\frac{g_{n}}{2\pi}\right)\frac{\partial}{\partial a_{k }}g_{n}(\overline{0})=(-1)^{n}\frac{\sin(\ln(k+1)g_{n})}{2\sqrt{k+1}}\ln\left( \frac{g_{n}}{2\pi(k+1)^{2}}\right).\] Hence, \[Z^{\prime}(g_{n};\overline{a})=\frac{1}{4}(-1)^{n}\ln^{2}\left(\frac{g_{n}}{2 \pi}\right)\overline{a}\cdot\nabla g_{n}(0),\] as required. Theorem 7.1 and Theorem 7.3 together imply the following: **Corollary 7.2**.: _The following holds:_ 1. _The second-order Hessian_ \(H_{n}\) _is a measurement of the magnitude of the gradient_ \(\nabla g_{n}(0)\)_._ 2. _The direction of the shift of the_ \(n\)_-th extremal point of_ \(Z(t;\overline{a})\) _with respect to_ \(g_{n}\) _is given by the sign of_ \((-1)^{n}Z^{\prime}(g_{n};\overline{a})\)_._ The corrected Gram's law implies that for bad Gram points the second order term, represented by the Hessian \(H_{n}\), is expected to become crucial in order to compensate on the first-order violation of the classical law. Corollary 7.2 further shows that a large second-order Hessian \(H_{n}\) is expressed by the fact that the Gram point must experience a considerable positional shift for the approximation to be valid. Conversely, a small Hessian at a Gram point signifies that the first order term predominates in determining the position of the Gram point. 
For instance, consider: _Example 7.1_ (The Hessians of \(g_{90}\) and \(g_{126}\)).: By direct computation, the Hessians of \(\Delta_{n}(r)\) for the good Gram point \(g_{90}\) and the bad Gram point \(g_{126}\) are given by: \[H_{90}=0.00203615\ \ \ ;\ \ H_{126}=2.22893\] In accordance with Example 5.1, the Hessian \(H_{90}\) is relatively small which implies that for the good Gram point \(g_{90}\) the discriminant is largely determined by its first-order approximation \(Z(g_{n};r)\) and \(g_{n}(r)\) hardly changes position. In contrast, the Hessian for the bad Gram point \(g_{126}\) is considerably large. This conveys that \(g_{126}(r)\) undergoes a considerable displacement from its original position \(g_{126}\) as \(r\) increases. This practical shift is visually demonstrated as a shift to the right in Fig. 4. _Remark 7.2_.: Adding higher-order terms improves the level of accuracy of the approximation near \(\overline{a}=\overline{0}\). However, the level of accuracy around \(\overline{a}=\overline{1}\), can actually decrease. In particular, obtaining a reasonable approximation of \(\Delta_{n}(\overline{1})\) via Taylor approximation might require a rather substantial amount of higher degree terms, which would be completely infeasible to compute in practice, for general \(n\in\mathbb{Z}\). ## 8. An Experimental Investigation of the Viscosity of Gram Points In this section, drawing inspiration from fluid dynamics, we introduce the concept of 'viscosity' to quantify the positional shifting behaviour and conduct an initial empirical investigation of its properties. Generally, Corollary 7.2 straightforwardly suggests that if \(g_{n}\) is a bad Gram point, one should anticipate a significant shift in the position of \(g_{n}(r)\) itself to fulfil the corrected Gram's law at the second-order level. In other words, bad Gram points \(g_{n}\) are expected to demonstrate some correlation between the values of \(Z(g_{n})\) and \(Z^{\prime}(g_{n})\). For a bad Gram point, we interpret \(Z(g_{n})\) as pushing the extremal point downward towards zero to create a collision, while \(Z^{\prime}(g_{n})\) pushes the extremal point sideways to avoid a collision. Thus, let us introduce the following definition: **Definition** (Viscosity of a Gram point).: For any \(n\in\mathbb{Z}\) we refer to of the \(Z\)-function \[\mu(g_{n}):=\left|\frac{Z^{\prime}(g_{n})}{Z(g_{n})}\right|\] as _the viscosity of the Gram point \(g_{n}\)._ The viscosity of a Gram point \(\mu(g_{n})\) is essentially a measure of how much a Gram point'resists' a change in its position, analogous to how viscosity in a fluid quantifies its resistance to flow. In essence, a high viscosity at a Gram point implies a lower tendency for the point to maintain its position and vice versa. _Remark 8.1_.: The occurrence of the logarithmic derivative \(\frac{Z^{\prime}(t)}{Z(t)}\) in the study of the zeros of \(Z(t)\) is not surprising, given its integral role as manifested by the classical formula: \[\frac{Z^{\prime}(t)}{Z(t)}=i\theta^{\prime}(t)-\frac{1}{it-\frac{1}{2}}+\sum_{ \rho}\frac{1}{\frac{1}{2}+it-\rho}-\frac{1}{2}\frac{\Gamma^{\prime}(\frac{5}{ 4}+\frac{it}{2})}{\Gamma(\frac{5}{4}+\frac{it}{2})}+\frac{1}{2}\ln(\pi),\] This relation, outlined in Section 3.2 of [4], underscores the inherent connection between the logarithmic derivative of \(Z(t)\) and its zeros, further motivating our current study. However, in our case our interest is focused on the values of the logarithmic derivative at the bad Gram points \(g_{n}\). 
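The experiment behind the figures of this section is easy to reproduce on a small scale. The following sketch, assuming mpmath, computes \(\mu(g_{n})\) by numerical differentiation and flags which bad Gram points sit in a good-bad-good configuration.

```python
# Reproducing the viscosity experiment on a small scale (a sketch; mpmath assumed).
# mu(g_n) = |Z'(g_n) / Z(g_n)|, with Z'(g_n) obtained by numerical differentiation.
from mpmath import mp

mp.dps = 20

def viscosity(n):
    g = mp.grampoint(n)
    return abs(mp.diff(mp.siegelz, g) / mp.siegelz(g))

def is_good(n):
    return (-1) ** n * mp.siegelz(mp.grampoint(n)) > 0

bad = [n for n in range(300) if not is_good(n)]
for n in bad:
    isolated = is_good(n - 1) and is_good(n + 1)      # G-B-G configuration
    print(n, isolated, viscosity(n))
```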
From our second-order approximation of the discriminant \(\Delta_{n}(r)\), we thus come to anticipate that \(\mu(g_{n})\) for bad Gram points will exhibit unique features compared to those of general Gram points. Indeed, consider Fig. 5, which presents the outcome of a numerical computation of the viscosity \(\mu(g_{n})\) for the first \(n=1,\ldots,1000\) general Gram points.

Figure 5. Viscosity \(\mu(g_{n})\) for the first \(n=1,\ldots,1000\) Gram points.

As observed in Fig. 5, the values of the viscosity \(\mu(g_{n})\) for general Gram points appear to be distributed without a discernible pattern or lower boundary. In comparison, Fig. 6 illustrates an intriguing behaviour for the first \(n=1,\ldots,1000\) bad Gram points, denoted as \(g_{n}^{bad}\).

Figure 6. Viscosity \(\mu(g_{n}^{bad})\) for the first \(n=1,\ldots,1000\) bad Gram points, apparently bounded from below by a constant \(C>4\).

Remarkably, the data suggests that the viscosity \(\mu(g_{n}^{bad})\) of bad Gram points appears to be bounded from below by a constant \(C\), which is marginally greater than \(4\). This stands in contrast to the general case and represents what seems to be an unforeseen characteristic typical of bad Gram points. However, having observed this previously undiscovered bounded behaviour of \(\mu(g_{n}^{\text{bad}})\), we encounter another surprising layer of complexity as the computational range is extended further. In fact, as we proceed to higher values, a sparse subset of bad Gram points is discovered to sporadically defy this lower bound, yielding bad Gram points with viscosity dramatically below \(C\). We will refer to such unusual instances of bad Gram points as _corrupt_ Gram points. For example, the \(9807962\)-th Gram point is corrupt and its viscosity is \[\mu(g_{9807962})=0.0750883.\] Figure 7 shows the viscosity \(\mu(g_{n}^{bad})\) of all the bad Gram points arising between the \(2.4\cdot 10^{7}\)-th and \(2.43\cdot 10^{7}\)-th Gram points, with the corrupt points marked red.

Figure 7. Viscosity \(\mu(g_{n}^{bad})\) of the bad Gram points between the \(2.4\cdot 10^{7}\)-th and \(2.43\cdot 10^{7}\)-th Gram points, with corrupt points marked red.

The emergence of such corrupt Gram points may initially seem contradictory to the insights we've developed thus far. However, it turns out that these corrupt Gram points are not mere anomalies but exhibit a distinctive characteristic of their own: they are observed to occur only under very specific conditions. Let us recall the following classic definition due to [17]:

**Definition 8.2** (Gram block).: A consecutive collection \(\{g_{n},g_{n+1},...,g_{n+N}\}\) of Gram points is called a _Gram block_ if \(g_{n}\) and \(g_{n+N}\) are good Gram points while \(g_{n+j}\) are bad Gram points for \(j=1,...,N-1\). We refer to a bad Gram point as _isolated_ if it is the middle point of a block with \(N=2\).

For instance, the corrupt Gram point from the example above is part of the following Gram block of length \(N=3\): \[\left\{9807960,9807961,9807962,9807963\right\}.\] Our experimental observations in this section are thus summarized in the following conjecture:

**Conjecture 8.1** (G-B-G).: _Corrupt Gram points must be non-isolated. In particular, if a bad Gram point \(g_{n}\) is in a triplet \(\{g_{n-1},g_{n},g_{n+1}\}\) such that \(g_{n-1}\) and \(g_{n+1}\) are good Gram points, then \(g_{n}\) cannot be corrupt._
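For readers who wish to reproduce the flavour of these observations on a small scale, the following Python sketch (not the authors' code) classifies Gram points as good or bad via the sign of \((-1)^{n}Z(g_{n})\), detects isolated bad points, i.e. the G-B-G configuration of Conjecture 8.1, and flags bad points whose viscosity falls below \(4\). The threshold \(4\) and the tiny scanning range are illustrative choices only; reaching the corrupt examples quoted above would require scanning up to indices of order \(10^{7}\).

```python
# A small illustrative sketch (not the authors' code): classify Gram points as
# good/bad via the sign of (-1)^n Z(g_n), detect isolated bad points (the
# G-B-G configuration of Conjecture 8.1), and flag bad points whose viscosity
# drops below 4, i.e. candidates for "corrupt" behaviour.
from mpmath import mp, grampoint, siegelz, diff, fabs

mp.dps = 25

def is_good(n):
    return (-1) ** n * siegelz(grampoint(n)) > 0

def viscosity(n):
    g = grampoint(n)
    return fabs(diff(siegelz, g) / siegelz(g))

def scan(n_min, n_max):
    good = {n: is_good(n) for n in range(n_min - 1, n_max + 2)}
    for n in range(n_min, n_max + 1):
        if good[n]:
            continue  # only bad Gram points are of interest here
        isolated = good[n - 1] and good[n + 1]      # G-B-G triplet
        mu = viscosity(n)
        tag = "corrupt?" if mu < 4 else ""
        print(f"n={n}  isolated={isolated}  mu={mu}  {tag}")

if __name__ == "__main__":
    scan(120, 140)   # a short illustrative range containing the bad point g_126
```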
We will see in the following sections that the bounded nature of \(\mu(g_{n}^{\text{bad}})\) for G-B-G points, which is expressed in Conjecture 8.1 and is observed empirically to extend substantially beyond the sample examples presented here, carries significant and profound implications for the study of Gram points as well as the distribution of zeros of the \(Z\)-function. Let us conclude this section with the following remark regarding notions of repulsion between consecutive zeros of the \(Z\)-function:

_Remark 8.3_ (Repulsion and Montgomery's Conjecture).: The observed viscosity bound, \(\mu(g_{n})>C\), implies that a large absolute value of \(Z(g_{n})\) necessitates a large absolute value for \(Z^{\prime}(g_{n})\). Consequently, if there exists a force pushing the value of \(Z(t)\) at \(g_{n}\) towards the axis, there must also be a lateral force pushing it sideways. This suggests that, on an infinitesimal level, an extremal G-B-G point \(g_{n}(r)\) resists a change in sign. Given that a change in sign at \(g_{n}(r)\) would arise when two consecutive zeros collide, this viscosity bound unveils a previously unrecognized repulsion property between adjacent zeros of the \(Z\)-function. For \(\alpha\leq\beta\) define \[A(T;\alpha,\beta):=\left\{(\rho,\rho^{\prime})\mid 0<\rho,\rho^{\prime}<T\text{ and } \frac{2\pi\alpha}{\ln(T)}\leq\rho-\rho^{\prime}\leq\frac{2\pi\beta}{\ln(T)}\right\}. \tag{12}\] Montgomery's famous pair correlation conjecture (under the assumption of RH) states that \[N(T;\alpha,\beta):=\sum_{A}1\sim\left(\int_{\alpha}^{\beta}\left(1-\frac{\sin(\pi u)}{\pi u}\right)\,du+\delta_{0}([\alpha,\beta])\right)\frac{T}{2\pi}\ln(T) \tag{13}\] as \(T\to\infty\), see [15]. Due to the decay of the integrand for small \(u\), this conjecture is often interpreted as expressing statistical repulsion between consecutive zeros. It is crucial to note that while both our approach and Montgomery's conjecture discuss a form of repulsion between zeros, the nature of this repulsion is fundamentally different. In Montgomery's conjecture, the repulsion is statistical and encapsulates a property of the zero distribution on a large scale. In contrast, our approach, guided by the dynamic nature of the A-philosophy, reveals a repulsion property which occurs at the infinitesimal level between specific pairs of consecutive zeros. This explicit, dynamic behaviour might be a foundational mechanism whose existence can be viewed as implicitly anticipated by Montgomery's statistical conjecture.

## 9. The Failure of the G-B-G Property for the DH-Function

The Davenport-Heilbronn functions are a specialized class of Dirichlet functions. Unlike the Riemann zeta function, these functions are known to have zeros that deviate from the critical line, thereby violating their corresponding RH property [2, 21]. This enigmatic contrast between the behaviours of the Riemann zeta function and the Davenport-Heilbronn functions is often presented as a compelling illustration of the elusive nature of RH. We define the function as follows:

**Definition 9.1** (DH-function).: We define the Davenport-Heilbronn function, \(\mathcal{D}(s)\), as \[\mathcal{D}(s)=\frac{(1-i\kappa)}{2}L(s,\chi_{5,2})+\frac{(1+i\kappa)}{2}L(s,\overline{\chi}_{5,2}),\] where \(\kappa=\frac{\sqrt{10-2\sqrt{5}}-2}{\sqrt{5}-1}\).
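For completeness, here is a sketch of how \(\mathcal{D}(s)\) can be evaluated numerically by writing each Dirichlet \(L\)-function through Hurwitz zeta values, \(L(s,\chi)=5^{-s}\sum_{a=1}^{4}\chi(a)\zeta(s,a/5)\). The character values used below, \(\chi(1),\chi(2),\chi(3),\chi(4)=1,i,-i,-1\), are the ones customarily taken for \(\chi_{5,2}\) in this construction; this is an assumption about the paper's indexing, not something fixed by the text.

```python
# A sketch (assumptions flagged below) of evaluating the Davenport-Heilbronn
# function of Definition 9.1 with mpmath, via
#   L(s, chi) = 5^(-s) * sum_{a=1..4} chi(a) * zeta(s, a/5).
# Assumption: chi_{5,2} is the character mod 5 with chi(2) = i, so that
# chi(1), chi(2), chi(3), chi(4) = 1, i, -i, -1; the paper's indexing may differ.
from mpmath import mp, mpc, mpf, zeta, sqrt

mp.dps = 25

kappa = (sqrt(10 - 2 * sqrt(5)) - 2) / (sqrt(5) - 1)

CHI = {1: mpc(1, 0), 2: mpc(0, 1), 3: mpc(0, -1), 4: mpc(-1, 0)}

def L(s, conjugate=False):
    """Dirichlet L-function mod 5 for chi (or its conjugate) via Hurwitz zeta."""
    total = mpc(0)
    for a, chi_a in CHI.items():
        c = chi_a.conjugate() if conjugate else chi_a
        total += c * zeta(s, mpf(a) / 5)
    return 5 ** (-s) * total

def D(s):
    """Davenport-Heilbronn function D(s) as in Definition 9.1."""
    return (1 - 1j * kappa) / 2 * L(s) + (1 + 1j * kappa) / 2 * L(s, conjugate=True)

if __name__ == "__main__":
    s = mpc(0.5, 85.0)   # a sample point on the critical line
    print(D(s))
```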
The function satisfies the functional equation \(\xi(s)=\xi(1-s)\), where \[\xi(s)=\left(\frac{\pi}{5}\right)^{-\frac{s}{2}}\Gamma\left(\frac{1+s}{2} \right)\mathcal{D}(s).\] To carry out an analysis analogous to our earlier treatment of the \(Z\)-function, we introduce the core function of \(\mathcal{D}(s)\): \[\mathcal{D}_{0}(s):=\frac{1}{2}\left[\left(\frac{\pi}{5}\right)^{-\frac{s}{2} }\Gamma\left(\frac{1+s}{2}\right)+\left(\frac{\pi}{5}\right)^{\frac{s-1}{2}} \Gamma\left(\frac{2-s}{2}\right)\right].\] Similar to the \(Z\)-function, we can define the Davenport-Heilbronn \(Z\)-function, \(Z^{DH}(t)\), the \(A\)-space \(Z^{DH}_{N}(t;\overline{a})\), and the Gram discriminant \(\Delta^{DH}_{n}(r)\). The zeros and extremal points of the core function \(Z^{DH}_{0}(t)\) are given by the following relations: \[t^{DH}_{n}:=\tfrac{2\pi(n-\frac{5}{8})}{W_{0}(5e^{-1}(n-\frac{5}{8}))}\ \ \ ;\ \ g^{DH}_{n}:=\tfrac{2\pi(n-\frac{1}{8})}{W_{0}(5e^{-1}(n-\frac{1}{8}))}. \tag{14}\] Our attention is particularly drawn to the first pair of zeros deviating from the critical line, which occur near the Gram point \(g^{DH}_{44}\). This deviation is depicted in Figure 8, where we plot the Gram discriminant \(\Delta^{DH}_{44}(r)\) and its first order approximation \(Z^{DH}_{N}(g^{DH}_{n};r)\) over the interval \(0\leq r\leq 1\). The behaviour exhibited by \(\Delta^{DH}_{44}(r)\) is seen to be a sort of mixture of the behaviour of \(\Delta_{91}(r)\) and \(\Delta_{126}(r)\) for the \(Z\)-function, presented in Example 5.1. In the sense that, on the one hand, the first-order approximation \(Z^{DH}_{N}(g^{DH}_{n};r)\) is seen to be in superb alignment with \(\Delta^{DH}_{44}(r)\) (like for the good Gram point \(g_{91}\)) while, on the other hand, it is seen to nevertheless violate the Gram law (as for the bad Gram point \(g_{126}\)). In particular, Fig. 8 shows that, in strict contrast to the case of the \(Z\)-function, for the Davenport-Heilbron function \(Z^{DH}(t)\), the violation of Gram's law is not the result of non-linearity of the discriminant \(\Delta_{n}^{DH}(r)\), but rather represents a genuine violation of the corrected Gram's law, as well. Note that this violation of the corrected Gram's law, in contrast to the case of the bad Gram point \(g_{126}\), is the result of an unavoidable collision between the zeros \(t_{44}^{DH}(r)\) and \(t_{45}^{DH}(r)\) in the transition from \(r=0\) and \(r=1\), which will occur for any curve in the parameter space connecting the core \(Z_{N}^{DH}(t;0)\) to \(Z_{N}^{DH}(t;1)\). We thus have: **Corollary 9.1**.: _The Davenport-Heilbronn function \(D(s)\) violates the corrected Gram's law._ Figure 9 shows the graphs of \(Z_{N}^{DH}(t;r)\) in the range \(t\in[g_{44}^{DH}-2,g_{44}^{DH}+2]\) for various values of \(r\in[0,1]\): In particular, Fig. 9 shows us that the extremal point \(g_{44}^{DH}(r)\) itself presents a behaviour more similar to that of the good Gram point \(g_{90}(r)\) rather than that of the bad Gram point \(g_{126}(r)\), in the sense that it hardly exhibits a shift to the sides and remains rather almost fixed in its position as \(r\) grows. In particular, the results of this section could be summarized in the following: **Corollary 9.2**.: _The Davenport-Heilbronn function \(D(s)\) violates the G-B-G repulsion property._ It should be noted that a violation of the G-B-G property should be viewed as a more primal phenomena than a violation of RH, or the corrected Gram's law for that matter. 
It means that \(Z^{DH}(t)\) lacks the regulatory property that \(Z(t)\) is expected to have, of pushing to the sides bad Gram points, \(g_{n}(r)\) as \(r\) grows. ## 10. An In-Depth Investigation of The G-B-G Repulsion Property and Adjustments of \(Z(g_{n\pm 1})\) In Section 8, inspired by our discriminant analysis, we have experimentally discovered the G-B-G repulsion property according to which a bad Gram point \(g_{n}\) with good consecutive neighbours \(g_{n\pm 1}\) is expected to satisfy the viscosity bound \(\mu(g_{n})>4\). In this section, we undertake a formal exploration of the G-B-G property, aiming to uncover why the characteristics of \(g_{n}\) are anticipated to be interconnected with those of its consecutive neighbours. While a comprehensive proof of the G-B-G property is still beyond reach, subsequent sections will elucidate why we consider this property a significant stride towards a deeper understanding of the RH, and how it profoundly influences the properties of the zeros of the \(Z\)-function. ### The Cosine and Sine Adjustments of \(Z(g_{n\pm 1})\) In order to explore why the values of \(Z^{\prime}(g_{n})\) and \(Z(g_{n})\) are expected to be closely related to \(Z(g_{n-1})\) and \(Z(g_{n+1})\), we will turn to the classical Approximate Functional Equation (AFE), which is commonly used to compute the values of \(Z\) and \(Z^{\prime}\) at Gram points. The AFE is represented by the following formulas: \[\begin{cases}Z(g_{n})=2(-1)^{n}\sum_{k=1}^{N(n)}\frac{\cos(\ln(k)g_{n})}{\sqrt {k}}+O\left(g_{n}^{-\frac{1}{4}}\right),\\ Z^{\prime}(g_{n})=(-1)^{n}\sum_{k=1}^{N(n)}\ln\left(\frac{g_{n}}{2\pi k^{2}} \right)\frac{\sin(\ln(k)g_{n})}{\sqrt{k}}+O\left(g_{n}^{-\frac{1}{4}}\right),\end{cases} \tag{15}\] where \(N(n):=\left[\sqrt{\frac{g_{n}}{2\pi}}\right]\) for any \(n\in\mathbb{Z}\). For a detailed derivation, see (5.2) and (6.3) in [10]. _Remark 10.2_.: In Remark 2.1, we noted that we often compute \(Z(t)\) using the approximation (2) with a sum of \(\left[\frac{g_{n}}{2}\right]\) terms for the \(A\)-philosophy. However, in our current study focusing specifically on the values at \(g_{n}\), the classical Approximate Functional Equation (AFE) with \(N(n)\) terms is more suitable and will be employed. Let us introduce the following adjustments of \(Z(g_{n\pm 1})\) to account for the influence of the adjacent Gram points \(g_{n-1}\) and \(g_{n+1}\) on \(Z\) and \(Z^{\prime}\): **Definition 10.3** (Cosine and Sine Adjustments of \(Z(g_{n\pm 1})\)).: For any integer \(n\in\mathbb{Z}\), define: 1. \(Z_{c}^{\pm}(g_{n})=2(-1)^{n}\sum_{k=1}^{N(n)}\frac{\cos(\ln(k)g_{n\pm 1})}{ \sqrt{k}\cos(\phi_{k}^{n})}\), to which we refer as the cosine-adjustment of \(Z(g_{n\pm 1})\). 2. \(Z_{s}^{\pm}(g_{n})=(-1)^{n}\sum_{k=1}^{N(n)}\ln\left(\frac{g_{n}}{2\pi k^{2}} \right)\frac{\cos(\ln(k)g_{n\pm 1})}{\sqrt{k}\sin(\phi_{k}^{n})}\), to which we refer as the sine-adjustment of \(Z(g_{n\pm 1})\). 
where for \(k=1,\ldots,N(n)\) the adjustment-phase is given by \[\phi_{k}^{n}:=\ln(k)(g_{n}-g_{n-1})=\ln(k)\frac{2\pi}{\ln\left(\frac{g_{n}}{2\pi }\right)}.\] _Remark 10.4_ (\(\alpha\)-adjustments of \(Z(g_{n})\)).: In general, for any sequence \(\alpha=\alpha(n,k)\) let us define the \(\alpha\)-adjustments of \(Z(g_{n})\) with respect to \(g_{n\pm 1}\) to be \[Z^{\pm}(g_{n};\alpha):=(-1)^{n}\sum_{k=1}^{N(n)}\alpha(n,k)\frac{\cos(\ln(k)g_ {n\pm 1})}{\sqrt{k}}.\] The cosine and sine adjustments of 10.3 correspond to the following two sequences \[\alpha_{c}(n,k)=\tfrac{2}{\cos(\phi_{k}^{n})}\ \ ;\ \ \alpha_{s}(n,k)=\ln\left( \tfrac{g_{n}}{2\pi k^{2}}\right)\tfrac{1}{\sin(\phi_{k}^{n})}.\] The following result establishes the relationship between the values \(Z(g_{n})\) and \(Z^{\prime}(g_{n})\) and the cosine and sine adjustments of \(Z(g_{n\pm 1})\): **Proposition 10.1**.: _For any \(n\in\mathbb{Z}\) the following holds:_ 1. \(Z(g_{n})=\tfrac{1}{2}\left(Z_{c}^{-}(g_{n})+Z_{c}^{+}(g_{n})\right)+O\left(g_ {n}^{-\frac{1}{4}}\right)\)_._ 2. \(Z^{\prime}(g_{n})=\tfrac{1}{2}\left(Z_{s}^{-}(g_{n})-Z_{s}^{+}(g_{n})\right)+O \left(g_{n}^{-\frac{1}{4}}\right)\)_._ Proof.: The result follows from the identities \[\cos(\ln(k)g_{n})=\tfrac{\cos(\ln(k)g_{n-1})+\cos(\ln(k)g_{n+1})}{2\cos(\phi_ {k}^{n})}\ \ \ ;\ \ \sin(\ln(k)g_{n})=\tfrac{\cos(\ln(k)g_{n-1})-\cos(\ln(k)g_{n+1})}{2\sin(\phi_ {k}^{n})}\] We are thus interested in the relation between \(Z(g_{n\pm 1})\) and the adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\). Let us investigate the properties of the phase \(\phi_{k}^{n}\) and the \(\alpha\)-functions. Figure 10 shows the graphs of \(\alpha_{c}(n,k)\) and \(\alpha_{s}(n,k)\) for \(n=9807962\) and \(k=1,...,N(n)\): Figure 11 shows the adjustment-phase \(\phi_{k}^{n}\) for \(n=9807962\) and \(k=1,...,N(n)\): The following describes the general properties of \(\alpha_{c}(n,k),\alpha_{s}(n,k)\) and \(\phi_{k}^{n}\) as continuous function of the variable \(k\): **Lemma 10.1**.: _For any integer \(n\in\mathbb{Z}\), the functions \(\alpha_{c}(n,k)\) and \(\alpha_{s}(n,k)\) exhibit the following behaviors:_ 1. _The function_ \(\alpha_{c}(n,k)\) _starts at_ \(\alpha_{c}(n,0)=2(-1)^{n}\) _and increases monotonically on the interval_ \(0\leq k\leq\sqrt{N(n)}\)_. Furthermore, it satisfies_ \[\lim_{k\to\sqrt{N(n)}^{\pm}}\alpha_{c}(n,k)=\mp\infty\] _before rising to_ \(\alpha_{c}(n,N(n))=-1\)_._ 2. _The function_ \(\alpha_{s}(n,k)\) _starts at_ \(\alpha_{s}(n,0)=+\infty\) _and decreases monotonically until it reaches_ \(\alpha_{s}(n,N(n))=\frac{1}{\pi}\ln\left(\frac{g_{n}}{2\pi}\right)\)_._ 3. _The adjustment-phase_ \(\phi_{k}^{n}\) _begins at_ \(\phi_{1}^{n}=0\)_, rises to_ \(\phi_{N(n)}^{n}=\pi\)_, and attains a value of_ \(\phi_{\sqrt{N(n)}}^{n}=\frac{\pi}{2}\)_._ Proof.: \((1),(2)\) follow from (3), which in turn follows from the definition of the phase via direct computation, the rest is immediate. ### Approximating Adjustments via the Localized Sub-Sums of \(Z(g_{n})\) By definition, the adjustments modify the terms of the sums \(Z(g_{n\pm 1})\) through the \(\alpha(n,k)\)-function. While the classical sums \(Z(g_{n\pm 1})\) and their adjusted variants behave differently on a global scale, we observe that they become approximately proportional when summed over a restricted localized set of indices. This observation leads us to define: **Definition 10.3** (Localized Sums).: Let \(1\leq a<b\leq N(n)\). 
The localized sum of \(Z(g_{n})\) within the interval \([a,b]\) is given by: \[Z(g_{n};a,b):=2(-1)^{n}\sum_{k=a}^{b}\frac{\cos(\ln(k)g_{n})}{\sqrt{k}}.\] Similarly, we introduce the localized sums for \(Z^{\prime}(g_{n})\) and \(Z^{\pm}_{c}(g_{n}),Z^{\pm}_{s}(g_{n})\). Figure 11. The adjustment-phase \(\phi_{k}^{n}\) for \(n=9807962\) and \(k=1,...,N(n)\). To quantify the average effect of the \(\alpha\)-function within an interval, we define: \[\alpha^{avg}(n;a,b):=\frac{1}{b-a}\int_{a}^{b}\alpha(n,k)\,dk.\] As shown in Lemma 10.1 and depicted in Fig. 10, we observe that both \(\alpha_{c}(n,k)\) and \(\alpha_{s}(n,k)\) approach constant values for sufficiently large \(k\) within the interval \([1,N(n)]\). Consequently, their corresponding averages over an interval \([a,N(n)]\) also approach these constant values, as \(a\) grows. Specifically, we have: \[\alpha_{c}^{avg}(n;a,N(n))\approx-1\ \ \ ;\ \ \alpha_{s}^{avg}(n;a,N(n)) \approx\tfrac{1}{\pi}\ln\left(\tfrac{g_{n}}{2\pi}\right),\] for relatively large \(1<a\), and the quality of the approximation improves as \(a\) increases. For instance, consider Fig. 12, which illustrates the following graphs for \(n=9807962\) and \(k=1,\ldots,N(n)\): 1. \(Z_{c}^{-}(g_{n};k,N(n))\) (blue), and \(-Z(g_{n-1};k,N(n))\) (brown). 2. \(Z_{s}^{-}(g_{n};k,N(n))\) (blue), and \(\tfrac{1}{\pi}\ln\left(\tfrac{g_{n}}{2\pi}\right)Z(g_{n-1};k,N(n))\) (brown). As expected, Fig. 12 shows that the localized sums \(Z(g_{n\pm 1})\) and their adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\) are approximately proportional to each other, via the constants of 10.2. This proportionality holds within intervals of the form \([a,N(n)]\), with \(a\) is significantly larger than \(1\) yet still much smaller than \(N(n)\). However, when the value of \(a\) is relatively small, the localized sums \(Z(g_{n\pm 1})\) start to deviate from their adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\), and the proportionality no longer holds. Nonetheless, for any given interval \([a,b]\), the following approximations can be considered: \[\begin{split}& Z_{c}^{\pm}(g_{n};a,b)\approx\alpha_{c}^{avg}(n;a,b) \cdot Z(g_{n\pm 1};a,b),\\ & Z_{s}^{\pm}(g_{n};a,b)\approx\alpha_{s}^{avg}(n;a,b)\cdot Z(g_{ n\pm 1};a,b),\end{split} \tag{16}\] Figure 12. Graphs of (a) \(Z_{c}^{-}(g_{n};k,N(n))\) (blue) and \(-Z(g_{n-1};k,N(n))\) (brown). (b) \(Z_{s}^{-}(g_{n};k,N(n))\) (blue) and \(\tfrac{1}{\pi}\ln\left(\tfrac{g_{n}}{2\pi}\right)Z(g_{n-1};k,N(n))\) (brown). For \(n=9807962\) and \(k=1,\ldots,N(n)\). The accuracy of these approximations increases as the size of the interval decreases. In practice, we find that these approximations remain relatively accurate even for intervals \([a,b]\) close to the origin and larger than initially expected. We suspect this resilience is due to the fluctuations of the cosine terms within the sums. Consequently, the adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\) can be expressed as composites of localized sums \(Z(g_{n\pm 1})\), each scaled by its corresponding constant. 
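Before turning to the partition formula below, it may help to see these sums numerically. The following is a small sketch (not code from this paper) that evaluates the AFE sums (15) at a Gram point and a localized sum \(Z(g_{n};a,b)\) as just defined, using mpmath's `siegelz` as a rough cross-check; the \(O(g_{n}^{-1/4})\) error terms of (15) are simply dropped, so only approximate agreement should be expected.

```python
# A small sketch (not the authors' code) of the AFE sums (15) and the
# localized sums Z(g_n; a, b), cross-checked against mpmath's siegelz.
from mpmath import mp, grampoint, siegelz, sqrt, log, cos, sin, pi, floor

mp.dps = 25

def afe_terms(n):
    """Terms of the AFE sums (15) for Z(g_n), Z'(g_n); N(n) = floor(sqrt(g_n/2pi))."""
    g = grampoint(n)
    N = int(floor(sqrt(g / (2 * pi))))
    sign = (-1) ** n
    z_terms  = [2 * sign * cos(log(k) * g) / sqrt(k) for k in range(1, N + 1)]
    zp_terms = [sign * log(g / (2 * pi * k**2)) * sin(log(k) * g) / sqrt(k)
                for k in range(1, N + 1)]
    return g, z_terms, zp_terms

def localized(terms, a, b):
    """Localized sum over the index window a..b (1-based, inclusive)."""
    return sum(terms[a - 1:b])

if __name__ == "__main__":
    n = 126   # the bad Gram point used in earlier examples
    g, z_terms, zp_terms = afe_terms(n)
    print("AFE  Z(g_n)  ~", sum(z_terms), "   mpmath:", siegelz(g))
    print("AFE  Z'(g_n) ~", sum(zp_terms))
    print("Z(g_n; 2, N(n)) =", localized(z_terms, 2, len(z_terms)))
```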
In particular, in order to approximate \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\) in terms of localized sums of \(Z(g_{n\pm 1})\), one can partition \([1,N(n)]\) into \(1=a_{1}<a_{2}<\ldots<a_{r}=N(n)\) and consider: \[\begin{split} Z_{c}^{\pm}(g_{n})&\approx\sum_{k=1}^ {r-1}\alpha_{c}^{\text{avg}}(n;a_{k},a_{k+1})\cdot Z(g_{n\pm 1};a_{k},a_{k+1}), \\ Z_{s}^{\pm}(g_{n})&\approx\sum_{k=1}^{r-1}\alpha_{ s}^{\text{avg}}(n;a_{k},a_{k+1})\cdot Z(g_{n\pm 1};a_{k},a_{k+1}),\end{split} \tag{17}\] which would give good approximations assuming the partition is chosen to be refined enough. Figure 13 provides a schematic representation of the relationships among \(Z(g_{n})\), \(Z^{\prime}(g_{n})\), \(Z(g_{n\pm 1})\), and their adjustment terms \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\) so-far discussed in this section. The perpendicular arrows represent the relationships established in Proposition 10.1, which express \(Z(g_{n})\) and \(Z^{\prime}(g_{n})\) as combinations of the adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\). On the other hand, the vertical arrows depict the approximations given by (17), enabling us to approximate the adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\) through weighted partitions of Figure 13. Schematic representation of the relations between \(Z(g_{n}),Z^{\prime}(g_{n}),Z(g_{n\pm 1})\) and the adjustments \(Z_{c}^{\pm}(g_{n})\) and \(Z_{s}^{\pm}(g_{n})\). \(Z(g_{n})\). Together, these results shed light on the expected intimate relationship between \(Z(g_{n})\), \(Z^{\prime}(g_{n})\) and \(Z(g_{n\pm 1})\). However, we have still not discussed the central question: why the consecutive Gram neighbours \(g_{n\pm 1}\) being good or bad should have an effect on the proportion between \(Z(g_{n})\) and \(Z^{\prime}(g_{n})\)? ### Further Numerical Study of the Repulsion Property and its Violations In this sub-section, we seek to numerically investigate why the _good_ or _bad_ nature of the Gram neighbours \(g_{n\pm 1}\) is expected to have an effect on the proportion between \(Z(g_{n})\) and \(Z^{\prime}(g_{n})\). Figure 14 presents the partial sums \(Z(g_{n};1,k)\) and \(Z^{\prime}(g_{n};1,k)\) as \(k\) ranges from \(1\) to \(N(n)\) for (a) the G-B-G bad Gram point at \(n=730119\) and (b) the non-G-B-G bad Gram point at \(n=9807962\): Upon examining the partial sums of \(Z^{\prime}(g_{n})\), we identify three distinct stages across the index range \(k\) for both cases: 1. _Initial surge:_ Within the first \(\left\lceil\sqrt[4]{g_{n}}\right\rceil\) terms, both cases exhibit a significant upward spike. This is attributed to the consistent sign of \(\cos(\ln(k)g_{n})\) in this segment. Coupled with the relatively large values of \(\alpha_{s}(n,k)\) described by Lemma 10.1, this creates a pronounced surge. 2. _Middle fluctuation:_ During this phase, the G-B-G point \(n=730119\) displays a moderate upward shift following the initial impact. On the other hand, the non-G-B-G point \(n=9807962\) experiences a substantial downward pull, nearly offsetting the initial surge. 3. _Final stability:_ As we progress to the latter part of the range, both sums stabilize and show minimal variation. As \(k\) nears \(N(n)\), the partial sums converge to \(Z^{\prime}(g_{n})\). Let us now explain a few of the observed properties, by utilizing our adjustments \(Z_{s}^{\pm}(g_{n})\). We have: Figure 14. 
The partial sums \(Z(g_{n};1,k)\) and \(Z^{\prime}(g_{n};1,k)\) as \(k\) ranges from \(1\) to \(N(n)\) for (a) the G-B-G bad Gram point at \(n=730119\) and (b) the non-G-B-G bad Gram point at \(n=9807962\). **Lemma 10.4.1**.: _The \(k\)-th term of the sum of \(Z^{\prime}(g_{n})\) is given by_ \[(-1)^{n}\frac{\alpha_{s}(n,k)}{\sqrt{k}}\left[\cos(\ln(k)g_{n-1})-\cos(\ln(k)g_ {n-1}+2\phi_{k}^{n})\right],\] _where \(\phi_{k}^{n}\) is the adjustment-phase._ Proof.: Recall that by Proposition 10.1 we have \[Z^{\prime}(g_{n})=\frac{1}{2}\left(Z_{s}^{-}(g_{n})-Z_{s}^{+}(g_{n})\right)\] Hence, each individual term of \(Z^{\prime}(g_{n})\) can be expressed as \[(-1)^{n}\frac{\ln\left(\frac{g_{n}}{2\pi k^{2}}\right)}{\sqrt{k}sin(\phi_{k}^ {n})}\left[\cos(\ln(k)g_{n-1})-\cos(\ln(k)g_{n+1})\right], \tag{18}\] and by definition \(g_{n+1}=g_{n-1}+2\phi_{k}^{n}\). From the above, we can thus derive the following insights regarding the observed behaviour: 1. _Final stability_ By Lemma 10.1 we know that \(\phi_{k}^{n}\approx\pi\) for \(k\) close to the end of the range \(N(n)\). Hence by Lemma 10.4.1 for such indices the terms in this range satisfy: \[(-1)^{n}\frac{\alpha_{s}(n,k)}{\sqrt{k}}\left[\cos(\ln(k)g_{n-1})-\cos(\ln(k) g_{n-1}+2\phi_{k}^{n})\right]\approx\] (19) \[\approx(-1)^{n}\frac{\alpha_{s}(n,k)}{\sqrt{k}}\left[\cos(\ln(k)g_{n-1})- \cos(\ln(k)g_{n-1}+2\pi)\right]=\] (20) \[=(-1)^{n}\frac{\alpha_{s}(n,k)}{\sqrt{k}}\left[\cos(\ln(k)g_{n-1})-\cos(\ln( k)g_{n-1})\right]=0,\] (21) which elucidates why the contribution of the terms becomes negligible as \(k\) approaches \(N(n)\). 2. _Middle Fluctuations:_ Lemma 10.4.1 shows that the sign of the \(k\)-th term is fundamentally determined by the difference between \(\cos(\ln(k)g_{n-1})\) and \(\cos(\ln(k)g_{n+1})\). Consider, for instance, our specific case of \(n=9807962\), which is a non-G-B-G point. In particular, in this case the left neighbour \(g_{n-1}\) is also a bad Gram point. By definition, a Gram point is bad if \(\sum_{k=1}^{N(n)}\frac{\cos(\ln(k)g_{n})}{\sqrt{k}}<0\). This entails that the values of \(\frac{\cos(\ln(k)t)}{\sqrt{k}}\) for \(t=g_{n}\) and \(t=g_{n-1}\) possess a significant tendency for negativity. We'll delve deeper into this notion shortly, in sub-Section 10.5. In our scenario, we observe that many of the negative values for \(g_{n-1}\) turn to cluster in the middle region, pulling the partial sums back towards the axis. This becomes especially evident within the range: \[\left[\frac{1}{2}\sqrt[4]{\frac{g_{n}}{2\pi}}\right]\leq k\leq\left[2\sqrt[4]{ \frac{g_{n}}{2\pi}}\right],\] (22) This specific range is highlighted in Figure 15: In this range, the adjustment-phase is given approximately by \(2\phi_{k}^{n}\approx\pi\) according to Lemma 10.1. Consequently, within this range, the terms of \(Z^{\prime}(g_{n})\) are approximately given by: \[(-1)^{n}\frac{\alpha_{s}(n,k)}{\sqrt{k}}\left[\cos(\ln(k)g_{n-1})-\cos(\ln(k)g_ {n-1}+2\phi_{k}^{n})\right]\approx \tag{23}\] \[(-1)^{n}\frac{\alpha_{s}(n,k)}{\sqrt{k}}\left[\cos(\ln(k)g_{n-1})-\cos(\ln(k)g_ {n-1}+\pi)\right]= \tag{24}\] \[2(-1)^{n}\frac{\alpha_{s}(n,k)}{\sqrt{k}}\cos(\ln(k)g_{n-1}), \tag{25}\] according to Lemma 10.4.1. This expected behaviour is indeed illustrated in the following Fig. 16: The values of \(2cos(ln(k)g_{n-1})\) are seen to be predominantly negative which causes the observed decrease in the values of the partial sums of \(Z^{\prime}(g_{n})\) in this region. 3. 
_Initial range:_ The main feature of the repulsion conjecture necessitates that in an initial surge exists then the moderate behaviour in the middle range must follow. Figure 15. The partial sums \(Z^{\prime}(g_{n};1,k)\) as \(k\) varies from \(1\) to \(N(n)\) for \(n=9807962\). The region defined by (22) is accentuated in orange. That is, that there exists a strong correlation between the values of \(cos(ln(k)g_{n})\) in the range up to \(\sqrt{N(n)}\) and those from \(\sqrt{N(n)}\) to \(N(n)\). We have yet to pinpoint this correlation in the context of \(Z(t)\), but it is worth noting that for individual terms, such a correlation is inherently evident due to basic trigonometric identities. For instance, the double-angle formula for cosine \[\cos(\ln(k^{2})g_{n})=\cos(2\ln(k)g_{n})=2\cos^{2}(\ln(k)g_{n})-1\] illuminates that terms from \([1,\sqrt{N(n)}]\) naturally correlate with those in the range \([\sqrt{N(n)},N(n)]\). However, this direct correlation via the double-angle identity maps terms from \([1,\sqrt{N(n)}]\) to a sparse subset of \([\sqrt{N(n)},N(n)]\) through \(k\mapsto k^{2}\). Unravelling why a profound correlation should prevail, impacting the collective values of terms across these two distinct ranges, is a complex task. A future thorough examination, likely incorporating statistical and probabilistic approaches, is essential for a more comprehensive understanding. Figure 17 presents the partial sums \(Z(g_{n};1,k)\) and \(Z^{\prime}(g_{n};1,k)\) as \(k\) ranges from \(1\) to \(N(n)\) for the good Gram point \(n=195644\): In Fig. 17, a distinctly different behaviour is exhibited when contrasted with the bad Gram points we previously considered. For the good Gram point \(n=195644\), there is no initial surge in the partial sums of \(Z^{\prime}(g_{n})\). Instead, the initial terms seem to counterbalance each other. Furthermore, within the middle range, there's only a modest shift in the partial sums. As a result, the value of \(Z^{\prime}(g_{n})\) remains relatively small, especially when compared to \(Z(g_{n})\). Finally, we should highlight that the initial surge observed in the partial sums \(Z^{\prime}(g_{n})\) for the G-B-G Gram point \(n=730119\) is common but not a universal occurrence. There are exceptions, as depicted in Fig. 18, which presents the partial sums \(Z(g_{n};1,k)\) and \(Z^{\prime}(g_{n};1,k)\) for the G-B-G bad Gram point \(n=300894\), where the surge in the initial Figure 17. the partial sums \(Z(g_{n};1,k)\) and \(Z^{\prime}(g_{n};1,k)\) as \(k\) ranges from \(1\) to \(N(n)\) for the good Gram point \(n=195644\). range is notably absent. In this figure, as \(k\) varies from \(1\) to \(N(n)\), a different pattern of behaviour in the partial sums is clearly illustrated. As illustrated in Fig. 18, there is an absence of an initial surge and the terms appear to counterbalance each other, reminiscent of the behaviour observed for the good Gram point \(n=195644\) shown earlier. However, a surge is seen eventually to emerge further along the range, as expected by the G-B-G repulsion conjecture. To summarize, we have conducted an in-depth analysis of the fundamental relationships between the four components of the G-B-G conjecture, which are \(Z(g_{n-1}),Z(g_{n}),Z(g_{n+1})\) and \(Z^{\prime}(g_{n})\). We further conducted a numerical investigation of the repulsion property in various cases. 
We saw two cases of a G-B-G and non-G-B-G Gram point which were a-priori chosen to present an initial surge, and distinguished between the ability of a non-G-B-G point to counteract the surge, contrary to the moderate middle behaviour of the G-B-G point. However, we are not yet able to explain the other direction, which lies at the heart of the G-B-G conjecture and implies that if a strong pull towards the axis exists in the middle range than an even larger surge must initially occur. In particular, although the investigation conducted here sheds some light on the phenomena the main mystery still remains at this point: what causes the forced correlation between \(Z(g_{n})\) and \(Z^{\prime}(g_{n})\) observed in the G-B-G case? In order to further investigate this question, it's essential delve deeper into the factors that determine whether a Gram point is good or bad. ### Monte-Carlo Simulation for Good and Bad Gram Points Let us further investigate the distribution of negative values within the vector \[\mathbf{v}_{n}=\begin{bmatrix}\frac{\cos(\ln(1)g_{n})}{\sqrt{1}}\\ \frac{\cos(\ln(2)g_{n})}{\sqrt{2}}\\ \vdots\\ \frac{\cos(\ln(N(n))g_{n})}{\sqrt{N(n)}}\end{bmatrix}\in\mathbb{R}^{N(n)},\] Figure 18. the partial sums \(Z(g_{n};1,k)\) and \(Z^{\prime}(g_{n};1,k)\) as \(k\) ranges from \(1\) to \(N(n)\) for the G-B-G Gram point \(n=300894\). for good and bad Gram points. As a study case, Fig. 19 illustrates the vector \(\mathbf{v}_{n}\) for the bad Gram point \(n=730119\) and the good Gram point \(n=730120\): From the illustrated vector \(\mathbf{v}_{n}\), distinguishing between a good and a bad Gram point is not immediately apparent. To better discern the distinction between good and bad Gram points, one might consider sorting the vector \(\mathbf{v}_{n}\) in ascending order, from its smallest to its largest entries. Let's denote this sorted vector by \(\mathbf{v}_{n}^{\text{sorted}}\). Fig. 20 illustrates the vector \(\mathbf{v}_{n}^{\text{sorted}}\) for the bad Gram point \(n=730119\) and the good Gram point \(n=730120\): In Fig. 20, a clear pattern differentiating between good and bad points remains elusive. Actually, both the vectors corresponding to good and bad points appear to exhibit a broadly similar structure. The pertinent question then arises: what characterizes this underlying structure? In order to investigate it we introduce the following: **Definition 10.6** (Monte-Carlo Simulated Gram Vector).: For every \(n\in\mathbb{Z}\), define the randomized \(n\)-th vector as \[\mathbf{v}_{n}^{random}=\begin{bmatrix}\frac{\cos(\theta_{1})}{\sqrt{1}}\\ \frac{\cos(\theta_{2})}{\sqrt{2}}\\ \vdots\\ \frac{\cos(\theta_{N(n)})}{\sqrt{N(n)}}\end{bmatrix}\in\mathbb{R}^{N(n)},\] Figure 19. The entries of the vector \(\mathbf{v}_{n}\in\mathbb{R}^{N(n)}\) for the bad Gram point \(n=730119\) (blue) and the good Gram point \(n=730120\) (orange). Figure 20. The entries of the sorted vector \(\mathbf{v}_{n}^{\text{sorted}}\) for the bad Gram point \(n=730119\) (blue) and the good Gram point \(n=730120\) (orange). where each \(\theta_{k}\) is a random variable uniformly distributed over \([0,2\pi]\) for \(k=1,\ldots,N(n)\). Let \(\mathbf{v}_{n}^{sorted-random}\) be the vector obtained by sorting the entries of \(\mathbf{v}_{n}^{random}\). 
The _Monte-Carlo simulated \(n\)-th Gram vector_ is defined as the expectation \[\mathbf{v}_{n}^{Monte-Carlo}:=\mathbb{E}(\mathbf{v}_{n}^{sorted-random}).\] Essentially, the \(n\)-th Monte-Carlo vector \(\mathbf{v}_{n}^{Monte-Carlo}\) is expected to offer the "baseline behaviour" of \(\mathbf{v}_{n}^{sorted}\). Figure 21 shows the entries of the vector \(\mathbf{v}_{n}^{\text{sorted}}\) for the bad Gram point \(n=730119\) (blue) and the good Gram point \(n=730120\) (orange) together with \(\mathbf{v}_{n}^{\text{Monte-Carlo}}\) (green): Since we view \(\mathbf{v}_{n}^{Monte-Carlo}\) as representing the base standardized behaviour of \(g_{n}\), we accordingly expect that the difference \[\mathbf{v}_{n}^{\text{essential}}:=\mathbf{v}_{n}^{\text{sorted}}-\mathbf{v }_{n}^{\text{Monte-Carlo}}\] would reflect the inherent properties of the Gram point. We have: **Proposition 10.2**.: _For every \(n\in\mathbb{Z}\) the following holds:_ 1. \(\Sigma(\mathbf{v}_{n}^{\text{Monte-Carlo}})=0\)_._ 2. \(\Sigma(\mathbf{v}_{n}^{\text{essential}})=(-1)^{n}Z(g_{n})\)_._ _Where \(\Sigma(\mathbf{v}):=\sum_{k=1}^{N(n)}\mathbf{v}_{i},\) for a vector \(\mathbf{v}\in\mathbb{R}^{N(n)}\)._ Proof.: We have: 1. Follows immediately from the anti-symmetry of the Monte-Carlo vector \(\mathbf{v}_{n}^{\text{Monte-Carlo}}\). 2. From linearity \[\Sigma(\mathbf{v}_{n}^{\text{essential}})=\Sigma(\mathbf{v}_{n }^{\text{sorted}}-\mathbf{v}_{n}^{\text{Monte-Carlo}})=\Sigma(\mathbf{v}_{n }^{\text{sorted}})-\Sigma(\mathbf{v}_{n}^{\text{Monte-Carlo}})=\\ =\Sigma(\mathbf{v}_{n}^{\text{sorted}})=\Sigma(\mathbf{v}_{n}) =(-1)^{n}Z(g_{n}).\] Fig. 22 shows \(\mathbf{v}_{n}^{\text{essential}}\) for \(n=730119\) (blue) and \(n=730120\) (orange): Figure 21. The entries of \(\mathbf{v}_{n}^{\text{sorted}}\) for the bad Gram point \(n=730119\) (blue) and the good Gram point \(n=730120\) (orange) together with \(\mathbf{v}_{n}^{\text{Monte-Carlo}}\) (green). Remarkably, \(\mathbf{v}_{n}^{\text{essential}}\) indeed reveals the essential unique chaotic features of the Gram point \(g_{n}\), after removing from it the structured baseline. In particular, one can discern from the graph the abundant negativity of the bad Gram point \(n=730119\) as compared to the abundant positivity of the good Gram point \(n=730120\). In light of the above, we propose that a localized adaptation of the Monte-Carlo analysis presented herein, when applied to \(Z(g_{n})\), \(Z^{\prime}(g_{n})\), and their respective adjustments, may offer a constructive direction for further investigation and possible future proof of the repulsion property. ## 11. Edwards' Speculation on RH and its Shortcomings Up to this point, we have studied the discriminant of the linear curve \(\Delta_{n}(r)\) from an infinitesimal point of view, focusing on its properties in a neighbourhood of \(r=0\). However, our ultimate aim is to discuss the corrected Gram's law, which is principally concerned with the value of \(r=1\). In the following, we will elucidate how our local analysis can be pushed forward to lead to meaningful insights on the value at \(r=1\). In order to explain our approach for this transition from local to global, based on the \(A\)-space, let us return to the rough relation between the zeros of the core function \(Z_{0}(t)=\cos(\theta(t))\) and the zeros of \(Z(t)\), observed in Section 2. As mentioned, this relation has been noted by various authors over the years [4, 5, 11, 19]. 
In particular, Edwards, in Section 7.8 of his seminal work [4], presents some intriguing "speculations" on the possible origins of the Riemann hypothesis. He even proposes that Riemann might have noticed the numerical fact that the zeros of \(\cos(\theta(t))\) serve as an initial, crude approximation of the zeros of \(Z(t)\), especially for sufficiently small \(t\) values. In this section, we seek to examine Edwards' approach in the context of our methodology, identify its shortcomings, and offer a more sensitive generalization based on the \(A\)-philosophy.

Edwards conjectures that the origins of the RH may be traced back to a rather heuristic concept, described by him as the idea that _"one could go from a zero \(t_{n}^{0}\) of \(\cos(\theta(t))\) to a zero \(t_{n}\) of \(Z(t)\)"_ via some iterative numerical process. However, Edwards remains quite ambiguous about the nature of this process and expresses substantial scepticism about the existence of a universally applicable procedure of this kind. He particularly points out the extreme failure of Gram's law, for example the failure near \(g_{6708}\) in Lehmer's graph, as a key source of his doubts. In classical Newton's method, one typically starts with a crude initial guess \(t^{0}\) for a zero of a function \(F(t)\), then applies the iterative process \[t^{k+1}=t^{k}-\frac{F(t^{k})}{F^{\prime}(t^{k})},\] aiming to locate a true zero, given by \(t=\lim_{k\to\infty}(t^{k})\), see for instance [3, 20]. Consequently, the following question naturally arises:

**Question** (Edwards' speculation - Naive version).: _For \(n\in\mathbb{Z}\) let \(t_{n}^{0}\) be the \(n\)-th zero of the core function \(Z_{0}(t)\). Can we use Newton's method starting from \(t_{n}^{0}\) to consistently converge to \(t_{n}\), the \(n\)-th real zero of \(Z(t)\)?_

Equivalently, we have:

**Question** (Corrected Gram's law - Naive version).: _For \(n\in\mathbb{Z}\) let \(g_{n}\) be the \(n\)-th Gram point (extremal point of \(Z_{0}(t)\)). Can we use Newton's method starting from \(g_{n}\) to consistently converge to \(\widetilde{g}_{n}\), the \(n\)-th real extremal point of \(Z(t)\), and show that \((-1)^{n}Z(\widetilde{g}_{n})>0\)?_

Consider the following example:

_Example 11.1_ (The Lehmer points \(t_{6708}\) and \(t_{6709}\)).: Let us apply Newton's method, starting from \(t_{n}^{0}\), to the two Lehmer points referred to by Edwards. These points are approximately given by \[t_{6708}\approx 7005.06\ \ ;\ \ t_{6709}\approx 7005.10. \tag{26}\] The first Newton iterations \(t_{6708}^{k}\) and \(t_{6709}^{k}\) for these two zeros are presented in Table 1. From the table, it is clear that despite the closeness of the Lehmer pair \(t_{6708}\) and \(t_{6709}\), the iterations of \(t_{6708}^{k}\) and \(t_{6709}^{k}\) do converge to \(t_{6708}\) and \(t_{6709}\), respectively.

From our point of view, the following Fig. 23 shows the graphs of \(\Delta_{6708}(r)\) (blue) and \(Z_{N}(g_{6708};r)\) (orange) with \(0\leq r\leq 1\). Despite the extreme violation of Gram's law, as exemplified by the negative value of \(Z_{N}(g_{6708};r)\), the corrected version of Gram's law still holds, as demonstrated by the graph of the discriminant \(\Delta_{6708}(r)\). Let us note that \(g_{6708}\) is an isolated bad Gram point with a relatively large viscosity \(\mu(g_{6708})=6.41706\).
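The iteration itself is easy to experiment with numerically. The following is a minimal sketch (not the authors' code) of the Newton iteration for \(Z(t)\) near the Lehmer pair of Example 11.1, using mpmath; the starting points are illustrative guesses chosen by hand rather than the zeros \(t_{n}^{0}\) of \(\cos(\theta(t))\), so it only demonstrates the mechanics of the iteration.

```python
# A minimal sketch (not the authors' code) of the Newton iteration
# t_{k+1} = t_k - Z(t_k)/Z'(t_k) for the Riemann-Siegel Z-function near the
# Lehmer pair of Example 11.1.  The starting points below are hand-picked
# illustrative guesses, not the zeros t_n^0 of cos(theta(t)).
from mpmath import mp, siegelz, diff

mp.dps = 30

def newton_orbit(t0, steps=8):
    """Return the first `steps` Newton iterates for Z(t) starting at t0."""
    orbit = [mp.mpf(t0)]
    for _ in range(steps):
        t = orbit[-1]
        orbit.append(t - siegelz(t) / diff(siegelz, t))
    return orbit

if __name__ == "__main__":
    # Both Lehmer zeros lie near 7005.06 and 7005.10 (cf. (26)); depending on
    # the starting guess, the iteration settles on one or the other.
    for t0 in (7005.05, 7005.11):
        print(t0, "->", newton_orbit(t0)[-1])
```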
However, although Newton's method appears successful in the above example, Edwards' concerns are actually far from being unjustified. Notably, even in algebraic cases, the convergence of Newton's method is well known to be closely tied to the properties of the discriminant. Hence, in our case, let us look for a failure of Newton's method where the discriminant is anomalous. The following is an example of an isolated bad Gram point with small viscosity, showing failure of Newton's method:

_Example 11.2_ (Failure of Newton's method near \(g_{730119}\)).: Consider \(g_{730119}\), which is an isolated bad Gram point with a relatively small viscosity \(\mu(g_{730119})=4.46023\). Figure 24 shows the graphs of \(\Delta_{730119}(r)\) (blue) and \(Z_{N}(g_{730119};r)\) (orange) with \(0\leq r\leq 1\). In Fig. 24, we observe a stark deviation from Gram's law and its corrected version for \(r=1\). Unlike prior examples, the discriminant \(\Delta_{730119}(r)\) does not maintain positivity over the interval \(0\leq r\leq 1\). Specifically, it enters a region of negativity. This transition suggests a collision of two consecutive zeros in the vicinity of \(g_{n}\), as seen in Fig. 25.

Figure 25. \(\ln\lvert Z_{N}(t;r)\rvert\) for the following values of \(r\): (a) \(0\) (blue), \(0.1\) (yellow), \(0.18\) (green), \(0.22\) (red), \(0.245\) (purple) and (b) \(0.96\) (blue), \(0.97\) (yellow), \(0.98\) (green), \(0.99\) (red), \(1\) (purple) for \(N=268\) in the range \(450613.58\leq t\leq 450614.8\).

The domain of the graph in Fig. 25 spans from \(t=450613.58\) to \(t=450614.8\) with \(N=268\). The behaviour captured in this figure is crucial for interpreting the discriminant trends observed in Fig. 24. Specifically: (1) the collision of zeros, especially visible in (a); (2) the subsequent movement of these zeros, now as complex values, descending along the real axis in areas where the discriminant is negative; and (3) their eventual emergence lower on the real line, illustrated in (b). We will revisit this example when discussing the corrected connecting path in Example 12.1.

The above example also reveals a peculiar aspect related to Newton's method:

**Corollary 11.1** (Violation of straightforward Edwards' speculation).: _Newton's method, starting from \(t_{n}^{0}\) for some integer \(n\), does not always converge to \(t_{n}\), the \(n\)-th real zero of \(Z(t)\)._

Proof.: Consider the zeros of the \(Z\)-function given as: \[t_{730120}=450613.7144\ \ \ ;\ \ t_{730121}=450613.8004.\] If we initiate Newton's method from the point \(t_{730120}^{0}=450613.9648\) we observe that \(\lim_{k\to\infty}(t_{730120}^{k})=t_{730121}\), signifying that the process converges to the adjacent zero, \(t_{730121}\), rather than the intended zero, \(t_{730120}\). Hence, Newton's method does not always converge to the intended zero.

This result implies that the straightforward application of Newton's method in identifying the zeros of \(Z(t)\) from the zeros of \(Z_{0}(t)\) may not always provide accurate results. It can lead to misidentification of zeros, as seen in the example where the method converged to an adjacent zero instead of the intended zero. Moreover, we see that for \(n=730119\) there exists a region within this interval for which \((-1)^{n}\Delta_{n}(r)\) is negative, contrary to previous examples. However, our dynamic version of RH does require us to obtain a one-to-one correspondence between the zeros of \(Z_{0}(t)\) and those of \(Z(t)\).
In the subsequent section, we will propose a more sensitive approach to ensuring the validity of the corrected Gram's law, based on the A-philosophy assuming the G-B-G repulsion relation. ## 12. Correcting the Linear Curve - Overcoming Over-Tasking The corrected Gram's law requires to show that \((-1)^{n}\Delta_{n}(\overline{\mathrm{I}})\) is well-defined and positive. For good Gram points, as well as for the majority of bad Gram points, the linear curve in the \(A\)-parameter space satisfies \((-1)^{n}\Delta_{n}(\overline{r})>0\) for all \(0\leq r\leq 1\), thereby proving suitable for establishing the corrected Gram's law for these points. However, we observed that there exist bad Gram points \(g_{n}\) where the discriminant \(\Delta_{n}(\overline{r})\) for the linear curve may not consistently be non-negative. To affirm the corrected Gram's law for such Gram points, our strategy involves substituting the linear curve with a more sensitive curve \(\gamma\) in the multi-dimensional \(A\)-space for which \((-1)^{n}\Delta_{n}(r;\gamma)\) is anticipated to be non-negative for all \(0\leq r\leq 1\). In order to describe the construction of \(\gamma\) let us note that we observed in Example 11.2 for \(n=730119\) two simultaneous phenomena which occur as \(r\) increases: 1. The point \(g_{n}(r)\) undergoes a sideways shift. This shift is a direct consequence of the viscosity bound, causing the position of \(g_{n}(r)\) to vary laterally. 2. The value of \((-1)^{n}\Delta_{n}(r)\) experiences a decrease due to the fact that \(g_{n}\) is characterized as a bad Gram point. We observed that the decrease in \((-1)^{n}\Delta_{n}(r)\) for the linear curve happens at a faster initial rate than the lateral shift of \(g_{n}(r)\), a condition we refer to as _over-tasking_. This suggests that, in such cases, we should update the connecting curve by arranging for the lateral shift to occur before the decrease towards the axis takes place. Consider Fig. 26 Figure 26. The line \(Z_{N}(t;r)\) (purple) in parameter space connecting \(Z_{0}(t)\) to \(Z(t)\) in a colliding manner together with the corrected broken curve (cyan) connecting the core to \(Z(t)\) without passing through the negative region (red). Figure 26 suggests a schematic interpretation, in terms of the geometry of parameter space. To put this in more relatable terms, recall the formulation for the linear curve: \[Z_{N}(t;r):=Z_{0}(t)+r\cdot\sum_{k=1}^{N}A_{k}(t)\in\mathcal{Z}_{N},\] Here, each term \(A_{k}(t)\) is defined as: \[A_{k}(t):=\frac{1}{\sqrt{k+1}}\cos(\theta(t)-\ln(k+1)t).\] In this standard linear curve, all terms \(A_{k}(t)\) see an equal increment as we increase the value of \(r\). Once we are willing to take general curves into account, for which each parameter can be adjusted independently, there is a tremendous amount of possibilities arising. Indeed, for the \(n\)-th discriminant, the dimension of the parameter space is \(N(n)\)-dimensional which, of course, goes to infinity as \(n\to\infty\). Therefore, how do we determine which curve within this vast parameter space is best suited to adjust the linear curve? This is the point where our previous local analysis of the discriminant and specifically the repulsion property becomes crucial. Assuming that \(n\in\mathbb{Z}\) is an isolated bad Gram point, according to the repulsion property, we expect that among the indices, there is an enhanced tendency to alter the position of \(g_{n}(\overline{a})\) relative to the \(t\)-axis. 
This leads us to propose that one can partition the indices ranging from one to \(N(n)\) into two fundamentally distinct classes: 1. _Shifting indices:_ The index set \(I_{shift}\) which corresponds to terms exerting a significant influence on the lateral shift in \(g_{n}(r)\). 2. _Descending indices:_ The index set \(I_{descend}\) associated with terms that predominantly contribute to the reduction in \((-1)^{n}\Delta_{n}(r)\). This allows us to suggest a substantial dimensionality reduction by considering the following \(2\)-parametric system arising from this split of the indices: \[Z_{N}(t;r_{1},r_{2}):=Z_{0}(t)+r_{1}\cdot\sum_{k\in I_{shift}}A_{k}(t)+r_{2} \cdot\sum_{k\in I_{descend}}A_{k}(t)\in\mathcal{Z}_{N},\] We hence get the following simplified version of the corrected Gram's law: **Conjecture 12.1** (\(2\)-dim. corrected Gram's law for isolated points).: _Let \(g_{n}\) be an isolated bad Gram point. Then it is possible to find a curve \(\gamma(r)=(r_{1}(r),r_{2}(r))\) in the \(2\)-dimensional parameter space (12) such that \(\gamma(0)=(0,0)\), \(\gamma(1)=(1,1)\) and_ \[\Delta_{n}(r;\gamma):=Z_{N}(g_{n}(r_{1}(r),r_{2}(r));r_{1}(r),r_{2}(r))>0\] _for all \(0\leq r\leq 1\)._ In fact, we suggest \(\gamma_{\text{correct}}\subset\mathbb{R}^{2}\) should be composed of the following two stages: **The shifting stage:**: This involves increasing the shifting parameters alongside the descending parameters. The goal is to shift \(g_{n}(\overline{a})\) while maintaining \((-1)^{n}\Delta_{n}(\overline{a})\) constant. This stage starts at \(Z_{0}(t)\) and follows the non-linear level curve of \((-1)^{n}\Delta_{n}(\overline{a})\) with a value of \(1\), until an exit point is reached. **The descending stage:**: After \(g_{n}(\overline{a})\) is approximately in its intended position, all parameters increase linearly towards \(Z_{N}(t)\). This stage is a linear segment connecting the prior exit point to \((1,1)\). It's termed the 'descending' stage due to its potential to decrease the value of \((-1)^{n}\Delta_{n}(\overline{a})\). For the majority of bad Gram points, the shifting stage is negligible. As a result, the linear and descending curves are approximately identical. Let us thus consider the following example where a substantial shift is required: _Example 12.1_ (Correcting curve for \(g_{730119}\)).: In order to define the correcting curve for \(n=730119\) we should first identify the shifting parameters. As we saw in Fig. 14 the main surge in this case occurs in the domain where \(k\) is between \(1\) and \(\sqrt{N(n)}=15\). Table 2 shows the values of \(\cos(\ln(k)g_{n}),\sin(\ln(k)g_{n}),A_{k}(g_{n})\) and \(B_{k}(g_{n})\) within this range: By direct computation, the shifting parameters, for which \(sin(ln(k)g_{n})\) is especially large, are \(k=1,2,4,6,12\) and are marked red in the table. consider Fig. 
27: \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(k\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \(\cos(\ln(k)g_{n})\) & -0.14 & 0.25 & 0.96 & -0.53 & 0.99 & -0.20 & 0.41 & 0.88 & 0.77 & -0.99 & 0.03 & 0.21 & 0.94 & 0.95 & -0.85 \\ \hline \(\sin(\ln(k)g_{n})\) & 0.99 & 0.97 & 0.28 & 0.85 & -0.11 & 0.98 & -0.91 & -0.48 & 0.64 & -0.11 & -1.0 & 0.98 & 0.33 & 0.30 & -0.53 \\ \hline \(A_{k}(g_{n})\) & -0.099 & 0.14 & 0.48 & -0.24 & 0.41 & -0.074 & 0.14 & 0.29 & 0.24 & -0.30 & 0.0082 & 0.058 & 0.25 & 0.25 & -0.21 \\ \hline \(B_{k}(g_{n})\) & 6.86 & 5.02 & 1.16 & 3.02 & -0.345 & 2.70 & -2.27 & -1.09 & 1.34 & -0.210 & -1.79 & 1.64 & 0.521 & 0.449 & -0.748 \\ \hline \end{tabular} \end{table} Table 2. \(\cos(\ln(k)g_{n}),\sin(\ln(k)g_{n}),A_{k}(g_{n})\) and \(B_{k}(g_{n})\) for \(k=1,...,15\). Figure 27. Graphs of \(\ln|Z_{N}(t;r_{1},r_{2})|\) along the following points of the shifting curve: \((0,0)\) (blue), \((0.25,0.05)\) (yellow), \((0.55,0.15)\) (green), \((0.75,0.25)\) (red), \((1,0.41)\) (purple) for \(N=268\) in the range \(450613.58\leq t\leq 450614.8\). Figure 27 shows the graphs of \(\ln\lvert Z_{N}(t;r_{1},r_{2})\rvert\) along the following points \((r_{1},r_{2})\) of the shifting curve: \((0,0)\) (blue), \((0.25,0.05)\) (yellow), \((0.55,0.15)\) (green), \((0.75,0.25)\) (red), \((1,0.41)\) (purple) in the range \(450613.58\leq t\leq 450614.8\) and \(N=268\). In particular, we see that as \(r_{1}\) transitions from zero (blue) to \(1\) (purple) the \(n\)-th extremal point transitions continuously to the left from \(g_{n}\) to \(g_{n}^{shift}\), while keeping the value of the discriminant fixed. The next Fig. 28 shows the graphs of \(\ln\lvert Z_{N}(t;r_{1},r_{2})\rvert\) along the following points \((r_{1},r_{2})\) of the descending curve: \((1,0.41)\) (blue), \((1,0.6)\) (yellow), \((1,0.8)\) (green), \((1,0.95)\) (red), \((1,1)\) (purple) for \(N=268\) in the range \(450613.58\leq t\leq 450614.8\). In this case, as \(r_{2}\) transitions from \(0.41\) (blue) to \(1\) (purple) the value of the \(n\)-th extremal point, that is the discriminant, decreases continuously as it transitions from \(g_{n}^{shift}\) to \(g_{n}^{final}\). As we can see, no collision of zero occurs along the corrected connecting curve, which is given by the shifting curve followed by the descending curve. In particular, this avoids the collisions forming for the linear curve, as seen in Example 11.2, as required. Let us note the following general remark about the shifting indices: _Remark 12.2_.: In general, if \(k\in I_{shift}\) is a shifting index then an increase in \(A_{k}(t)\) is expected to lead to a shift in \(g_{n}(\overline{a})\) as well as a local increase in the value of \((-1)^{n}\Delta_{n}(r;\gamma)\), as seen in Table 2 of the above example. Indeed, according to Theorem 7.1 terms in \(k\in I_{shift}\) have high \[B_{k}(t):=\frac{1}{\sqrt{k+1}}\ln\left(\frac{t}{2\pi(k+1)^{2}}\right)\sin( \theta(t)-\ln(k+1)t).\] values. Such terms significantly contribute to \(\nabla g_{n}\), which is also proportional to the Hessian \(H_{n}\). Since for these terms \(\sin(\theta(t)-\ln(k+1)t)\) is relatively large, correspondingly \(\cos(\theta(t)-\ln(k+1)t)\) must be relatively small. As a result, the first-order contribution of the \(k\)-th term to \(\Delta_{n}(r)\) is minor. In particular, this small first-order effect will be negligible Figure 28. 
graphs of \(\ln\lvert Z_{N}(t;r_{1},r_{2})\rvert\) along the following points \((r_{1},r_{2})\) of the descending curve: \((1,0.41)\) (blue), \((1,0.6)\) (yellow), \((1,0.8)\) (green), \((1,0.95)\) (red), \((1,1)\) (purple) for \(N=268\) in the range \(450613.58\leq t\leq 450614.8\). with respect to the effect of this \(k\)-th term on the second-order Hessian. Hence the \(k\)-th term leads to a shift of \(g_{n}\) as well as a positive inflation in the value of \((-1)^{n}\Delta_{n}(r)\). In conclusion, for the case of the Gram point \(n=730119\) we saw that: 1. The linear curve leads to a collision between the 730119-th and 730120-th zeros, occurring when the discriminant \(\Delta_{n}(r)\) vanishes, as seen in Example 11.2. 2. The corrected connecting curve, composed of its shifting stage and descending stage avoids such collisions, keeping \((-1)^{n}\Delta_{n}(r;\gamma)\) positive for all \(0\leq r\leq 1\), as demonstrated in Example 12.1. Conjecture 12.1 proposes that a similar connecting curve exists for any isolated bad Gram point. The key feature is that, by definition, the shifting curve is constructed to leave the discriminant fixed \((-1)^{n}\Delta_{n}(r;\gamma^{shift})=1\), as seen in Fig. 27. Hence, it is the role of the decreasing curve to transition the value of \((-1)^{n}\Delta_{n}(r;\gamma^{shift})\) from \(1\) to its eventual level for \(Z(t)\). This leads us to the following profound conjecture, which we view as the essence of the RH for isolated Gram points: **Conjecture 12.2** (RH Energy bound for isolated points).: _For any isolated Gram point the following bound holds_ \[(-1)^{n}\Delta_{n}(r;\gamma^{descend})>0 \tag{27}\] _for all \(0\leq r\leq 1\)._ Conjecture 12.2 is currently still far out of reach, partially because it intrinsically relies on the G-B-G repulsion as conjectured in Conjecture 8.1. Specifically, since the general separation of the indices into shifting and decreasing is fundamentally based on the infinitesimal G-B-G property, which by itself still requires formal proof as explained in Section 10. We refer to this as an 'energy bound' conjecture to intuitively convey that after the shifting stage has been exhausted, the decreasing curve is hypothesized to lack sufficient 'energy' to cause the discriminant to vanish, or in other words, to lead to a collision of the corresponding consecutive zeros. This is conjectured to be a deeply rooted characteristic of the \(Z\)-functions, playing a pivotal role in governing the corrected Gram's law for isolated bad Gram points. In particular, given the conjectures intimate connection with the fundamental properties of the zeros of the \(Z\)-function, the formidable challenge of proving it becomes understandable, as its validation would nearly amount to an affirmation of the RH itself. Let us conclude this section with the following three remarks: _Remark 12.3_ (The crucial role of the AFE).: In Remark 2.1, we have elaborated on the role of the more robust AFE (2) which involves a summation of terms up to \([\frac{t}{2}]\), in contrast to the Hardy-Littlewood classical AFE, where the summation involves substantially fewer terms. It is essential to emphasize that the difference between the two equations becomes particularly crucial concerning the energy bound (27). Specifically, there exist Gram points for which the energy bound fails when applied using the Hardy-Littlewood AFE. 
For this reason, it becomes necessary to define the discriminant as a function of the \(A\)-parameter space of dimension \([\frac{t}{2}]\), where the higher-order terms seem to act as regulators, preventing the discriminant from vanishing, an aspect that still requires further exploration. _Remark 12.4_ (The connecting curve as a non-linear optimization problem).: Conjecture 12.1 is concerned with finding a curve connecting \(Z_{0}(t)\) to \(Z_{N}(t)\) in a way that does not pass through the region \(\Delta_{n}(\overline{a})<0\). A conventional theoretical tool for solving such non-linear optimization issues optimally is the Karush-Kuhn-Tucker theorem. This theorem extends Lagrange multipliers to scenarios involving inequality constraints, as referenced in [12, 13]. If one is willing to ease on optimality and opt for naturality instead, such a curve is, of course, not unique. In particular, our conjectured approach, where a corrected connecting curve comprises shifting and descending stages, while intuitive might not be the exclusive approach. Let us mention that an alternative approach, which also merits investigation, might involve a three-stage construction of the connecting curve along the following lines: 1. Initially, follow the linear curve until the discriminant vanishes. If the discriminant remains non-negative for all \(0\leq r\leq 1\) no further correction is necessary. 2. In case of a vanishing discriminant, continue along the discriminant hyper-surface \(\Delta(\overline{a})=0\) until reaching a point of exit, determined by minimization of the Hessian. 3. Finally, proceed linearly within the \(A\)-parameter space towards \(Z_{N}(t)\), starting from the point identified in the previous step. While our discussion so far has mainly focused on isolated bad Gram points, the RH requires to obtain the corrected Gram's law for all Gram points, including those appearing in Gram blocks of arbitrary length. The motivation for considering the isolated case is that although it can be viewed as the next level of complexity after the trivial case of good Gram points, nevertheless it already reveals the fundamental properties and challenges arising in the general case. In particular, even in the isolated case we are left with the two new fundamental open questions of the repulsion and energy property, elaborated on in this work in Conjecture 8.1 and the Conjecture 12.2, respectively. In the following last remark we would want to expand on our expectations regarding the extension of our results to the general case: _Remark 12.5_ (The corrected Rosser law).: Historically, the concept of Gram block's was introduced in a naive attempt to introduce a 'correction of Gram's law', known as Rosser's law, postulating that a Gram block of length \(N\) is expected to contain exactly \(N\) zeros, see [17]. However, as for Gram's law, exceptions to this heuristic have been observed, the first violation occurring around the 13999826-th Gram point, as shown by Lehman in [14]. In view of our approach, the reason for this is that Rosser's rule, like Gram's law, is static as it fixes the position of the two good Gram points at its boundaries. In similar lines to the discussion in Section 2 we view Rosser's law as a property of the core \(Z_{0}(t)\) rather than that of \(Z(t)\). 
In particular, when taking into the account the dynamic transition from the core \(Z_{0}(t)\) to \(Z(t)\) we observe that the block as a whole does not need to be confined to its original position for \(Z_{0}(t)\) and could be shifted altogether. In such a case the corresponding shift in its boundaries needs to be taken into account, as well. The challenge arising for non-isolated Gram blocks is that the individual bad Gram points within the interior of the blocks can actually move in different places, which adds additional complexity. In particular, in such cases, we postulate that a separate collection of shifting parameters should be allocated for each of the bad Gram points within the block requiring a more sensitive tuning, to avoid collisions. Such Gram blocks will require a dimension reduction to an \(N\)-dimensional space, generalizing the 2-dimensional method presented above in the isolated case. Within this space it would be required to identify connecting curves avoiding the union of the discriminant hyper-surfaces of all the points of the block, assuring that no collisions of zeros occur as the block shifts as a whole. Moreover, we suggest that a viscosity bound should be generalized for Gram blocks as a whole. Again, taking into account that fundamental questions still remain open in the isolated case, the full description of the non-isolated case remains out-of-reach at this point. ## 13. Summary and Concluding Remarks The Riemann Hypothesis postulates that all the non-trivial solutions of the equation \(Z(t)=0\) must be real. In algebraic geometry one has the powerful invariant of the discriminant, which can be seen as a measurement for the realness of zeros of algebraic equations. In this work, we have endeavoured to extend the idea of the discriminant into the transcendental realm of the \(Z(t)\) function. By their very nature, discriminants act as an invariant for a family of functions. Building upon this, we introduced the novel concept of the \(A\)-parametrized space \(\mathcal{Z}_{N}\) whose elements are given by \[Z_{N}(t;\overline{a})=\cos(\theta(t))+\sum_{k=1}^{N}\frac{a_{k}}{\sqrt{k+1}} \cos\left(\theta(t)-\ln(k+1)t\right), \tag{28}\] where \(\overline{a}=(a_{1},\ldots,a_{N})\in\mathbb{R}^{N}\) for any \(N\in\mathbb{N}\). In the course of our study, we introduced the local discriminant for a pair of consecutive zeros, \(\Delta_{n}(\overline{a})\), defined within the parameter space of dimension \(N(n):=\left[\frac{|g_{n}|}{2}\right]\). This newly defined discriminant has been shown to unveil a wealth of significant new results regarding the zeros of \(Z(t)\). To summarize the new results proved in this work: 1. We demonstrated that our corrected Gram's law, \((-1)^{n}\Delta_{n}(\overline{1})>0\), is equivalent to the Riemann Hypothesis, as detailed in Theorem 4.1. 2. We have shown that the classical Gram's law arises as the first-order approximation of our corrected Gram's law for the linear curve \(Z_{N}(t;r)\) in parameter space, as described in Theorem 6.1. 3. An examination of the second-order Hessian of our corrected law, in this setting, revealed its relation to shifts of the Gram points along the \(t\)-axis. This connection is elaborated upon in Corollary 7.2. 4. Based on this discriminant analysis, we identified a previously unobserved numerical repulsion relationship: \(|Z^{\prime}(g_{n})|>4\,|Z(g_{n})|\) which is observed to hold for isolated bad Gram points. 
The observed repulsion hints at a natural partitioning of the parameter space into shifting and descending indices, suggesting a dimension reduction for the problem of identifying a corrected connecting curve for isolated Gram points. Consequently, this insight guided us in suggesting an optimization framework aimed at providing a universal validation of the corrected Gram's law, which we have illustrated via various examples. 5. Our analysis of the Davenport-Heilbronn function highlighted its distinct behaviour when compared with the \(Z\)-function. Notably, we have shown in Corollary 9.2 that the Davenport-Heilbronn function does not admit the repulsion property observed for \(Z(t)\). This sheds new light on the elusive question regarding the inherent differences between these two functions. Our study also unveiled a few fundamental open questions, such as the repulsion relation of Conjecture 8.1 and the energy bound of Conjecture 12.2. Collectively, the introduction of \(\Delta_{n}(\overline{a})\), together with the conjectures, empirical discoveries, and theorems established in this work, contributes to constructing a robust, natural, long-sought-after plausibility argument for the Riemann Hypothesis. Furthermore, they introduce a new dynamical approach for the further study of the zeros of the \(Z\)-function and related functions.
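For readers who wish to probe the repulsion inequality of item 4 numerically, a minimal sketch is given below. It uses mpmath's built-in Hardy \(Z\)-function and Gram points rather than the truncations \(Z_{N}(t;\overline{a})\) introduced in this work, interprets "isolated" simply as a bad Gram point whose two neighbouring Gram points satisfy Gram's law, and scans an arbitrary range of indices; none of these choices are taken from the paper itself.

```python
# Numerical spot-check of the observed repulsion |Z'(g_n)| > 4 |Z(g_n)| at
# isolated bad Gram points, using mpmath's siegelz (Hardy Z-function) and
# grampoint. The index range scanned below is an arbitrary illustration.
from mpmath import mp, siegelz, grampoint, diff

mp.dps = 30  # working precision in decimal digits

def gram_law_holds(n):
    """Classical Gram's law: (-1)^n Z(g_n) > 0."""
    return (-1) ** n * siegelz(grampoint(n)) > 0

for n in range(120, 300):  # the first violation of Gram's law occurs at n = 126
    # "Isolated" is interpreted here as: g_n is bad while g_{n-1} and g_{n+1} are good.
    if not gram_law_holds(n) and gram_law_holds(n - 1) and gram_law_holds(n + 1):
        g = grampoint(n)
        Z, Zp = siegelz(g), diff(siegelz, g)
        print(n, float(abs(Zp)), 4 * float(abs(Z)), abs(Zp) > 4 * abs(Z))
```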
2304.07290
Constructive semigroups with apartness -- a state of the art
This chapter aims to provide a clear and understandable picture of constructive semigroups with apartness in Bishop's style of constructive mathematics, BISH. Our theory is partly inspired by the classical case, but it is distinguished from it in two significant aspects: we use intuitionistic logic rather than classical throughout; our work is based on the notion of apartness (between elements of the set, and, later, between elements and its subsets). Following Heyting, at least initially, classical semigroup theory is seen as a guide that helps us to develop the constructive theory of semigroups with apartness. To have a structure, we need a set, a relation, and rules establishing how we will put them together. Working within classical or intuitionistic logic, in order to analyze algebraic structures, it is necessary to start with study on sets and ordered sets, relational systems, etc. A comparative analysis between presented classical and constructive results is also a part of this chapter. All proofs can be found in the Appendix.
Melanija Mitrovic, Mahouton Norbert Hounkonnou, Paula Catarino
2023-04-04T23:02:08Z
http://arxiv.org/abs/2304.07290v2
# Constructive semigroups with apartness - a state of the art ###### Abstract This chapter aims to provide a clear and understandable picture of constructive semigroups with apartness in Bishop's style of constructive mathematics, **BISH**. Our theory is partly inspired by the classical case, but it is distinguished from it in two significant aspects: we use _intuitionistic logic_ rather than classical throughout; our work is based on the notion of _apartness_ (between elements of the set, and, later, between elements and its subsets). Following Heyting, at least initially, classical semigroup theory is seen as a guide that helps us to develop the constructive theory of semigroups with apartness. To have a structure, we need a set, a relation, and rules establishing how we will put them together. Working within classical or intuitionistic logic, in order to analyze algebraic structures, it is necessary to start with study on sets and ordered sets, relational systems, etc. A comparative analysis between presented classical and constructive results is also a part of this chapter. All proofs can be found in the Appendix. **Keywords**: Set with apartness; co-ordered set with apartness; semigroup with apartness.
2305.09194
Characterizing network circuity among heterogeneous urban amenities
The spatial configuration of urban amenities and the streets connecting them collectively provide the structural backbone of a city, influencing its accessibility, vitality, and ultimately the well-being of its residents. Most accessibility measures focus on the proximity of amenities in space or along transportation networks, resulting in metrics largely determined by urban density alone. These measures are unable to gauge how efficiently street networks can navigate between amenities, since they neglect the circuity component of accessibility. Existing measures also often require ad hoc modeling choices, making them less flexible for different applications and difficult to apply in cross-sectional analyses. Here we develop a simple, principled, and flexible measure to characterize the circuity of accessibility among heterogeneous amenities in a city, which we call the pairwise circuity (PC). The PC quantifies the excess travel distance incurred when using the street network to route between a pair of amenity types, summarizing both spatial and topological correlations among amenities. Measures developed using our framework exhibit significant statistical associations with a variety of urban prosperity and accessibility indicators when compared to an appropriate null model, and we find a clear separation in the PC values of cities according to development level and geographic region.
Bibandhan Poudyal, Gourab Ghoshal, Alec Kirkley
2023-05-16T06:05:45Z
http://arxiv.org/abs/2305.09194v2
Characterizing the Directness of Accessibility Among Heterogeneous Urban Amenities with Network First Passage Distances ###### Abstract The spatial configuration of urban amenities and the streets connecting them collectively provide the structural backbone of a city, influencing its accessibility, vitality, and ultimately the emotional and physical well-being of its residents. Measures aiming to capture urban accessibility or vitality through structural factors must account for heterogeneity that is both spatial--the density and diversity of amenities across space--and topological--the connectivity among different types of amenities along the street network. Given that existing measures often only focus on these factors individually, here we develop a simple, principled, and flexible framework to characterize the directness of accessibility among heterogeneous amenities in a city, which we call the Class First Passage Difference (CFPD). The CFPD quantifies the excess travel distance incurred when using the street network to route between different pairs of amenity types, summarizing both the spatial and topological correlations among amenities in a city. Our method exhibits significant statistical associations with a variety of urban prosperity and accessibility indicators when compared to an appropriate null model that scrambles the correlations among the amenities. We also find a clear separation in the CFPD characteristics of cities according to their level of development and geographic region. Our framework provides a principled, interpretable, complementary perspective to existing indices of urban accessibility and vitality. ## I Introduction The layout of urban amenities and street network infrastructure in a city form its structural foundation, facilitating human mobility, the exchange of goods, services and ideas, and the visual character of the city [1, 2, 3, 4]. This urban structure in turn exerts a profound influence on the well-being and socioeconomic prosperity of urban residents [5, 6, 7, 8, 9]. Given the wealth of newly available data providing high resolution information about a wide range of urban amenities and infrastructure, there is great interest from researchers and government entities in identifying urban indices that succinctly summarize this data and separate structure from noise [10, 11, 12]. In her early pioneering work "The Death and Life of Great American Cities" [13], Jane Jacobs proposed that a mix of land uses, small block sizes, coexistence between old and new buildings, and high developmental density are the four major factors that determine the "viality" of a city--broadly, the capability for the city to promote a range of activities among diverse populations throughout the day, enhancing liveability and deterring crime and urban decay. Underlying this characterization of urban vitality is the concept of heterogeneity--how evenly different types of activities and amenities are distributed--and implicit in the above criteria are both spatial and topological notions of heterogeneity among urban amenities. For example, the concepts of land use mix and building coexistence encompass the (spatial) distance and (topological) adjacency between land partitions of different uses and buildings of different ages respectively [14, 15]. Jacobs additionally states that accessibility to urban amenities--particularly through walking, bicycling, or public transport--is a critical factor that facilitates vitality [16]. 
The accessibility of amenities in a city is naturally influenced by both their distance and adjacency along the infrastructure network, invoking the notions of spatial and topological proximity as well as circuity. Existing research has noted the importance of accounting for both spatial and topological correlations to understand a diverse range of urban phenomena, including housing prices [17], the emergence of a city center [18], human mobility preferences [19], and urban spatial segregation [20]. For example, in [21] a new framework to understand correlations in spatial socioeconomic data was proposed which endows the network of spatial adjacencies among government-designated regions with distance weights reflecting the Jensen-Shannon divergence between the data distributions associated with the regions. And in [22], regions obtained as connected components in street network percolation processes are identified with natural, socioeconomic, and administrative boundaries in Britain. Several studies have also suggested the importance of urban amenity configuration to vitality and accessibility, focusing either on the spatial proximity of amenities [23, 24, 25, 26, 27, 28, 29, 30, 31] or their adjacency along street infrastructure [32, 33, 34, 35, 36, 37]. Existing work that has focused on both the spatial and topological facets of accessibility or vitality have largely been restricted to a specific class of urban ameni ties such as transportation facilities [38; 39; 40; 41; 42], healthcare facilities [43; 44; 45], educational institutions [46], entertainment [47], and greenspace [48]. A few recent studies have aimed at capturing urban vitality or accessibility comprehensively using aggregate indices that capture multiple spatial and topological factors simultaneously [49; 50; 51; 52]. However, these studies either combine existing measures in a complex ad hoc manner or identify linear combinations of different measures empirically by optimizing the correlation with indicators such as social media activity or population in-flows. In a recent paper, Bassolas and Nicosia [53] develop a framework to measure the structural correlations and heterogeneity in a range of complex systems that relies on the concept of the mean first passage time for random walks on networks [54]. This method takes as input a network with metadata categorizing each node into one of a small number of classes and computes the class mean first passage time (CMFPT) between each pair of node classes, defined as the expected number of steps it takes for a random walk along the network to reach one class when starting at the other. This framework can be used to capture the topological nature of heterogeneity and correlations in complex systems, providing new insights into a variety of phenomena including the spread of epidemics and segregation [55]. However, as this method is aimed at capturing correlations in a wide range of complex networks, it is formulated without an explicit dependence on spatial density, so cannot be directly applied to understand urban accessibility and vitality. Additionally, although it allows for elegant analytical expressions, the dynamical formulation in [53] based on random walks is not clearly tied to real human movement along street networks, which provides the topological proximity of interest for cities. 
Although they are influenced by navigation heuristics and the surrounding built environment [56; 57], human trajectories in street networks are much more highly correlated with the shortest paths in the network than completely random walks. In this paper we develop a simple, principled, and flexible measure to characterize the configuration of amenities around a city's street infrastructure that captures both spatial and topological notions of heterogeneity and proximity among the amenities. Our measure is inspired by the CMFPT [53] and the detour factor [58] (also known as the route factor, circuity, or directedness), defined as the ratio of the network distance to the Euclidean ("crowfly") distance between points of interest in a street network with distance-weighted edges. We call our measure the class first passage difference (CFPD), which is computed as the average excess travel distance incurred when using the street network to route between a given pair of amenity classes along the shortest path. By averaging the CFPD values over all amenity classes, weighted by the class frequencies, we obtain an aggregate measure of the directness of accessibility of urban amenities that can be interpreted as the expected value of the CFPD when starting at a randomly chosen amenity. Compared to a null model where the amenity classes are randomly shuffled while fixing the class frequencies, we find that both the aggregated and individual CFPD measures exhibit statistically significant correlations with a range of urban prosperity and accessibility indicators across cities worldwide. We also find a clear ordering in the distributions of CFPD values over groups of cities determined by economic development and geographical region, results that are also highly robust relative to the null model. Our measure can succinctly summarize the density and correlations among different classes of urban amenities in a multifaceted manner as well as provide a simple measure of directness of amenity accessibility to complement existing methods that assess urban vitality and accessibility from a structural perspective. ## II Methods ### Data Description Using the OSMnx Python package [59] which calls the OpenStreetMap API [60], we collected open source data on street geometries and amenity locations for 371 cities worldwide (Fig. S1). Since we are particularly interested in the directness of accessibility to amenities through walking, bicycling, or public transport, we used the pedestrian layer of each street network in our study. We choose the default OSM amenity categories as amenity classes in order to avoid imposing our own beliefs about the amenity categories and take an accepted classification used in previous studies [61; 62; 63; 64; 65]. See Table 1 for details on the amenity classes and their typical frequencies across the cities for which data was collected. For the subset of the cities included in the Jones Lang LaSalle (JLL) report on global cities [66], we use the available classifications of city development level assigned according to each city's real estate, corporate occupier base, and commercial stock. (These classifications were also used for the analysis in [67], where they are described in further detail.) We aggregated the cities under the JLL categories "Super", "Matured", and "Transitional" into a single "Matured" category, and aggregated the categories "Developing", "Early Growth" and "Nascent" into the "Developing" category. 
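As a rough sketch of this data-collection step (not the authors' actual pipeline), the snippet below pulls the pedestrian street network and amenity points for a single city with OSMnx; the example city and the partial tag-to-class grouping are placeholders, and the feature-download function is named `features_from_place` in recent OSMnx releases (`geometries_from_place` in older ones).

```python
# Illustrative OSMnx data collection for one city: pedestrian street network
# plus points of interest carrying an OSM 'amenity' tag. The city and the
# partial tag-to-class mapping below are placeholders, not the paper's pipeline.
import osmnx as ox

city = "Copenhagen, Denmark"  # placeholder

# Pedestrian layer of the street network; edges carry a 'length' attribute (m).
G = ox.graph_from_place(city, network_type="walk")

# All mapped amenities (older OSMnx versions: ox.geometries_from_place).
pois = ox.features_from_place(city, tags={"amenity": True})

# Group raw OSM amenity tags into coarse classes, in the spirit of Table 1
# (only a handful of tags are shown here).
tag_to_class = {
    "bar": "Sustenance", "cafe": "Sustenance", "restaurant": "Sustenance",
    "school": "Education", "university": "Education",
    "bus_station": "Transportation", "fuel": "Transportation", "parking": "Transportation",
    "atm": "Financial", "bank": "Financial",
    "clinic": "Health", "hospital": "Health", "pharmacy": "Health",
}
pois = pois[pois["amenity"].isin(tag_to_class)].copy()
pois["amenity_class"] = pois["amenity"].map(tag_to_class)
print(pois["amenity_class"].value_counts())
```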
Furthermore, we classified all 371 cities into two regions: (1) North America, Western Europe and Australia/New Zealand (195 cities); (2) Africa, Eastern Europe, South America, and Asia (176 cities). This dichotomy roughly corresponds with the Global North/South divide as well as the division between developed and developing countries according to the International Monetary Fund [68]. The distribution of cities based on these two different types of classification is shown in Fig. S2. In Figure S3 we plot the city diameter, average shortest path length, and the distribution of amenities split by the classifications. We find that Mature cities and Region 1 cities are more compact, given their lower diameters and shortest path lengths. In contrast, the distribution of amenities across the classifications is more or less identical. As amenity accessibility and diversity are highly correlated with various aspects of socioeconomic and environmental well-being in cities [6], we also collected a variety of prosperity indicators across multiple socioeconomic and environmental facets with which we compare our measures. We collected data from several UN sources on the Gini coefficient, Internet access rate, public transportation access, GDP per capita, quality of life index, poverty rate, infrastructure index, Local Online Service Index (LOSI) [70], public space access, and public space allocation for cities in our dataset. We also compare our measures with the Walk Score and Bike Score indices [71], which are widely used measures of the walkability and bikeability of cities across North America and Western Europe [72; 73]. The availability of data differed across sources. For prosperity metrics from UN sources, the number of cities with available data varies from 49 (Gini Coefficient) to 153 (Transportation Access), whereas the Walk Score and Bike Score data was available for 114 US and Canadian cities as well as London (115 cities in total). All measures analyzed, along with the number of cities for which data was available, are detailed in Table S1 in the Supplementary Material. ### Mathematical Formalism Suppose there are \(N\) total amenities (alternatively, facilities or points of interest) indexed by \(i=1,...,N\), with \((x,y)\) coordinates \(\{(x_{i},y_{i})\}_{i=1}^{N}\). Each amenity \(i\) is grouped into one of \(C\) amenity classes such that \(c_{i}\in\{1,...,C\}\) is the amenity class of point \(i\). The amenity class \(c_{i}\) gives a generic categorization of the type of service provided at the point of interest \(i\). For example, amenities \(i\) such as cinemas or museums may be classified with \(c_{i}=\) "entertainment" to indicate their broad categorization as amenities aimed at leisure entertainment [47]. We let \(n_{c}\) denote the number of amenities in class \(c\), so that \(\sum_{c=1}^{C}n_{c}=N\). The classification scheme mapping amenities \(i\) to classes \(c_{i}\) will in general have an impact on the results of our method, so it constitutes an important choice for a practitioner using the method. As our method is applicable to any partition of the amenities into classes, the amenity classification scheme can be used to reflect the distinctions among amenities relevant to a particular application (e.g. bus, train, and subway stations in a transportation-focused analysis; or commercial and residential properties in a zoning-oriented study).
In the example applications we present here, we use the pre-defined classes of amenities provided by OpenStreetMap [69], from which the data were collected (see Table 1 for a summary and Sec. II.1 for details). Along with the \(N\) amenities distributed in space, there is a street network \(G=(V,E,W)\) embedded in space that is in the primal representation [58]. In this representation, the nodes \(v\in V\) represent intersections, an edge \((u,v)\) exists between two nodes \(u,v\in V\) if and only if there is a street segment directly connecting their corresponding intersections, and the edge \((u,v)\) is endowed with the weight \(w_{uv}\) representing the distance of its corresponding street segment. Using the amenities and street network, we can define the crow-fly distance \(d_{E}(i,j)\) between two amenities \(i,j\) as the usual Euclidean distance, which is valid at small length scales. We can also define the network distance \(d_{G}(i,j)\) between the two amenities as the distance along the shortest street network path connecting \(i\) and \(j\). To get this path length, we project each of \(i\) and \(j\) onto the nearest points on the street network (which may or may not be vertices in \(V\)) and compute the shortest path between the two projected amenities along the street network using Dijkstra's algorithm [74], accounting for the distance required to project the points onto the network. We note that, by construction, \(d_{E}(i,j)\leq d_{G}(i,j)\), since routing along the street network will always incur some additional cost due to its limited coverage of space. In principle, one can also augment the network distance \(d_{G}\) by incorporating the cost of various inconveniences such as angular deviation, slope, or congestion [75]. Along similar lines, one could in principle transform both the distance measures \(d_{E}\) and \(d_{G}\) into travel times. In these cases, our measure would be interpreted in units of cost or time rather than distance. Furthermore, to compute \(d_{G}(i,j)\) one could use a different notion of network distance between amenities \(i\) and \(j\) such as random walk-based measures [54; 76]. For clarity of presentation, \begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{**Amenity Classes**} \\ \hline **Tags** & **Class** & **Frequency** \\ \hline bar, cafe, restaurant, etc & Sustenance & 1604 \(\pm\) 124 \\ \hline college, school, university, etc & Education & 599 \(\pm\) 40 \\ \hline bus station, fuel, parking, etc & Transportation & 3417 \(\pm\) 313 \\ \hline atm, bank, money transfer, etc & Financial & 335 \(\pm\) 26 \\ \hline clinic, hospital, pharmacy, etc & Health & 529 \(\pm\) 43 \\ \hline arts center, casino, cinema, etc & Entertainment & 348 \(\pm\) 23 \\ \hline office, police, post box, etc & Public Service & 310 \(\pm\) 36 \\ \hline bbq, bench, locker, etc & Facilities & 1333 \(\pm\) 141 \\ \hline recycling, waste basket, grit bin, etc & Waste & 661 \(\pm\) 85 \\ \hline apartments, dormitory, child- & Others & 549 \(\pm\) 37 \\ care, etc & & \\ \hline \end{tabular} \end{table} Table 1: Amenity classifications provided by OpenStreetMap [69], which are used for the example applications in Sec. III. The mean frequency and corresponding standard error across the 371 cities studied are listed next to each amenity class. we do not explore these options here. 
### Class First Passage Difference (CFPD) In order to capture both spatial and topological notions of the directness of accessibility among amenities as well as the heterogeneity of these amenities, we can compare the distances \(d_{E}(i,j)\) and \(d_{G}(i,j)\) between pairs of amenities \(i,j\) that fall into different classes \(c_{i},c_{j}\). Intuitively, if \(d_{G}(i,j)\gg d_{E}(i,j)\) for many pairs \(i,j\) in classes \(r,s\) respectively, then the street infrastructure of the city is not providing direct access between amenities of types \(r\) and \(s\), which may have adverse effects on the vitality and accessibility of the city from the structural perspective. On the other hand, if \(d_{G}(i,j)\approx d_{E}(i,j)\) for most pairs \(i,j\) with classes \(r,s\), then we do not incur much extra travel cost in routing between the amenity classes Figure 1: **The Class First Passage Difference (CFPD).****(a)** For each origin amenity class \(\alpha\) (here, “Sustenance”), we compute its class first passage difference (CFPD) with a destination amenity class \(\beta\) (here, “Financial”), which represents the average excess travel distance required to route from an amenity in class \(\alpha\) to the nearest amenity in class \(\beta\) (Eq. 1). Each amenity \(i\) in class \(\alpha\) contributes to this average a term \(d_{G}(i,j^{*}(i,\beta))-d_{E}(i,j^{*}(i,\beta))\), which gives the difference between the street network distance \(d_{G}\) (solid red line) and the crow-fly distance \(d_{E}\) (dotted black line) between amenity \(i\) and the nearest amenity in class \(\beta\), \(j^{*}(i,\beta)\). **(b)** If we start at the amenity of interest (amenity \(i\) in the “Sustenance” class) and the nearest amenity of a particular class is accessible via a straight street segment (here, “Transportation”), the contribution of this amenity pair to the CFPD is 0. On the other hand, if the street distance to the closest amenity in the class is large compared to the crow-fly distance (here, ‘Financial’), then the CFPD contribution of this amenity pair is large and provides evidence of a lack of direct accessibility between the two amenities along the network. The CFPD between two amenities can be asymmetric (here, “Sustenance to Financial” and “Financial” to “Sustenance”, with corresponding variables in black and green respectively). **(c)** CFPD matrix of New York City (values in km). **(d)** CFPD matrix of New York City, after shuffling the class labels of the amenities while maintaining the overall class frequencies. Values indicate the mean and standard error over 100 realizations of amenity shuffling. The relative CFPD values are not preserved under this null model, indicating that the CFPD is reflecting correlations in amenity locations along the street network beyond the information provided by the spatial density and relative frequencies of the amenities. and \(s\). There are a few choices for quantifying the deviation of \(d_{G}(i,j)\) and \(d_{E}(i,j)\). Perhaps the most natural choice is the detour factor [58; 77] (also known as the route factor, circuity, or directedness), which measures the ratio \(d_{G}(i,j)/d_{E}(i,j)\). The detour factor can be averaged over all pairs \(i,j\) to get an idea of the typical relative excess travel cost incurred by traveling between a pair of nodes. At first glance, the detour factor appears to have the advantage of being scale-independent, since it divides out spatial scale factors in the numerator and denominator. 
However, in practice the detour factor tends to decrease as trips become longer [78], due to the relatively straight nature of long street routes. Therefore, the detour factor makes it appear as if short paths are very inconvenient, when in practice the additional travel distance incurred may be negligible. Moreover, the absolute spatial density of development is a critically important quantity that influences the well-being of a city [13], so by dividing out the spatial distance in the detour factor expression one ignores this important factor. Since the travel distance (or cost, time, etc, depending on the definitions of \(d_{E}\) and \(d_{G}\)) itself is a more relevant quantity than the relative deviation quantified by the detour factor, for our measure we opt for the difference \(d_{G}(i,j)-d_{E}(i,j)\) as the quantity of interest quantifying meaningful deviations due to street network routing. This quantity can be directly interpreted as the excess travel distance incurred when routing from \(i\) to \(j\) due to the street network connectivity and geometry. By aggregating the excess travel distance \(d_{G}(i,j)-d_{E}(i,j)\) over pairs \(i,j\) with \(c_{i}=r,\ c_{j}=s\), we can identify amenity classes \(r,s\) that have high directness of accessibility to each other (low excess travel distances), and vice versa. However, it is not realistic to simply take the average \(d_{G}(i,j)-d_{E}(i,j)\) over all pairs \(i,j\) such that \(c_{i}=r,\ c_{j}=s\), since--under the assumption of equivalence among the amenities within a given class--it is unlikely that any rational agent would choose to route to an amenity that is much farther than a closer alternative in the same class [79]. In the case of additional information about routing preferences, one can employ more sophisticated route choice models that provide non-zero weights to different alternatives within the same amenity class [79]. These models will increase the computational burden of the method, but in principle should not change the results substantially so long as the closest alternative is given the highest weight. Additionally, with population density information, one could weight the excess travel distance between a pair of amenities using gravity-type models of travel demand [80] to obtain an expected net incurred cost across all travellers. With the goal of presenting a purely structural measure capturing the directness of accessibility among heterogeneous urban amenities that incorporates minimal assumptions, we do not explore these extensions in this paper. With this in mind, in the absence of any real demand data we will assume that agents routing along the network are primarily interested in the closest amenity of each class. This allows us to define the _Class First Passage Difference_ (CFPD) between the amenity classes \(\alpha\) and \(\beta\) as \[\text{CFPD}_{\alpha\beta}=\frac{1}{n_{\alpha}}\sum_{i=1}^{N}\left[d_{G}(i,j^{ *}(i,\beta))-d_{E}(i,j^{*}(i,\beta))\right]\delta_{c_{i},\alpha}, \tag{1}\] where \(\delta_{c_{i},\alpha}\) is the Kronecker delta function restricting us to origin nodes \(i\) that are in class \(\alpha\), and \[j^{*}(i,\beta)=\underset{j:c_{j}=\beta}{\operatorname{argmin}}\{d_{E}(i,j)\} \tag{2}\] is the amenity \(j\) in class \(\beta\) that is closest to amenity \(i\) in space. 
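A direct (unoptimized) rendering of Eqs. (1) and (2) is sketched below. It assumes each amenity has already been snapped to its nearest street-network node, which is a simplification of the projection onto the nearest point of the network described above, and all variable names are illustrative.

```python
# Sketch of Eqs. (1)-(2): CFPD from origin class alpha to destination class beta.
# Inputs (illustrative names): xy[i] planar coordinates of amenity i, cls[i] its
# class label, node[i] its nearest street-network node, and G a street network
# whose edges carry a 'length' attribute in meters.
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def cfpd(alpha, beta, xy, cls, node, G):
    xy = np.asarray(xy, dtype=float)
    cls = np.asarray(cls)
    idx_alpha = np.flatnonzero(cls == alpha)
    idx_beta = np.flatnonzero(cls == beta)   # (for alpha == beta, exclude i itself)
    tree = cKDTree(xy[idx_beta])             # nearest class-beta amenity in space
    excess = []
    for i in idx_alpha:
        d_E, k = tree.query(xy[i])           # crow-fly distance and Eq. (2) argmin
        j = idx_beta[k]
        d_G = nx.shortest_path_length(G, node[i], node[j], weight="length")
        excess.append(d_G - d_E)             # excess travel distance for origin i
    return float(np.mean(excess))            # Eq. (1)
```

Because amenities are snapped to nodes here rather than projected onto edges, \(d_{G}\) can occasionally come out slightly smaller than \(d_{E}\) in this sketch; the projection step used in the paper guarantees \(d_{E}\leq d_{G}\).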
We can see that in general \(\text{CFPD}_{\alpha\beta}\neq\text{CFPD}_{\beta\alpha}\), indicating that amenity class \(\beta\) may be more directly accessible along the streets to nodes in class \(\alpha\) than amenity class \(\alpha\) is to nodes in class \(\beta\). (The same asymmetry can be found in the CMFPT of [53].) A schematic illustrating the CFPD measure is shown in Fig. 1a,b. The CFPD information for a city can be summarized in a single \(C\times C\) matrix with indices \(\alpha,\beta\) corresponding to the value \(\text{CFPD}_{\alpha\beta}\). The diagonal values of this matrix, \(\text{CFPD}_{\alpha\alpha}\), correspond to the directness of accessibility among amenities within the same class, while off-diagonals \(\text{CFPD}_{\alpha\beta}\) correspond to the directness of accessibility among amenities of different classes. An example for New York City is shown in Fig. 1c, where the matrix elements are in units of km. As the figure indicates, the diagonals of the CFPD matrix are in general lower than the off-diagonals, reflecting the agglomerative nature of many amenities [81], although there are heterogeneities in the level of clustering. For instance, we find that Transportation facilities are most directly accessible from each other, differing by only 20 meters on average from their Euclidean distance, while the lengths of streets connecting Entertainment facilities are on average 110 meters greater than the crow-fly distance. By its definition, the CFPD matrix is affected by spatial density--as intended, since this is a key component of accessibility and vitality. However, in the experiments in Sec. III we are interested in identifying how the correlations among amenities in global cities impact CFPD results while controlling for the variability in the spatial density as well as frequency of different amenity classes across the cities, as these may be affected by heterogeneity across different regions due to OSM sampling coverage, cultural differences, etc. We can therefore compare any CFPD results we obtain for real data with the results of simulations from a null model that preserves the overall spatial density of amenities and the frequency of each amenity class, but destroys the correlations among the amenities in space and along the street network. In the null model we consider, the positions of the amenities remain fixed but the tags of the amenities (e.g. "bar", "atm", "waste basket") are shuffled uniformly at random. Each tag is associated with an amenity class categorizing the amenity type (see Table 1), and shuffling the tags uniformly at random is equivalent to shuffling the amenity class labels uniformly at random while preserving their frequencies across the city. By examining the average CFPD matrix over many realizations of this null model for New York City (Fig. 1d), we can see that spatial density and amenity frequency alone cannot explain the empirically observed CFPD values (Fig. 1c), since the relative values are not preserved. Instead, we can see that in the null model, the CFPD values are determined entirely by the frequency of the destination amenity class (i.e. the columns of the CFPD matrix are constant). This is because, regardless of our starting point, the distance to the nearest amenity of a certain class in the null model will be determined solely by the frequency of the amenity class--more frequent amenity classes will on average be closer--since all tags are being shuffled uniformly at random.
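The shuffling null model can then be expressed in a few lines, reusing the hypothetical `cfpd` helper sketched above: permute the class labels, leave every position and distance untouched, and average the resulting CFPD matrices over realizations.

```python
# Null model: permute amenity class labels uniformly at random (preserving the
# class frequencies and all positions), then recompute the full CFPD matrix.
# Reuses the illustrative cfpd() helper from the previous sketch.
import numpy as np

def cfpd_matrix(classes, xy, cls, node, G):
    C = len(classes)
    M = np.zeros((C, C))
    for a, alpha in enumerate(classes):
        for b, beta in enumerate(classes):
            M[a, b] = cfpd(alpha, beta, xy, cls, node, G)
    return M

def null_cfpd(classes, xy, cls, node, G, n_realizations=100, seed=0):
    rng = np.random.default_rng(seed)
    draws = [cfpd_matrix(classes, xy, rng.permutation(np.asarray(cls)), node, G)
             for _ in range(n_realizations)]
    return np.mean(draws, axis=0), np.std(draws, axis=0) / np.sqrt(n_realizations)
```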
From the CFPD matrix one can extract a number of useful aggregate measures, a few of which we explore in the next section. ### Aggregate CFPD Measures The CFPD matrix contains information about all pairs of different amenity classes in a city, allowing us to understand the directness of accessibility among specific amenity types. However, for large-scale comparisons across cities it is useful to extract a smaller set of values from this matrix that capture more aggregated notions of circuity. A natural measure we can compute from the CFPD matrix is its average, which gives an overall length scale of expected excess travel distance between amenity classes. However, there are a couple of issues with the interpretability of a simple average over CFPD matrix entries. Firstly, it treats all amenity classes on an equal footing, while some amenity classes may be much more frequent than others. Secondly, it includes competing contributions from the diagonals--which characterize the within-class accessibility and will favor amenity classes with high levels of agglomeration--and the off-diagonals--which characterize the between-class accessibility and will favor amenity class pairs that are highly mixed. As such, a low value of this average would simply indicate a high density of amenities, regardless of their class. (We will discuss further density-related considerations in Sec. II.3 and Sec. III.) With this in mind, we can construct a measure of the overall directness of accessibility of amenities in a city by looking at an average over individual amenities rather than full amenity classes (to ensure appropriate weighting by class frequency), and only considering distinct amenity classes. The resulting measure, which we call the _Average Class First Passage Difference_ (ACFPD), can be written as \[\text{ACFPD}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{C-1}\sum_{\beta\neq c_{i}}\left[d_{G}(i,j^{*}(i,\beta))-d_{E}(i,j^{*}(i,\beta))\right]\right), \tag{3}\] which can be interpreted as the expected excess travel distance from a randomly chosen amenity to the nearest amenity of a randomly chosen (different) class. This can be equivalently written in terms of the CFPD matrix as \[\text{ACFPD}=\frac{1}{C-1}\sum_{\alpha=1}^{C}\frac{n_{\alpha}}{N}\sum_{\beta\neq\alpha}\text{CFPD}_{\alpha\beta}. \tag{4}\] It is also useful to decompose the ACFPD into the contributions from each individual amenity class \(\alpha\), giving us the _Marginal Class First Passage Difference_ (MCFPD) \[\text{MCFPD}_{\alpha}=\frac{1}{C-1}\frac{n_{\alpha}}{N}\sum_{\beta\neq\alpha}\text{CFPD}_{\alpha\beta}, \tag{5}\] which evidently satisfies \(\sum_{\alpha}\text{MCFPD}_{\alpha}=\text{ACFPD}\). These measures are summarized in Table 2. \begin{table} \begin{tabular}{|p{56.9pt}|p{142.3pt}|p{142.3pt}|} \hline \multicolumn{3}{|c|}{**CFPD Measures**} \\ \hline **Measure** & **Definition** & **Interpretation** \\ \hline \(\text{CFPD}_{\alpha\beta}\) & Eq. 1 & “Class First Passage Difference”. Expected excess travel distance along the street network to reach the nearest amenity of class \(\beta\) from an amenity of class \(\alpha\). Characterizes the directness of accessibility for class \(\beta\) when starting from class \(\alpha\). \\ \hline \(\text{MCFPD}_{\alpha}\) & \(\frac{1}{C-1}\frac{n_{\alpha}}{N}\sum_{\beta\neq\alpha}\text{CFPD}_{\alpha\beta}\) & “Marginal Class First Passage Difference”. 
Expected excess travel distance from amenity of class \(\alpha\) to any distinct amenity type, weighted by the frequency of the amenities in class \(\alpha\). Characterizes the directness of accessibility for amenities in class \(\alpha\). \\ \hline \(\text{ACFPD}\) & \(\sum_{\alpha}\text{MCFPD}_{\alpha}\) & “Average Class First Passage Difference”. Expected excess travel distance from a randomly chosen amenity to any other amenity type. Characterizes the total directness of accessibility for all amenities in a city. \\ \hline \end{tabular} \end{table} Table 2: Definitions and interpretations of the CFPD-based measures used in this paper. ## III Results ### Correlations with Prosperity and Accessibility Indicators To examine the extent to which the directness of accessibility among amenities is associated with various facets of socioeconomic and environmental prosperity in cities, we compute the Spearman rank correlation between all pairs of marginal class first passage difference values (Eq. 5) and prosperity indicators discussed in Sec. II.1. In Fig. 2 we show the results of these experiments in the first ten rows. In Figures S4 and S5 we show scatterplots of the city indicators versus the MCFPD values for the Health and Education amenity classes. For each experiment, to ensure that amenity density and frequency are not the primary factors determining the correlation, we compare the true Spearman rank correlation value with 100 randomized realizations of the city where all other factors are fixed but the amenity class labels of the amenities are shuffled at random while preserving their frequencies (see Sec. II.3 for details). These random realizations have the same frequency and average spatial density across the city for each amenity class, but the spatial and topological correlations among the amenities is destroyed. We then compute an empirical permutation test p-value representing the fraction of random realizations with Spearman rank correlations higher than the true observed correlation. Asterisks indicate (MCFPD, Indicator) pairs for which the empirical p-value was less than 0.05, indicating that fewer than 5% of the randomized trials produced correlations higher than the observed correlation. We observe a striking overall trend in the results suggesting that the directness of accessibility among many amenity types is significantly associated with many of the prosperity indicators. For instance we find a positive correlation between the Gini coefficient and Poverty Rate with the MCFPD of a number of amenities, indicating that the less directly accessible the amenity classes Figure 2: **Associations among MCFPD values and prosperity/accessibility indicators across global cities.** The Marginal Class First Passage Difference (Eq. 5) was computed for all cities with available data for each indicator (number indicated in parentheses) and all amenity classes. Superscript \(*\) indicates a permutation test \(p\)-value of less than 0.05 over 100 draws from the null model where the class labels of the amenities are shuffled while maintaining the overall class frequencies. City indicators are defined in Table S1. are from one another, the more income inequality and poverty one finds in these cities. Meanwhile, we find a negative correlation with the other metrics such as quality of life and GDP per capita, indicating that as amenities become less directly accessible along the street infrastructure the indicators of prosperity and livability decline. 
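A compact sketch of the aggregation in Eqs. (4)-(5) and of the permutation p-value behind the asterisks in Fig. 2 is given below; the CFPD matrices, class counts, and indicator values are assumed to be precomputed, and the function names are illustrative.

```python
# MCFPD (Eq. 5) and ACFPD (Eq. 4) from a C x C CFPD matrix M with class counts
# n_c, plus the empirical permutation p-value for one (amenity class, indicator)
# cell of Fig. 2. Inputs are assumed precomputed; names are illustrative.
import numpy as np
from scipy.stats import spearmanr

def mcfpd(M, n_c):
    M = np.asarray(M, dtype=float)
    n_c = np.asarray(n_c, dtype=float)
    off = M.copy()
    np.fill_diagonal(off, 0.0)                     # only beta != alpha contributes
    return (n_c / n_c.sum()) * off.sum(axis=1) / (M.shape[0] - 1)

def acfpd(M, n_c):
    return float(mcfpd(M, n_c).sum())              # ACFPD = sum_alpha MCFPD_alpha

def fig2_cell_pvalue(mcfpd_per_city, null_mcfpd_per_city, indicator):
    """Fraction of shuffled realizations whose Spearman correlation with the
    indicator exceeds the observed one (the permutation p-value in the text)."""
    rho_obs, _ = spearmanr(mcfpd_per_city, indicator)
    rho_null = np.array([spearmanr(m, indicator)[0] for m in null_mcfpd_per_city])
    return float(np.mean(rho_null > rho_obs))
```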
We note the exception of the Transportation amenity class, which displays relatively weak correlations with prosperity indicators. This can be explained by the fact that Transportation constitutes the most frequent amenity class in each city--there are on average more than twice as many amenities in this class as in the next most frequent class (see Table 1). Therefore Transportation amenities naturally will have high accessibility regardless of how other amenities are configured. Another reason for these lower correlations is the nature of the Transportation amenity class itself--many transportation-related amenities are related to parking, and cities designed for easier car accessibility are not necessarily those with the highest levels of social and environmental prosperity [82]. We also see deviations in the correlation patterns for two other amenity classes, Facilities and Waste. Indeed, the results suggest that the less accessible these amenities are from each other, the better the outcome in terms of the prosperity metrics. We also repeat these experiments with the Walk Score and Bike Score indices (last two rows in Fig. 2), to examine whether the directness of accessibility as measured by the CFPD values is associated with the overall walkability and bikeability of a city according to these measures. Once again we observe negative correlations with all MCFPD values, indicating that lower excess travel costs are associated with higher levels of walkability and bikeability in the studied cities. The Facility class of amenities represents the only outlier in this analysis, which is likely due to its exclusion from the set of amenities considered in the Walk Score and Bike Score algorithms [83, 84]. Taken together, these correlations are consistent with the intuition that greater excess travel costs between amenities will be associated with lower levels of prosperity and accessibility. To check whether these effects are present when considering the average accessibility across all amenity classes in cities, in Fig. 3 and Fig. S6 we examine the associations between the indicators and the ACFPD values, finding that the trends mirror those seen for the MCFPD. Thus, whether one marginalizes over specific amenities or considers them as a whole, the association of the information contained in the CFPD with city indicators is robust. ### CFPD Distributions for Groups of Global Cities Next we disaggregate the results to understand how the CFPD varies across groups of cities at different stages of economic development--that is, Mature or Developing, or Region 1 and Region 2 corresponding to the Global North/South divide (see Sec. II.1). In Fig. 4 we plot the probability distributions for the CFPD according to the two development levels (panel a) and two world regions (panel b). We observe a clear separation in the distributions of the CFPD values in each case. For the development levels, we find that the Matured cities have systematically lower CFPD values than the Developing cities, and correspondingly amenities in Region 1 are more directly accessible than those in Region 2, despite the actual distribution of amenities being the same regardless of classification. Differences in the CFPD structure of these city groupings are also reflected in the equivalent of Fig. 2 plotted by maturity level (Fig. S7) and region (Fig. S8). While the qualitative trends are the same as those seen in Fig. 
2 regardless of city classification, we find that the correlations are much more pronounced in Mature and Region 1 cities as compared to the Developing and Region 2 cities. In order to determine the statistical significance of the observed discrepancies between the CFPD distributions within each city subgroup in Fig. 4, we compute the Jonckheere trend test statistic [85] assuming an a priori ordering of the distributions--Matured \(<\) Developing and Region \(1<\) Region \(2\) in panels (a) and (b) respectively. Figure 3: **Relationship between ACFPD and Accessibility Indicators.****(a)**-**(b)** Walk Score and Bike Score [71] versus ACFPD for 115 cities (Sec. II.1). Superscript \(*\) indicates a permutation test \(p\)-value of less than 0.05 over 100 draws from the null model where the class labels of the amenities are shuffled while maintaining the overall class frequencies. The Jonckheere statistic \(S\) measures the extent to which the values in the group hypothesized to have higher values (\(d_{high}\)) exceed the values in the group assumed to have lower values (\(d_{low}\)), and is given by \[S=\sum_{x\in d_{low}}\sum_{y\in d_{high}}[\mathbf{1}(x\leq y)-\mathbf{1}(x>y)], \tag{6}\] where \(\mathbf{1}(\cdot)\) is the indicator function. (This is equivalent to a rescaled Mann-Whitney test statistic for two groups and no ties between the groups, although the Jonckheere statistic is applicable to cases with more than two ordered groups.) In its standard form, the Jonckheere trend test is performed by computing the p-value associated with the \(S\) statistic under its corresponding standard normal approximation, in which case it has more statistical power than the Kruskal-Wallis test for a priori ordered distributions [85]. Under this test, our results are highly statistically significant (\(p\ll 0.01\) for the \(S\) statistics of both panels (a) and (b)). However, we also perform an additional, more stringent test of statistical significance to ensure we can account for amenity density- and frequency-related effects. We compute the empirical p-value for the \(S\) statistic associated with the real data by comparing this value with the \(S\) statistics obtained for 100 null model simulations. Specifically, in each simulation we shuffle the amenity labels within each city and calculate its resulting set of CFPD values, then aggregate all these CFPD values for the two city subgroups of interest. We then recompute the \(S\) statistic for these two distributions of CFPD values, and repeat the simulation process for 100 trials. For both the development level comparison (panel a) and the world region comparison (panel b), we find empirical p-values of less than \(p=0.05\), indicating that in more than 95% of trials the null model resulted in a lower \(S\) statistic than that which was observed for the real data. We note that this provides rather strong evidence of the CFPD being a robust indicator of city prosperity, given that it tracks with the classification of cities done by external agencies (United Nations and JLL) using methods quite different from those considered here [86; 66].
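Eq. (6) and the label-shuffling significance check described above translate directly into code; in the sketch below the two CFPD samples are plain arrays and, as in the text, the p-value is the fraction of shuffled realizations whose \(S\) statistic is at least as large as the observed one.

```python
# Jonckheere-type trend statistic of Eq. (6) for two a priori ordered groups of
# CFPD values (d_low hypothesized smaller than d_high), with the empirical
# permutation p-value computed from label-shuffled realizations.
import numpy as np

def jonckheere_S(d_low, d_high):
    x = np.asarray(d_low, dtype=float)[:, None]
    y = np.asarray(d_high, dtype=float)[None, :]
    return int(np.sum(x <= y) - np.sum(x > y))     # Eq. (6)

def jonckheere_permutation_pvalue(d_low, d_high, null_pairs):
    """null_pairs: iterable of (d_low_k, d_high_k) from shuffled realizations."""
    S_obs = jonckheere_S(d_low, d_high)
    S_null = np.array([jonckheere_S(a, b) for a, b in null_pairs])
    return float(np.mean(S_null >= S_obs))
```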
Our method, which is built on what we call the Class First Passage Difference (CFPD), simultaneously accounts for the density, heterogeneity, and adjacency of amenities along an underlying urban street network, effectively integrating both spatial and topological correlations among these points of interest in a single measure. We observe strong correlations between our CFPD-based measures and various urban prosperity and accessibility indicators, as well as with the development levels and world regions of global cities. All of our results are robust when compared to a null model that scrambles the correlations among amenities while preserving their densities and frequencies throughout the city, confirming that the CFPD measures provide a more nuanced view of accessibility beyond the density and diversity of amenities. A comprehensive analysis of the accessibility and vitality of cities requires the consideration of many distinct competing factors [16; 40] and spatial scales [31; 32], underscoring the importance of developing interpretable measures that can parsimoniously describe multifaceted Figure 4: **Differences in CFPD values across groups of global cities.****(a)** The probability density P(CFPD) of Class First Passage Difference (Eq. 1) for cities in different development subgroups. **(b)** P(CFPD) for cities in different regional subgroups (Sec. II.1). The observed distributional orderings are statistically significant at the \(p=0.05\) level when compared to the null model where class labels of the amenities are shuffled while maintaining the overall class frequencies. structural correlations in cities that are relevant for accessibility and vitality. The results in our experiments suggest that the CFPD framework can provide a complementary but distinct view of urban structure to existing measures of accessibility and urban vitality by assessing the directness of accessibility among amenities along the street infrastructure conditioned on their spatial locations. Our measure is easy to compute for large-scale urban datasets and, when used in conjunction with the null model we present, provides a framework for assessing correlations among amenities that is robust to variations in their sampling density. Accessibility and diversity of amenities are key components of urban vitality [13], and circuity is a key component of accessibility along an infrastructure network [87], so by combining notions of circuity and diversity among urban amenities, the CFPD measure we present captures a fundamental contributor to urban vitality that is often overlooked in favor of measures based on amenity diversity or density alone. Our measures can be extended in a number of meaningful ways in future work. In this work, given an origin amenity, we have identified the destination amenity within each class as the amenity minimizing the Euclidean distance to the starting point. However, in principle one can use any number of travel cost measures (e.g. travel time) or desirability measures (e.g. popularity) to identify the destination amenity within each class. One can also use a non-deterministic mechanism such as a stochastic route choice model or biased random walk to choose the destination amenities to which we compute the excess travel costs for each origin. One can also perform the CFPD analysis using a more generalized travel cost measure for both the crow-fly and street network paths, resulting in a measure with units of cost rather than distance.
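As one concrete example of the "units of cost rather than distance" extension mentioned above, edge lengths can be converted to walking times before routing; the constant walking speed below is an assumed placeholder, not a value from the paper, and an OSMnx-style MultiDiGraph with a 'length' edge attribute is assumed.

```python
# Illustrative time-based variant: convert edge lengths (m) to walking times (s)
# at an assumed constant speed, then route on 'travel_time' instead of 'length'.
# A CFPD computed this way is expressed in seconds of excess travel time.
import networkx as nx

WALK_SPEED_M_PER_S = 1.4  # assumed average walking speed (placeholder)

def add_walking_times(G):
    # G is assumed to be an OSMnx-style MultiDiGraph with 'length' attributes.
    for u, v, k, data in G.edges(keys=True, data=True):
        data["travel_time"] = data["length"] / WALK_SPEED_M_PER_S
    return G

def network_time(G, u, v):
    return nx.shortest_path_length(G, u, v, weight="travel_time")
```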
2306.10195
Nonadiabatic simulations of photoisomerization and dissociation in ethylene using ab initio classical trajectories
We simulate the nonadiabatic dynamics of photo-induced isomerization and dissociation in ethylene using ab initio classical trajectories in an extended phase space of nuclear and electronic variables. This is achieved by employing the Linearized Semiclassical Initial Value Representation (LSC-IVR) method for nonadiabatic dynamics where discrete electronic states are mapped to continuous classical variables using either the Meyer-Miller Stock-Thoss representation or a more recently introduced spin mapping approach. Trajectory initial conditions are sampled by constraining electronic state variables to a single initial excited state, and by drawing nuclear phase space configurations from a Wigner distribution at finite temperature. An ensemble of classical ab initio trajectories are then generated to compute thermal population correlation functions and to analyze the mechanisms of isomerization and dissociation. Our results serve as a demonstration that this parameter-free semiclassical approach is computationally efficient and accurate, identifying mechanistic pathways in agreement with previous theoretical studies, and also uncovering dissociation pathways observed experimentally.
Ken Miyazaki, Nandini Ananth
2023-06-16T22:01:29Z
http://arxiv.org/abs/2306.10195v3
Nonadiabatic simulations of photoisomerization and dissociation in ethylene using _ab initio_ classical trajectories ###### Abstract We simulate the nonadiabatic dynamics of photo-induced isomerization and dissociation in ethylene using _ab initio_ classical trajectories in an extended phase space of nuclear and electronic variables. This is achieved by employing the Linearized Semiclassical Initial Value Representation (LSC-IVR) method for nonadiabatic dynamics where discrete electronic states are mapped to continuous classical variables using either the Meyer-Miller Stock-Thoss representation or a more recently introduced spin mapping approach. Trajectory initial conditions are sampled by constraining electronic state variables to a single initial excited state, and by drawing nuclear phase space configurations from a Wigner distribution at finite temperature. An ensemble of classical _ab initio_ trajectories are then generated to compute thermal population correlation functions and to analyze the mechanisms of isomerization and dissociation. Our results serve as a demonstration that this parameter-free semiclassical approach is computationally efficient and accurate, identifying mechanistic pathways in agreement with previous theoretical studies, and also uncovering dissociation pathways observed experimentally. + Footnote †: preprint: AIP/123-QED ## I Introduction Nonadiabatic effects that result from the coupling of nuclear motion to electronic transitions are crucial to understanding the mechanism of various interesting photophysical and photochemical phenomena including internal conversion, intersystem crossing, and photodissociation.[1] Unfortunately, the high dimensionality of complex molecular systems makes real-time dynamic simulations on a pre-computed potential energy surface challenging and motivates the development of on-the-fly dynamic methods. The search for computationally efficient and accurate on-the-fly quantum dynamic simulations remains an outstanding challenge, particularly in systems where nonadiabatic effects play a central role. Although several methods have been developed for the simulation of nonadiabatic processes, only a handful lend themselves to on-the-fly implementations that require not just routine force calculations but also evaluation of the nonadiabatic coupling vector at each time step. Multiconfigurational Time-dependent Hartree (MCTDH) is, arguably, the most exact of these techniques relying on a weighted sum of Gaussian nuclear functions centered on a grid while solving the Schrödinger equation exactly for the electronic degrees of freedom.[2] Variational Gaussian methods are extensions of MCTDH where the nuclear wavefunction is replaced by a Gaussian wavepacket.[3] In multiple spawning and related methods,[4; 5] the number of nuclear basis functions increases through 'spawning' events that occur when electronic states become near-degenerate, mimicking the quantum dynamical bifurcation of wavepackets. Formally, multiple spawning can converge to exact quantum dynamics if a sufficiently large number of wavepackets are included, and the transition matrix elements are evaluated exactly at each step.[6] However, in practice, it is necessary to limit the number of wavepackets spawned and approximate matrix elements as is done in approaches like _ab initio_ multiple spawning (AIMS).[3; 7; 8] Mixed quantum-classical methods like surface hopping and Ehrenfest dynamics further approximate nuclear motion by treating it classically.
In surface hopping[9; 10] nuclear wavepackets with finite widths in the nuclear coordinates are replaced by independent classical trajectories. Each trajectory is allowed to 'hop' between surfaces according to a stochastic algorithm based on the amplitudes of each electronic state. Ehrenfest is a mean-field dynamics, where the potential energy for classical nuclear propagation is obtained by the weighted average of electronic state energies obtained at an instantaneous nuclear geometry.[11] While the non-interacting classical trajectories simplify the dynamics, both surface hopping and Ehrenfest are known to suffer from strong coherence and slow decoherence, necessitating additional _ad hoc_ modifications.[12; 13; 14; 15; 16; 17; 18] The methods discussed thus far focus on approximately time evolving wavepackets that can then be used to compute real-time correlation functions necessary to compare with experimental observables. Semiclassical (SC) and path-integral based methods[19; 20; 21] take an alternate approach, directly approximating the real-time quantum correlation function. As reviewed recently, there is a hierarchy of SC methods that include quantum effects and more specifically nonadiabatic effects to differing extents.[22] However, only the more classical-limit SC methods are sufficiently efficient to allow for on-the-fly dynamic simulations. In particular, the Linearized Semiclassical Initial Value Representation (LSC-IVR) approach[23; 24] is a classical-limit method that approximates the quantum correlation functions between two operators, \(\hat{A}\) and \(\hat{B}\), as a single phase space integral over a product of Wigner functions evaluated at time zero and classically time-evolved phase space configurations at later time \(t\) respectively. LSC-IVR is exact at time zero and for harmonic potentials, and offers a simple implementation. Recently, the symmetrical quasiclassical (SQC) method has been introduced[25; 26; 27] where the Wigner functions in LSC-IVR are replaced by 'window' functions that have been successfully used to study a range of model chemical systems,[26; 27; 28; 29]_ab initio_ dynamics using the so-called quasi-diabatic electronic states[30; 31] as well as using adiabatic electronic states with an approximate integration scheme.[29] Unfortunately, unlike LSC-IVR, SQC requires the choice of an _ad hoc_ window function that can significantly affect simulation accuracy.[30; 31; 32] In this paper, we implement a first on-the-fly nonadiabatic dynamic simulation with LSC-IVR using _ab initio_ classical trajectories in an extended phase space of nuclear and adiabatic electronic state variables. The classical Hamiltonian employed by these semiclassical methods is obtained by mapping discrete electronic state variables to continuous Cartesian variables. We explore two mapping protocols: the well-established Meyer-Miller-Stock-Thoss (MMST) approach[33; 34; 35; 36] and the more recently introduced spin mapping that shows great promise in model system studies.[37; 38] To enable on-the-fly simulations, we use an adiabatic electronic state representation[39; 35; 40] in conjunction with the 'kinematic' momentum integration scheme.[40] Trajectory initial conditions for the nuclei are sampled from an initial Wigner distribution, while electronic state variables are sampled to ensure that only a single excited electronic state is populated at time zero. 
We calculate the real-time population correlation function by generating classical trajectories from the mapping Hamiltonian, with the potential gradients, energies, and the nonadiabatic coupling vector calculated at each time step from a CASSCF(2o,2e) electronic structure calculation. We then analyze the resulting ensemble of trajectories to identify the mechanisms of photoisomerization and dissociation pathways in ethylene, and compare our results against previous theoretical and experimental studies. We conclude with a detailed discussion highlighting the significant advantages of the LSC-IVR approach for on-the-fly nonadiabatic simulations while also outlining outstanding challenges. ## II Theory ### LSC-IVR approximation to quantum correlation functions In the path integral representation of quantum mechanics, the real-time quantum correlation function, \[C_{AB}(t)=\text{Tr}\left[\hat{A}e^{i\hat{H}t/\hbar}\hat{B}e^{-i\hat{H}t/\hbar }\right], \tag{1}\] is expressed as a double sum over all possible forward and backward paths in coordinate space with an overall phase corresponding to the difference in action between the forward and backward paths. Truncating the difference in action to first order yields the LSC-IVR approximation[41; 23] to the correlation function in Eq. (1), \[C_{AB}^{LSC}(t)=\frac{1}{(2\pi\hbar)^{N}}\int d\mathbf{X}_{0}\int d\mathbf{P} _{0}A_{\mathcal{W}}(\mathbf{X}_{0},\mathbf{P}_{0})B_{\mathcal{W}}(\mathbf{X} _{t},\mathbf{P}_{t}). \tag{2}\] In Eq. (2), \(O_{\mathcal{W}}(\mathbf{X},\mathbf{P})\) represents the Wigner transform of \(\hat{O}\) defined as \[O_{\mathcal{W}}(\mathbf{X},\mathbf{P})=\int d\Delta\left\langle\mathbf{X}+ \frac{\mathbf{\Delta}}{2}\middle|\hat{O}\middle|\mathbf{X}-\frac{\mathbf{ \Delta}}{2}\right\rangle e^{-i\mathbf{P}\mathbf{\Delta}/\hbar}, \tag{3}\] where \(N\) is the number of system degrees of freedom, and \((\mathbf{X},\mathbf{P})\) are phase space vectors. An ensemble of trajectories are generated by sampling initial conditions from \(A_{\mathcal{W}}(\mathbf{X}_{0},\mathbf{P}_{0})\) and propagated for time \(t\) according to classical equations of motion generated by the Hamiltonian, \(H(\mathbf{X},\mathbf{P})\). The function \(B_{\mathcal{W}}\) is then evaluated at the time-evolved phase space variables \((\mathbf{X}_{t},\mathbf{P}_{t})\). Calculating the LSC-IVR correlation function in Eq. 2 is generally efficient, incorporating quantum effects like nuclear tunneling and zero-point energy at a computational cost similar to a classical simulation. ### Thermal Correlation Functions and Nonadiabatic Dynamics The LSC-IVR expression for a real-time thermal correlation function requires the Wigner transform of the density operator, \(\hat{A}\equiv\hat{\mathbf{\rho}}=e^{-\beta\hat{H}}\) (Eq. 3), where \(\beta=1/k_{B}T\). For a multi-state system, we assume the initial density operator is separable, \(\hat{\rho}=\rho_{e}\rho_{n}\), where \(\rho_{e}\) and \(\hat{\rho}_{n}\) are the electronic and nuclear density operators, respectively, and \(\text{Tr}_{e}\left[\hat{\rho}_{e}\right]=\text{Tr}_{n}\left[\hat{\rho}_{n} \right]=1\). 
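As an aside, before specializing Eq. (2) to the multi-state mapped Hamiltonian used below, its overall structure -- sample phase-space points from \(A_{\mathcal{W}}\), propagate them classically, and average \(B_{\mathcal{W}}\) at time \(t\) -- can be sketched for a one-dimensional harmonic model. The displaced-Gaussian initial Wigner distribution and the choice \(\hat{B}=\hat{x}\) are illustrative stand-ins (not the ethylene simulation), chosen so that the Monte Carlo average can be checked against the analytic result \(\langle x_{t}\rangle=x_{0}\cos(\omega t)\).

```python
import numpy as np

# Model parameters (arbitrary units); all values are illustrative.
m, omega, hbar = 1.0, 1.0, 1.0
x0 = 1.0                        # displacement of the initial wavepacket
dt, n_steps, n_traj = 0.01, 600, 5000

rng = np.random.default_rng(0)
# A_W: Wigner function of a displaced Gaussian wavepacket (a Gaussian in phase space).
x = rng.normal(x0, np.sqrt(hbar / (2 * m * omega)), n_traj)
p = rng.normal(0.0, np.sqrt(m * omega * hbar / 2), n_traj)

def force(x):
    return -m * omega**2 * x    # harmonic force

# Classical propagation of the trajectory ensemble (velocity Verlet).
corr = np.empty(n_steps)
for step in range(n_steps):
    corr[step] = x.mean()       # average of B_W(X_t, P_t) with B = position operator
    a = force(x) / m
    x = x + p / m * dt + 0.5 * a * dt**2
    p = p + 0.5 * m * (a + force(x) / m) * dt

t = dt * np.arange(n_steps)
print(np.max(np.abs(corr - x0 * np.cos(omega * t))))  # small Monte Carlo/integration error
```

The multi-state simulations described in the remainder of this section replace this model density with the separable electronic-nuclear density just introduced and the classical propagation with the mapping-Hamiltonian dynamics defined below.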
Under this assumption, the Wigner transform of \(A\) can be written as the product of separate transforms: \(A_{\mathcal{W}}(\mathbf{R},\mathbf{P},\mathbf{x},\mathbf{p})=\left[\rho_{e} \right]_{\mathcal{W}}(\mathbf{x},\mathbf{p})\left[\hat{\rho}_{n}\right]_{ \mathcal{W}}(\mathbf{R},\mathbf{P})\) where \((\mathbf{R},\mathbf{P})\) are nuclear phase space vectors and the \((\mathbf{x},\mathbf{p})\) are the electronic state phase space variables. Assuming a thermal distribution of harmonic normal modes, it is possible to express the Wigner transform of nuclear density as \[\left[\hat{\rho}_{n}\right]_{\mathcal{W}} =\left[e^{-\beta\hat{H}_{n}}\right]_{\mathcal{W}}\] \[=\prod_{i=1}^{N-F}\frac{1}{2\pi}\exp\left[-\tanh\left(\frac{ \beta\alpha}{2}\right)\left(\frac{1}{\mu_{i}\omega_{i}}P_{i}^{2}+\mu_{i} \omega_{i}R_{i}^{2}\right)\right], \tag{4}\] where \(\mu_{i}\) and \(\omega_{i}\) are the reduced mass and frequency of the \(i\)-th vibrational mode, \(N\) is the total number of degrees of freedom in the system (electronic and nuclear) and \(F\) is the number of electronic states. For photo-initiated processes, the electronic density is defined as the projection operator onto a single initially occupied \(i^{\text{th}}\) electronic state, \[\left[\hat{\rho}_{e}\right]_{W}=\left[\left|i\right\rangle\left\langle i \right|\right]_{W}. \tag{5}\] In the SC framework, the discrete state-space electronic density matrix and the corresponding multi-state system Hamiltonian are treated by mapping them to a continuous Cartesian variable representation, and we describe two such schemes below. #### ii.1.1 MMST mapping: The MMST approach effectively maps the occupation of an electronic state to a single excitation quantum of a corresponding harmonic oscillator,[33] \[\ket{n}\bra{m}\rightarrow\hat{a}_{n}^{\dagger}\hat{a}_{m} \tag{6}\] where \(F\) is the total number of electronic states and the creation and annihilation operators of \(k\)th harmonic oscillator are \(\hat{a}_{k}^{\dagger}=(\hat{x}_{k}-i\hat{p}_{k})/\sqrt{2}\) and \(\hat{a}_{k}=(\hat{x}_{k}+i\hat{p}_{k})/\sqrt{2}\) in terms of electronic phase space variables. Using this mapping, the multistate Hamiltonian expressed in the adiabatic electronic state representation can be written as,[40; 42; 43] \[H =\frac{(\mathbf{P}+\Delta\mathbf{P})^{2}}{2\mu}+\sum_{n}^{F}\frac {1}{2}\left(p_{n}^{2}+x_{n}^{2}-\gamma\right)E_{n}(\mathbf{R})\] \[=\frac{\mathbf{p}_{\text{kin}}^{2}}{2\mu}+\sum_{n}^{F}\frac{1}{2} \left(p_{n}^{2}+x_{n}^{2}-\gamma\right)E_{n}(\mathbf{R}). \tag{7}\] We note that the MMST mapping is exact when \(\gamma\!=\!1\), and this is the value we use in this manuscript although some semi-classical simulations treat \(\gamma\) as a zero-point energy (ZPE) parameter that can be modified to increase numerical stability. In Eq. 
7, \(E_{n}(\mathbf{R})\) is the adiabatic energy of the \(n\)-th electronic state, the kinematic momentum \(\mathbf{P}_{\text{kin}}=\mathbf{P}+\Delta\mathbf{P}\), with \[\Delta\mathbf{P}=\sum_{n\neq m}^{F}x_{n}p_{m}\mathbf{d}_{nm}(\mathbf{R}), \tag{8}\] where \(\mathbf{d}_{nm}(\mathbf{R})=\langle\phi_{n}|\frac{\partial}{\partial\mathbf{R }}|\phi_{m}\rangle\) is the nonadiabatic coupling vector between the electronic states \(|\phi_{n}\rangle\) and \(|\phi_{m}\rangle\).[40] It is further possible to construct an equivalent, exact, symmetrized form of the MMST Hamiltonian, \[H =\frac{\mathbf{P}_{\text{kin}}^{2}}{2\mu}+\frac{1}{F}\sum_{n}^{F} E_{n}(\mathbf{R})\] \[\quad+\frac{1}{F}\sum_{n,m}^{F}\frac{1}{4}\left(p_{n}^{2}+x_{n}^{ 2}-p_{m}^{2}-x_{m}^{2}\right)\left(E_{n}(\mathbf{R})-E_{m}(\mathbf{R})\right)\] \[=\frac{\mathbf{P}_{\text{kin}}^{2}}{2\mu}+V_{\text{eff}}, \tag{9}\] such that the equations of motion are independent of the ZPE parameter.[26; 40] In order to evaluate the electronic density matrix as defined in Eq. 5, we consider the Wigner transform of the projection operator. In the MMST framework, the phase space expression obtained through Wigner transformation differs depending on the quantum mechanical definition,[41; 44; 45] and we consider both forms in this work. Defining the projection operator in the singly excited oscillator (SEO) basis yields the so-called Wigner population estimator,[46] \[P_{\mathcal{W}}^{i} =\left[\ket{i}\bra{i}\right]_{\mathcal{W}}^{\text{SEO}}\] \[=2^{F+1}\left(x_{i}^{2}+p_{i}^{2}-\frac{1}{2}\right)\exp\left[- \sum_{j}^{F}\left(x_{j}^{2}+p_{j}^{2}\right)\right]. \tag{10}\] Expressing the projection operators using the mapping variable operators yields the SC population estimator,[44] \[P_{\text{SC}}^{i}=\left[\frac{1}{2}\big{(}\hat{x}_{i}^{2}+\hat{p}_{i}^{2}-1 \big{)}\right]_{\mathcal{W}}=\frac{1}{2}\big{(}x_{i}^{2}+p_{i}^{2}-1\big{)}\,. \tag{11}\] #### ii.1.2 Spin mapping As an alternative to MMST mapping, various schemes that map electronic states to spin variables have been previously proposed;[47; 48; 34] here we employ a recently introduced scheme that appears to out-perform MMST mapping in model system studies.[37; 38; 49] The spin mapping (SM) approach maps an \(F\)-level electronic system to an \((F-1)/2\) spin system,[37] and the corresponding Hamiltonian is obtained by recognizing that the \(F\)-level system Hamiltonian can be exactly expressed as a linear combination of the \(F^{2}-1\) spin angular momentum matrices that define a spin \((F-1)/2\) system and the identity operator, \[\hat{H}=H_{0}\mathds{1}+\sum_{i=1}^{F^{2}-1}H_{i}\hat{S}_{i} \tag{12}\] where \(H_{0}\) and the \(\{H_{i}\}\) are the expansion coefficients, and the spin angular momentum matrices, \(\{\hat{S}_{i}\}_{i=1}^{F^{2}-1}\), are traceless and orthogonal. The Hamiltonian in Eq. (12) can be written in terms of continuous phase-space Cartesian operators using the Stratonovich-Weyl transforms.[50; 51; 52] Interestingly, after symmetrization, in the adiabatic electronic state representation the resulting SM Hamiltonian is identical to the symmetrized MMST Hamiltonian in Eq. (9). In this manuscript, we focus on the \(W\)-representation of the Stratonovich-Weyl transform, which results in a correlation function analogous to that of LSC-IVR (Eq. 2), and that recent work suggests is the most successful choice of representation.[37; 38] As before, we define the initial electronic density matrix as a projection onto a single discrete electronic states. 
Since the specific form of the projection operator depends on the number of electronic states, we consider the case where \(F=3\) in line with our study of the photoisomerization of ethylene. Expressing the projection operator in terms of spin matrices, \[\ket{j}\bra{j}=C_{0}\mathds{1}+\sum_{i}^{F^{2}-1}C_{i}\hat{S}_{i}, \tag{13}\] and using the \(W\)-representation leads to expressions for the population estimators,[38] \[P_{\text{SM}}^{1} =\left[\ket{1}\bra{1}\right]_{\text{SM}}=\frac{1}{3}+\frac{1}{6} \big{(}2r_{1}^{2}-r_{2}^{2}-r_{3}^{2}\big{)} \tag{14a}\] \[P_{\text{SM}}^{2} =\left[\ket{2}\bra{2}\right]_{\text{SM}}=\frac{1}{3}-\frac{1}{6} \big{(}r_{1}^{2}-2r_{2}^{2}+r_{3}^{2}\big{)}\] (14b) \[P_{\text{SM}}^{3} =\left[\ket{3}\bra{3}\right]_{\text{SM}}=\frac{1}{3}-\frac{1}{6} \big{(}r_{1}^{2}+r_{2}^{2}-2r_{3}^{2}\big{)} \tag{14c}\] where \(r_{i}^{2}=x_{i}^{2}+p_{i}^{2}\) and \(\sum_{j}^{F}\left[\ket{j}\bra{j}\right]_{\text{SM}}=1\). ### Simulation details Excited state dynamics in the LSC-IVR framework are obtained from the population correlation function, \[C_{P_{j}}(t) =\frac{1}{(2\pi\hbar)^{N}}\int d\mathbf{R}_{0}\int d\mathbf{P}_{0} \int d\mathbf{x}_{0}\int d\mathbf{p}_{0}\] \[\times\left[\boldsymbol{\beta}_{n}\right]_{\mathcal{W}}\left( \mathbf{R}_{0},\mathbf{P}_{0}\right)\left[\boldsymbol{\beta}_{e}\right]_{M} \left(\mathbf{x}_{0},\mathbf{p}_{0}\right)P_{M}^{j}\left(\mathbf{x}_{t}, \mathbf{p}_{t}\right), \tag{15}\] where the Wigner transform of the nuclear density matrix, \(\left[\boldsymbol{\beta}_{n}\right]_{\mathcal{W}}\), is defined in Eq. (4) and used to sample the initial nuclear phase space variables at temperature \(T=300\)K, with the frequencies and the reduced masses calculated at the equilibrium geometry of the \(S_{0}\) state. For a system initially in a single excited state, the electronic density matrix in Eq. (15) can be expressed in terms of electronic population estimators, \[[\hat{\rho}_{e}]_{M}=\delta\left(P_{M}^{1}-1\right)\prod_{j\neq 1}^{2} \delta\left(P_{M}^{j}\right), \tag{16}\] where the subscript \(M=\mathcal{W},\,\mathrm{SC},\,\mathrm{SM}\) indicates the specific choice of mapping framework/estimator, and \(P_{M}^{i}\) is the population estimator for the \(i\)-th electronic state (with \(i=0,1,2\) for the \(S_{0}\), \(S_{1}\), and \(S_{2}\) electronic states of ethylene, respectively). Initial values for the electronic phase space variables are sampled using the focusing approximation [46; 53] such that the initial electronic state population is exactly 1 for the occupied \(S_{1}\) state and 0 for the unoccupied states (\(S_{0}\) and \(S_{2}\)). This is achieved by sampling initial electronic mapping variables for the \(i\)-th electronic state from a circle with radius \(x_{i}^{2}+p_{i}^{2}=r_{i}^{2}\); The radii for occupied and unoccupied states in each implementation are specified in Table 1, and are obtained by solving the corresponding equations for the population estimators provided in Eq. 10 for the Wigner estimator in the MMST framework, in Eq. 11 for the SC estimators in the MMST framework, and in Eq. 14 for \(W\)-representation in the SM framework. Finally, in Eq. (15), the electronic population estimator at time \(t\), \(P_{M}^{i}(\mathbf{x}_{t},\mathbf{p}_{t})\), is evaluated using the time-evolved electronic positions and momenta obtained by propagating trajectory initial conditions under the classical analog symmetrized mapping Hamiltonian defined in Eq. (9) for all three implementations. 
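To make the focused sampling and the estimators of Eqs. (10), (11), and (14) concrete, the short sketch below draws electronic mapping variables on circles of fixed \(x^{2}+p^{2}\) and evaluates the SC and spin-mapping populations for a three-state system; the Wigner estimator of Eq. (10) can be evaluated analogously once its radii are fixed by the normalization conditions behind Table 1. Treating the tabulated values as the sampled \(x^{2}+p^{2}\) of the occupied and unoccupied states, and the state ordering used below, are assumptions made purely for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_on_circle(r_squared):
    """Draw (x, p) uniformly on the circle x^2 + p^2 = r_squared (focused sampling)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(r_squared)
    return r * np.cos(theta), r * np.sin(theta)

def populations_sc(x, p):
    """SC estimator, Eq. (11): P_i = (x_i^2 + p_i^2 - 1) / 2."""
    return 0.5 * (x**2 + p**2 - 1.0)

def populations_sm(x, p):
    """Spin-mapping (W-representation) estimators of Eq. (14) for F = 3 states."""
    r2 = x**2 + p**2
    return np.array([
        1.0 / 3.0 + (2 * r2[0] - r2[1] - r2[2]) / 6.0,
        1.0 / 3.0 - (r2[0] - 2 * r2[1] + r2[2]) / 6.0,
        1.0 / 3.0 - (r2[0] + r2[1] - 2 * r2[2]) / 6.0,
    ])

# First entry: the initially occupied excited state; remaining entries: unoccupied states.
sc_r2 = [3.0, 1.0, 1.0]                    # SC row of Table 1, read as x^2 + p^2
sm_r2 = [8.0 / 3.0, 2.0 / 3.0, 2.0 / 3.0]  # spin-mapping row of Table 1, read the same way

x_sc, p_sc = map(np.array, zip(*[sample_on_circle(r2) for r2 in sc_r2]))
x_sm, p_sm = map(np.array, zip(*[sample_on_circle(r2) for r2 in sm_r2]))

print(populations_sc(x_sc, p_sc))   # -> [1, 0, 0] regardless of the sampled angles
print(populations_sm(x_sm, p_sm))   # -> [1, 0, 0] regardless of the sampled angles
```

Because the estimators depend on the mapping variables only through \(x_{i}^{2}+p_{i}^{2}\), the focused sampling fixes the initial populations exactly while leaving the phases random.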
Classical equations of motion are integrated using the 4th order Adams-Bashforth-Moulton predictor-corrector integrator with a time step of 1 a.u. time (\(\sim 1/40\) fs) for a total time of 400 fs. The necessary classical forces at each time step are obtained from on-the-fly calculations of the adiabatic energies of the three electronic states, their gradients, and the nonadiabatic coupling vectors using Pople's 6-31+G\({}^{*}\) basis set and CASSCF(2o2e) with three states included in state averaging in the electronic structure package GAMESS.[54] ## III Result and discussion ### Photoisomerization We discuss the results of our on-the-fly ab initio LSC study of ethylene photoisomerization and dissociation in context with previous theoretical efforts using AIMS,[55; 56; 57; 58] surface hopping,[59] and SQC[30; 31] as well as experimental studies.[60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71] As with detailed earlier theoretical simulations of this system,[55] we include the ground state (\(S_{0}\)) and two excited states (\(\pi\rightarrow\pi^{*}=S_{1}\) and \(\pi^{*2}=S_{2}\)), and exclude the low-lying Rydberg states that are expected to be present but appear to play no significant role in the quenching process.[56; 57; 72] In FIG. 1, we plot the population correlation function for the three electronic states obtained using the Wigner population estimator in the MMST mapping framework. We find that the initial photo-excited \(S_{1}\) state decays to about 50% of its original population on a 60-80 fs timescale, in good agreement with previous theory[30; 31; 59] and experiment.[70] This is accompanied by a rapid but small population transfer to the higher lying \(S_{2}\) state that, in turn, decays back to the ground state in about 100 fs. Analysis of the trajectory ensemble employed in our LSC simulation yields insights into the coupled nuclear motions that drive electronic state transitions. In FIG. 2, we show the timeline on which key ethylene molecular structures appear as trajectories enter regions with a near-degeneracy between two electronic states, defined here as \(E_{S1}-E_{S0}<0.2\) eV and \(E_{S2}-E_{S1}<0.2\) eV. We find that there are four primary structures, as suggested by previous work.[55; 73] These structures are shown in FIG 3: twisted, pyramidized (pyra-midalization of one or both CH\({}_{2}\) groups), H-migration, and ethylidene. The twisted geometry dominates at early times (\(\sim 10\) fs) weakening the \(\pi\) electronic structure and enabling population transfer from the \(S_{1}\) to the doubly excited \(S_{2}\) state as seen in FIG. 1. The weakening of the \(\pi\) bond also facilitates the formation of the remaining three structures that occur at near-degeneracies between the \(S_{1}\) and \(S_{0}\) states. Pyramidalization and H-migration appear at relatively early times (\(<80\) fs) while the formation of ethylidene occurs at later times and persists, in keeping with experiments that see signatures of this structure up to 600 fs.[58] ### Photodissociation pathways In addition to isomerization, we observe a significant number of trajectories that describe photodissociation of ethylene upon excitation to \(S_{1}\) as documented in Table 2. This is in keeping with both previous experimental work[60; 61; 62; 63; 67; 68; 69; 70] and some theoretical simulations.[74; 75; 76] \begin{table} \begin{tabular}{c c c} \hline \hline \(M\) (Eq. 16) & \multicolumn{2}{c}{Sampling radius, \(r_{k}\)} \\ \cline{2-3} & Occupied & Unoccupied \\ \hline \(\mathcal{W}\) (Eq. 
10) & 1.55892a & 1/2 \\ SC (Eq. 11) & 3 & 1 \\ SM (Eq. 14) & 8/3 & 2/3 \\ \hline \hline \end{tabular} \end{table} Table 1: The radii of the sampling functions for the three electronic population estimators (\(\mathcal{W}\), SC, and SM) for a system with three coupled electronic states. After photoisomerization, we see the elimination of molecular H\({}_{2}\) as the most significant channel for photodissociation. In analyzing this subset of trajectories, we can identify the structures that are the major precursors of H\({}_{2}\) molecule production: about 39% of the trajectories produce H\({}_{2}\) and acetylene (HCCH) from the ethylidene structure shown in FIG. 4a, while the majority (52%) produce H\({}_{2}\) and vinylidene (H\({}_{2}\)CC:) from the pyramidalized geometry shown in FIG. 4b. These findings reproduce the two channels for H\({}_{2}\) molecule elimination identified by experiments, and the intermediate structures we report have been previously characterized by transition state calculations using electronic structure.[67; 68] A smaller but statistically significant number of our trajectories as reported in Table 2 also describe the elimination of 2 H atoms, the first leading to formation of the vinyl radical (H\({}_{2}\)CCH), and the subsequent H atom elimination yielding acetylene. We note that while this mechanism is in keeping with experimental studies,[67; 68] a very low but persistent yield (2%) of the vinyl radical has been observed, suggesting that it is possible to stop with the elimination of a single H-atom. We find \(\sim 5\%\) of trajectories in our simulation correspond to the single H atom dissociation channel, which is in reasonable agreement with this experimental observation. Although experimental measurements of the H\({}_{2}\):H branching ratio at 157 nm excitation differ slightly, \(0.44:0.56\) in Ref. [63] and \(0.3:0.7\) in Ref. [67], both consistently find higher yields of H atoms than H\({}_{2}\) molecule. The LSC-IVR simulations do not reproduce this ratio, yielding H\({}_{2}\):H \(=0.66:0.34\), an error we attribute to the level of electronic structure theory employed rather than to the dynamics, as discussed extensively in the context of C-C bond cleavage.

Figure 1: The population of the three electronic states obtained from LSC-IVR simulations that employ the MMST mapping with Wigner estimator are shown. The initial photo-excited \(S_{1}\) state population is shown in red, the doubly excited \(S_{2}\) state population is shown in blue, and the ground \(S_{0}\) state population is shown in green. These results were obtained by averaging the number of the isomerization trajectories shown in Table 2. The error bars in the plot are obtained from the averages of 3 subsets of trajectories, each containing 48 trajectories.

Figure 4: The structures at transition states leading to H\({}_{2}\) elimination: (a) ethylidene resulting in a H\({}_{2}\) molecule and acetylene (HCCH) and (b) pyramidalized structure resulting in a H\({}_{2}\) molecule and vinylidene (H\({}_{2}\)CC:).

Figure 3: The representative geometries of conical intersections encountered in the quenching of the ethylene excited state through isomerization: (a) twisted, (b) pyramidalized, (c) H-migration, and (d) ethylidene.

Figure 2: Characterization of molecular structures observed in 300 trajectories from the conventional LSC-IVR simulation when the \(S_{2}\)-\(S_{1}\) and \(S_{1}\)-\(S_{0}\) energy gaps are small (\(<0.2\) eV).
The observed structures can be classified into the four structures associated with the ethylene conical intersections (FIG. 3). The \(S_{2}\)-\(S_{1}\) small energy gaps are characterized mostly by the twisted structure and the \(S_{1}\)-\(S_{0}\) gap by the H-migration, pyramidalized, or ethylidene structures.

Finally, we note that in Table 2, we show that 20% of our trajectories result in unphysical C-C bond cleavage for which, to our knowledge, there is no experimental evidence other than in pump-probe or high energy ionization experiments,[58; 71] where the cleavage is explicitly targeted. It is important to identify the source of this error as due to either the nature of the dynamics employed or the underlying electronic structure. To unravel this, we start by noting that the initial excitation energy in our simulation obtained with SA2-CASSCF(2o,2e) with 6-31\(+\)G\({}^{*}\) basis, \(\langle\Delta E_{S_{1}S_{0}}(t=0)\rangle\), is 9.77 eV as reported in Table 2, a number significantly above the C-C bond dissociation energy of 7.7 eV at this level of theory. Notably, our \(\langle\Delta E_{S_{1}S_{0}}(t=0)\rangle\) is also significantly above the experimentally quantified Franck-Condon excitation energy of 7.6 eV.[76] We find that using a more extensive basis (aug-cc-pVDZ) reduces the calculated excitation energy to 8.86 eV, and further using a second-order perturbative energy correction (XMCQDPT) yields 7.63 eV, a value in close agreement with experiment. Unfortunately, at present, we cannot calculate the nonadiabatic coupling matrix element (NACME) in GAMESS with the perturbation correction, so in order to test the dynamics we run an additional 50 trajectories using the aug-cc-pVDZ basis set. We find that the number of trajectories that exhibit C-C bond cleavage drops significantly from 20% to only 10%. This suggests that the underlying source of error leading to unphysical trajectories in this case can indeed be attributed to the level of electronic structure theory. The dependence of the nature of trajectories on the initial energy can also be seen by simply removing high energy trajectories: when we do not include trajectories with \(\Delta E_{S_{1}S_{0}}(t=0)>10\) eV, the fraction of trajectories exhibiting C-C cleavage drops by 10%, while that of isomerization increases by 10% and other categories remain almost unchanged. ### Comparing the three different variants of LSC-IVR In discussing the results of our _ab initio_ study of photoisomerization and dissociation in ethylene, we confined ourselves to interpreting the results from the LSC simulation with MMST mapping and Wigner estimators. We now motivate this choice and further provide a detailed discussion of the three variants of LSC explored here, paying particular attention to energy conservation, observed reaction channels, and electronic population dynamics. Table 2 summarizes the breakdown of the trajectory types obtained in the three different LSC implementations. We note that the energy conservation along a trajectory is generally considered poor in _ab initio_ implementations due to the self-consistent field calculations at every time step.[31] For this reason, the effect of the initial conditions on the energy conservation of resulting trajectories can be masked by that of the electronic structure calculations. In FIG. 5, we histogram the energy jump between consecutive time steps for all 300 trajectories generated in each LSC implementation.
Recall that trajectory initial conditions for the nuclei are identical in all three implementations as is the Hamiltonian for dynamics; they only differ in the way the electronic mapping variables are sampled, and in the form of the population estimator employed at time \(t\). Interestingly, FIG. 5 does capture the dependence on initial conditions -- we find that 92% of the trajectories in MMST mapping with Wigner estimator exhibit energy jumps of less than 1 meV, whereas this number drops to 82% for the MMST with the SC estimator and to 88% for spin mapping. All three simulations also yield fragmentation trajectories that correspond to H\({}_{2}\) molecule elimination and H-atom dissociation, two channels that have been observed experimentally. All three also overestimate the likelihood of C-C bond dissociation but, as discussed, we believe this error can be attributed to the electronic structure rather than the dynamics themselves. All three LSC implementations exhibit qualitatively similar population dynamics and rates of quenching: the \(S_{2}\) state is quickly populated \(\sim 10\) fs after photoexcitation, and it takes 60 to 80 fs for 50% of the \(S_{1}\) population to decay to the ground state, as shown in FIG. 6. In comparing the three approaches, we pay particular attention to the issue of negative electronic state populations: it is well known that the classical dynamics employed here to time-evolve the electronic mapping variables preserves the sum of the individual state populations, but does not constrain individual state populations to take on values between 0 and 1. Effectively, the classical dynamics employed here fails to properly constrain the mapping variables to the quantum mechanically allowed phase space.[77] Although individual trajectories might explore unphysical values of state population in all three LSC implementations, we use the ensemble average populations shown in FIG. 6 to identify the 'best' choice; it is clear that LSC with MMST mapping and Wigner populations as well as spin mapping LSC yield ensemble average state populations that are between 0 and 1, whereas the SC estimator in the MMST mapping framework yields a significant negative value for the \(S_{2}\) state population. Based on previous studies, we expected spin mapping to significantly outperform the MMST mapping approaches, but the trajectory breakdown in Table 2 and the energy conservation shown in FIG. 5 make it clear that this is not necessarily the case.

Figure 5: The absolute values of jumps in the total energy between two consecutive time steps in 300 simulated trajectories are compiled in a histogram for the three LSC variants: MMST mapping with Wigner estimator (blue), SC estimator (red), and spin mapping (green).

We find that both in terms of energy conservation and the overall number of C-C bond cleavage trajectories, MMST with Wigner estimator emerges as the better implementation. We further note that careful analysis of energy conservation in all three implementations yields no correlation between the frequency with which individual trajectories exhibit negative populations and trajectories that fail to conserve energy. While this is likely something that should be analyzed on a case-by-case basis, this finding does provide some reassurance that an individual trajectory exhibiting negative electronic state population at a given time does not always lead to unphysical behavior in terms of the overall system dynamics.
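The per-trajectory diagnostics described in this section, energy jumps between consecutive time steps and the sign of ensemble-averaged populations, reduce to a few array operations once the trajectory data are available. The sketch below uses synthetic stand-in arrays with the indicated shapes; it is not the analysis code used for this work, and only the 1 meV threshold is taken from the text above.

```python
import numpy as np

# Synthetic stand-ins: total energy E[traj, step] (eV) and state populations
# pops[traj, step, state] as produced by some trajectory ensemble.
rng = np.random.default_rng(0)
n_traj, n_steps, n_states = 300, 1000, 3
energy = 1.0 + rng.normal(0.0, 2e-4, size=(n_traj, n_steps)).cumsum(axis=1)
pops = rng.dirichlet(np.ones(n_states), size=(n_traj, n_steps))

# Largest jump in total energy between consecutive steps, per trajectory.
max_jump = np.abs(np.diff(energy, axis=1)).max(axis=1)
frac_below_1meV = np.mean(max_jump < 1e-3)
print(f"trajectories with all energy jumps < 1 meV: {100 * frac_below_1meV:.0f}%")

# Ensemble-averaged populations and a check for unphysical negative values.
mean_pops = pops.mean(axis=0)                 # shape (n_steps, n_states)
print("any negative ensemble-average population:", bool((mean_pops < 0).any()))
```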
## IV Conclusion We make the case that employing _ab initio_ classical trajectories within the semiclassical LSC mapping framework shows great promise as an efficient and accurate on-the-fly simulation technique for the study of nonadiabatic processes. In the specific case of ethylene, we show that the use of an ensemble of trajectories in LSC allows us to directly calculate correlation functions, and to identify different, statistically significant reaction pathways with support from previous experimental observations and theoretical simulations. We also explore three different implementations of LSC that differ in the initial conditions used for the electronic mapping variables and the electronic state population estimators. We establish that although all three yield relatively similar population dynamics, MMST mapping with Wigner estimators emerges as the best choice in this study on the basis of energy conservation, positive ensemble average electronic populations, and the relatively small number of C-C bond cleavage trajectories. Given previous studies highlighting the favorable properties of spin mapping, however, further case studies must be made before this observation can be generalized. Finally, we discuss the limitations of this approach at present. Like all _ab initio_ dynamics, it is clear that the level of electronic structure plays a significant role in the overall accuracy of our findings. Setting that aside, we note that while the LSC implementation is streamlined and involves no free parameters such as hopping probabilities, decoherence corrections, or spawning thresholds, there are a few advances that would allow us to move towards even more efficient implementation. Most notably, the development of improved integrators that will allow us to implement dynamics under the symmetrized Hamiltonian with a larger time step, and a more rigorous study of the dependence of mapping variable dynamics on the initial conditions to minimize the number of trajectories that yield negative electronic state populations. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{LSC-IVR variants} & \multicolumn{2}{c}{Wigner + MMST mapping} & \multicolumn{2}{c}{SC + MMST mapping} & \multicolumn{2}{c}{Spin mapping} \\ \cline{2-7} & Count & \(\langle\Delta E_{S_{1}S_{0}}\rangle\) & Count & \(\langle\Delta E_{S_{1}S_{0}}\rangle\) & Count & \(\langle\Delta E_{S_{1}S_{0}}\rangle\) \\ \hline Isomerization & 144 & \(9.65\pm 0.44\) & 47 & \(9.62\pm 0.48\) & 73 & \(9.61\pm 0.46\) \\ H\({}_{2}\) elimination & 78 & \(9.78\pm 0.40\) & 128 & \(9.75\pm 0.42\) & 119 & \(9.75\pm 0.51\) \\ H atom elimination & 22 & \(9.76\pm 0.46\) & 74 & \(9.82\pm 0.47\) & 45 & \(9.71\pm 0.47\) \\ C-C cleavage & 56 & \(10.06\pm 0.41\) & 50 & \(9.93\pm 0.54\) & 62 & \(9.97\pm 0.38\) \\ Other & 0 & – & 1 & – & 1 & – \\ \hline Total & 300 & \(9.77\pm 0.45\) & 300 & \(9.78\pm 0.47\) & 300 & \(9.75\pm 0.48\) \\ \hline \hline \end{tabular} \end{table} Table 2: Results from the 3 variants of LSC-IVR are shown. Successful trajectories are identified as those that undergo experimentally observed pathways - photoisomerization, H\({}_{2}\) elimination, and H atom elimination. C-C bond cleavage is only seen at extremely high excitation energies. The 1 ”other” trajectory observed in the SC + MMST mapping and spin mapping LSC implementations corresponds to an outlier trajectory that results in the molecule separating into individual atoms. 
We also report the average \(S_{1}\) to \(S_{0}\) excitation energy at \(t=0\) for each subset of trajectories in eV as \(\langle\Delta E_{S_{1}S_{0}}\rangle\). Figure 6: We compare the thermal population correlation functions obtained from LSC simulations that employ the MMST mapping with Wigner estimator (”\(-\)”), SC estimator (“\(\blacktriangledown\)”), and the spin mapping (”\(\times\)”). The initial photo-excited \(S_{1}\) state population is shown in red, the doubly excited \(S_{2}\) state population is shown in blue, and the ground \(S_{0}\) state population is shown in green. These results were obtained by averaging the number of the isomerization trajectories shown in Table 2. ## Acknowledgments The authors acknowledge funding through NSF CAREER Award No. CHE-1555205. The authors thank Prof. Ben Levine for helpful discussions about the theory and implementation of AIMS.
2310.12064
Code Book for the Annotation of Diverse Cross-Document Coreference of Entities in News Articles
This paper presents a scheme for annotating coreference across news articles, extending beyond traditional identity relations by also considering near-identity and bridging relations. It includes a precise description of how to set up Inception, a respective annotation tool, how to annotate entities in news articles, connect them with diverse coreferential relations, and link them across documents to Wikidata's global knowledge graph. This multi-layered annotation approach is discussed in the context of the problem of media bias. Our main contribution lies in providing a methodology for creating a diverse cross-document coreference corpus which can be applied to the analysis of media bias by word-choice and labelling.
Jakob Vogel
2023-10-18T15:53:45Z
http://arxiv.org/abs/2310.12064v1
# Code Book for the Annotation of Diverse Cross-Document Coreference of Entities in News Articles ###### Abstract This paper presents a scheme for annotating coreference across news articles, extending beyond traditional identity relations by also considering near-identity and bridging relations. It includes a precise description of how to set up Inception, a respective annotation tool, how to annotate entities in news articles, connect them with diverse coreferential relations, and link them across documents to Wikidata's global knowledge graph. This multi-layered annotation approach is discussed in the context of the problem of media bias. Our main contribution lies in providing a methodology for creating a diverse cross-document coreference corpus which can be applied to the analysis of media bias by word-choice and labelling. Keywords: coreference resolution, diverse coreference annotation, entity annotation, entity linking, media bias analysis, natural language processing. **Jakob Vogel**, M.A. Digital Humanities, Institute for Digital Humanities, Faculty of Philosophy, Georg August University of Göttingen, [email protected] ## 1 Introduction Coreference is the phenomenon of several expressions in a text all referring to the same person, object, or other entity or event as their referent. Thus, in a narrow sense, analyzing a document with regard to coreference means detecting relations of identity between phrases. The following example (1) illustrates such an identity relation, where coreferential expressions are printed in italics: (1) "_Joe Biden_ arrived in Berlin yesterday, but _the president_ did not come alone." In (1), the noun phrase _"Joe Biden"_ introduces a new entity while _"the president"_ relates back to that introducing phrase. Within this relation, the introducing phrase _"Joe Biden"_ is called the **antecedent** while the back-relating phrase _"the president"_ is called an **anaphor**. Both expressions are coreferential in the way that they refer to the same non-textual entity, namely to the actual 'real-world' Joe Biden or at least to a corresponding mental concept. We can think of an antecedent and its anaphora as forming a **cluster** of **mentions** that as a whole represents its extra-textual referent within a textual document, as shown in Figure 1.

Figure 1: Illustration of how a cluster can be formed from an antecedent and its anaphor(s). The cluster represents its referent, in this case Joe Biden, in a text.

As a task of natural language processing (NLP), coreference resolution has become quite efficient in detecting identity relations between phrases. However, reflecting on how we use language to refer to something, we are forced to realize that coreference in a broader sense is actually far more complex. We can address an entity or event by using a variety of expressions that are in fact not strictly identical to each other. Consider the following examples: (2) "_President Biden_ was clearly not satisfied with today's outcome. As _the White House_ stated this afternoon, efforts will be made to..." (3) "Even if _the young Erdogan_ used to be pro-Western, _Turkey's president_ nowadays often acts against Western interests." (4) "The AfD is circulating _a photo of Angela Merkel with a Hijab_, although _Merkel_ never wore Muslim clothes." In these given examples, the highlighted mentions mean 'almost' the same, but not completely. In (2), we are aware by world-knowledge that
"_the White House_" is often used as a substitute expression for the current US president, although the former is a place which in strict terms cannot be identical to the president, who is a person. In (3), on the other hand, both mentions refer to the 'real-world' person Erdogan, but at different time steps. Finally, in (4), a mention representing the person Merkel is juxtaposed with a mention representing a picture of Merkel. While these two mentions could refer to separate entities, the juxtaposition indicates a connection between both where the attributes of the first mention do influence the perception of the second mention. Hence, we would miss essential semantic connections if we chose not to mark them as coreferential. Having said that, the simple classification of two mentions into coreferential (identical) or non-coreferential (non-identical) does not seem to suffice for the complexity of common text data. Instead, we need to allow for **diverse coreference** clusters that include finer-grained relations lying between identity and non-identity. We need to allow for **near-identity** relations to mark two mentions that are partially, but not totally, identical (Recasens et al., 2010). In news coverage, identity and near-identity references are extensively used to report on persons, organizations, and other entities of public interest. It is our goal to build up a corpus that contains annotated examples of such diverse forms of coreference. While diverse coreference occurs in all sorts of news media, we focus on digital print media, only. Furthermore, although in practice both entities and events can act as referent, we ignore references to events for now, as their annotation would go beyond the limits of our present scheme.1 The ordinary business of journalism is to write about current political affairs and other happenings of public interest. These happenings are normally reported by several newspapers at the same time. All of these news articles are considered documents that contain references to the same entities and together form a discourse about them. To include the whole picture of such intradiscursive references, we want our corpus to link document-level clusters with corresponding clusters of other documents of the same discourse. Hence, our corpus is to depict **cross-document coreference** data. On a discourse level, corresponding clusters form discourse entities that themselves can be linked to their non-textual referents by some knowledge graph identifier. For this project, we use Wikidata's Uniform Resource Identifiers (URIs) for entity linking. By doing so, world knowledge is included in the data. This allows for drawing connections even between different discourse entities that refer to a common referent, yet at a different time step or rather in the context of a different happening. Figure 2 illustrates the multiple layers of this annotation model. Footnote 1: Though at a later point, this scheme could be extended to also include the annotation of events (Linguistic Data Consortium, 2005; O’Gorman et al., 2016). In building a corpus for diverse cross-document coreference in news articles, we hope to provide a valuable resource for the evaluation of automated coreference resolution tasks. The contribution of this paper mainly lies in providing an answer to the question of how to create such a corpus. How can diverse coreference relations be annotated in a cross-document setup?
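Before answering this question in detail, the sketch below makes the layered model of Figure 2 concrete by showing one possible in-memory representation of mentions, document-level clusters, and discourse entities linked by a shared Wikidata URI. It is only a schematic illustration under our own assumptions; it is not the data format produced by the annotation tool described in the following sections, and the offsets and document identifiers are made up (following the discourse-identifier/newspaper-abbreviation naming convention introduced later).

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Mention:
    doc_id: str          # e.g. "1_L" = discourse 1, The New York Times
    start: int           # character offsets of the span in the article body
    end: int
    text: str
    entity_type: str     # PER, ORG, GPE, ...

@dataclass
class DocumentCluster:
    global_entity_name: str
    wikidata_uri: str    # e.g. "https://www.wikidata.org/wiki/Q6279"
    mentions: list = field(default_factory=list)

def discourse_entities(clusters):
    """Group document-level clusters into discourse entities via their Wikidata URI."""
    grouped = defaultdict(list)
    for cluster in clusters:
        grouped[cluster.wikidata_uri].append(cluster)
    return dict(grouped)

# Two articles of the same discourse referring to the same real-world entity.
biden_doc1 = DocumentCluster("Joe Biden", "https://www.wikidata.org/wiki/Q6279",
                             [Mention("1_L", 0, 9, "Joe Biden", "PER"),
                              Mention("1_L", 45, 58, "the president", "PER")])
biden_doc2 = DocumentCluster("Joe Biden", "https://www.wikidata.org/wiki/Q6279",
                             [Mention("1_RR", 10, 25, "President Biden", "PER")])
print(len(discourse_entities([biden_doc1, biden_doc2])))  # 1 discourse entity
```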
We believe our scheme as we present it here tackles this problem efficiently, extensively, and unambiguously. Additionally, we would like to use the data resulting from our own annotations for further research in the area of media bias. Even if it plays no direct part in the outlined scheme, a lot of our choices how to annotate references were made because of this requirement to make the data usable for later media bias analysis. Eventually, we hope to contribute to the wider research question of how to identify media bias by word choice and labelling based on the usage of diverse coreference relations in news articles. The following section 2 will further elaborate on this connection between diverse coreference and the problem of media bias analysis. Despite its only subtle impact on our practical annotation instructions, that section means to highlight the theoretical background and motivation behind our project. The sections thereafter will then deal with the actual annotation process. Section 3 will guide coders through the setup and controls of Inception, our selected annotation software. Finally, section 4 will define annotation instructions in three passes while also outlining our typology of diverse coreference.

Figure 2: Illustration of our multi-layered annotation: within several discourses which all consist of multiple news articles reporting on the same happening, document entity clusters are extracted for each document. Those clusters are assigned a Wikidata URI. This ensures an unambiguous identification of each cluster, but it also links each cluster to all other clusters with the same referent within one discourse as well as across discourses. Finally, the linking also adds world knowledge to the annotated data.

The data we use for our own annotations consists of the text bodies of articles that report on the same happenings. All articles are in English and were published by one of the following US-American newspapers: HuffPost (categorized as "Left" by AllSides (2023) or "Skews Left" by Ad Fontes Media (2023), abbreviated in our data as "LL"), The New York Times (categorized as "Lean Left" by AllSides or "Skews Left" by Ad Fontes Media, abbreviated as "L"), USA Today (categorized as "Lean Left" or "Middle or Balanced Bias", abbreviated as "M"), Fox News (categorized as "Right" or "Skews Right", abbreviated as "R"), Breitbart News Network (categorized as "Right" or "Strong Right", abbreviated as "RR").2 Footnote 2: Looking at the political orientation of these newspapers, the data is unbalanced with an underrepresentation of politically centered media. However, at the current state of this project, the imbalance is unlikely to influence our analysis which does not yet target political orientation or media bias itself. Therefore, we will ignore this issue for now. ## 2 Diverse cross-document coreference and media bias analysis Media bias is a multifaceted phenomenon of news coverage that is one-sided, politically shaded, or in some other way non-neutral. It can occur in all sorts of news media, though we focus on digital print media, only. One specific type of media bias is **bias by word-choice and labeling** (Hamborg et al., 2019). Word choice describes the selection from a variety of possible expressions to refer to an entity.
For example, in order to refer to the USA's current head of state, journalists could use one of the relatively neutral alternatives "Joe Biden", "Biden", or "the US president", or in theory, choose a clearly biased expression like "the dictator" (Kurmelovs, 2023). Labeling, on the other hand, describes the assignment of attributes to an expression, inter alia by adding adjectives. Examples for bias by labeling include "an anxious and uncertain president" or "crooked Joe Biden" (Luciano, 2023). Together, word-choice and labelling form a so-called **frame** (Hamborg et al., 2019). In news articles, frames are used in a variety of ways, either for the sake of linguistic diversity or to make certain, potentially biased statements about an entity. To test an article for such statements, all of an entity's frames need to be extracted and evaluated together. Hence, before an article can be properly analyzed with regard to if and how it uses biased frames of (certain) entities, we are first faced with the task of identifying such frames. The identification of all expressions that refer to the same entity is a matter of coreference resolution. To conclude, successful coreference resolution is a prerequisite to any further inquiry of media bias by word-choice and labelling. As already indicated above, automatic coreference resolution does show good results in extracting identity clusters from a document (Liu et al., 2023). However, we have seen that there exist near-identity relations between expressions, potentially even across documents, that would be mostly overlooked by standard coreference resolution approaches (Zhukova et al., 2022). Hence, they would also be overlooked by any media bias analysis that depends on coreference resolution. We hope that our building of a corpus for diverse cross-document coreference will contribute to the analysis of media bias by providing data that contains the full variety of frames used in news articles. Eventually, we would like to test how we can measure media bias by focusing on diverse coreference in news articles. To answer this last question, though, an additional layer of media bias annotation would have to be put upon our coreference data (Spinde et al., 2021, 2021). ## 3 Annotation tool The software we will use for annotation is called **Inception** (Klie et al., 2018). Inception is an open source annotation tool which can be freely downloaded from the authors' GitHub repository. However, for this project, every annotator will be provided with a ready-to-code version of the program with all necessary annotation layers and settings already implemented and some sample annotations included. This instance of Inception can be requested from the project administrator Jakob Vogel. ### Setup To set up Inception on your local computer, make sure you already received your personal instance of the software. If not, please contact the project administrator. Inception comes as a jar file. In order to run it, you need to have the Java Runtime Environment (JRE) installed. Furthermore, make sure the file is set as executable. Then open the directory "Inception" in your command prompt and run the jar file from there (typically with a command of the form `java -jar <name-of-the-inception-jar>.jar`; the exact file name depends on the build you received). Once the application has started, open it in your browser at the local address it reports. To get to the annotation GUI, log in with your personal account now and click on the highlighted project name "Diverse cross-document coreference". Then, in the left taskbar, click on "Annotation". A window opens that shows a list of all documents to be annotated.
The first digit in every title is a discourse identifier that sorts all documents according to their topic, followed by an underscore and a newspaper abbreviation (see Introduction). You can annotate documents in chronological order or randomly, whichever you prefer. Click on one of the documents to start your annotation.

Figure 4: Screenshot of Inception window showing a list of all documents to be annotated.

Figure 5: Screenshot of Inception window showing a not yet annotated document loaded into the annotation GUI.

Figure 3: Screenshot of Inception window showing the user management settings. Make sure to create or activate your own user account here at first login.

### User manual Inception offers a variety of functionalities of which only those relevant for our project are described here. For a full explanation of how to use Inception, please check the official documentation which can be accessed online or from within the Inception GUI by clicking on "Help" in the right upper corner. Every annotator's instance of Inception contains two basic layers of annotation. The first layer, called **Entity layer**, is triggered when a mention is marked by highlighting text with a simple press-hold-drag mechanism. This opens the layer's side panel. Here, annotators can fill in the Entity layer's three parameters: * **Entity-type**: a drop-down list to select a mention's entity-type by clicking on or typing the type's abbreviation. * **Global entity-name**: a mixture of free text-field and drop-down list to assign a global entity's name to a mention. If the name has already been used before, it can be selected as an item from the drop-down list by again clicking or typing. If not, it can be freely typed, which adds it as a new tag to the list. * **Wikidata**: a search field to type the name of an entity and find its respective Wikidata URI. The second layer, called **Relation**, is triggered when two already marked mentions are connected to each other, again simply by clicking and holding on one mention and dragging the mouse to the other mention. This layer only contains one parameter which is named **Label**. It is a drop-down list to select a relation-type for labelling the connection between both mentions. After the first annotations have been made, Inception starts to suggest spans and values for new annotations on the Entity layer. These suggestions are displayed in gray boxes. One click on a box accepts the suggestion and turns it into a proper annotation, while a double-click denies the suggestion and makes the box disappear. The GUI's upper panel is mostly for navigating through the document. However, it also contains a button for resetting the document by deleting all annotations made so far and a button in the shape of a padlock to mark the annotation process of the document as finished. This button should be pressed at the very end of the annotation, though it is advisable to first annotate each document before marking all of them together as officially finished. Clicking on the gear wheel opens up the GUI's style settings. Here, annotators have the option to adjust panels' margin sizes, the colouring of annotations, and how many text rows are to be displayed simultaneously. Annotations are saved automatically, which is why there exists no saving button in the GUI. ## 4 Annotation guidelines Annotators will read each article three times and focus on a different annotation task in each pass: in the first pass, only read the text to get an overview of it. Do not make any annotations, yet.
In the second pass, mark mentions with identity-relations, assign an entity to them and link them to Wikidata. In the third pass, annotate near-identity and bridging relations between mentions. ### First pass: get familiar with the text Read the entire text carefully. Try to already pay attention to what entities are mentioned, but do not annotate them, yet. ### Second pass: annotate mentions with identity-relations Read the text for a second time. Identify potential coreference candidates. Wherever a referent is referred to by at least two identical mentions, annotate these and all subsequent mentions respectively. Do this as follows: * First check if a candidate is markable: * In general, only **noun phrases (NPs)* * are markable. This includes nominal phrases ("the president"), proper names ("Mr. Biden"), and quantifier phrases ("all member states"). * For reasons of efficiency, most pronominal NPs are excluded from annotation because they normally carry little variation with regards to how they are labelled (Zhukova et al., 2021). However, certain types of pronouns can be included not as head, but as modifier for another NP, e.g. demonstrative pronouns ("this man") and reflexive pronouns ("the president himself"). Figure 6: Annotating a mention of ”Donald Trump”: in the right panel, annotators can fill in values for the Entity layer’s three parameters Entity-type, Global entity-name, and Wikidata. Automatically suggested annotations are displayed in gray boxes above the text rows. Figure 7: Annotating a relation between two mentions: the mention ”North Korea” is connected to ”North and South Korea” with a meronymy-relation (MER). * Numbers like currency expressions ("E2.3 billion") and percentages ("19% of the votes") are included, but dates of any kind ("January 23", "1996", "this Sunday") are excluded for now. * Given coreferential conjunctions that mention several entities at once and, syntactically, cannot be split ("North and South Korea"), first mark everything that could be extracted as single-entity mention separately (possible for "South Korea", but not for "North"), then mark the entire conjunction. Use a MER-relation to connect mentioned entities with the conjunction (see description of the MER-relation in subsection 4.3). * Then check if the candidate you want to annotate is truly identical to other mentions of the same referent. To do so, compare it to the referent's most previous mention. In case no mention of the referent has been annotated so far, simply compare the two candidates triggering the annotation: * Identity between two mentions means that both refer to the same entity in almost the same way. In comparison to the first mention, the second one may provide additional information about the referent or only highlight a subset of its attributes, but new and old attributes may not contradict each other (Recasens et al., 2010). * When in doubt, ignore all modifiers and **focus on the heads* * of both mentions to check if they are identical. * If the candidate is markable and identical to previous mentions, start your annotation. First, mark the mention: * We annotate mentions with a **maximum span style**. This means that for each candidate, the NP's head and all of its pre- and post-modifiers are included in the annotation. 
More precisely, this includes articles ("a", the"), adjectives ("a worried president"), other NPs ("US president Joe Biden"), appositives ("Joe Biden, president of the United States"), prepositional phrases ("demonstrators in front of the White House"), and relative clauses ("Biden, who was elected president in 2020") (Hirschman and Chinchor, 1998). Any punctuation or white space at the very beginning or end of the span are excluded. * Additionally to maximum span style, we annotate with **nested style**, meaning a mention's span may overlap with or contain another mention. But remember not to mark any mention you discover, but only those who actually participate in coreference! * After selecting the correct span, assign an **entity-type* * to a marked mention by choosing from the layer's respective drop-down list. We distinguish between the following entity-types: PER, ORG, GRP, GPE, LOC, OBJ. * (PER): an individual actor. * (ORG): an official organization that is not government-related, e.g. "the WHO", "Fox News", "the opposition". * (GRP): a group of individuals acting collectively or sharing the same properties, e.g. "demonstrators", "unemployed beneficiaries", "the two leaders". * **Geo-political entity* * (GPE): a state, country, province etc. that comprises a government, a population, a physical location, and a nation (Linguistic Data Consortium, 2008). This includes clusters of GPEs, e.g. "Eastern Europe" or "the Arab League". Governmental organizations or locations that represent an entire GPE are also marked as GPE, e.g. "the US government", "US officials", "the Biden administration", "Washington", "the White House". * (LOC): a physical location that is not a GPE, e.g. "Los Angeles". This includes mentions like "Germany" or "the White House" when referred to not in a political way, but with a focus on its geographic, cultural, architectural and other locality attributes. Be aware that two mentions with the same textual representation but different entity-types are not to be marked as identical! Instead, most of such cases would imply a MET-relation. * (OBJ): an object or other concept that is mentioned, e.g. "Biden's hands", "a submarine", "the results". However, objects are static concepts. Do not confuse them with NPs that express events or other changes of state ("election", "negotiations", "Biden's statement") which we do not annotate! * Now it is time to assign the mention to an entity cluster. With this step, you create or extend a local coreference chain. At the same time, you link it with corresponding discourse entities across documents and globally with its actual referent. Footnote 2: This is the case that, in the present document, you already have annotated previous mentions of the same entity, you will also already have created a local coreference cluster. The cluster will already be linked to a global discourse entity and to a referent. To assign the current mention to that cluster, select the global entity's name from the respective drop-down list. The Wikidata field can be left empty.3 Footnote 3: This is to save time. As the cluster will already be linked, assigning a Wikidata entry to every additional mention would be redundant work. * If, on the other hand, no previous mentions have been annotated, you are faced with two identical mentions you want to create a new local cluster of. To do this, first fill in the fields of the first mention. * Begin with the Wikidata field and type in the referent's name. 
Inception now looks for a suiting Wikidata entry and displays a drop-down list with the search results. Select the correct entry from that list. To enhance search results, try to look for the entity's most neutral name, ignoring articles. Sometimes it is easier to look for the entry on the Wikidata website itself and then copy its name into the field. If no Wikidata entry exists, leave the field empty. * Assuming you have found a Wikidata entry, copy the text displayed in the Wikidata field into the Global entity-name field. By doing this, the name will automatically be added to the underlying tag set, meaning you will be able to select it from the drop-down list in subsequent annotations. However, if you have not found a Wikidata entry, copy the mention's text, again with maximum span style, into the Global entity-name field. Use this text as name for any following coreferential mentions. If the name has already been used for a semantically different entity in another document, add the document ID to the new name.4 Footnote 4: The following example illustrates this: let us assume you have annotated several mentions with the name ”_demonstrators_” in a previous document. Now, while annotating document ”0_L”, you are faced with an entity that would also have to be given the Global entity-name ”_demonstrators_”, although it refers to a semantically different group of people. In this case, do not change your annotations of the previous document, but do use the Global entity-name ”_demonstrators_0_L” in the current document. * Now turn to the second mention and annotate it based on the previous one. That is, assign the Global entity-name while leaving the Wikidata field empty. ### Third pass: annotate mentions with different relations Read the text for a third time. Wherever you see two mentions connected through a near-identity relation, make a respective annotation: * For every new mention that has not been marked in the second pass already, check if it is markable and annotate it with its correct span and entity-type as described above. However, leave the Global entity-name and Wikidata field empty. * When both mentions are marked with the correct span and entity-type, connect them with one of the following **near-identity relation-types**: MET, MER, CLS, STF, DEC, BRD (Recasens et al., 2010; Spala et al., 2019; Clark and Bangerter, 2004; Nedoluzhko et al., 2009). * (MET): In a MET-relation, in comparison to its antecedent, an anaphor highlights different facets of an entity. This includes facets like: * a certain role or function performed by an entity. Consider example (5). (5) "Although _Biden_ is _head of the Democrats_, he is also _president of all Americans_." Assuming "_Biden_" has already been annotated as part of a respective cluster in the second pass, "_head of the Democrats_" and "_president of all Americans_" would now be connected to "_Biden_" with a MET-relation. However, in this example, it is the juxtaposition of both roles in particular that makes this a case of metonymy. In a more regular context, naming one of these roles alone could be annotated in the second pass as identical mention, instead. * a location's name to refer to an associated entity, e.g. "_Washington_" as metonym for "_the US government_", "_China_" for "_the Chinese government_", "_Silicon Valley_" for "_the Tech industry_". * an organization's name to refer to an associated place, e.g. a bank's name like "_ECB_" to refer to the building that contains that bank's headquarters. 
* different forms of realization of the same piece of information, like in example (6), where the same content is manifested once as audible speech and once as written text. (6) "Though it is questionable whether he had actually written _the piece_ himself, Macron gave _a truly brilliant speech_ this afternoon." * representation, where one mention is a picture or other representation of an entity, as already seen in example (4). (4) "The AfD is circulating _a photo of Angela Merkel with a Hijab_, although _Merkel_ never wore Muslim clothes." * other facets, since this is no exhaustive list and metonymy is a dynamic phenomenon. * given two ID-clusters that are metonymous to each other (e.g. several mentions of "the US president" and several mentions of "the White House" which often participate in metonymy together), do not connect every single mention of the latter to a mention of the former, but only do this for the latter's first truly coreferential mention. * (MER): A MER-relation between two mentions indicates that: * one mention is a constituent part of the other in whatever direction, as in example (7). (7) "_President Biden_ expressed his concern about the ongoing... '_The US government_ will not...', he stated." * one mention refers to an object which is made of the stuff which the other mention refers to. (8) "The duty on _tobacco_ has risen once again, making _cigarettes_ as expensive as never before." * both mentions refer to overlapping sets. (9) "_AfD supporters_ demonstrated in front of the Reichstag this morning. Among _the crowd_ was..." * finally, a MER-relation can be used to specify entities mentioned in syntactically non-dividable conjunctions. Given such a conjunction, as "North and South Korea" in example (10), mark "South Korea" separately as it can be treated as independent noun phrase. The adjective phrase "North", however, cannot be marked. Instead, mark the entire conjunction and connect "South Korea" to it with a MER-relation (illustrated by the dotted underlining). Do the same for the first full mention of "North Korea" that follows in the text. If none follows, use a previous mention or, if there is none, ignore the "North"-mention. (10) "_North and South Korea_ have resumed negotiations... _North Korea_ seems..." * **Class** (CLS): a CLS-relation indicates an 'is-a' connection between two mentions. One mention thus belongs to a sub- or superclass of another. (11) "In way, _Trump_ only seized the opportunity. This is what _skilled politicians_ do." * **Spatio-temporal function** (STF): a mention refers to an entity that deviates in place, time (3), number, or person (12). (3) "Even if _the young Erdogan_ used to be pro-Western, _Turkey's president_ nowadays often acts against Western interests:" (12) "A historic meeting: _a pope_ and _a pope_ shaking hands." * **Declarative** (DEC): where two mentions X and Y are connected through verbal phrases like "X seems like Y", "stated that X was Y", "declared X Y", or other declarations as in (13), they can be connected with a DEC-relation. (13) "In his speech, he also spoke about _North Korea_ and called it _a fundamentally barbaric nation_." The DEC-relation thus includes definitions and descriptions of entities. This is especially the case when declarative clauses are used within quotes. However, when value-free declarative clauses like "X is Y" are used as quasi objective specifications of an entity, they might indicate an identity relation, instead. 
The same structure might be used to assign a super-class to the entity, making it a CLS-relation. * **Bridging** (BRD): for reasons of simplicity, we have included BRD in our subsumption of different relation-types under the term of near-identity. Despite of that, BRD is actually a separate phenomenon from both identity and near-identity. BRD connects two entities that are mostly independent of each other while nonetheless, the existence of one can be inferred by the existence of the other [10]. Technically, the BRD-relation could be used to mark all sorts of ontological connections between entities. This is not the purpose of this annotation scheme, though. Instead, we use BRD only where the mention of one entity influences the depiction of an associated entity or where one entity is modified by a possessive pronoun that refers to another entity. Example (14) illustrates both use cases: [(14)] "Unlike _Queen Elizabeth_, _Charles_ has not been shy about promoting _his political views_." Here, the NP "_his political views_" contains a modifying possessive pronoun, which is why it is to be annotated as bridging to "_Charles_". Additionally, the mention "_Charles_" can only be interpreted correctly as referring to Charles III (and not any other Charles) by its juxtabosition with the NP "_Queen Elizabeth_". Hence "_Charles_" is to be annotated as bridging to "_Queen Elizabeth_". * Deciding on what relation-type to choose can be difficult. When in doubt, follow these general guidelines: * use an identity relation rather than a near-identity relation (especially DEC). * when having to choose between near-identity relations, use MET rather than MER. * use MER rather than CLS. * use CLS rather than DEC. * use any near-identity relation that is not BRD rather than BRD. * When annotating near-identity and bridging, always connect an anaphoric mention to the nearest possible antecedent. But remember that antecedents normally appear before an anaphor. Only if necessary you may connect a mention to a subsequent expression (making their relation cataphoric). ## 5 Conclusion and future work Our proposed annotation scheme covers a multitude of coreferential relations. It gives a detailed explanation of how to mark coreferential mentions across documents, assign entity-types and names to them, connect them with each other, and link them to the Wikidata knowledge graph. The scheme thus represents a significant step toward more accurately capturing the complexities of coreference use. It furthermore provides a valuable resource for researchers both in the field of coreference resolution and media bias by word-choice and labelling. Having said that, our scheme leaves room for possible extensions to further advance research in those domains. First, the annotation of events could be included in our scheme. An interesting question that arises is whether the relationships as outlined here could be applied not only to entities, but to events all the same. A second possible extension would be to include a layer of media bias annotation to the scheme, enabling a direct comparison of diverse coreference usage and media bias by word-choice and labelling. Both proposed extensions could be easily added on top of our scheme. Having said that, the present form of our scheme already addresses many of the complexities of diverse cross-document coreference and offers a roadmap for capturing nuanced linguistic relationships, ultimately advancing our understanding of language and discourse in digital print media. 
## 6 Acknowledgements Many thanks to my project supervisor Anastasia Zhukova, who never tired of my many questions and always knew how to help me out with good advice whenever I felt stuck.
2303.05420
Kernel Regression with Infinite-Width Neural Networks on Millions of Examples
Neural kernels have drastically increased performance on diverse and nonstandard data modalities but require significantly more compute, which previously limited their application to smaller datasets. In this work, we address this by massively parallelizing their computation across many GPUs. We combine this with a distributed, preconditioned conjugate gradients algorithm to enable kernel regression at a large scale (i.e. up to five million examples). Using this approach, we study scaling laws of several neural kernels across many orders of magnitude for the CIFAR-5m dataset. Using data augmentation to expand the original CIFAR-10 training dataset by a factor of 20, we obtain a test accuracy of 91.2\% (SotA for a pure kernel method). Moreover, we explore neural kernels on other data modalities, obtaining results on protein and small molecule prediction tasks that are competitive with SotA methods.
Ben Adlam, Jaehoon Lee, Shreyas Padhy, Zachary Nado, Jasper Snoek
2023-03-09T17:11:31Z
http://arxiv.org/abs/2303.05420v1
# Kernel Regression with Infinite-Width Neural Networks ###### Abstract Neural kernels have drastically increased performance on diverse and nonstandard data modalities but require significantly more compute, which previously limited their application to smaller datasets. In this work, we address this by massively parallelizing their computation across many GPUs. We combine this with a distributed, preconditioned conjugate gradients algorithm to enable kernel regression at a large scale (i.e. up to five million examples). Using this approach, we study scaling laws of several neural kernels across many orders of magnitude for the CIFAR-5m dataset. Using data augmentation to expand the original CIFAR-10 training dataset by a factor of 20, we obtain a test accuracy of 91.2% (SotA for a pure kernel method). Moreover, we explore neural kernels on other data modalities, obtaining results on protein and small molecule prediction tasks that are competitive with SotA methods. ## 1 Introduction Kernel methods are often contrasted with deep learning, but recent advances in machine learning have identified and developed exciting correspondences (Lee et al., 2018; Matthews et al., 2018; Jacot et al., 2018). While a useful method in its own right, kernel regression has been used to better understand neural networks and deep learning. More specifically, if the parameters of a neural network are treated as random variables whose distribution is set by the initialization, we can view the neural network as a random function. Then as the width of the network becomes large, the distribution of this random function is a Gaussian process with a specific covariance function or kernel. We refer to kernels that arise from this connection with infinite-width neural networks as _neural kernels_. The specific kernel is determined by the architecture, inference type, and other hyperparameters of the neural network. Moreover, the connection between neural networks and Gaussian processes has generated many high-performance kernels for diverse or nonstandard data modalities, such as images, sequences, and graphs. This performance often comes at a cost, as the kernels require significantly more compute than standard kernels such as RBFs. For example, computing the entire kernel for the CIFAR-10 dataset takes less than 1 GPU minute for an RBF kernel but around 300 GPU hours for the Myrtle kernel (Shankar et al., 2020; Lee et al., 2020). However, this increase in compute significantly decreases the test error rate from around 40% to 10%. The added demands of simply computing entries of the kernel is in addition to challenges posed from the cubic scaling in time and quadratic scaling in memory of inference for kernel regression with dataset size. Approximate inference methods frequently reduce memory requirements by recomputing kernel entries on the fly, which is infeasible for these expensive kernels. Such challenges have limited our understanding of infinite-width models to small datasets. In particular, while scaling laws have been studied across many orders of magnitude for neural networks, the same is not true of their corresponding kernels. Similarly, while it is common to augment training datasets in neural networks, significantly expanding their size, the benefits for kernel methods have not been as thoroughly explored. 
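As a concrete, illustrative aside (not part of the original paper text): a single block of such a neural kernel can be evaluated with the open-source Neural Tangents library cited below. The toy architecture, batch sizes, and option names in this sketch are our own assumptions and are far simpler than the tuned Myrtle kernels studied in this work.

```python
import jax.numpy as jnp
from neural_tangents import stax

# A small convolutional tower with average pooling (illustrative only;
# not the 10-layer Myrtle architecture used in the experiments).
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Conv(64, (3, 3), padding='SAME'), stax.Relu(),
    stax.Conv(64, (3, 3), padding='SAME'), stax.Relu(),
    stax.GlobalAvgPool(),
    stax.Dense(10),
)

x1 = jnp.ones((4, 32, 32, 3))   # a batch of CIFAR-sized images
x2 = jnp.ones((6, 32, 32, 3))   # a second batch

# One call returns one (4, 6) block of the NNGP and NTK kernel matrices;
# the full train/train kernel is assembled from many such blocks.
k = kernel_fn(x1, x2, ('nngp', 'ntk'))
print(k.nngp.shape, k.ntk.shape)
```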
### Contributions In this work, we address these two main computational challenges (computing large, complex kernels and using them for kernel regression) by parallelizing and scaling up existing algorithms to many more machines. This enables us to consider significantly larger datasets, currently up to five million examples, and therefore study how performance changes as more data are added over many orders of magnitude. While similar studies have been performed for neural networks, they have been lacking for neural kernels. In addition to scaling to larger datasets, we also consider high resolution images from the Tiny Imagenet dataset, where additional pixels also require more compute. Moreover, our approach is not restricted to image data. In fact, we obtain results for both protein sequence and small molecule datasets and demonstrate that neural kernels are a promising method for medium sized datasets from basic science. Our contributions include: * By massively parallelizing the computation of the neural kernels, we study kernels on significantly larger (by approximately two orders of magnitude) datasets; * We use a distributed, preconditioned conjugate gradients algorithm to perform inference for these kernels (with code provided in the supplementary materials); * We demonstrate scaling laws across several orders of magnitude for fully-connected, CNN-Vec, and Myrtle kernels on the CIFAR-5m dataset; * We study the loss in performance incurred by approximating the linear system from inference compared to conjugate gradients; * Using data augmentation to expand the original CIFAR-10 training dataset by a factor of 20, we obtain a test accuracy of 91.2% (SotA for a pure kernel method); * We explore other data modalities, obtaining results on protein and small molecule prediction tasks that are competitive with SotA methods. ### Related work The contributions of this work are made possible by recent advances in large scale inference for Gaussian processes and kernel regression and advances in the understanding of the relationship between GPs and deep neural networks. Wang et al. (2019) showed how to solve the large-scale linear systems within GPs using conjugate gradients (CG), which requires only matrix-vector operations with the kernel and doesn't require storing the full kernel in memory. Wang et al. (2019) and Maddox et al. (2022) identify the importance of using a pre-conditioner (a partially-pivoted Cholesky decomposition) to solve the system with finite precision. We use a similar CG algorithm to solve our systems and found that the pre-conditioner was critical to convergence, particularly for the non-stationary neural kernels. Due to their eigenspectra, we unfortunately found we required high precision on CPU. In this work, we use these methods to solve even larger systems, up to 5 million examples, but emphasize that the most computationally expensive component is computing the expressive neural kernels. Many methods have been developed to approximate Gaussian processes on larger datasets. Rahimi and Recht (2007) show that stationary kernels can be approximated using a finite set of random basis functions. Recently there has been progress in random-feature approximations for expressive, non-stationary kernels using sketching (Zandieh et al., 2021; Han et al., 2022). While this method provides efficient ways for approximating neural kernels, often there is speed-performance tradeoff (Han et al., 2022), thus our work focuses on exact computation. 
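For contrast with the exact computation pursued here, the random-feature construction of Rahimi and Recht (2007) for a stationary RBF kernel can be sketched in a few lines of NumPy. This is an illustrative aside with our own placeholder sizes; no such cheap construction is used for the non-stationary neural kernels in this paper.

```python
import numpy as np

def rbf_random_features(X, num_features, gamma, rng):
    """Random Fourier features phi(x) such that phi(x) @ phi(y) approximates
    the stationary kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3072))          # e.g. flattened 32x32x3 images
Phi = rbf_random_features(X, num_features=2048, gamma=1e-3, rng=rng)
K_approx = Phi @ Phi.T                     # approximates the 1000 x 1000 RBF kernel
```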
Stochastic variational inference (SVI) (Hensman et al., 2013) is a tremendously promising approach to scale GPs by optimizing a set of inducing points to approximate the full GP posterior. However, Wang et al. (2019) found that their SVI baseline underperformed exact GP inference in their experiments. We found that SVI was computationally infeasible due to taking gradients through the neural kernels, and using subsets of data as inducing points was less-effective than the full kernel. Other approaches, such as KISS-GP (Wilson and Nickisch, 2015; Stanton et al., 2021), cleverly interpolate over a grid of inducing points, taking advantage of Toeplitz structure, to drastically reduce computational cost. Rudi et al. (2017) develop FALKON, which uses Nystrom and random feature approximations with pre-conditioning to scale up kernel regression to millions of examples with theoretical guarantees on performance. This is extended to potentially billions of examples by Meanti et al. (2020), through distributed GPU acceleration and various optimizations. EigenPro and EigenPro2 (Ma and Belkin, 2017, 2019) accelerate and scale up kernel regression by developing pre-conditioned stochastic gradient descent. While we observe that neural kernels scale remarkably well in terms of predictive performance, using these methods could make them more computationally practical. However, we found that we have required the full kernel at high precision to obtain high performance. The correspondence between random functions from the initialization of infinite-width neural networks and Gaussian processes, called the neural network Gaussian process (NNGP), was first noted by Neal (1994) and analytically derived by Williams (1996) for single-hidden layer, fully-connected networks. More recently this was extended to deep neural networks for fully-connected (Lee et al., 2018; Matthews et al., 2018) and convolutional architectures (Novak et al., 2019; Garriga-Alonso et al., 2019). This correspondence to Gaussian processes can be extended to infinite-width networks trained by gradient descent via the Neural Tangent Kernel (NTK) (Jacot et al., 2018) and to many other architectures, (Yang, 2019, 2020) including self-attention layers (Hron et al., 2020) and simple graph neural networks (Du et al., 2019). The covariance functions or kernels associated to these Gaussian processes are determined by the neural network's architecture and other hyperparameters from initialization, and we refer to them as neural kernels. While these kernels are theoretically useful to better understand the functional prior of neural networks, including their uncertainty properties (Adlam et al., 2020), they have also been used more practically. The NTK for convolutional architectures has been applied to image data (Arora et al., 2019; Li et al., 2019; Shankar et al., 2020) and offers unique solutions to practical deep learning problems such as data distillation (Nguyen et al., 2021, 2021). The JAX-based (Bradbury et al., 2018) python library, Neural Tangents (Novak et al., 2020), has helped to enable these practical applications by providing efficient computation of neural kernels on CPU, GPU and TPU. ## 2 Approaches to large-scale kernel methods In this section, we review the different approaches to large-scale kernel regression. We discuss the particular challenges introduced by neural kernels and how we addressed them. Finally, we compare the performance for the CIFAR-5m dataset with 10-layer Myrtle NTK. 
There are two main approaches to applying kernel methods to larger datasets, i.e. to solve a large linear system. First, the original linear system can be replaced by an alternative linear system that has a particular algebraic form that can be solved exactly and efficiently. Second, the original linear system can be maintained but only solved approximately.1 These approximate solvers are often anytime algorithms, where the current solution can be returned at anytime--typically when its residual is sufficiently small. Footnote 1: Note that some approaches can be interpreted in both ways. Approximations to the linear system:Many methods fall under this category. Here we focus on four different types. First, there are the low-rank approximations, _Nystrom_ and _subset of regressors_, that can be inverted in time \(\mathcal{O}(r^{2}n)\) where \(r\) is the rank and \(n\) is the number of training examples [Williams and Rasmussen, 2006]. Figure 1: **(Top)** An upper diagonal of large kernel matrix is split into smaller, dimension 5,000\(\times\)5,000 blocks for multi-threading of IO. When computing the kernel, the blocks can be computed independently by machines that need not be co-located. Each block is then batched further (to meet device memory limitations) and computed by the Neural Tangents library on several GPUs. **(Bottom left)** It is natural to split the kernel by rows across the workers, then each worker can receive the vector from the host, compute the matrix product between the rows in its memory with the vector, and return this chunk of the result to the host. The host can then aggregate the results from the workers by concatenating. However, for larger systems with many workers, simply communicating a large vector to all workers becomes a bottleneck. Instead the kernel can also be partitioned over columns. This means that each worker \((j,i)\) only needs a subset of the vector entries \(\mathbf{v}_{i}\) and computes \(\mathbf{K}_{ji}\mathbf{v}_{i}\), and we found this dramatically decreased communication time. The cost of this approach is that slightly more complex aggregation is required on the host. **(Bottom right)** Classification error rate as a function of rank or partition size for four different kernel approximations. The kernel used is a 10-layer Myrtle NTK for 1.6 million examples of the CIFAR-5m dataset. The performance of CG is shown as a dashed line for comparison and exceeds all other results. Second, there is the _block diagonal_ approximation that sets all kernel entries outside of some blocks along the diagonal to zero. Due to its algebraic form, this kernel is easy to invert and takes time \(\mathcal{O}(r^{2}n)\) where \(r\) is the block or partition size. Even without parallelization, this reduces the running time by a factor of the number of blocks squared. Finally, the _Bayesian committee machine_(Tresp, 2000; Williams and Rasmussen, 2006) is an approximation to the likelihood of a Gaussian process that partitions the dataset, fitting separate regressors to the partitions under assumptions of independence. Here we use the posterior mean under this approximation, which can be computed efficiently. Since this is a transductive method, the time complexity depends on the size of the test set but is comparable to the methods above when the number of test examples equals the partition size. One advantage of these methods is that they do not require access to all kernel entries, and so can avoid the quadratic scaling from computing and storing the entire kernel. 
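The approximations above can be stated compactly in code. The following NumPy sketch is ours, with a generic `kernel_fn` placeholder standing in for an expensive neural kernel, and is intended only to pin down the formulas behind the subset-of-regressors/Nyström and block-diagonal baselines compared in Fig. 1, not to reproduce the tuned implementations.

```python
import numpy as np

def subset_of_regressors(kernel_fn, X_tr, y_tr, X_te, landmark_idx, jitter=1e-6):
    """Low-rank prediction using r landmark training points (rank-r system)."""
    Z = X_tr[landmark_idx]
    K_zz = kernel_fn(Z, Z)             # (r, r)
    K_xz = kernel_fn(X_tr, Z)          # (n, r)
    K_tz = kernel_fn(X_te, Z)          # (m, r)
    A = K_xz.T @ K_xz + jitter * K_zz  # r x r system instead of n x n
    return K_tz @ np.linalg.solve(A, K_xz.T @ y_tr)

def block_diagonal(kernel_fn, X_tr, y_tr, X_te, block_size, jitter=1e-6):
    """Zero the kernel outside diagonal blocks, giving independent small solves."""
    preds = 0.0
    for s in range(0, len(X_tr), block_size):
        sl = slice(s, s + block_size)
        K = kernel_fn(X_tr[sl], X_tr[sl])
        alpha = np.linalg.solve(K + jitter * np.eye(K.shape[0]), y_tr[sl])
        preds = preds + kernel_fn(X_te, X_tr[sl]) @ alpha
    return preds
```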
The primary additional hyperparameters in these methods are the rank or partition size. Typically increasing these hyperparameters improves performance at the cost of additional compute (see Fig. 1). However, in addition to their size, exactly which examples are included in the low-rank approximation or how the data are partitioned can affect performance. While several methods are available to select these examples more optimally, the most common approach is still to select uniformly at random, which we do here also. In addition, it was necessary to tune the jitter term in the _Nystrom_ approximation, since setting to the default value of \(10^{-6}\) we used elsewhere yielded poor results. Iterative solvers:Rather than solving the linear system to machine precision, we might be satisfied with a solution that has a sufficiently small residual--especially because the predictions are unlikely to change significantly. Conjugate gradients (CG) is one approach to this (Shewchuk et al., 1994; Wang et al., 2019; Meanti et al., 2020). CG iteratively finds the best solution in a subspace of dimension equal to the number of steps. This subspace is defined by a sequence of vectors that are conjugate with respect to the kernel. Each step of CG takes quadratic time, and in the worst case a linear number of steps must be performed, but this is normally not necessary. To avoid this worst-case performance, pre-conditioning is essential. We used the standard partial pivoted Cholesky pre-conditioner (Wang et al., 2019) and found it significantly decreased the number of steps required to reach a small residual. A key subroutine in CG is the computation of kernel-vector products and is the only place the kernel occurs directly. This presents opportunities for parallelization. In particular, we use a host CPU to run the main CG algorithm and act as a server to control worker CPUs who compute kernel-vector products (see Fig. 1, bottom left, for details). Additional challenges from neural kernels:Previous large-scale implementations of CG have traded off memory and time by never storing the whole kernel in memory and instead recomputing it on the fly. While this approach works well for simple kernels, such as RBFs, it is infeasible for neural kernels. In particular, computing the kernel once represents the majority of the compute compared to running CG for inference. Indeed reducing the computational burden of neural kernels is a topic of current research and can be faster without decreasing performance (Zandieh et al., 2021; Han et al., 2021), though often there is a speed-performance tradeoff (Han et al., 2021). The problem has not been completely solved. Moreover, there is evidence that even computing them at float32 precision compared to float64 precision degrades performance (Lee et al., 2020). Thus the approach we take here to obtaining the kernel for larger datasets is to parallelize their computation across many machines (see Fig. 1, top, for details). Fortunately very little communication or coordination is needed across machines, and preemption-safe routines are easy to design. The kernel can then be computed once and stored on disk in a collection of sub-blocks for faster, multi-threaded IO. 
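A minimal, single-machine version of this solver is sketched below (our own NumPy code, not the production pipeline). The distributed host/worker machinery, the construction of the partial pivoted-Cholesky preconditioner, and the block-wise float64 kernel IO are abstracted into the `matvec` and `precond` callables.

```python
import numpy as np

def preconditioned_cg(matvec, b, precond=lambda r: r, tol=1e-6, max_iters=1000):
    """Solve K x = b for a single right-hand side with preconditioned CG.

    matvec(v):  returns K @ v, e.g. by summing partial products K_ji @ v_i
                computed by workers that each hold a shard of the kernel.
    precond(r): applies an approximate inverse of K to the residual
                (identity recovers plain CG).
    """
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = float(r @ z)
    b_norm = np.linalg.norm(b)
    for _ in range(max_iters):
        Kp = matvec(p)
        step = rz / float(p @ Kp)
        x += step * p
        r -= step * Kp
        if np.linalg.norm(r) <= tol * b_norm:
            break
        z = precond(r)
        rz_new = float(r @ z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def make_sharded_matvec(kernel_blocks, block_size):
    """Stand-in for the sharded kernel-vector product of Fig. 1 (bottom left):
    kernel_blocks[j][i] holds block K_ji of the symmetric kernel matrix."""
    def matvec(v):
        chunks = [v[i * block_size:(i + 1) * block_size]
                  for i in range(len(kernel_blocks[0]))]
        return np.concatenate([
            sum(K_ji @ chunks[i] for i, K_ji in enumerate(row))
            for row in kernel_blocks
        ])
    return matvec
```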
During CG, the kernel must be stored either completely in RAM, which is fastest but requires many machines for large kernels, or can be stored on disk with very fast IO to load chunks of the kernel into RAM.2 Footnote 2: For example, our 5 million examples kernel computed in float64 precision, storage on disk alone is about 100 terabytes distributed over \(\sim\)500,000 blocks. The spectra of neural kernels can be incredibly ill-conditioned (Lee et al., 2020), which presents challenges for inference and reinforces the need for pre-conditioning. Often the spectra are very diffuse, spanning several orders of magnitude and showing power-law decay. Performance comparison:In Fig. 1, we compare the accuracy of the different methods for the 10 layer Myrtle NTK on 1.6 million training examples from the CIFAR-5m dataset.3 We find that as the rank or partition size increases, the performance of all methods improves with subset of regressors and Nystrom performing best. We did not increase above 50,000 as this is close to the limit of the largest linear system that can be solved on a standard CPU. Note that while the gap to CG is reduced, there is still a large reduction in performance. Footnote 3: See the next section for more details on this dataset. ## 3 Scaling laws for neural kernels Known as _neural scaling laws_, there has been substantial recent interest in how the performance of neural networks improves as more data are available for training and their number of parameters increases (Hestness et al., 2017; Kaplan et al., 2020; Rosenfeld et al., 2019; Rosenfeld, 2021; Bahri et al., 2021). Frequently, these quantities increase in tandem. Empirically, it has been found that performance improves as a power law, for which many works seek to estimate the exponent. Projecting the potential improvement in performance from scaling up neural networks is especially important in the large model era, where training such models is costly. It is natural to ask whether other models also scale as a power law, and if so, how their exponents compare to those of neural networks. Such studies help identify the origin of the power-law scaling, as currently it is unclear whether it originates from the data or the model choice. These comparison are particularly natural for neural kernels. Moreover, since neural kernels are nonparametric, their capacity scales automatically as more data are added. From the language of Bahri et al. (2021), dataset scaling for neural kernels is in the _resolution-limited_ regime, which is arguably a more interesting scaling regime. Scaling experiments for neural networks typically cover datasets over many orders of magnitude, which is challenging for kernel regression. In this section, we consider kernel regression for three different neural kernels: fully-connected neural networks, convolution neural networks without pooling operations, and the Myrtle convolutional networks. The kernels in that order increase in performance and computational burden, and all have been well tuned for the CIFAR-10 (Lee et al., 2020). Of course, CIFAR-10 is limited in size with only 60k examples (50k train, 10k test). To enable a larger scaling study, instead we use the CIFAR-5m dataset (Nakkiran et al., 2021), which consists of samples from a generative model trained on CIFAR-10. Using CIFAR-5m, we can extend our analysis over 2 more orders of magnitude while ensuring the additional data are sampled i.i.d. For generalization performance, in addition to evaluation on the i.i.d. 
CIFAR-5m held out data, we consider the CIFAR-10 and 10.1 test sets. Evaluation on these test sets is standard and allows for comparison against existing results. In Fig. 2, we observe persistent dataset scaling laws as we vary architecture and evaluation data. For extremely small dataset sizes (10-100), all cases commonly show a lower slope and then transition into a more consistent, higher slope, indicating power-law behavior across 4-5 orders of magnitude. We measure the dataset scaling exponent \(\alpha_{D}\) for the test loss and observe that, as one increases the complexity of the kernel, the scaling exponent increases. Also, for the same architecture, evaluation data that are closer in distribution to the training data have a higher scaling exponent. Since the training data are drawn i.i.d from CIFAR-5m, the order of in-distribution to out-of-distribution for the test sets should be thought of as CIFAR-5m, CIFAR-10, and then CIFAR-10.1.4 Footnote 4: This is because the generative model to generate CIFAR-5m was trained on the CIFAR-10 training set. Figure 3: **Dataset scaling for Myrtle on Tiny ImageNet.** At full resolution (64\(\times\)64), we achieve a best accuracy of 44.7%. Higher resolution inputs result in better classification accuracy and higher scaling exponent(\(\alpha_{D}\)). Figure 2: **Dataset size scaling for neural kernels on CIFAR-5m. Using CIFAR-5m as training data, we explore dataset scaling spanning 6 orders of magnitude. Each column corresponds to different neural architectures: 3-layer fully-connected neural network (FC3), 8-layer convolutional neural network with output vectorization (CV8), and 10-layer Myrtle convolutional neural network with average pooling (Myrtle). Evaluations on three different held-out test sets, CIFAR-5m validation, CIFAR-10 test set, and CIFAR-10.1 test set, are shown. Scaling exponents on mean squared error are larger (thus faster improvement with more data) for more complex and computationally intensive kernels and for more in-distribution held out data (CIFAR-5m \(>\) CIFAR-10 \(>\) CIFAR-10.1).** Tiny ImageNet: Towards ImageNet with neural kernels.With our ability to compute massive kernels and use them for inference, ImageNet is not far from reach. ImageNet at 1.2 million examples is actually smaller than some datasets considered here--although augmentation beyond horizontal flips maybe challenging. However, at least two issues remain. First, the large number of output classes. Compared to the 10 classes in CIFAR-10 or CIFAR-5m, a one-hot label of 1k classes expands the linear systems to be solved by a factor of 100. Second, the resolution of ImageNet in standard deep learning architecture is 224\(\times\)224, if not larger (c.f. EfficientNet Tan and Le (2019, 2021)). For neural kernels with pooling, such as the Myrtle kernel, compute scales quadratically with the total number of pixels. Thus compared to 32\(\times\)32 CIFAR-10 images, each data point requires \(7^{4}=2401\) times more compute. As a step toward ImageNet, we consider the Myrtle kernel for the Tiny ImageNet dataset (Le and Yang, 2015). Tiny ImageNet consists of 200 subclasses of ImageNet with 500 images per class, i.e. 100k total training images at a lower resolution of 64\(\times\)64.5 Dataset scaling results are shown in Fig. 3. Similar to CIFAR-5m scaling, from a sufficiently large dataset size there is consistent power-law scaling. To the best of our knowledge, this is the first evaluation of neural kernels on Tiny Imagenet. 
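(Illustrative aside, not from the original text: the dataset-scaling exponents \(\alpha_{D}\) reported in Figs. 2 and 3 can be estimated by a straight-line fit in log-log space once the power-law regime is reached; the numbers below are synthetic placeholders, not measured values.)

```python
import numpy as np

# Placeholder sweep; substitute the measured (dataset size, test MSE) pairs.
n_train = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
test_mse = np.array([0.9, 0.55, 0.33, 0.20, 0.12])

# In the power-law regime L(n) ~ a * n^(-alpha_D), the exponent is (minus)
# the slope of a linear fit of log L against log n.
slope, intercept = np.polyfit(np.log(n_train), np.log(test_mse), deg=1)
alpha_d = -slope
print(f"estimated scaling exponent alpha_D ~ {alpha_d:.2f}")
```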
We achieve at best classification accuracy of 44.7% on the test set. Comparing to finite networks, among very few references quoting results without data augmentation, Jeevan et al. (2022) obtained 43.1% for ResNet-18 and 45.39% for ConvMixer-16, indicating our result from neural kernel is competitive to modern finite neural network architectures without data augmentation. Footnote 5: We experimented with downsizing to 32\(\times\)32 resolution. ## 4 Data augmentation Data augmentation is widely used in deep learning applied to image data. For example, for CIFAR-10 or ImageNet classification, most if not all models are trained using random crops and horizontal flips. More effective augmentation strategies such as AutoAug (Cubuk et al., 2019), RandAug (Cubuk et al., 2020) have been found and used in SotA vision models. Figure 4: **Neural kernel performance on CIFAR-10 with data augmentation**. Using horizontal flip and RandAug, we measure scaling of neural kernels’ classification error for 3 different architectures and 2 test sets (CIFAR-10 as well as CIFAR-10.1). Data augmentation factors smaller than 1.0 (light blue shaded region) denotes a subset of original training set, so additional i.i.d. data can be compared with augmented data. The augmentation factor 2\(\times\) includes horizontal flips, whereas factors larger than 2\(\times\) use RandAug combined with random crop and flips. While data augmentation consistently improves scaling, there is a noticeable change in slope around 1.0 for FC3 and Tuned Myrtle kernels (see Appendix C). For augmentation factors of 10\(\times\) and 20\(\times\), the Tuned Myrtle kernel achieves less than 10% classification error (8.8% with 20\(\times\)). (See Table 1 for detailed comparison.) The idea of data augmentation has a long history (Niyogi et al., 1998)-- including in SVMs where they are called virtual examples (Scholkopf et al., 1996). However, despite the broad application of data augmentation in deep learning, recently it has been used little in kernel methods due to their limitations on large datasets. For example, Li et al. (2019) and Shankar et al. (2020) used horizontal flips to double the training set size for convolutional kernels. In Lee et al. (2020), an ensemble of kernel predictors, where each training set was augmented randomly, were used as a means of data augmentation.6 Because of this, exactly how much of the performance gap between neural networks and kernel methods can be accounted for by data augmentation on image data remains unknown. Footnote 6: In effect, this is performing a block diagonal approximation on the augmented dataset. In this section, we use our massively parallelized computation of neural kernels and distributed CG solver to explore the effect of data augmentation in neural kernels to a regime far beyond prior work. In our experiments, we focus on the CIFAR-10 dataset. We call the ratio between the sizes of the original training set and the training set after augmentation the _augmentation factor_. Our augmentation strategy is to expand the 50k training set by applying various augmentations. Following Li et al. (2019), Shankar et al. (2020), a \(2\times\) augmentation factor is given by horizontally flipping all images. Beyond that, up to a \(20\times\) augmentation factor, we used RandAug (Cubuk et al., 2020) as used in Shankar et al. (2020). 
In particular for each image in the training set, we sample one (\(N=1\)) random augmentation op among ['FlipLR', 'Solarize', 'Color', 'Brightness', 'Contrast', 'Sharpness', 'Posterize', 'Equalize', 'Identity']7 and apply with magnitude \(M=2\). After applying a random augmentation, we apply a random crop of 4 pixels as well as a random flip. We do not use cutout augmentation. With the strategy, we are able to achieve an accuracy of 91.2% (see Fig. 4). To our knowledge this is the highest published accuracy for a kernel method. Note that while the gap in accuracy between kernels and finite neural networks is reduced, it still remains (see Table 1). \begin{table} \begin{tabular}{l l l l l l} \hline \hline Architecture & & Method & CIFAR-10 & CIFAR-10.1 & CIFAR-5m \\ \hline \hline \multirow{4}{*}{FC} & \multirow{2}{*}{Kernel} & DA Ensemble (Lee et al., 2020) & 62.4 & - & - \\ & & DA CG (20x, this work) & 64.1 & 49.1 & - \\ & & CIFAR-5m CG (2M, this work) & **69.9** & **54.1** & 78.6 \\ \cline{2-5} & \multirow{2}{*}{Finite NN} & DA, Small LR, no L2 (Lee et al., 2020) & 65.3 & - & - \\ & & DA, Large LR, L2 (Lee et al., 2020) & 69.4 & & \\ \hline \multirow{4}{*}{CNN-VEC} & \multirow{2}{*}{Kernel} & Flip (Li et al., 2019) & 70.5 & - & - \\ & & DA Ensemble (Lee et al., 2020) & 73.2 & - & - \\ & & DA CG (20x, this work) & 81.3 & 66.3 & - \\ & & CIFAR-5m CG (5M, this work) & **83.4** & **71.9** & 86.5 \\ \cline{2-5} & \multirow{2}{*}{Finite NN} & DA, Small LR, no L2 (Lee et al., 2020) & 83.9 & - & - \\ & & DA, Large LR, L2 (Lee et al., 2020) & 85.6 & & \\ \hline \multirow{4}{*}{CNN-Pool} & \multirow{4}{*}{Kernel} & CNN-LAP Flip (Li et al., 2019) & 82.2 & - & - \\ & & CNN-GAP DA Ensemble (Lee et al., 2020) & 84.8 & - & - \\ \cline{1-1} & & Myttle10-Gaussian Flip (Shankar et al., 2020) & 89.8 & 78.3 & - \\ \cline{1-1} & & Tuned Myrtle10 DA CG (20x, this work) & **91.2** & **79.4** & - \\ \cline{1-1} & & Myrtle10 CIFAR-5m CG (1.6M, this work) & 89.1 & 79.1 & 89.5 \\ \cline{1-1} \cline{2-5} & \multirow{4}{*}{Finite NN} & ResNet18 CIFAR-5m (Nakiran et al., 2021) & 89.0 & - & 89.4 \\ \cline{1-1} & & CNN-GAP DA, Small LR, no L2 (Lee et al., 2020) & 84.4 & - & - \\ \cline{1-1} & & CNN-GAP DA Large LR, L2 (Lee et al., 2020) & 86.7 & & - \\ \cline{1-1} & & Myrtle10 DA (Shankar et al., 2020) & 96.0 & 89.8 & - \\ \hline \hline \end{tabular} \end{table} Table 1: CIFAR-10 test accuracy for kernels and finite neural networks of the corresponding architecture type. Sequence and graph data In this section, we continue to develop neural kernels as a method in their own right. We consider their performance on structured data modalities other than images. Kernels are particularly exciting for such data modalities because we lack inductive biases to take advantage of, and kernel methods such as Gaussian processes are the standard approach for guided experimental design. First, we consider a protein function prediction benchmark motivated by protein design for targeted gene therapy. There is significant interest in engineering Adeno-associated virus (AAV) capsid proteins to cause the virus to integrate specific DNA into a target cell [14]. We perform kernel regression to predict the fitness of mutations across a 28 amino acid window of these proteins, by training and evaluating across all splits, ranging from 82k to 284k examples, of a dataset assaying VP-1 AAV protein variants [13]. Predictions are evaluated using Spearman correlation on the testsets. For more details see Sec. 
B.2. Second, we consider a small molecule dataset, called ogbg-molpcba, from the Open Graph Benchmark Hu et al. [2020, 2021], Wu et al. [2018]. This dataset contains approximately 400K molecules represented as graphs with 9-dimensional node features and 3-dimensional edge features that have been assayed in a variety of ways for biological activity. Not all assays were performed for each molecule, meaning the kernel regression must be adapted to handle missing data. To model this graph-structured data, we derive kernels for deep graph neural networks, and fit these to the training set of 350K examples. Predictions are evaluated using mean average precision over the different assay tasks on the test set. For more details see Sec. B.3. Developing specific kernels for such data is critical, since traditional covariance functions do not capture task-relevant similarities in high-dimensional inputs with complex structure. The connection between neural networks and neural kernels in the infinite-width limit provides one promising direction to exploit the progress in neural network architecture design. For example, convolutional operations are common in neural networks used on protein sequence data. Similarly, for graph data like molecules, graph convolution operations have grown in popularity. Any such operation has a corresponding kernel that can be used in our approach to kernel regression. In this vein, for the AAV dataset we considered deep, 1D-convolutional architectures, and for ogbg-molpcba we considered deep, graph convolutional architectures. The 1D-convolutional architecture has depth, weight and bias initialization variance, activation function, diagonal regularization, the type of pooling function, filter size, and whether to use the NNGP kernel or the NTK as hyperparameters. The graph convolution architecture has the same hyperparameters, except filter size is replaced by whether self-loops are used. To tune hyperparameters, we used 1k trials of different hyperparameters on the Google Vizier service [12]. Tuning hyperparameters using the whole training and validation sets is not feasible, since the kernel must be recomputed each time. Instead, for the AAV dataset, we used only 2k examples from the training set and selected the hyperparameters with the best MSE (i.e. not Spearman correlation) on 1k examples from the validation set. For ogbg-molpcba, we used 2.5k examples for training and selected the hyperparameters with the best mean average precision on 2.5k examples from the validation set. For additional details on hyperparameters and their tuning, see Appendix C. After selecting hyperparameters, we ran inference on ogbg-molpcba and all splits of the AAV dataset (see Table 2). In all cases, we find our method is either better than or competitive with prior methods across all splits. Specifically, for ogbg-molpcba our simple GNN kernel achieved an average precision (AP) of 0.2651. The current SotA AP without additional data is 0.30128 (Wei et al. [2021]), and a finite-width GNN without additional tricks achieves an AP of 0.2020 using the architecture of Kipf and Welling [2016] and 0.2266 using that of Xu et al. [2018]. Thus, while our model falls short of SotA on ogbg-molpcba, it improves substantially on a comparable finite-width GNN. ## 6 Limitations and future directions In this work we probe the performance of kernel regression across two dimensions of scale, in the number of examples and the complexity of the kernel. 
We distributed the computation of large neural kernels with up to 5 million examples and then used distributed conjugate gradients to solve the corresponding linear systems. In doing so, we not only set a new SotA for kernel methods on CIFAR-10 but also outperform highly tuned deep learning methods on a protein design problem and are competitive on a graph-structured, molecular screening problem. Computational considerations aside, we see that these neural kernels scale remarkably well in terms of predictive performance. Extrapolating from our scaling curves, it appears that significant improvements could still be made by adding more data. These infinitely-wide instantiations of deep neural networks offer some exciting properties and advantages over the parametric, SGD trained alternatives. They offer stable solutions with relatively little optimization and hyperparameter tuning. In the context of Gaussian processes, they represent a distribution over deep networks, with corresponding uncertainty that can be used across decision making tasks such as experimental design. Indeed, our SotA results on protein function prediction for targeted gene therapy and molecule toxicity prediction suggest that these kernels may be very well suited to exciting structured design problems. While computing quantities such as the marginal likelihood or test variances involves additional challenges, one can readily draw samples from the corresponding GP to be used, for example, with Thompson sampling approaches. The computational cost of computing the neural kernels and solving the corresponding linear systems remains a limitation to scaling further. While there is a significant literature on approximating the kernels with regard to their scaling in the number of examples, approximations of the pairwise kernel computation itself similar to Zandieh et al. (2021), Han et al. (2022) seems necessary. ## Acknowledgements We would like to give a special thanks to Jascha Sohl-Dickstein, who proposed block-wise kernel sharding, which enabled efficient communication over many compute nodes. We would also like to thank Andreea Gane, Kehang Han, and Benjamin Sanchez-Lengeling for helping us understand molecular and sequence datasets. We are also grateful to Roman Novak, Jeffrey Pennington, Samuel S. Schoenholz and Lechao Xiao for helpful discussions throughout the project.
2302.12275
Slow Dynamics and Kohlrausch Relaxation in Isolated Disordered Many-Body Systems
The Kohlrausch(-Williams-Watts) law of stretched exponential relaxation has been observed for more than a century and a half in diverse complex classical systems. Here we show that this law describes relaxation quite generically in closed (executing Schr\"{o}dinger dynamics), interacting disordered many-body systems across a range of system sizes using interaction range and disorder strength as primary tuning parameters. This we observe for both time-independent and periodically driven (Floquet) systems. Finite-size analysis indicates the persistence of this non-thermal relaxation regime in the thermodynamic limit thus defining a distinct dynamical regime. This regime exhibits a peak in the time-scale of the perceptible relaxation, upon crossing over from weak to strong disorder. We provide a simple picture of this behavior, which naturally accounts for its general occurrence. Formation of spin-glass -- one of the possible mechanisms for stretched relaxation appears incidental to the occurrence of Kohlrausch law in our context. Finally, we provide a simple non-Hermitian Hamiltonian formulation for the dynamics of a single spin embedded in the disordered chain. This provides an analytical formula that captures not only the Kohlrausch relaxation of the disorder averaged auto-correlation but also captures the largely diverse dynamics of an arbitrary target spin in the system. Our work hence also provides a concrete quantification of the ``pre-thermal slowness" in many-body disordered system.
Asmi Haldar
2023-02-23T19:00:13Z
http://arxiv.org/abs/2302.12275v1
# Slow Dynamics and Kohlrausch Relaxation in Isolated Disordered Many-Body Systems ###### Abstract The Kohlrausch(-Williams-Watts) law of stretched exponential relaxation has been observed for more than a century and a half in diverse complex classical systems. Here we show that this law describes relaxation quite generically in closed (executing Schrodinger dynamics), interacting disordered many-body systems across a range of system sizes using interaction range and disorder strength as primary tuning parameters. This we observe for both time-independent and periodically driven (Floquet) systems. Finite-size analysis indicates the persistence of this non-thermal relaxation regime in the thermodynamic limit thus defining a distinct dynamical regime. This regime exhibits a peak in the time-scale of the perceptible relaxation, upon crossing over from weak to strong disorder. We provide a simple picture of this behavior, which naturally accounts for its general occurrence. Formation of spin-glass - one of the possible mechanisms for stretched relaxation appears incidental to the occurrence of Kohlrausch law in our context. Finally, we provide a simple non-Hermitian Hamiltonian formulation for the dynamics of a single spin embedded in the disordered chain. This provides an analytical formula that captures not only the Kohlrausch relaxation of the disorder averaged auto-correlation but also captures the largely diverse dynamics of an arbitrary target spin in the system. Our work hence also provides a concrete quantification of the "pre-thermal slowness" in many-body disordered system. ## I Introduction Strong Disorder is believed to result in absolute localization in many-particle systems even in the presence of interactions - a phenomenon known as many-body localization (MBL) [1; 2; 3; 4; 5; 6]. MBL is believed to persist even in the presence of periodic drive and can exhibit interesting new phases, collectively known as Floquet MBL phases [7; 8]. A striking example of Floquet MBL phase is the discrete time crystalline (DTC) phase where the dynamics of a periodically driven system breaks the discrete time-translational symmetry of the time-periodic Hamiltonian (see e.g., [9; 10; 11] and references therein). In general, the actual stability of an MBL (and hence a DTC) phase in the thermodynamic limit is a matter of considerable debate [12; 13; 14; 15; 16; 17; 18; 19], though the stability of a DTC over experimentally relevant length and time-scale has been established already [20; 21; 22; 23; 24]. Here, we leave this debate to the side and address a different aspect of a Many-Body Disordered (MBD) system - what, if any qualitative difference in the relaxation behaviour of local correlations occur as a consequence of disorder. Our results not only provide a positive answer to this question, but also uncovers a connection between the non-thermal dynamics of an MBD system, and a broadly occurring yet somewhat enigmatic law that has been observed in the relaxation of complex systems in the classical world over more than a century and a half. The relaxation dynamics of a many-body system may slow down qualitatively due to disorder and inhomogeneity. In the classical regime, the mechanism of slowing down are believed to be quite different in different cases, but a surprisingly general law of relaxation applies under remarkably diverse circumstances. 
This is known as the famous Kohlrausch law of stretched exponential relaxation [25], where \[\text{Auto Correlation }\sim\exp\bigg{\{}-\left(\frac{t}{\tau}\right)^{ \gamma}\bigg{\}}, \tag{1}\] where \(0<\gamma\leq 1\), with \(\gamma=1\) giving the simple exponential relaxation. Examples include slow dynamics of disordered classical systems like structural glasses and supercooled liquids [26], spin glasses [27] and visco-elastic media [25]. The origin and ubiquity of the Kohlrausch law has remained an intriguing open problem of physics and chemistry in spite of powerful attempts to address it (see, e.g., [28; 29]). Here we show that the Kohlrausch law also appears in the zero-temperature Schrodinger dynamics of interacting quantum disordered many-body systems. The Kohlrausch form with its stretching exponent (\(\gamma\)) and the time-scale (\(\tau\)) provides a quantification of the well-known "slowness" of heating in the so called "prethermal regime" [30; 31; 32; 33] in the context of localized quantum many-body systems [24; 22; 34]. We demonstrate the occurrence of the Kohlrausch law in largely diverse settings - both with long and short range interactions, periodically driven (Floquet) Hamiltonians and static ones. For concreteness, we primarily focus on periodically driven quantum spin chains with power-law interactions, whose range can be tuned by varying the exponent. A short-range version of the model exhibits a stable DTC-MBL phase in appropriate parameter regimes in finite systems [35]. For this system we characterize the phenomenological features of the slow regime. Then we demonstrate those in other settings. We numerically uncover distinct early and late dynamical regimes, and in none of those we find any significant signature of the putative dynamical phase of "critical DTC" [10; 36] with scale-free relaxation in our one-dimensional geometry. Finally, we provide an analytical formulation based on a non-hermitian Hamiltonian approach to capture the dynamics of an arbitrary target spin embedded in the disordered system. The idea is to describe the localization behaviour of the system in terms of its action as a bath on an arbitrary spin embedded in it. While the approach is incapable of capturing multi-spin correlations, it captures the single-spin dynamics sufficiently accurately, and hence reproduces Kohlrausch relaxation of the disorder-averaged auto-correlation function with excellent accuracy. The plan of the paper is as follows. In Sec II we demonstrate the Kohlrausch law in a power-law interacting Floquet setting, where the degree of localization is controlled by tuning various parameters. We also illustrate the salient phenomenological aspects of the Kohlrausch regime, especially the occurrence of slowest perceptible dynamics (a "peak" in the relaxation timescale) between the strongly and weakly localized regimes. In Sec. III we demonstrate the phenomenology for next nearest-neighbour interacting systems - both Floquet DTC-type and static ones. In Sec. IV.1 we provide a simple physical picture explaining the occurrence of the peak. The picture is based on identifying roughly three different relaxation regimes separated by their characteristic time-scales. In Sec.IV.2, we resort to a non-hermitian Hamiltonian description of a single spin in the system. For this we provide an analytical expression for the dynamics of the single-spin auto-correlation function, which, upon disorder averaging, exhibits the Kohlrausch relaxation with remarkable accuracy. In Sec. 
V we investigate the occurrence of the putative critical DTC phase in one spatial dimension. ## II The Kohlrausch relaxation in long-range interacting driven MBD systems ### Model and Observables In this section we consider the following Floquet drive unitary in \(1d\): \[U_{F}=\exp\left[-i\sum_{ij}\frac{J_{ij}}{r_{ij}^{\alpha}}X_{i}X_{j}\right]\exp\left[-i(\pi+\epsilon)\sum_{i}Z_{i}\right] \tag{2}\] where \(J_{ij}\) is chosen uniformly from \([0,W]\), and \(X_{i},Z_{i}\) are Pauli operators on site \(i\). A cycle consists of a single application of this unitary to the wave-function. The evolution within a cycle can be viewed as an evolution over a period \(T=1+\phi\), where \(\phi=(\pi+\epsilon)\), under the following piece-wise local, time-periodic Hamiltonian: \[H(t)=\begin{cases}H_{z}=\sum_{i}Z_{i},&nT\leq t<nT+\phi\\ H_{x}=\sum_{ij}\frac{J_{ij}}{r_{ij}^{\alpha}}X_{i}X_{j},&nT+\phi\leq t<(n+1)T,\end{cases} \tag{3}\] where \(n\) is a non-negative integer. Long-range interactions of the above form are realizable, and often even natural, in experimental setups such as those of Refs. [20; 21; 24; 37; 38; 39; 40; 41]. Those setups are likely candidates for implementing variants of the Hamiltonian given in Eq. (3). Figure 1: Kohlrausch law of stretched exponential relaxation of the disorder-averaged auto-correlation \(\overline{\tilde{C}}\) from least (bottom) to most localized regime (top) for \(L=26\). Representative cuts across the multidimensional parameter space of the Floquet system (Eq. 2) are shown (several others were taken – all show similar behaviour; data not shown). Dotted lines are numerical data and the solid lines are fits of the data with a stretched exponential function (Eq. 7). From left to right, as a function of **(a)** \(\alpha\) (keeping \(W=0.4\pi\), \(\epsilon=0.2\)), **(b)** \(W\) (keeping \(\alpha=2.6,\epsilon=0.2\)) and **(c)** \(\epsilon\) (keeping \(W=0.4\pi,\alpha=2.6\)), respectively. The numerically exact simulation of the Floquet dynamics of the above form can be carried out by efficiently employing the Hadamard transform, reaching system sizes larger than those accessible via exact diagonalization (see, e.g. [42]). Here we consider the (site-resolved) auto-correlation function defined for spin \(i\) as \[C^{i}_{auto}(nT)=\langle X_{i}(0)X_{i}(t)\rangle,\ t=nT \tag{4}\] \[C_{auto}(nT)=\frac{1}{L}\sum_{i}C^{i}_{auto}, \tag{5}\] where \(X_{i}(t)\) is the operator \(X_{i}\) after \(n\) complete cycles in the Heisenberg picture. For \(\epsilon=0\), \(C_{auto}\) is trivially frozen to unity. For small but finite \(\epsilon\) and large enough \(W\), the above model has been argued to exhibit DTC-MBL order for a short (effective) interaction range, i.e. a large interaction exponent \(\alpha\) (see, e.g. [9; 10]). In a DTC, the sign of the auto-correlator flips every period, and this sub-harmonic oscillation persists in spite of the fact that the system is interacting and non-integrable. This happens for generic initial states, since each Floquet eigenstate is argued to be perturbatively close (in \(\epsilon\)) to a cat state formed by a simultaneous eigenstate of \(\{\sigma_{i}^{x}\}_{i=1,L}\) and its spin-flipped partner. It is known from numerical studies that regardless of whether there is an underlying stable MBL-DTC phase, the initial relaxation in an MBD system can slow down due to the disorder (see, e.g. [43; 44]).
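For illustration, the following is a minimal dense-matrix sketch (small \(L\) only, illustrative parameter values) of the stroboscopic dynamics generated by Eq. (2) and of the auto-correlator of Eqs. (4)-(5). It does not use the Hadamard-transform trick mentioned above and is not the implementation used for the results in this paper.

```python
# Dense-matrix sketch of the Floquet unitary of Eq. (2) and the auto-correlator of
# Eqs. (4)-(5) for a small chain; not the Hadamard-transform code used in the paper.
import numpy as np
from scipy.linalg import expm

L, alpha, W, eps = 8, 2.6, 0.4 * np.pi, 0.2      # illustrative parameters
rng = np.random.default_rng(0)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(site, pauli):
    """Single-site Pauli operator acting on `site` of the L-spin chain."""
    out = np.array([[1.0 + 0j]])
    for i in range(L):
        out = np.kron(out, pauli if i == site else np.eye(2))
    return out

Xs = [embed(i, X) for i in range(L)]
Hx = sum(rng.uniform(0, W) / abs(i - j) ** alpha * Xs[i] @ Xs[j]
         for i in range(L) for j in range(i + 1, L))
Hz = sum(embed(i, Z) for i in range(L))
UF = expm(-1j * Hx) @ expm(-1j * (np.pi + eps) * Hz)   # one drive cycle, Eq. (2)

# Random x-bit-string initial state, i.e. a simultaneous eigenstate of all X_i.
plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
bits = rng.integers(0, 2, size=L)
psi = np.array([1.0 + 0j])
for b in bits:
    psi = np.kron(psi, minus if b else plus)
x0 = 1 - 2 * bits                                      # eigenvalues X_i(0) = +/-1

C_auto = []                                            # Eq. (5), one realization
for n in range(1, 501):
    psi = UF @ psi
    C_auto.append(np.mean([x0[i] * np.real(psi.conj() @ Xs[i] @ psi)
                           for i in range(L)]))
# The staggered, disorder-averaged correlator used in the figures (Eq. 6, defined
# just below) follows by multiplying with (-1)**(n + 1) and averaging such curves
# over disorder realizations and initial bit-strings.
print(C_auto[:5])
```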
To capture this slowness, we track the dynamics of the time staggered DTC order after each cycle, given by \[\begin{split}\tilde{C^{i}}(nT)&=(-1)^{(n+1)}C^{i}_ {auto}(nT)\ \text{and}\\ \tilde{C}(nT)&=(-1)^{(n+1)}C_{auto}(nT).\end{split} \tag{6}\] The Kohlrausch relaxation is manifested in the disorder-averaged values of the above quantities denoted by \(\overline{\tilde{C^{i}}(nT)}\) and \(\overline{\tilde{C}(nT)}\) respectively. The overhead bar denotes the averaging over disorder realizations as well as over various random \(x-\)bit-string initial states. Figure 2: The Kohlrausch regime and its phenomenology: **(a)** Behaviour of the stretching exponent \(\gamma\) as a function of the exponent \(\alpha\) of the power-law interaction (fixing \(W=0.4\pi\), \(\epsilon=0.2\)). In the parameter regime marked with yellow, \(\gamma\) shows systematic deviation from unity as \(L\) is increased indicating stability of Kohlrausch law with increasing \(L\) in that region. **(b)** Perceptible relaxation time-scale \(\tau\) vs \(\alpha\) for various \(L\). A clear peak is observed in the intermediate values of \(\alpha\). **(c)** The residual auto-correlation, \(a\) (see discussion after Eq. 7), as \(t\rightarrow\infty\) obtained from evolution up to a finite time \(T_{obs}=10^{4}\)). **Lower Panel:**\(\gamma\) vs \(L\) for various variations of parameters:**(d)**\(\alpha\) (fixing \(W=0.4\pi\), \(\epsilon=0.2\)), **(e)**\(W\) (fixing \(\alpha=2.6,\epsilon=0.2\)) and **(f)**\(\epsilon\) (fixing \(W=0.4\pi\), \(\alpha=2.6\)). In each frame, the curves in shades of red shows increase in \(\gamma\) towards unity as \(L\) is increased (instability of the Kohlrausch regime with increasing \(L\)), and those in shades of blue shows departure of \(\gamma\) from unity with increasing \(L\), indicating stability of the Kohlrausch regime against increase in \(L\). **The Kohlrausch form for MBD Systems:** The presence of disorder in our system necessitates a formally slight but crucial modification of the standard form of the Kohlrausch law of stretched exponential form (Eq. 1): we need to add a constant to it (e.g., to take care of a stable localization within a finite system-size or a finite observation time). This gives us the following form. \[y(t)=a+(1-a)\exp\bigg{\{}-\left(\frac{t}{\tau}\right)^{\gamma}\bigg{\}}. \tag{7}\] The added constant \(a\) accounts for the residual (frozen) correlation as can be estimated from the data measured over a finite interval. This might or might not be the true residual auto-correlation within our resolution as \(t\rightarrow\infty\), depending on our measurement interval and/or the existence of a stable MBL phase (see the end of Sub-Sec. II.2.1 for further discussions on this). ### Main Numerical Results Here we summarize the main numerical results. In our system, the degree of localization can be tuned by three parameters. It can be increased by increasing the disorder strength \(W,\) while can be decreased either by increasing \(\epsilon\) (which enhances delocalization over the \(x-\)bit-string states) or increasing the effective range of interaction by decreasing the exponent \(\alpha.\) Fig. 1, shows various cuts in the parameter space representing the salient features of the relaxation: the disorder-averaged real-time dynamics has been shown (dotted lines). For all the cases, throughout the entire parameter regime, the numerical data is fitted (continuous lines) very accurately with the Kohlrausch law (Eq. 
7): The largest absolute least square error for fitting is \(<10^{-5}.\) This fits much better than the other standard relaxations forms, namely, exponential and power-laws with and without additive constants. In Fig. 1\((a)\), plots for various values of \(\alpha\) are shown for a fixed value of \(\epsilon\) and \(W\). As \(\alpha\) is decreased, the dynamics switches from a strongly localized DTC-MBL regime for our finite system (where \(\tilde{C}\) is frozen close to its initial value) to the delocalized regime (where \(\tilde{C}\) decays rapidly to zero). In \((b)\) and \((c)\), the parameter regimes are traversed (from stronger to weaker localization) by increasing \(\epsilon\) and decreasing \(W\) respectively, keeping other parameters fixed. #### ii.2.1 Phenomenology in the light of Relaxation Parameters: The relaxation parameters (\(\gamma,\tau\) and \(a\)) are extracted from the fitting of the Kohlrausch form (Eq. 7) to \(\tilde{C}(t)\) measured up to a finite observation time \(T_{obs}.\) The aim of the fitting is to provide a simple yet sufficiently accurate (leading order) description of a complicated relaxation dynamics. The exact dynamics, of course have, several finer details and also various dynamical regimes. Hence there is apriori no reason to expect that the entire relaxation should be well approximatable by a single simple function. The values of the best fitting parameters as well as the suitable fitting functions can thus depend upon choice of the temporal regime under analysis (i.e., on \(T_{obs}\)). In our case, though the parameters do exhibit such a dependence (see Fig. 8), the functional form, quite interestingly, does not - Kohlrausch form always provides the most optimal (over the set of usual physical relaxation forms like exponential, power-law etc with or without an additive constant etc) and also a very accurate description in all regimes. Though we have fitted the whole regime with a single set of parameters in most cases here, the separation between the early time and late time regimes are actually quite pronounced (the short, early time dynamics and the longer late time dynamics does not fit equally well with a single set of parameters). This has been addressed in further details in Sec. V. In the following we discuss the phenomenology and the significance of the relaxation parameters. **The Stretching Exponent \(\gamma\) and the Stability of Kohlrausch Regime:** Though the Kohlrausch's stretched exponential relaxation does not probably look qualitatively very different from the thermal exponential relaxation, a unified understanding of the stretching has still remained elusive and enigmatic. The efforts in this direction (models with finite density of long-lived metastable states, hierarchical constraints etc) indicate that a unified explanation would require elements fundamentally different from those responsible for exponential relaxation [26]. Hence the fate of the stretching exponent \(\gamma\) in the thermodynamic limit, and identification of those regime where it is non-trivial in that limit, are issues of fundamental importance. In order to investigate the stability of the stretched exponential relaxation against the increase in system-size \(L\), we extracted the stretching exponent \(\gamma\) for various values of \(L\) from real-time dynamics data over \(10^{4}\) cycles averaged over disordered configurations. The results are shown in Fig. 2(a),(d),(e),(f). The Fig. 
2(a) shows that there is a regime of \(\alpha\) (shaded with yellow) where the stretching exponent \(\gamma\) actually decreases systematically away from unity with increasing system-size \(L.\) This is what we identify as the "slow-regime" or "Kohlrausch relaxation regime". Away from the slow regime, \(\gamma\) increases steadily with \(L\), and as is intuitively believed (see, e.g. [43]), \(\gamma\) vs \(L\) might have unity as its asymptote, implying a thermal relaxation behaviour as \(L\rightarrow\infty.\) Between these two regimes, the trend is non-monotonic and ambiguous in our finite system simulation, and it is difficult to determine whether the two phases are separated by a transition or a crossover. The three frames in the lower panel of Fig. 2 shows the variation of \(\gamma\) with \(L\) both within (in shades of blue) and outside (in shades of red) the Kolhrausch regime. In Fig. 2(\(d\)), the results are shown for various values of \(\alpha\) for a fixed value of \(W\) and \(\epsilon\) (corresponding to those in \((a)\)). In Fig. 2(\(e\)) and (\(f\)) the parameter space are traversed by varying \(W\) and \(\epsilon\) respectively. The possibility of the existence of an intermediate phase/regime between a thermal and the non-ergodic phase in a disordered many body system has been indicated long ago (see, e.g. [45] for a review) and some of the dynamical consequences of such an anticipated regime have also been considered more recently [46; 47]. The Kohlrausch regime puts all those observations on a unified ground, yet it does not necessarily require a stable MBL phase as an asymptote. Strong enough disorder which gives the local integrals of motions [5; 6] a life-time \(>T_{obs}\) is sufficient. **Peak in Relaxation Time-scale \(\tau\):** Next we focus on another striking aspect of our phenomenology. The relaxation time-scale \(\tau\) does not monotonically increase as one moves from a relatively weakly localized regime to a more strongly localized one. Instead, \(\tau\) exhibits a lofty peak somewhere between the two said regimes. This is shown in Fig. 2(b). This might appear counter-intuitive at first, since it is sensible to associate slower dynamics (larger \(\tau\)) with a more strongly localized regime. However, for a finite \(T_{obs}\) and a finite fitting resolution, there could be a substantial fraction of spin which would appear completely frozen, and our finite time observation will not sense their participation in the relaxation and club it in the constant \(a\) (discussed in further details in the following). An explanation and interpretation of the peak and its behaviour with the change in parameters have been given in terms of a simple physical picture based on a few very reasonable physical assumptions in Sec. IV.1. **Interpretation of the Residual auto-correlation \(a\):** The parameter \(a\) provides a measure of the degree of localization as extractible from the data for dynamics over a finite \(T_{obs}.\) Our finite-time characterization of weak and strong localization is based on the value of \(a\) (\(a=0\) implies delocalization where \(a=1\) means perfectly frozen). However, it may be noted that \(a\) is neither the residual auto-correlation after time \(T_{obs}\) nor (necessarily) the remnant correlation observed after infinite time (i.e., for \(T_{obs}\rightarrow\infty\)). 
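As an illustration of how the triplet \((\gamma,\tau,a)\) is extracted in practice, here is a minimal sketch of fitting the modified Kohlrausch form of Eq. (7) with a standard least-squares routine; the input curve below is a synthetic stand-in for a disorder-averaged \(\overline{\tilde{C}}(nT)\), not actual data from the paper.

```python
# Sketch of extracting (gamma, tau, a) by least-squares fitting of the modified
# Kohlrausch form, Eq. (7), to an auto-correlation curve. The "data" below are a
# synthetic stand-in generated from Eq. (7) itself plus small noise.
import numpy as np
from scipy.optimize import curve_fit

def kohlrausch(t, a, tau, gamma):
    """Eq. (7): a + (1 - a) * exp(-(t / tau)**gamma)."""
    return a + (1.0 - a) * np.exp(-((t / tau) ** gamma))

t = np.arange(1, 10_001, dtype=float)                 # cycles n = 1 .. T_obs
rng = np.random.default_rng(1)
data = kohlrausch(t, a=0.35, tau=120.0, gamma=0.6) + 1e-3 * rng.normal(size=t.size)

p0 = (0.5, 50.0, 0.8)                                 # initial guess (a, tau, gamma)
bounds = ([0.0, 1e-2, 0.0], [1.0, 1e7, 1.0])          # 0 <= a <= 1, 0 < gamma <= 1
(a_fit, tau_fit, gamma_fit), _ = curve_fit(kohlrausch, t, data, p0=p0, bounds=bounds)
residual = np.mean((kohlrausch(t, a_fit, tau_fit, gamma_fit) - data) ** 2)
print(a_fit, tau_fit, gamma_fit, residual)            # fitted a estimates the frozen part
```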
It is the asymptotic auto-correlation _estimated_ from the data obtained over a _finite_\(T_{obs}.\) If there were only one dynamical regime that could be captured accurately by a single fitting function with a fixed set of parameter values, then of course everything would be independent of \(T_{obs},\) and \(a\) could be interpreted as the residual correlation after infinite time. But as behooves a complicated many-body system, an MBD system exhibits various distinct stages of dynamics in the sense that a single function with a fixed set of parameter values cannot capture the dynamics over all time scales. Hence, \(a\) just gives an estimate of the part of the correlation that looks completely frozen over an observation time-scale \(T_{obs}\) within the accuracy with which the Kohlrausch description is expected to hold. Thus, a non-vanishing \(a\) extracted by finite-size analysis from a finite \(T_{obs}\) data does not necessarily imply existence of a stable MBL. ## III Short range variants: Floquet and static MBDs In this section we consider another Floquet-Unitary drive for a strictly short range system with interactions ranging up to the next nearest neighbour on the same lattice geometry as before. The evolution operator over a period is given as follows. \[U_{F} =\exp\left[-iH_{x}\right]\exp\left[-i(\pi+\epsilon)H_{z}\right]\] \[where,\ H_{z} =\sum_{i}Z_{i},\] \[H_{x} =-\sum_{i}J_{i}X_{i}X_{i+1}+J_{2}\sum_{i}X_{i}X_{i+2} \tag{8}\] \(J_{i}\)'s are chosen uniformly from \([0.1\pi,0.4\pi].\) A static version of the model in a transverse field is believed to show a localization-delocalization transition [48] as the disorder strength is tuned. We choose a parameter regime where the localization is strong enough, though, like in the earlier case, is not based on existence of a stable MBL phase. The evolution shows qualitatively same phenomenology as obtained for the long-range system, as shown in Fig. 3. Here the degree of localization is controlled by tuning the imperfection \(\epsilon\) (\(\epsilon=0\) is trivially localized). The left frame shows real-time dynamics fitted with the stretched exponential form in Eq. 7. The middle panel show the peak in the relaxation time-scale \(\tau\) as a function of \(\epsilon.\) The right frame shows the value of the stretching exponent \(\gamma\) as a function of \(\epsilon\) for various values of \(L.\) A stable regime for Kohlrausch relaxation is clearly observed (where \(\gamma\) decreases with increasing \(L.\)) The inset shows the variation of \(\gamma\) with \(L\) in (blue) and away (red) from the stable regime. Next, we consider the anisotropic Heisenberg model [49], in a disordered field, and study the relaxation dynamics of a state under this Hamiltonian: \[H=J\sum_{i}(X_{i}X_{i+1}+Y_{i}Y_{i+1})+J_{z}\sum_{i}Z_{i}Z_{i+1}+\sum_{i}h_{i} Z_{i} \tag{9}\] \(h_{i}\) are chosen uniformly from \([h,-h],\)\(J_{z}=0.5J,J=1.\) ## IV A phenomenological picture of the Kohlrausch scenario and a non-hermitian analytical approach ### A Simple Physical Picture In this section, we provide a cartoon of a qualitative picture underpinning the phenomenology entailing Kohlrausch relaxation described in Sec. II. A remarkable heterogeneity in the microscopic dynamics underpins the simple Kohlrausch law, as can be gleaned from a representative plot of the real-time dynamics of single spins chosen randomly from various samples (Fig. 5) for a set of parameter values for which the Kohlrausch law remains stable with increasing \(L\). 
Here we provide clarifications of central aspects of the phenomenon: we explain salient features of the Kohlrausch phenomenology (e.g., the occurrence of the peak in \(\tau\), the \(L\)-dependence of \(\gamma\), and the behavior of the peak in the \(T_{obs}\to\infty\) as well as \(L\to\infty\) limits) employing a simple physical picture. Figure 3: Kohlrausch relaxation in up to next-nearest neighbour interacting Floquet system (Eq. 8). **(a)** Auto-correlator for various values of \(\epsilon\), for a fixed disorder strength, for \(L=24.\) Dotted lines are actual data from the numerics, and solid lines are fits of the data with the stretched exponential function defined in Eq. (7). **(b)** Time scale \(\tau\) as a function of \(\epsilon\). **(c)** Stretching exponent \(\gamma\) as a function of \(\epsilon.\) The **inset** shows the dependence of \(\gamma\) on the system size for various values of \(\epsilon.\) The stable and unstable flows of Kohlrausch exponents with \(L\) are shown respectively in blue (\(\epsilon=0.25\)) and red (\(\epsilon=0.40\)). Similar behaviour can be observed as a function of the disorder strength for fixed values of \(\epsilon\) (data not shown). Figure 4: Stretched exponential relaxation for a quench with a time-independent MBD Hamiltonian (Eq. 9). The inset shows the variation of the stretching exponent \(\gamma\) with the disorder strength \(h\). Figure 5: Real-time dynamics of single spins sampled completely randomly from various realizations, for \(\alpha=2.5,W=0.4\pi,\epsilon=0.2.\) The plot shows a high degree of variability in the local dynamics, with different types of behavior appearing with appreciable probabilities. **The Basic Ingredients: The Slow, The Fast, and The Frozen:** Our physical picture is based on a cartoon (Fig. 6 (a)-(c)). The observation window \(T_{obs}\) defines the reference timescale. Though there is a broad distribution of relaxation times, the cartoon is built on three variants in particular: spins that relax fast (red arrows in the cartoon), slow spins - whose relaxation is tiny but discernible during the process of the observation (green arrows), and spins whose relaxation (if any) within \(T_{obs}\) is beyond our resolution for detecting any dynamics (blue arrows). The dynamics can be broadly classified into three regimes (as described below), and its features can be qualitatively captured in terms of the dynamics of these cartoon spins. The relative populations of these three variants of spins mark the three regimes of dynamics in our cartoon, as described below. **The Strongly Localized Regime**: This is a regime where a large fraction of the auto-correlation is frozen (Fig. 6(a)). This corresponds to the topmost blue (almost frozen) curves in Fig. 1. In this regime, within \(T_{obs}=10^{4}\), the relaxation happens only due to the residual small fraction (\(1-a\)) of fast (red spins) and slowly decaying (green spins) parts of the auto-correlation. This corresponds, for example, to the large \(\alpha\) regime on the right of the peak of \(\tau\) in Fig. 2(b). Here the \(L\)-dependence of the stretching exponent does not exhibit any systematic trend within system-sizes accessible to us, and stability of the Kohlrausch law cannot be inferred. **The Kohlrausch (Slow) Regime and the Occurrence of the Peak in \(\tau\):** Fig.
6(b) shows the situation as we tune the system more towards the delocalized regime either by increasing \(\epsilon\) or decreasing \(\alpha\) and/or \(W.\) A considerable number of frozen (blue) spins are liberated marginally from the strong constraints (that had kept them frozen) to have restricted dynamics and thereby got converted into slow (green) spins. In comparison, the growth of the fraction of the fast (red) spins (presumably, mostly from the already existing slow spins) is not appreciable yet. The dynamics is hence dominated by the slow spins. This corresponds to the curves with intermediate relaxation rates (the pale orange plots in the midway of Fig. 1), and the region around the peak in \(\tau\) in Fig. 2(b). Here the growth rate of the red spins takes over the growth rate of the green spins as one moves towards a more delocalized regime, and the peak occurs roughly at the point where the number of red spins overtakes the Figure 6: The physical picture: **Top:** A sequence of cartoons of three relaxation scenarios. **(a)** Strongly localized, dominated by frozen auto-correlation (blue arrows), **(b)** Kohlrausch regime, dominated by slowly decaying auto-correlation (green arrows), and **(c)** mostly delocalized, dominated by rapidly relaxing auto-correlation (red arrows). **Bottom:** Histogram of single-spin auto-correlations \(C_{auto}^{i}\) (Eq. 5 ) sampled over an ensemble of randomly chosen 5000 spins, corresponding to the respective cartoon above. Three different colors show results after three successive decades of evolution. The mean single-spin auto-correlation of the ensemble at three different times is shown by the vertical lines. number of green (and rarely occurring blue) spins. This is the regime (marked by vertical yellow stripe in Fig. 2) where the Kohlrausch law is stable to increase in \(L\) -(i.e. \(\gamma\) drifts off from unity as \(L\) is increased). The stability is concurrent with the dominance of the slow spins. **The Delocalized Regime**: Here most of the spins are free to relax fast, resulting in dynamics which is very close to thermal (Fig. 6(c)). This corresponds to the most rapidly decaying red curves in Fig. 1 and the low \(\alpha\) regime on the left of the \(\tau\) peak in Fig. 2 (b). In this regime the stretching exponent \(\gamma\) systematically increases with \(L\) approaching unity, indicating an exponential relaxation for \(L\rightarrow\infty\). **Numerical Signatures of the Regimes:** The frames in the lower panel of Fig. 6 show the number distribution of spins over auto-correlation (i.e., single-spin auto-correlation versus the fraction of spins having that particular value of the auto-correlation), over an ensemble of 5000 spin taken from 5000 various disorder realizations, for each of which a random \(x-\)bit-string state (a simultaneous eigenstates of all \(\sigma_{i}s\)) is taken as the initial state. Each frame in the lower panel corresponds to the physical the situation depicted in the cartoon frame right above it in the upper panel. The plots illustrate the corresponding cartoons. Fig. 6 (d) represents the most strongly localized regime, where there is no significant change in the mean of the distribution (vertical lines representing the ensemble average of the auto-correlation with colors corresponding to various times) with time, especially in the later decades. Fig. 
6(e) represents the Kohlrausch regime, where the change in the mean takes place through the later decades, but still a major fraction of the auto-correlation remains intact. Fig. 6(f) represents the fast relaxing regime, where rapid decay is observed throughout the entire duration, and also shifting the mean from 1 to 0. The consistency of the above behavior with the numerical results for the Kohlrausch phenomenology can be gleaned, at once by recalling, for example, the correspondences of (d), (e), and (f) with the three regimes - the one to the right of the yellow band (strongly localized), the one within the yellow band (the Kohlrausch), and the one to the left of the yellow band (delocalized) of frames (a), (b), and (c) of Fig. 2. ### Single-Spin Analytics: A Non-Hermitian Random Hamiltonian Approach Here we resort to a simple formulation suitable for capturing the key elements behind the Kohlrausch law and its associated phenomenology in an MBD system. Since the single-spin auto-correlation is the central object of interest here, the key idea is to focus on the dynamics of a single spin (an arbitrary target spin), treating the rest of the system as a bath. The effects of the disorder, interaction, and drive are thus encapsulated in the action of the bath on the target spin. Here we resort to a minimal description for the above setting, consisting of a spin evolving under a single-spin non-hermitian Hamiltonian. The non-hermiticity is essential for modeling aspects like dissipation, energy gain, and dephasing of the dynamics of the target spin. The stroboscopic dynamics under the time-periodic Hamiltonian can be viewed as stroboscopic observation of the dynamics under the time-independent Floquet Hamiltonian \(H_{eff}\) (where the evolution operator over a period \(U(T;0)=e^{-iH_{eff}}\)), the non-hermitian Hamiltonian, in principle, is to be obtained by tracing over all the degrees of freedom except the target spin, evolving under \(H_{eff}\) (time-independent). Thus for stroboscopic dynamics, it is sufficient to consider a time-independent non-hermitian Hamiltonian whose parameters encode the information of the rest of the system as well as the drive. The problem is soluble, and we treat the parameters in the solution as free ones to be extracted from the numerical results. To this end, we devise a simple single-spin non-hermitian Hamiltonian formulation as follows. #### iii.2.1 The Formulation We describe the dynamics of an arbitrary target spin in our system (observed stroboscopically at \(t=nT\)) by that governed by a time-independent non-hermitian Hamiltonian as follows. \[\tilde{H} =H_{h}+H_{nh},\text{ where}\] \[H_{h} =-\Omega\sigma^{x}\text{ and}\] \[H_{nh} =-i\left[a_{0}\mathcal{I}_{2\times 2}+a_{1}\sigma^{x}+a_{2} \sigma^{y}+a_{3}\sigma^{z}\right] \tag{10}\] Here \(\sigma^{x/y/z}\) are the Pauli matrices representing the components of the target spin, \(a_{i}\) (\(i=1,4\)) are random parameters related to the parameters of the corresponding random realization of the time-dependent many-body system in which the target spin is embedded. Note that when focused on a single spin, \(\Omega\) is just an overall scale factor. However, for an ensemble of spins embedded in a random many-body system, \(\Omega\) will be an overall relative scale of relaxations among the spins, e.g., a larger \(\Omega\) will correspond to a faster spin, while a smaller \(\Omega\) will correspond to a slower spin, given all other parameters are same between them. 
Our approximation consists of the assumption that \(\tilde{H}\) encapsulates the interaction of the target spins with the environment as well as the drive. This is a rather drastic approximation, and is suitable only for calculating single-spin observables. Since we are calculating auto-correlations in \(X\)-components of the lattice spins, we represent the \(x-\)component \(X_{i}\) of a target spin (say, at a site \(i\); see Eq. 3) to be diagonal in the \(\sigma\) representation, i.e., \((-1)^{n+1}X_{i}(nT)\leftrightarrow\sigma^{z}(nT)\) Note that, since we are representing the dynamics of a spin by a time-independent \(\tilde{H}\), it is necessary that we incorporate the factor \((-1)^{n+1}\) in the correlations to set the equivalence correctly. We switch to a re-defined set of parameters (after [50]) to link them more clearly to various physically intuitive (as has been discussed down the line) quantities as follows. \[a_{0}=\gamma,\ a_{1}=\gamma\beta,\ a_{2}=\nu,\ a_{3}=V. \tag{11}\] In general, the state of the target spin is expected to be represented by a density \(2\times 2\) matrix \(\rho(t)\) which satisfies the following Schrodinger equation. \[i\partial_{t}\rho(t)=-i[\tilde{H},\rho(t)]. \tag{12}\] Since \(\tilde{H}\) is non-hermitian, \(Tr[\rho(t)]\) is not necessarily conserved (here \(Tr[.]\) denotes trace). The expectation value of any observable \(\mathcal{O}\) is hence needed to be normalized as follows. \[\langle\mathcal{O}\rangle(t)=Tr[\rho(t)\mathcal{O}]/Tr[\rho(t)]. \tag{13}\] The Schrodinger equation Eq. (12) is exactly soluble [50] for any real set of values of the parameters in \(\tilde{H}\). The general solution reads: \[\begin{split}\rho_{mn}&=\frac{e^{-\Gamma t}}{2( \beta^{2}+\gamma^{2})}\left[A_{mn}\cos\left(\omega t\right)+B_{mn}\sin\left( \omega t\right)\right.\\ &\left.+C_{mn}\cosh\left(\Gamma t\right)+D_{mn}\sinh\left( \Gamma t\right)\right],\end{split} \tag{14}\] where \[\Gamma=2\Omega\gamma,\ \omega=2\Omega\beta. \tag{15}\] Here \(\rho_{mn}\), \((m,n\in\{1,2\})\) are the four elements of the density matrix \(\rho(t)\), and \(A_{mn},B_{mn},C_{mn}\) and \(D_{mn}\) are constants depending on the initial condition and the Hamiltonian parameters for a given disorder realization. This solution is general enough to cover various scenarios including (but not limited to) exponential and polynomial relaxations with or without oscillations, energy-conserving dynamics, pure dephasing, etc as special cases depending on the values of the Hamiltonian parameters even for a given initial state (see, e.g., [50]). This versatility of the exact solution makes it suitable for capturing the diversity of single-spin relaxation (as illustrated in Fig. 5). Here we focus on a fully polarized eigenstate of \(\sigma^{z}\) (\(X_{i}\)) with eigenvalue \(+1\), i.e., all spins up in the \(x-\)direction in the chain in Eq. 3. Under this initial condition, the time-staggered single-spin auto-correlation \[\tilde{C}^{i}_{auto}=(-1)^{n+1}\langle X_{i}(t)\rangle=\langle\sigma_{z}(t)\rangle, \tag{16}\] where \(t=nT\). 
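A minimal numerical sketch of this single-spin description is given below. It propagates the density matrix stroboscopically as \(\rho\to U\rho U^{\dagger}\) with \(U=e^{-i\tilde{H}}\) -- one standard convention for non-Hermitian generators, adopted here as an assumption consistent with the normalization of Eq. (13) -- and the parameter values are purely illustrative, not fitted values from the paper.

```python
# Sketch of the single-spin description of Eqs. (10), (11) and (13). The density
# matrix is propagated per period as rho -> U rho U^dagger with U = exp(-i * H),
# a standard convention for non-Hermitian generators (an assumption here), and the
# expectation value is normalized by Tr[rho] as in Eq. (13).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_tilde(Omega, gamma, beta, nu, V):
    """Eq. (10) with the reparametrization a0, a1, a2, a3 -> gamma, gamma*beta, nu, V."""
    return -Omega * sx - 1j * (gamma * np.eye(2) + gamma * beta * sx + nu * sy + V * sz)

def staggered_autocorrelation(n_cycles, Omega, gamma, beta, nu, V):
    H = h_tilde(Omega, gamma, beta, nu, V)
    U, Ud = expm(-1j * H), expm(1j * H.conj().T)     # one-period propagators
    rho = np.array([[1, 0], [0, 0]], dtype=complex)  # sigma_z = +1: the target spin
                                                     # starts fully correlated (Eq. 16)
    out = []
    for _ in range(n_cycles):
        rho = U @ rho @ Ud
        rho = rho / np.trace(rho)                    # harmless: Eq. (13) normalizes anyway
        out.append(np.real(np.trace(rho @ sz)))
    return np.array(out)

curve = staggered_autocorrelation(2000, Omega=0.05, gamma=0.1, beta=0.5, nu=0.05, V=0.02)
print(curve[:5], curve[-1])
```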
The expression for \(\tilde{C}^{i}_{auto}\) (expressed in terms of \(\sigma^{z}\)) for the above-mentioned initial condition reads (using Equation 14) \[\langle\sigma^{z}(t)\rangle_{Analytic}=\frac{\left[(1+\gamma^{2}-\nu-V^{2}) \cos\left(\omega t\right)+(\beta V)\sin\left(\omega t\right)\right]\text{sech }\left(\Gamma t\right)+(\nu+\beta^{2}+V^{2}-1)+\gamma V\tanh\left(\Gamma t \right)}{\left[(\nu+\beta^{2}-1)\cos\left(\omega t\right)+(\beta V)\sin \left(\omega t\right)\right]\text{sech}\left(\Gamma t\right)+(1+\gamma^{2}- \nu)+\gamma V\tanh\left(\Gamma t\right)}, \tag{17}\] The expression contains the parameters \(\Omega,\gamma,\beta,\nu\) and \(V\), that are essential for characterizing a \(2\times 2\) non-hermitian matrix (Eqs. 10 and 11), and are expected to encode the necessary effects of the drive and the other spins on the relaxation of the target spin. It is easy to glean from Eq. (17) that \(\gamma\) sets the time-scale of energy loss/gain by the target spin, \(\beta\) provides the characteristic frequency of the damped oscillation executed by the target spin. Since energy is pumped into the system, there are generically exponentially growing terms in the subsystem dynamics, as well as exponentially decaying terms due to both drive and the loss of energy to the rest of the system. A steady state appears when those two rapidly changing terms counterbalance each other. In the above form (Eq. 17), the expression for \(\langle\sigma^{z}(t)\rangle_{Analytic}\) this is in-built, and averts the usual problem with divergences associated with individual exponentially increasing terms as \(t\rightarrow\infty\) and the associated numerical problems at the large but finite time: the magnitude of both \(\text{sech}\left(x\right)\) and \(\tanh\left(x\right)\) are bounded between \(0\) and \(1\). #### iii.2.2 Auto-Correlation Dynamics and the Kohlrausch law from the Analytics We fit the single-spin auto-correlation data obtained from exact numerics (as shown in Fig. 5) with the analytical expression for \(\langle\sigma^{z}(t)\rangle_{Analytic}\) in Eq. (17), for a large number of disorder realizations, and extract the values of the parameters \(\Omega,\gamma,\beta,\nu\), and \(V\) in each case. Plugging such a set of parameter values into the expression of \(\langle\sigma^{z}(t)\rangle_{Analytic}\), we get our analytical expression for \(\tilde{C}_{i}(t)\) with numerical values for its parameters. We average this expression over various disorder realizations. The large varieties of single-spin relaxations obtained numerically (Fig. 5) is captured by our analytic expression obtained from a simple formulation (Fig. 7, **Left** frame, main) for a few random samples. The inset shows the fit of the disorder-averaged numerical result by the disordered averaged analytical expression. They are both consistent with the Kohlrausch law to very good accuracy over multiple decades. The probability distribution \(P_{\gamma}\) of the relaxation rate \(\gamma\) (**middle** frame) also shows two clear peaks. The population at the peak at \(\gamma=0\) exhibits no relaxation at all (blue spins), while those within the width of it show slow relaxation (green spin), and the second peak represents spins with considerable relaxation. 
The bimodal structure appearing in the distribution of these two parameters with an appreciable width around the peak at zero is consistent with the classification of our spins into three categories, namely, the frozen (blue), the slow (green) and the fast (red) based on their relaxation rates. The distributions for \(\nu\), \(\beta\) and \(V\) are shown in Appendix A. Similarly, the distribution of \(\Omega\) (Fig. 7, **right** frame) exhibits two clear peaks - one at \(\Omega\approx 0\), another around \(\Omega\approx 0.06\). The peak at \(\Omega=0\) represents the spins frozen to our resolution (the blue spins). The populations within the width (suitably defined) of this peak represent the slow (green) spins. The second peak and those within its width represent spins with faster relaxations (the red spins in our cartoon). ## V Early vs late stages of relaxation: a numerical search for the "critical" dtc phase The existence of various stages of relaxation is not uncommon in interacting many-body systems. For example, a system described by a fractional Fokker-Planck equation can exhibit relaxation that crosses over from stretched exponential to an inverse power law (power law with negative exponent) [51]. Here we found, quite interestingly, that the stretched exponential form of Eq. 7 continues to fit both the early and late stages the best (compared to other more common relaxation forms like exponential and power-laws) and also remarkably accurately. The fitting parameters are distinctly different for the two stages distinguishing one from the other as shown in Fig. 8. One should however keep in mind, that the time-scale of the emergence of the late regime is still negligibly small compared to the characteristic time-scale of relaxation (if any) of the frozen (blue) spins (the latter being infinity to our resolution). The phenomenology of relaxation in the early and late stages is distinct as discussed below. Fig. 8 (top frame) shows, initially the stretching exponent \(\gamma\) is closer to unity (i.e., the relaxation is more thermal-like) for larger \(\alpha\) (shorter interaction range). But between \(T_{obs}=100\) and \(200\), the trend reverses, and \(\gamma\) drifts further from unity for larger \(\alpha.\) The distinction between the two stages of relaxation is most pronounced for the short-ranged systems and gradually levels off as the range of interactions is increased. This can be roughly understood as follows. In shorter-ranged MBD systems, for a sufficiently strong disorder, there is localization (regardless of whether it persists as \(t\rightarrow\infty\)) at the initial stage of the dynamics (assuming we start with a localized state). The early stage consists of dynamics within the associated localization length. This is followed by a much slower relaxation process involving equilibration between different localized regions. With an increase in the range of the interaction, the size of the localized patches increase. The relaxation dynamics hence involve cooperative dynamics of a larger number of spins. This makes the relaxation slower than the shorter-range Figure 7: Fitting of the exact numerical results for the time staggered single-spin auto-correlations with the analytical formula from the non-hermitian formulation. **Left:** main frame shows some representative fitting of the numerical data (colored) by the analytical function (solid black lines). 
The inset shows the fitting of the numerical disorder averaged relaxations with the disorder averaged analytical expression corresponding to the case \(\alpha=2.8,W=0.4\pi,\epsilon=0.2,L=26\) (Kohlrausch regime). The fitting is quite accurate and satisfies Kohlrausch’s law. **Middle:** frame shows the probability distributions \(P_{\gamma}\) of \(\gamma.\)**Right:** frame shows the probability distributions \(P_{\Omega}\) of \(\Omega.\) system at early times (see, e.g., [52]). The early stage is thus characterized by faster relaxation (larger \(\gamma\)) for the shorter-ranged systems. The late stage is marked, on the other hand, by the equilibration dynamics between the localized patches (beyond the localization length), which is much slower in shorter-range systems due to stronger localization compared to the longer-ranged ones. This is hence characterized by a smaller value of \(\gamma\) (more stretched relaxation) for shorter-ranged systems. The persistence of Kohlrausch law in almost all stages of relaxation is also interesting in view of a prediction of a dynamical phase called "critical" DTC in long-range interacting Floquet systems with unitaries of the form given in Eq. 2 in various spatial dimensions[36; 10]. A perturbative expansion of an effective Hamiltonian in a suitable frame under specific conditions predicts a scale-free relaxation of a DTC order parameter (the auto-correlation \(\tilde{C}\) we measured) at the early time [36; 10] in a power-law interacting setting whose one-dimensional version we studied (Eq. 2). The dynamical phase is called critical DTC, and is predicted to occur when the exponent \(\alpha\) of the power-law interaction equals the dimension of the lattice (\(\alpha=1\) for our setting). Focusing on the early-time data (the first two decades) we could not find the proposed critical DTC phase in spite of a careful scanning of the parameter space. Instead, we found Kohlrausch law persists across the board for \(\alpha=1\), as shown in Fig. 9. Figure 8: Dependence of the relaxation parameters with the observation time \(T_{obs}\) indicates the existence of two distinct dynamical stages. At the early stage (roughly corresponding to \(T_{obs}<500\)), the relaxation parameters are quite sensitive to the observation time \(T_{obs}\) (especially for the shorter-ranged systems). At the later times (the late stage), the relaxation parameters tend to saturate and their dependence on \(T_{obs}\) is much weaker or negligible. Figure 9: Short-time behaviors and fitting: Showing no sign of power-law decay. A log-log plot does not show a straight line and fits better (least squares being always \(<10^{-6}\)) with a stretched exponential function (We tried variants of exponential fits, which always gives least squares \(~{}10^{-2}\), data not shown). **Top Panel:** Main plot is for \(L=24,\epsilon=0.2,\alpha=1.0\) and the inset shows the system size scaling of the exponent \(\gamma\) for various values of \(W.\)**Bottom Panel:** Main plot is for \(L=24,W=0.4\pi,\alpha=1.0.\) The _inset_ shows the absence of power-law relaxation for a range of values of \(\alpha\) around \(\alpha=1.\) Discussions and outlook **Spin Glass Mechanism Plays No Essential Role in Kohlrausch Stretching of Relaxation:** Glass formation is known to one of the key mechanisms that give rise to stretched relaxation (see, e.g. [26]). In our spin set-up, formation of a quantum spin glass (see, e.g., [53] for a review) could hence have been a candidate behind the Kohlrausch relaxation. 
However, two essential ingredients of a spin glass are (a) disorder and (b) frustration [54; 55]. Our results indicate that Kohlrausch stretching of relaxation is in fact independent of the latter. In our setting, frustration is introduced if a fraction of the interactions \(J_{ij}\) are made negative in the long-range case - there is no frustration when all the bonds are ferromagnetic, while the presence of a sufficient fraction of anti-ferromagnetic bonds can introduce the frustration necessary to stabilize a spin-glass in a long-range interacting disorder system [27]. We find that Kohlrausch stretching is present even in systems with completely ferromagnetic couplings (\(J_{ij}\geq 0\)). As a strong example, we analytically show (see Appendix B) that taking any arbitrary bit-string state in \(x\) direction as the initial state, the dynamics with all \(J_{ij}>0\), in the interaction part \(H_{x}\) (Eq. 3), i.e., completely ferromagnetic interactions are exactly identical for that in which each \(J_{ij}\) has flipped sign (highly frustrated one). **On Plausible Saturation of the Peak with increasing \(L\):** The fraction of frozen spins observed within \(T_{obs}\) depends on \(L.\) All we assume is that it asymptotes to some non-zero value even as \(L\to\infty.\) This only requires the characteristic time for melting of the Local Integrals Of Motion/\(\ell-\)bits (see, e.g., [6]) to be larger than \(T_{obs}.\) Though we have not seen the saturation within the system size we can access numerically, on physical grounds, the assumption seems quite justified for strong disorder - the thermalization is shown to be extremely slow even where MBL is argued to be absent [17], and hence the frozen part of the correlation is expected to persist even for substantially long \(T_{obs}.\) As \(L\) is increased, the peak in \(\tau\) grows in height (as can be seen from Fig. 2(b)). However, for a finite \(T_{obs},\) the peak height will eventually saturate beyond some very large value \(L\sim L_{max}\) (not accessible to numerics). This may be understood as follows. We assume that the number density of spins with effective relaxation time-scale \(\tau\) is \(n(\tau),\) than the average separation between such spins will be \(1/n(\tau).\) Then it is easy to see that the average height of the peak can grow as \(L\) is increased till some \(L_{max}\sim 1/n(\tau_{max})\) where \(\tau_{max}\) is the smallest value of \(\tau\) for which \(\int_{\tau_{max}}^{\infty}\tau n(\tau)d\tau\approx 0\) (provided such a \(\tau_{max}\) exists). Till \(L\) reaches \(L_{max},\) increasing \(L\) might result in an inclusion of much slower spins (much larger \(\tau\)) with an appreciable probability. This might lift the average value of \(\tau\) and hence the peak height (as it does in our case). In the following, we argue the existence of a \(\tau_{max}\) in our setting. Since we assumed a finite \(T_{obs}\) and finite resolutions of the auto-correlation measurement, we cannot resolve a very large \(\tau\) from infinity, from the auto-correlation measurement (for example, for \(T_{obs}=10^{4},\) a spin with \(\tau=10^{15}\) will be considered as frozen with a resolution that cannot register any change in correlation taking place over a period of \(10^{-11}\times\tau\)). 
The cut-off value of \(\tau\) beyond which the change in auto-correlation cannot be registered thus serves as a natural \(\tau_{max}.\) We consider spins with \(\tau>\tau_{max}\) as completely frozen and hence non-participating in the measurable correlation decay (their contribution constitutes the atemporal part \(a\) of the auto-correlation). Hence, taking into account only the remaining spins, we have \(n(\tau)=0\) for \(\tau>\tau_{max}.\) Since this is indeed the case (\(a\neq 0\)) in the Kohlrausch regime where the peak occurs, a \(\tau_{max}\) exists in this regime for which \(\int_{\tau_{max}}^{\infty}\tau n(\tau)d\tau=0,\) and the peak will saturate beyond \(L_{max}\) associated to it. The peak also moves deeper into the more localized regime as \(L\) is increased. That could simply happen because the putative MBL phase seems to move further away with increasing system size [14; 17; 18] in most cases. However, since we are focusing on a finite \(T_{obs},\) by our assumption that there will always be a residual auto-correlation even in the thermodynamic limit for \(t<T_{obs},\) the peak should converge to some finite value as \(L\to\infty.\) Unfortunately, this saturation is also beyond the system sizes accessible to our numerics. **The Peak as \(T_{obs}\to\infty:\)** If there is a stable MBL for a finite value of the disorder strength, then the above picture in principle can hold even as \(T_{obs}\to\infty.\) This is because there will always be some blue spins, and it is plausible that as one moves towards a less localized regime from a more localized one, some of them will be liberated to give rise to the green spins. Of course, there will be a competition between the infinities: the infinity associated with the time-scale of stability of the MBL and the infinity associated with \(T_{obs}.\) On the other hand, if there is no stable MBL anywhere, then as \(T_{obs}\) is increased further, more blue spins will actually qualify as the green spins, and the peak height will increase, and eventually saturates to the longest delocalization time-scale as \(T_{obs}\to\infty.\) **Conclusions:** We showed that the century-old law for relaxation of complex classical systems (structural glass, super-cooled liquid, etc) - the famous Kohlrausch law - also appears ubiquitously in the domain of isolated MBD systems - both driven and time-independent - evolving under pure Schrodinger dynamics. This helps us quantify the slowness of the well-known prethermal relaxation in MBD systems. Finite-size analysis shows systematic stabilization of this law with increasing system size in certain parameter regimes, allowing us to define a dynamical phase characterizing the effect of disorder. The relaxation time \(\tau\) of auto-correlation over a finite observation time \(T_{obs}\) shows a peak between the weakly and most strongly localized (characterized by the frozen part of the auto-correlation as appears from a finite time observation) regimes. The phenomenon does not necessarily depend on the existence of a stable MBL phase in the thermodynamic limit at a finite parameter strength (e.g., disorder strength \(W\), power \(\alpha\) of the power-law interaction, etc) that controls the degree of localization, though, the existence is assured if there is a stable MBL phase. We provide a simple analytical formulation of the problem, targeting a single spin in the MBD system. The approach is based on random non-hermitian Hamiltonians. 
Our analytical formula provides an accurate description of the single spin dynamics even at the level of each individual target spin and consequently captures the Kohlrausch law very accurately upon disorder averaging. The characteristics of the MBD phase are captured through the behavior of the system as a bath to a random target spin, and are encapsulated in the distribution of the parameters in the single-spin non-hermitian Hamiltonian. **Outlook:** The existence of some sort of slow dynamical phase beyond the MBL regime is probably already in our indirect knowledge: It was known that one sees logarithmic growth of entanglement in the MBL phase [49], but now it is known that MBL is unstable in a substantial part of that regime [14; 15; 18]. Our results are hence expected to trigger a new direction of study toward understanding disordered material in terms of new dynamical phases. The diversity of single spin relaxations, especially in the moderately localized regime where the Kohlrausch law is stable, indicates that the many-body bath appears to exhibit a broad range of behaviors to various spins - ranging from sustained oscillatory to over-damped. Determining the distribution of such bath characteristics might be an interesting way to characterize an MBD system. Investigating the Markovianity of the dynamics and information scrambling in the Kohlrausch phase might reveal interesting facets of relaxation in MBD systems. **Acknowledgments:** The author is deeply grateful to V. Khemani and R. Moessner for several crucial discussions at various stages of the project starting right from its inception, as well as for their continuous encouragement and support. The author also thanks A. Chandran, A. Das, A. Polkovnikov and T. Prosen for many useful discussions. ## Appendix A The Probability Distributions of \(\beta\), \(\nu\) and \(V\) Here we provide the probability distributions of some of the parameters of the non-hermitian Hamiltonians obtained from the fitting of the analytical formula (Eq. 17) to the numerical results obtained for various disorder realizations. Figure 10: The Probability Distribution for \(\beta\), \(\nu\) and \(V\) corresponding to the case \(\alpha=2.8,W=0.4\pi,\epsilon=0.2,L=26\) (Kohlrausch regime). The results are obtained (for this Fig. as well as the later ones) by averaging over 5000 disorder realizations for each data point, and for each realization, a random \(x\)-bit-string state is taken as the initial state. ## Appendix B Ruling out Spin-Glass Mechanism as Essential in Kohlrausch Stretching: In order to strike out any role of frustration in the slow dynamics in our setting, we show that taking any arbitrary bit-string state in \(x\) direction as the initial state, the dynamics with all \(J_{ij}>0\), in the interaction part \(H_{x}\) (Eq. 3), i.e., completely ferromagnetic interactions are exactly identical for that in which each \(J_{ij}\) has flipped sign (highly frustrated one). Our proof subsumes exact dynamical equivalence between a system with random ferromagnetic interactions and that with Sherrington-Kirkpatrick (SK) spin glass interactions [56] in the above sense for each of the (\(2^{L}\)) eigenstates of the \(\{\sigma^{x}_{i=1,L}\}\) as the initial states having all kinds of energies. 
For a glassy system, a low energy initial state is generically expected to get trapped in extremely long-lived metastable states (surrounded by lofty energy barriers) during relaxation, while in a random ferromagnet, the dynamics do not slow down due to such long-lived glassy traps. Here we show that the slowing down happens equally in the case of a glassy \(H_{x}\) (being completely antiferromagnetic, long-ranged, and random [57]) as well as a purely random ferromagnetic \(H_{x}\), ruling out any essential role of the frustration in \(H_{x}\) in the stretching of the relaxation. To show this, we consider Floquet evolution of an observable \(O_{x}\) diagonal in the \(x\)-basis under a unitary (over a period) of the form \[U = U_{z}U_{x},\text{ where}\] \[U_{x} = e^{-iH_{x}(\{\sigma^{x}_{i}\})},\ U_{z}=e^{-i(\pi+\epsilon)\sum_{j}\sigma^{z}_{j}}, \tag{11}\] and show that its dynamics exactly maps to that of \(O_{x}\) evolved with \[\tilde{U} = U_{z}\tilde{U}_{x},\text{ where}\] \[\tilde{U}_{x} = e^{+iH_{x}} \tag{12}\] (that is, with a sign-flipped \(H_{x}\)), provided the initial states \(|\psi(0)\rangle\) (for evolution with \(U\)) and \(|\tilde{\psi}(0)\rangle\) (for evolution with \(\tilde{U}\)) satisfy the following condition: \[[a^{(0)}_{\alpha}]^{*}=e^{-i\gamma^{(0)}_{\alpha}}\tilde{a}^{(0)}_{\alpha}, \tag{13}\] for some \(\gamma^{(0)}_{\alpha}\), where \[|\psi(0)\rangle = \sum_{\alpha}a^{(0)}_{\alpha}|x_{\alpha}\rangle\text{ and}\] \[|\tilde{\psi}(0)\rangle = \sum_{\alpha}\tilde{a}^{(0)}_{\alpha}|x_{\alpha}\rangle. \tag{14}\] Here \(H_{x}\) denotes the interaction part that depends only on the \(\sigma^{x}_{i}\)s, and \(|x_{\alpha}\rangle\) denotes the \(x\)-basis states. **Note** that this condition includes the same initial states (e.g., the same random bit-string \(|x_{\alpha}\rangle\)) for the dynamics under \(U\) and \(\tilde{U}\). Taking \(H_{x}=-\sum_{i>j}J_{ij}/r^{\alpha}_{ij}\sigma^{x}_{i}\sigma^{x}_{j}\) with \(J_{ij}>0\), we prove this by induction. We assume that the states after \(n\) complete drive cycles are \[|\psi(n)\rangle = \sum_{\alpha}a^{(n)}_{\alpha}|x_{\alpha}\rangle\text{ and}\] \[|\tilde{\psi}(n)\rangle = \sum_{\alpha}\tilde{a}^{(n)}_{\alpha}|x_{\alpha}\rangle, \tag{15}\] with the coefficients satisfying \[[a^{(n)}_{\alpha}]^{*}=e^{-i\gamma^{(n)}_{\alpha}}\tilde{a}^{(n)}_{\alpha}, \tag{16}\] then one can show (see below) that after the \((n+1)\)-th cycle, the coefficients (for the expansion of \(|\psi(n+1)\rangle\) and \(|\tilde{\psi}(n+1)\rangle\)) will satisfy \[[a^{(n+1)}_{\alpha}]^{*}=e^{-i\gamma^{(n+1)}_{\alpha}}\tilde{a}^{(n+1)}_{\alpha}. \tag{17}\] **Note** that Eq. 17 automatically implies equality of the mod-squares of the coefficients of the individual \(x\)-basis states between the states evolved by \(U\) and \(\tilde{U}\) respectively at all times, hence the equality of \(\langle O_{x}\rangle(n)\) in both cases. We denote \[U_{x}|x_{\alpha}\rangle = e^{-i\phi_{\alpha}}|x_{\alpha}\rangle\text{ and} \tag{18}\] \[U_{z}|x_{\alpha}\rangle = \sum_{\beta}b^{\alpha}_{\beta}|x_{\beta}\rangle. \tag{19}\] Then we have \[|\psi(n)\rangle = \sum_{\alpha}a^{(n)}_{\alpha}|x_{\alpha}\rangle\text{ and}\] \[|\tilde{\psi}(n)\rangle = \sum_{\alpha}\tilde{a}^{(n)}_{\alpha}|x_{\alpha}\rangle \tag{20}\] \[= \sum_{\alpha}e^{i\gamma^{(n)}_{\alpha}}[a^{(n)}_{\alpha}]^{*}|x_{\alpha}\rangle.\] The last step is obtained using Eq. (16). 
Now applying \(U\) on the expression of \(|\psi(n)\rangle\) above, we get \[|\psi(n+1)\rangle = \sum_{\beta}a^{(n+1)}_{\beta}|x_{\beta}\rangle,\text{ where} \tag{21}\] \[a^{(n+1)}_{\beta} = \sum_{\alpha}a^{(n)}_{\alpha}e^{-i\phi_{\alpha}}b^{\alpha}_{\beta}. \tag{22}\] Similarly, applying \(\tilde{U}\) on \(|\tilde{\psi}(n)\rangle\) in Eq. 20, \[|\tilde{\psi}(n+1)\rangle = \sum_{\beta}\tilde{a}^{(n+1)}_{\beta}|x_{\beta}\rangle,\text{ where} \tag{23}\] \[\tilde{a}^{(n+1)}_{\beta} = \sum_{\alpha}[a^{(n)}_{\alpha}]^{*}e^{i\phi_{\alpha}}e^{i\gamma^{(n)}_{\alpha}}b^{\alpha}_{\beta}. \tag{24}\] Now the strategy is to put the above expressions in the expression (Eq. 17) we want to prove and see if we get something consistent (that is, an assignment of \(\gamma_{\alpha}\)'s that satisfies it) or something absurd (we find no such assignment possible). For Eq. (17) to be true, we must have (using Eq. (24) in it) \[[a^{(n+1)}_{\beta}]^{*}=e^{-i\gamma^{(n+1)}_{\beta}}\sum_{\alpha}[a^{(n)}_{\alpha}]^{*}e^{i\phi_{\alpha}}e^{i\gamma^{(n)}_{\alpha}}b^{\alpha}_{\beta}. \tag{25}\] Comparing this with the complex conjugate of the expression for \(a^{(n+1)}_{\beta}\) above, we must have \[\sum_{\alpha}[a_{\alpha}^{n}]^{*}e^{i\phi_{\alpha}}[b_{\beta}^{\alpha}]^{*}=e^{-i\gamma_{\beta}^{(n+1)}}\sum_{\alpha}e^{i\gamma_{\alpha}^{(n)}}[a_{\alpha}^{n}]^{*}e^{i\phi_{\alpha}}b_{\beta}^{\alpha}. \tag{16}\] Since \(\phi_{\alpha}\) should depend on the specifics of \(H_{x}\), in order to have a condition independent of that, we must have term-by-term equality for the above sum. That is, we must have (canceling out the common factors) \[e^{i(\gamma^{(n)}_{\beta}-\gamma^{(n)}_{\alpha})}=\frac{b_{\beta}^{\alpha}}{[b_{\beta}^{\alpha}]^{*}}. \tag{17}\] At this stage, it is clear that this will be a feasible condition with \(\gamma_{\alpha}\)s independent of \(n\), so we drop the \(n\)-dependence of the \(\gamma_{\alpha}\)s. Now, \[b_{\beta}^{\alpha}=\langle x_{\alpha}|U_{z}|x_{\beta}\rangle. \tag{18}\] Let us denote by \(n_{\alpha\beta}\) the number of spins needed to be flipped to generate \(|x_{\alpha}\rangle\) from \(|x_{\beta}\rangle\). Then from the Taylor expansion of \(U_{z}\), it can be shown that when \(n_{\alpha\beta}\) is odd, then \(b_{\beta}^{\alpha}\) is imaginary, hence \(\frac{b_{\beta}^{\alpha}}{[b_{\beta}^{\alpha}]^{*}}=-1\), and when \(n_{\alpha\beta}\) is even, then \(b_{\beta}^{\alpha}\) is real, hence \(\frac{b_{\beta}^{\alpha}}{[b_{\beta}^{\alpha}]^{*}}=+1.\) Thus, we can write \[\frac{b_{\beta}^{\alpha}}{[b_{\beta}^{\alpha}]^{*}}=e^{i\pi n_{\alpha\beta}}=e^{i\pi(n_{\alpha}-n_{\beta})}. \tag{19}\] The proof of the last step of the above equation (the assignment) is given below, after this part of the proof. This is equivalent for the present purpose to the individual assignment of the \(\gamma_{\alpha}\)s as \[\gamma_{\alpha}=n_{\alpha}\pi, \tag{20}\] where \(n_{\alpha}=\) the number of spins required to be flipped to create \(|x_{\alpha}\rangle\) from some pre-assigned reference state \(|R\rangle\) (e.g., the all-up \(x\)-basis state). This gives the concrete form of the necessary starting condition for the induction proof (Eqs. 16, 17). The condition now reads: if after \(n\) cycles of evolution by \(U\) and \(\tilde{U}\) the coefficients satisfy the condition \[[a_{\alpha}^{(n)}]^{*}=e^{-i\pi n_{\alpha}}\tilde{a}_{\alpha}^{(n)}, \tag{21}\] then the same must hold after the \((n+1)\)-th step, i.e., we must have \[[a_{\alpha}^{(n+1)}]^{*}=e^{-i\pi n_{\alpha}}\tilde{a}_{\alpha}^{(n+1)}. \tag{22}\] This statement can be proved immediately following the steps sketched above. 
Now it is easy to see that this is satisfied if we take \(|\psi(0)\rangle=|\tilde{\psi}(0)\rangle=|x_{\alpha}\rangle\) (any \(x\)-basis state). This completes the proof. _Proof of the Assignment:_ Here we prove that \(e^{i\pi n_{\alpha\beta}}=e^{i\pi(n_{\alpha}-n_{\beta})}\). Let \(n_{\alpha}=\) the number of spins required to be flipped to create \(|x_{\alpha}\rangle\) from the all-up \(x\)-basis state, and let \(n_{\beta}\) be defined likewise. Let \(n_{c}=\) the number of common spins that are flipped in both \(|x_{\alpha}\rangle\) and \(|x_{\beta}\rangle\) compared to the reference state \(|R\rangle\). Then \(\tilde{n}_{\alpha}=n_{\alpha}-n_{c}=\) the number of spins that are flipped in \(|x_{\alpha}\rangle\) (w.r.t. \(|R\rangle\)) but not flipped in \(|x_{\beta}\rangle\). Similarly, \(\tilde{n}_{\beta}=n_{\beta}-n_{c}=\) the number of spins that are flipped in \(|x_{\beta}\rangle\) but not flipped in \(|x_{\alpha}\rangle\). Thus \(n_{\alpha\beta}=\tilde{n}_{\alpha}+\tilde{n}_{\beta}\). Now, all we care about is the parity \(P(n_{\alpha\beta})\) of \(n_{\alpha\beta}\) (whether it is odd or even). But from above we have \[n_{\alpha\beta}=\tilde{n}_{\alpha}+\tilde{n}_{\beta}=n_{\alpha}+n_{\beta}-2n_{c}.\] Since \(2n_{c}\) is always even, we hence have \[P(n_{\alpha\beta})=P(\tilde{n}_{\alpha}+\tilde{n}_{\beta})=P(n_{\alpha}+n_{\beta}).\] But for any integers \(n_{\alpha}\) and \(n_{\beta}\), it is evident that \(P(n_{\alpha}+n_{\beta})=P(n_{\alpha}-n_{\beta}).\) Thus we finally have \[P(n_{\alpha\beta}) =P(n_{\alpha}-n_{\beta})\] \[\implies e^{i\pi n_{\alpha\beta}} =e^{i\pi(n_{\alpha}-n_{\beta})}\text{ (proved)}. \tag{23}\]
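The sign-flip equivalence proved above is straightforward to confirm numerically for a small chain. The sketch below is our own illustration (not code accompanying the paper); it builds the Floquet unitaries of Eqs. (11)-(12) for \(L=4\) spins with placeholder values for \(J_{ij}\), \(\alpha\) and \(\epsilon\), evolves the same \(x\)-basis bit-string state with both, and checks that \(\langle O_{x}\rangle(n)\) coincides at every drive cycle.

```python
import numpy as np
from scipy.linalg import expm

L, alpha, eps = 4, 2.8, 0.2          # placeholder values, chosen only for illustration
rng = np.random.default_rng(0)

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def embed(single, site):
    """Tensor a single-site operator into the L-spin Hilbert space at `site`."""
    out = np.array([[1.]])
    for k in range(L):
        out = np.kron(out, single if k == site else I2)
    return out

SX = [embed(sx, i) for i in range(L)]
SZ = [embed(sz, i) for i in range(L)]

# Random ferromagnetic long-range Ising part: H_x = -sum_{i>j} J_ij / r_ij^alpha sx_i sx_j
Hx = np.zeros((2**L, 2**L))
for i in range(L):
    for j in range(i):
        Hx -= rng.uniform(0.5, 1.5) / abs(i - j)**alpha * SX[i] @ SX[j]   # all J_ij > 0

Uz = expm(-1j * (np.pi + eps) * sum(SZ))
U      = Uz @ expm(-1j * Hx)          # Eq. (11)
U_flip = Uz @ expm(+1j * Hx)          # Eq. (12): sign-flipped ("glassy") H_x

# The same x-basis bit-string initial state for both evolutions
bits = rng.integers(0, 2, L)
plus, minus = np.array([1., 1.]) / np.sqrt(2), np.array([1., -1.]) / np.sqrt(2)
psi0 = np.array([1.])
for b in bits:
    psi0 = np.kron(psi0, plus if b == 0 else minus)

Ox = SX[0]                            # any observable diagonal in the x-basis
p, q = psi0.astype(complex), psi0.astype(complex)
for n in range(20):
    p, q = U @ p, U_flip @ q
    assert np.allclose(p.conj() @ Ox @ p, q.conj() @ Ox @ q, atol=1e-10)
print("identical <O_x>(n) for +H_x and -H_x over 20 drive cycles")
```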
2305.19387
Bloch Oscillations, Landau-Zener Transition, and Topological Phase Evolution in a Pendula Array
We experimentally and theoretically study the dynamics of a one-dimensional array of pendula with a mild spatial gradient in their self-frequency and where neighboring pendula are connected with weak and alternating coupling. We map their dynamics to the topological Su-Schrieffer-Heeger (SSH) model of charged quantum particles on a lattice with alternating hopping rates in an external electric field. By directly tracking the dynamics of a wavepacket in the bulk of the lattice, we observe Bloch oscillations, Landau-Zener transitions, and coupling between the isospin (i.e. the inner wave function distribution within the unit cell) and the spatial degrees of freedom (the distribution between unit cells). We then use Bloch oscillations in the bulk to directly measure the non-trivial global topological phase winding and local geometric phase of the band. We measure an overall evolution of 3.1 $\pm$ 0.2 radians for the geometrical phase during the Bloch period, consistent with the expected Zak phase of $\pi$. Our results demonstrate the power of classical analogs of quantum models to directly observe the topological properties of the band structure, and sheds light on the similarities and the differences between quantum and classical topological effects.
Izhar Neder, Chaviva Sirote, Meital Geva, Yoav Lahini, Roni Ilan, Yair Shokef
2023-05-30T20:01:52Z
http://arxiv.org/abs/2305.19387v1
# Bloch Oscillations, Landau-Zener Transition, and Topological Phase Evolution in a Pendula Array ###### Abstract We experimentally and theoretically study the dynamics of a one-dimensional array of pendula with a mild spatial gradient in their self-frequency and where neighboring pendula are connected with weak and alternating coupling. We map their dynamics to the topological Su-Schrieffer-Heeger (SSH) model of charged quantum particles on a lattice with alternating hopping rates in an external electric field. By directly tracking the dynamics of a wavepacket in the bulk of the lattice, we observe Bloch oscillations, Landau-Zener transitions, and coupling between the isospin (i.e. the inner wave function distribution within the unit cell) and the spatial degrees of freedom (the distribution between unit cells). We then use Bloch oscillations in the bulk to directly measure the non-trivial global topological phase winding and local geometric phase of the band. We measure an overall evolution of 3.1 \(\pm\) 0.2 radians for the geometrical phase during the Bloch period, consistent with the expected Zak phase of \(\pi\). Our results demonstrate the power of classical analogs of quantum models to directly observe the topological properties of the band structure, and sheds light on the similarities and the differences between quantum and classical topological effects. pacs: 03.65.-a, 03.65.-b, 03.65.-b, 03.65.+a, 03.65.+b ## Introduction Classical analogues of condensed and topological states of matter proved to be an invaluable tool in visualizing fundamental concepts and exploring the relations between quantum and classical dynamics. The topological classifications of states of systems, which originate in quantum lattice models, quantum Hall effect, quantum spin-Hall effect, topological insulators, and superconducting qubits [1; 2; 3; 4; 5; 6; 7; 8; 9; 10] were adapted to various classical systems such as photonic crystals [11; 12; 13; 14], phononic crystals [2; 15; 16; 17; 18; 2], classical mechanical oscillators [19; 20; 21; 22; 23; 24], and electrical resonator circuits [25; 26]. Many of these works focused on probing stable edge channels, which provide indirect evidence for the non-triviality of the band topology. The classification of topological invariants via direct bulk measurement, however, is limited [27; 28], and remains a challenge in both quantum and classical systems. Here we report a direct measurement of a topological and geometric phase by tracking the dynamics in the bulk of a classical system: a one-dimensional array of coupled pendula. The analysis of the measurement is based on an approximate mapping between the time evolution of the classical coupled oscillators and the quantum tight-binding or discrete Schrodinger equation of electrons on lattice potentials, as was recently demonstrated for the non-linear case [29; 30; 25]. The mapping is in the spirit of the mapping in Ref. [31], but ours is strictly local. In particular, our system is mapped onto the canonical Su-Schrieffer-Heeger (SSH) model [32; 33], a prototypical model for topological phase transitions in one dimension. This mapping is significant, as the SSH model has two phases: one with a trivial Zak phase [34; 35] that is adiabatically connected to an atomic limit, and the other with a Zak phase of \(\pi\), with an obstruction to the atomic limit. In addition, a mild monotonic change in the pendula self frequency along the array is mapped into an external electric Field. 
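For concreteness, the short tabulation below (an illustrative sketch, not the authors' code) evaluates Eq. (1) and Eq. (2) for the quoted geometry (\(N=51\), \(r=0.76\ m\), \(\alpha=2.4\times 10^{-3}\)); the stiffness values \(\kappa\) and \(\kappa^{\prime}\) are placeholders, since they differ between the experiments described next.

```python
# Illustrative sketch: pendulum lengths from Eq. (1) and alternating couplings from Eq. (2).
import numpy as np

N, r, alpha = 51, 0.76, 2.4e-3        # values quoted in the text
kappa, kappa_p = 0.07, 0.035          # example stiffnesses in N/m (placeholders)
g = 9.81

j = np.arange(1, N + 1)
r_j = 1.0 / ((1.0 / r) * (1.0 + alpha * (j - N / 2)))      # Eq. (1)

# kappa_{j,j+1} for j = 1..N-1: kappa when j is even, kappa' when j is odd (Eq. (2))
kappa_bonds = np.where(j[:-1] % 2 == 0, kappa, kappa_p)

# The length gradient translates into a mild gradient of the self-frequencies sqrt(g/r_j)
print(f"self-frequencies span {np.sqrt(g / r_j).min():.3f} .. {np.sqrt(g / r_j).max():.3f} rad/s")
```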
We experimentally realized such a system of \(\sim 50\) pendula. Using the mapping to the SSH model, we theoretically, computationally, and experimentally show that our system exhibits phenomena that are usually discussed in the context of the quantum dynamics of electrons on ultra-clean lattices. This includes Bloch oscillations [36], where, due to the electric field and the periodicity of the lattice, an electron's wave function oscillates in the presence of a uniform electric field instead of accelerating, and Landau-Zener (LZ) tunneling [37; 38], in which the electron's wavepacket can leak to a higher energy band, or be in a superposition of the two bands, due to the time-dependent energy difference between the bands. Finally, these observations enable us to extract the non-trivial topological phase winding of the SSH bands from bulk measurements in our system. Specifically, we use the fact that, due to Bloch oscillations, a wave-packet that is initially localized in reciprocal space samples the entire Brillouin zone during its dynamics, thereby possessing information on the geometrical phase and the Zak phase after a full Bloch period [8; 39]. The extraction of the geometrical phase is enabled due to the classical and macroscopic nature of our system, which allows direct measurement of wavepacket dynamics and the amplitude and phase evolution of each oscillator. Specifically, we extract the geometrical phase by comparing the wavepacket phase evolution in two experiments - one with trivial and the other with topological band structure. We accurately extracted the difference of the geometrical phase evolution between these two experiments over 400 periods of the fundamental oscillation of individual pendula, which lasted about 700 \(sec\), and which indeed ends in a Zak phase difference of \(\pi\) after one Bloch period, as theoretically expected. ### Classical realization of the SSH model We constructed a one-dimensional array, schematically shown in Fig. 1a, of \(N=51\) pendula hanging using v-shaped strings from a common horizontal fixed beam. All pendula had identical mass \(m=0.35\ kg\) yet varying string lengths \(r_{j}\), (\(j=1,\ldots,N\)), carefully-tuned to satisfy \[\frac{1}{r_{j}}=\frac{1}{r}\left[1+\alpha\left(j-\frac{N}{2}\right)\right]. \tag{1}\] The parameter \(r=0.76\ m\) is the length of the central pendulum, and \(\alpha=2.4\times 10^{-3}\) controls the spatial gradient in string length. The accuracy in tuning the length of each pendulum was \(<1\ mm\) (see Supplementary Material). Each pendulum is coupled to its two nearest neighbors by connecting adjacent strings by knots at alternating heights, close to the beam, as depicted in Fig. 1a, (see also Supplementary Material). This results in weak and alternating coupling between neighboring pendula, i.e. the coupling constitutes a relatively weak perturbation to the basic pendula oscillations. The pendula oscillations are kept at small angles during the dynamics, such that we can model the system as a set of simple harmonic oscillators coupled to each other with harmonic springs of small and alternating effective stiffnesses, \(\kappa_{j,j+1}\). The coupling \(\kappa_{j,j+1}\) is controlled by adjusting the height of the knots that couple adjacent pendula according to \[\kappa_{j,j+1}=\begin{cases}\kappa&j\;\mathrm{even}\\ \kappa^{\prime}&j\;\mathrm{odd}.\end{cases} \tag{2}\] Details on how the knot heights determine the couplings \(\kappa\) and \(\kappa^{\prime}\) appear in the Supplementary Material. 
We designed and performed three experiments, with different values of \(\kappa\) and \(\kappa^{\prime}\). Experiments 1 and 2 were designed to observe Bloch oscillations and the change in topological phase in the adiabatic LZ regime. In Experiment 1, the couplings were \(\kappa=0.07\;N/m\) and \(\kappa^{\prime}=0.035\;N/m\). In Experiment 2, \(\kappa\) and \(\kappa^{\prime}\) were switched to \(\kappa=0.035\;N/m\) and \(\kappa^{\prime}=0.07\;N/m\). Experiment 3 was designed to observe LZ tunneling and wavepackets that are in Figure 1: (a) Sketch of the mechanical system: pendula coupled to each other by knots (depicted by red and blue dots) connecting adjacent strings at varying and alternating heights \(\delta_{j,j+1}\). The pendula lengths \(r_{j}\) have a mild gradient according to Eq. (1). (b) The experiment started with a wave pattern implemented using a board that was cut according to the desired wave-packet. Then the board was abruptly removed and the pendula started to evolve freely. (c) The solution to Eq. (7) in Fourier space, exhibiting Bloch oscillations due to the external field: the initial wave packet (black) travels toward negative \(k\) values and follows the SSH lower energy band (blue). Then at \(k=-\pi\), depending on the parameters of Eq. (7), the wave continues its travel either adiabatically following the lower band, by jumping through LZ diabatic transition and following the upper band (red), or by following in a superposition of the two bands. a superposition of the upper and lower bands. There, the couplings were set to \(\kappa=0.082\)\(N/m\) and \(\kappa^{\prime}=0.064\)\(N/m\), in order to enhance the LZ transition. The three experiments were all performed in a similar fashion. Initially, all pendula rested in their minimum energy point, except ten adjacent pendula in the bulk of the lattice, that were given an initial offset from their rest position. As shown in Fig. 1b, these translations were given by a board, cut in advance to produce a particular initial wave pattern (see Supplementary Material). At time \(\tau=0\), the board was abruptly removed, and the system was allowed to evolve freely. The full evolution of the pendula in each experiment was video recorded from below (See Supplementary Video [40]). Finally, the position of each of the pendula as a function of time was extracted from the video. Figure 2 presents the results of Experiments 1 and 3, while Experiment 2 (not shown) produced results almost identical to those of Experiment 1. In Experiment 1, the energy stored in each pendulum performed slow oscillations: the wave-packet moved to lower pendula numbers and then moved back, and returned to the very same initial wavepacket at \(\tau=700\)\(sec\), only with smaller amplitude due to \(\sim\)70% energy loss due to friction. The fraction of energy that leaked to other pendula motion and did not return in the original wavepacket after \(700\)\(sec\), was \(5\%\) or less. Moreover, the inset clearly shows that in the middle of the oscillation, at \(\tau=350\)\(sec\) the wave was checkerboard-like: adjacent pairs of pendula were swinging out of phase, namely, the wave-packet was centered around \(k=\pi\) when the unit cell is a dimer. In Experiment 3, at that middle point, the wave split into two branches, which moved toward two opposite directions at later times. 
However the two partial waves reached a maximal translation at \(\tau=750\)\(sec\), and at that point a zoom-in shows that at one branch the pendula all swings in phase, while in the other branch neighboring pendula swing with alternating phases. The two branches then turned back and merged again at about \(\tau=1125\)\(sec\). As we now show, these observations can be understood via a mapping to the quantum SSH model, and are direct manifestations of Bloch oscillations and LZ transitions. ### Mapping to the Quantum SSH Model The dynamics of the pendula system is described by \(2N\) linearized Hamilton equations for the translation of each pendulum in the transverse direction, \(y_{j}(\tau)=r_{j}\theta_{j}(\tau)\), where \(\theta_{j}(\tau)\) is the pendulum's angle from the vertical axis, and the momentum \(p_{j}(\tau)\) of each of the pendula, as a function of time \(\tau\), \[\dot{y}_{j} =\frac{1}{m}p_{j}, \tag{3}\] \[\dot{p_{j}} =-\frac{mg}{r_{j}}y_{j}-\kappa_{j-1,j}\left(y_{j}-y_{j-1}\right)+ \kappa_{j,j+1}\left(y_{j+1}-y_{j}\right). \tag{4}\] Experimentally, after hundreds of oscillations of the pendula, we observed significant energy Figure 2: (a, b) The pendula displacements in Experiments 1 (a) and 3 (b). Each line shows a pendulum displacement as a function of time and is shifted vertically according to the pendulum index. Bloch oscillations are clearly seen in both experiments. In Experiment 1, the wave follows adiabatically the lower band, while in Experiment 3 after one Bloch period, the system is in a superposition of waves in the lower and upper bands. The insets show zoom-ins of the pendula displacements at certain times, in which the state of the partial wave (its mean \(k\) value, and the band) can be identified (see text); (c, d) Discrete spatial Fourier transform of the pendula motion at each time, projected onto the two theoretical eigen-states of the SSH problem, Eq. (11). These represents the partial wave in the lower (bottom) and upper (top) bands. dissipation of more than 50% due to friction. However, it appeared as an overall global decay of the amplitude of all pendula and did not affect any of the effects or the measurements of the topological phases as discussed below. This can be deduced from the agreement of the experimental results with both the analytical theory and simulations. Therefore, in our theoretical analysis, energy dissipation is not considered. Crucial to the accuracy of the mapping to the SSH model, we design the system such that the natural frequency of the pendula, \(\sqrt{\frac{g}{r}}\), fulfills \(\frac{g}{r}\gg\frac{\kappa+\kappa^{\prime}}{m}\), and the gradient is small, \(\alpha\ll 1\). Namely, that \(\sqrt{\frac{g}{r}}\) is significantly larger than any other frequency scale in the problem. Under these conditions, we find that if one introduces a complex dynamic variable \[\psi_{j}\equiv e^{i\omega_{0}\tau}u_{j}, \tag{5}\] such that \[u_{j}\equiv\left(\sqrt{\frac{gm}{2r}}y_{j}+i\sqrt{\frac{1}{2m}}p_{j}\right), \tag{6}\] and where \(\omega_{0}\equiv\sqrt{\frac{g}{r}}+\sqrt{\frac{r}{g}}\frac{1}{m}\left(\kappa+ \kappa^{\prime}\right)\), then to a very good approximation, \(\psi_{j}\) evolves according to (see derivation in Supplementary Material): \[i\dot{\psi}_{j}\cong\frac{Ea}{2}\left(j-\frac{N}{2}\right)\psi_{j}-t_{j-1,j} \psi_{j-1}-t_{j,j+1}\psi_{j+1}. \tag{7}\] This equation is identical to the Schrodinger equation for the SSH model with lattice constant \(a\) in the presence of an electric field \(E\). 
The alternating hopping terms are related to the system's parameters by \[t_{j,j+1}=\begin{cases}\sqrt{\frac{r}{g}}\frac{1}{2m}\kappa\equiv t&j\;{\rm even},\\ \sqrt{\frac{r}{g}}\frac{1}{2m}\kappa^{\prime}\equiv t^{\prime}&j\;{\rm odd}.\end{cases} \tag{8}\] Given Eq. (7), the combination \[Ea=\sqrt{\frac{g}{r}}\alpha \tag{9}\] is related to the Bloch frequency by \(Ea=\frac{dk}{d\tau}\), where \(-\pi<k<\pi\) is the dimensionless wave vector that corresponds to the discrete Fourier transform with respect to unit cell numbers (note that the unit cell is a dimer). The accuracy of Eq. (7) depends on the original system's parameters. Specifically, Eqs. (3) and (4) have two small parameters. The first is \(\alpha\), and the second can be defined as \(\epsilon\equiv\sqrt{\frac{r(\kappa+\kappa^{\prime})}{gm}}\), which is the ratio between the couplings' and the pendulum's self-frequencies. Both should be made as small as possible for a faithful mapping to the SSH model. ### Experimental Measurement of Bloch Oscillations and LZ Transition In order to compare the phenomena seen in Figs. 2a,b to the prediction of Eq. (7), we analyse them in Fourier space by performing two Fourier transforms over the odd-\(j\) "a" sites and the even-\(j\) "b" sites. On the theoretical level, Eq. (7) with zero electric field leads to two energy bands with eigenvalues \(\omega_{k}=\pm v_{k}\) and eigenstates in the \((a,b)\) dimer inner space \(\xi_{k,1/2}\equiv\frac{1}{\sqrt{2}}(1,\pm e^{i\varphi_{k}})^{T}\), where \(v_{k}e^{i\varphi_{k}}=t+t^{\prime}e^{-ik}\) (see Supplementary Material for details). Due to the small electric field, during the Bloch oscillation, an initial wave-packet changes the value of its central wavenumber \(k\) in momentum space at a constant rate, periodically sampling the whole Brillouin zone \(-\pi<k<\pi\). At \(k=\pi\) the gap is minimal, and depending on the rate \(\frac{dk}{d\tau}=Ea\), the wave packet evolution can vary, from following adiabatically the lower band, to jumping to the upper band through the LZ transition, to being in a superposition of the two bands; see illustration in Fig. 1c. The observation of Bloch oscillations is more convincing in \(k\)-space with respect to the unit-cell (dimer) number. Thus, each of the three experiments was further analyzed by taking the complex values of \(u_{j}\) obtained from Eq. (5) using the measured values of \(y_{j}\) and \(p_{j}\), and performing, at each point in time, two discrete Fourier transforms of the wavefunctions in the odd "a" sites \(u_{2l+1}\) and of the even "b" sites \(u_{2l+2}\) and defining \[u_{k,a}\equiv\sum_{l=0}^{N/2-1}u_{2l+1}e^{-ilk},\quad u_{k,b}\equiv\sum_{l=0}^{N/2-1}u_{2l+2}e^{-ilk}. \tag{10}\] We project the resulting vector \((u_{k,a},u_{k,b})^{T}\) onto the analytically derived lower- and upper-energy eigenstates of the SSH model \(\xi_{k,1/2}\), \[u_{k,1} \equiv \frac{1}{\sqrt{2}}\left(u_{k,a}+e^{-i\varphi_{k}}u_{k,b}\right),\] \[u_{k,2} \equiv \frac{1}{\sqrt{2}}\left(u_{k,a}-e^{-i\varphi_{k}}u_{k,b}\right), \tag{11}\] where \(\varphi_{k}\) is estimated from the experimental \(\kappa\) and \(\kappa^{\prime}\), and their mapping to \(t\) and \(t^{\prime}\). The result is shown in Fig. 2c for Experiment 1 and in Fig. 2d for Experiment 3. One can see that the wave-packet started its evolution almost entirely in the lower band. 
Indeed, the initial condition \(u_{j}(\tau=0)\) implemented by the cut board was designed to be a Gaussian in the lower band, centered at \(k=0\) with \(\Delta k\approx 0.35\) small compared to the Brillouin zone dimensionless size of \(2\pi\). At early times, the Gaussian in \(k\)-space stayed in the lower band and shifted to lower values of \(k\) at a constant rate, until reaching the middle of the Bloch cycle, corresponds to \(k=\pm\pi\), as can be clearly observed from the zoom-in in the insets of the corresponding figures in \(j\)-space. The wave-packet continued to evolve along the lower band in Experiment 1 and 2, whereas in Experiment 3 the wave split and some of its amplitude was transferred into the upper-band via a LZ transition. This could be easily spotted in \(j\)-space, because the continuing Bloch oscillation at later times made the two parts of the wave-packet move in different directions, due to the opposite signs of the group velocities \(\frac{d\omega_{k}}{dk}\) of the two bands. Furthermore, the zoom-ins in \(j\)-space clearly show the two different inner dimer states of the two bands at \(k=0\) - the state (1,-1) for the upper band and the state (1,1) for the lower band. In this respect, the real space evolution of the wave-packets was coupled to the pseudo-spin degree of freedom defined by the sublattice (i.e, the upper and lower bands) - similar to the situation in the Stern-Gerlach experiment. ### Validation of the Mapping with Numerical Simulations Simulating the system can further demonstrate the strength and robustness of the mapping for displaying quantum-analog effects in classical mechanical systems, over a wide range of parameter values. We simulated the system of coupled pendula by solving numerically Eqs. (3,4) for various sets of parameters \(N\), \(\alpha\), \(r\), \(\kappa\), and \(\kappa^{\prime}\). We used the 4-5 Runge-Kutta method with both absolute and relative tolerances of \(10^{-9}\). We checked the accuracy of the mapping by quantitatively predicting the details of Bloch oscillations and the LZ transition. Figure 3a verifies the validity of the theoretical expression for the Bloch frequency \(\frac{dk}{d\tau}=\sqrt{\frac{q}{r}}\alpha\), by showing excellent agreement as the parameters \(\alpha,g,r\) were varied over considerable ranges. Other features of the oscillations, such as their amplitude, are also well predicted by the mapping to the SSH model, see Supplementary Material for details. The LZ diabatic transition probability is given by \(P_{D}=\exp{(-2\pi V^{2}/d)}\), where \(V\) is the off-diagonal element and \(d\) is the rate of change of the diagonal part of the Hamiltonian in the LZ model [37; 38]. Given the energy bands in the SSH model and the mapping in Eq. (8), we can express \(P_{D}\) using the pendula parameters (see derivation in the Supplementary Material), \[P_{D}=\exp\left[-\frac{\pi r\left(\kappa-\kappa^{\prime}\right)^{2}}{2mg\alpha \sqrt{\kappa\kappa^{\prime}}}\right]. \tag{12}\] Figure 3 shows results of simulations of Eqs. (3) and (4) using \(N=200\) pendula, using an initial Gaussian wavepacket in the lower band, for various values of \(g,r,\kappa\) and \(\kappa^{\prime}\). The relative energy transfer to the upper band after one Bloch oscillation, at \(\tau=T_{B}=2\pi/\frac{dk}{d\tau}\), observed in the simulations, \[P_{D,\mathrm{sim}}\approx\frac{\sum_{k}\left|u_{k,2}(\tau=T_{B})\right|^{2}}{ \sum_{k}\left|u_{k,1}(\tau=0)\right|^{2}} \tag{13}\] was compared to the theoretical expression in Eq. (12). 
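As a rough guide to how such a simulation can be set up, the sketch below is our own simplified re-implementation (not the authors' code): it integrates Eqs. (3)-(4) with an RK45 solver, builds the complex amplitudes of Eq. (6), projects them onto the two bands via Eqs. (10)-(11), and estimates the transfer of Eq. (13) after one Bloch period for comparison with Eq. (12). The initial-state construction and the value of \(g\) are simplifying assumptions; the remaining parameters follow the values quoted in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- parameters (from the text, Experiment-3-like couplings; g is assumed) ------
N, m, g, r, alpha = 200, 0.35, 9.81, 0.76, 2.4e-3
kappa, kappa_p = 0.082, 0.064
w0 = np.sqrt(g / r)
t_hop, t_hop_p = np.sqrt(r / g) * kappa / (2 * m), np.sqrt(r / g) * kappa_p / (2 * m)
T_B = 2 * np.pi / (w0 * alpha)                                 # Bloch period via Eq. (9)

j = np.arange(1, N + 1)
r_j = 1.0 / ((1.0 / r) * (1.0 + alpha * (j - N / 2)))          # Eq. (1)
k_bond = np.where(j[:-1] % 2 == 0, kappa, kappa_p)             # Eq. (2)

def rhs(tau, z):
    y, p = z[:N], z[N:]
    d = y[1:] - y[:-1]
    spring = np.zeros(N)
    spring[:-1] += k_bond * d                                  # force from right bond
    spring[1:]  -= k_bond * d                                  # force from left bond
    return np.concatenate([p / m, -(m * g / r_j) * y + spring])  # Eqs. (3)-(4)

# --- initial condition: Gaussian wavepacket in the lower band around k = 0 ------
M = N // 2                                                     # number of dimers
ks = 2 * np.pi * np.fft.fftfreq(M)
phi_k = np.angle(t_hop + t_hop_p * np.exp(-1j * ks))
amp = np.exp(-ks**2 / (2 * 0.35**2)) * np.exp(-1j * ks * (M // 2))   # centered in the bulk
u = np.empty(N, dtype=complex)
u[0::2] = np.fft.ifft(amp) / np.sqrt(2)                        # lower band: (1, e^{i phi_k})/sqrt(2)
u[1::2] = np.fft.ifft(amp * np.exp(1j * phi_k)) / np.sqrt(2)
y0 = np.sqrt(2 * r / (g * m)) * u.real                         # invert Eq. (6)
p0 = np.sqrt(2 * m) * u.imag

sol = solve_ivp(rhs, (0.0, T_B), np.concatenate([y0, p0]),
                method="RK45", rtol=1e-9, atol=1e-9, t_eval=[0.0, T_B])

def band_weights(y, p):
    u = np.sqrt(g * m / (2 * r)) * y + 1j * np.sqrt(1 / (2 * m)) * p   # Eq. (6)
    uka, ukb = np.fft.fft(u[0::2]), np.fft.fft(u[1::2])                # Eq. (10)
    uk1 = (uka + np.exp(-1j * phi_k) * ukb) / np.sqrt(2)               # Eq. (11)
    uk2 = (uka - np.exp(-1j * phi_k) * ukb) / np.sqrt(2)
    return np.sum(np.abs(uk1)**2), np.sum(np.abs(uk2)**2)

w1_0, _ = band_weights(sol.y[:N, 0], sol.y[N:, 0])
_, w2_T = band_weights(sol.y[:N, 1], sol.y[N:, 1])
P_sim = w2_T / w1_0                                                    # Eq. (13)
P_theory = np.exp(-np.pi * r * (kappa - kappa_p)**2
                  / (2 * m * g * alpha * np.sqrt(kappa * kappa_p)))    # Eq. (12)
print(f"P_D (simulation) ~ {P_sim:.3f}   P_D (Eq. 12) ~ {P_theory:.3f}")
```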
The agreement between the simulation and the analytical formula extends over almost three orders of magnitude with no fitting parameters and over a considerable range in \(g,r,\kappa,\kappa^{\prime}\) and \(\alpha\) thus demonstrating the robustness and insensitivity of the mapping to changing parameters. When \(\epsilon\) is less than 0.2, the error in predicting the Figure 3: (a) The simulated rate of Bloch oscillations compared with the theoretical prediction \(\frac{dk}{d\tau}=\sqrt{\frac{a}{r}}\alpha\) for a significant range of the values of the parameters \(g\), \(r\), and \(\alpha\). The solid line marks the identity. The experimentally measured value (red) of the Bloch period agrees with the theoretical prediction, with the error-bar marking the spread between Experiments 1, 2, and 3. (b) The LZ diabatic transition probability as extracted from experiments and simulations, Eq. (13), vs. the theoretical prediction, Eq. (12). The simulations covered variations in all parameters, \(g\), \(r\), \(\kappa\), and \(\kappa^{\prime}\). The solid line marks the identity. Inset: relative deviation of the simulated values from the prediction. For reasonable values of \(\epsilon=\sqrt{r(\kappa+\kappa^{\prime})/gm}\), which control the accuracy of the mapping of the pendula system to the SSH model, the deviation is 5% or less. LZ transition probability is a few percent or less. As a side note, this accuracy is why both the analytical calculations and the simulations were proven to be useful in designing the experiments in this work. In Fig. 3 we also present the measured values of \(Ea\) and of \(P_{D}\) from our experiments, again with good agreement to the theoretical results. Error-bars were estimated from the spread of the results in the three experiments and from the accuracy of determining the parameters in the experiment, see the Supplementary Material for more details. ### Geometrical Phase Evolution and Band Topology When the values of \(\kappa\) and \(\kappa^{\prime}\) are switched (such was the situation between Experiments 1 and 2), the topology of the bands of the SSH Hamiltonian switches from trivial to non-trivial. The topology of the band is imprinted in the evolution of the wave-packet in the form of a global \(\pi\) phase difference after one Bloch period. Contrary to the situation in the quantum case, the global phase of the classical pendula oscillations variables \(u_{j}\) (and of \(\psi_{j}\)) was easily measurable. Therefore, in our classical system, it is possible to fix a gauge and extract the whole evolution of the geometrical phase as a function of the quasi-momentum \(k\) during the Bloch oscillation period that leads to this \(\pi\) phase shift. This is achieved by extracting the phase of the complex wave function component in Fourier space, \(u_{k(t),1}\) in Fig. 2c, at the maximum of the wavepacket as a function of \(\tau\). To understand the relation between this phase of the wavepacket maximium and the topological phase, we solve the dynamic equation of the SSH model in the limit of adiabatic evolution. In the Supplementary Material, we show that in the case of an adiabatic evolution in the lower band, the solution of Eq. (7) is such that the maximum of the wave packet moves in momentum space along the line \(k(\tau)=(-Ea\tau+\pi)\ mod\ 2\pi-\pi\) (the wavepacket maximum in Fig. 2c). 
Along this line, \(u_{k(\tau),1}\) has an additional phase term \(e^{i\phi(k,\tau)}\), which is added to the basic oscillations \(e^{i\omega_{0}t}\) and is given by \[\left.\phi\right|_{k=-Ea\tau}=\phi_{0}-\frac{N}{4}Ea\tau-\int_{0}^{\tau}\omega _{k^{\prime}=-Ea\tau^{\prime}}d\tau^{\prime}+\frac{1}{2}\left.\varphi_{k} \right|_{0}^{-Ea\tau}. \tag{14}\] Only the last term, which is the geometrical phase evolution from wave number \(k=0\) to \(k=-Ea\tau\), changes between Experiments 1 and 2, namely when switching \(\kappa\leftrightarrow\kappa^{\prime}\). This is because in the first term, \(\phi_{0}\) depends only on the initial condition, the second term is related to the global potential being centered around the \(l=N/4\) unit cell, and the third term is the dynamic phase evolution along the line \(k^{\prime}=-Ea\tau^{\prime}\), \(\tau^{\prime}\in[0,\tau]\). This expression depends on the specific choice of gauge - the choice of the global phase for the eigenstate \(\xi_{k,1}\equiv\frac{1}{\sqrt{2}}(1,e^{i\varphi_{k}})^{T}\) at every \(k\), as manifested in Eq. (11) above. We conclude that the only difference in the phase \(\phi\) between Experiments 1 and 2 originates from the changes in the geometrical phase. Figure 4b shows the phase difference of \(u_{k(\tau),1}\left(\tau\right)\) along the line \(k(\tau)=-Ea\tau\) (the maximum of the wave packet) in the two experiments. The complex values of \(u_{a,-eE\tau}\) and \(u_{b,-eE\tau}\) in each experiment were extracted from the Fourier transform of the measured pendula displacements, followed by 2D interpolation in Fourier space. While the phase in each experiment alone completed about 2500 radians during the first Bloch period (inset), the evolution of the phase _difference_ between the two experiments followed to a very good approximation the theoretically predicted difference of the geometrical phase evolution, \(\frac{1}{2}\arg(\kappa+\kappa^{\prime}e^{-iEat})\), and completed a \(\pi\) phase shift when completing a full Bloch oscillation, within a measurement accuracy of 0.2 radians. Note that the phase difference is gauge independent, as long as the same gauge is used in both experiments. Information about the local geometry and global topology of the band can also be extracted from a single experiment by comparing the phase evolution of \(u_{k,a}\left(\tau\right)\) and \(u_{k,b}\left(\tau\right)\). The phase difference of the two entries of \(\xi_{k,1}\) equals \(\varphi_{k}\), independent of the choice of gauge. In Fig. 4a we plot the phase difference between \(u_{k,a}\left(\tau\right)\) and \(u_{k,b}\left(\tau\right)\) in the two experiments. This phase difference is shown to follow the theoretical curve \(\arg(\kappa+\kappa^{\prime}e^{-iEat})\), which results in a different winding number for switching \(\kappa\) and \(\kappa^{\prime}\). The small ripples of the experimental curves, which appear to dominate the accuracy in measuring the geometrical phase and the winding are attributed to a small but finite LZ diabetic transition (1-2\(\%\)). Similar ripples were seen in simulations that we performed with similar conditions and can, in principle, be improved by future experiment with e.g. a larger band gap. ## Discussion We have shown that a system of coupled pendula with alternating couplings and a mild gradient in the pendula's lengths obeys remarkably accurately the Schrodinger equation of the topological SSH model in an external electric field. 
As such, its dynamics exhibits Bloch oscillations, LZ transitions that are followed by entanglement between the band and spatial degrees of freedom, and the appearance of the non-trivial topological phase of the bands. These features that are usually attributed to microscopic quantum systems now appear on a macroscopic classical system, which is easily observed in full detail through direct measurement of the evolution of its "wave function". This enabled us to quantify the Bloch oscillations rate and the LZ diabatic transition probability that both perfectly agree with theoretical predictions. Finally, to our knowledge, we are the first to directly extract the topology of the bands from an experimental measurement in the bulk of a macroscopic system - both within one evolution of the system by measuring the relative phase of the component of the dimer, and by comparing the phase evolution in two experiments performed on the topological and the trivial states of the system. Our measurement of the Zak phase was remarkably accurate - an accuracy of \(0.2\) radians after \(800\pi\) radians evolution - a relative error of less than \(10^{-4}\). This enabled us to easily observe the accumulated Zak phase of \(\pi\). The fact that these phenomena could be accurately measured in a simple, macroscopic, mechanical system is not trivial in several aspects; First, the mapping we used is local but approximate, in contrast to the one used in Ref. [31] for the classification of the topological phases, which was exact but not local. Here we maintained the locality in order to have a simple observation of the wave-function evolution. Second, we proved that, unlike the quantum case, the significant dissipation of energy to heat in our experiment did not affect the coherence of the wave. Moreover, we note that there were additional degrees of freedom in the motion of each pendulum - such as motion along the axis of beam (on top of the oscillations in the perpendicular direction that we focused on), and rotation of each pendulum due to the unbalanced force moment acting on each pendulum from its coupled neighbours. Apparently those did not affect the coherence either, even to the above mentioned degree of accuracy. These results open the way to quantitatively explore further quantum-like phenomena in classical systems. Future theoretical and experimental works can explore the non-linear nature of physical pendula. Another possible direction of study is to examine the effect of point defects or imperfect lattices on the topological effects. In addition, note that the SSH one-dimensional topological insulator is only one example among many quantum lattice models with non-trivial features; our work can be generalized quite naturally to other systems by considering different higher-dimensional lattices of oscillators. This includes single and double-layer honeycomb lattices, and 2D or even 3D topological insulators. ###### Acknowledgements. We thank Ari Sirote, Baruch Meirovich, Dafna Shokef, Hadas Shokef, Maital Silver, Raziel Katz, Tomer Sigalov and Yaara Shokef for technical assistance, and Lea Beilkin, Eran Sela, Nissim Ofek, and Yakir Hadad for useful discussions. This research was supported in part by the Israeli Ministry of Science and Technology Grant No. 3-15671. RI is supported by the Israeli Science Foundation under grant No. 1790/18 and U.S.-Israel Binational Science Foundation (BSF) Grant No. 2018226. YS is supported by the Israeli Science Foundation under grant No. 1899/20.
2303.13256
Analyzing Innermost Runtime Complexity Through Tuple Interpretations
Time complexity in rewriting is naturally understood as the number of steps needed to reduce terms to normal forms. Establishing complexity bounds to this measure is a well-known problem in the rewriting community. A vast majority of techniques to find such bounds consist of modifying termination proofs in order to recover complexity information. This has been done for instance with semantic interpretations, recursive path orders, and dependency pairs. In this paper, we follow the same program by tailoring tuple interpretations to deal with innermost complexity analysis. A tuple interpretation interprets terms as tuples holding upper bounds to the cost of reduction and size of normal forms. In contrast with the full rewriting setting, the strongly monotonic requirement for cost components is dropped when reductions are innermost. This weakened requirement on cost tuples allows us to prove the innermost version of the compatibility result: if all rules in a term rewriting system can be strictly oriented, then the innermost rewrite relation is well-founded. We establish the necessary conditions for which tuple interpretations guarantee polynomial bounds to the runtime of compatible systems and describe a search procedure for such interpretations.
Liye Guo, Deivid Vale
2023-03-23T13:38:32Z
http://arxiv.org/abs/2303.13256v1
# Analyzing Innermost Runtime Complexity ###### Abstract Time complexity in rewriting is naturally understood as the number of steps needed to reduce terms to normal forms. Establishing complexity bounds to this measure is a well-known problem in the rewriting community. A vast majority of techniques to find such bounds consist of modifying termination proofs in order to recover complexity information. This has been done for instance with semantic interpretations, recursive path orders, and dependency pairs. In this paper, we follow the same program by tailoring tuple interpretations to deal with innermost complexity analysis. A tuple interpretation interprets terms as tuples holding upper bounds to the cost of reduction and size of normal forms. In contrast with the full rewriting setting, the strongly monotonic requirement for cost components is dropped when reductions are innermost. This weakened requirement on cost tuples allows us to prove the innermost version of the compatibility result: if all rules in a term rewriting system can be strictly oriented, then the innermost rewrite relation is well-founded. We establish the necessary conditions for which tuple interpretations guarantee polynomial bounds to the runtime of compatible systems and describe a search procedure for such interpretations. ## 1 Introduction In the step-by-step computational model induced by rewriting, time complexity is naturally understood as the number of rewriting steps needed to reach normal forms. Usually, the cost of firing a redex (i.e., performing a computational step) is assumed constant. So the intricacies of a low-level rewriting realization (e.g., a concrete rewriting engine implementation) are ignored. This assumption does not pose a problem as long as the low-level time complexity needed to apply a rule is kept low. Additionally, this abstract approach has the advantage of being independent of the specific hardware platform evaluating the rewriting system at hand. In this rewriting setting, a complexity function bounds the length of rewrite sequences and is parametrized by the size of the starting term of the derivation. Two distinct complexity notions are commonly considered in the literature: derivational and runtime complexity, and they differ by the restrictions imposed on the initial term of derivations. On the one hand, derivational complexity imposes no restriction on the set of initial terms. Intuitively, it captures the worst-case behavior of reducing a term to normal form. On the other hand, runtime complexity requires basic initial terms which, conceptually, are terms where a single function call is performed on data (e.g., integers, lists, and trees) as arguments. If programs are expressed by rewriting, their execution time is closely related to the runtime complexity of the associated rewrite system. Similarly related are programs using call-by-value evaluation strategy and innermost rewrite systems. Therefore, by combining these two concepts, we obtain a connection between the cost analysis of call-by-value programs and the runtime complexity analysis of innermost term rewriting. More importantly, due to the abstract nature of rewriting, it is feasible to forgo any specific programming language detail and still derive useful term rewriting results that may carry over to programs. For an overview of the applicability of rewriting to program complexity the reader is referred to [2, 19]. 
Therefore, a rewriting approach to program complexity allows us to fully concentrate on finding techniques to establish bounds to the derivational or runtime complexity functions. A natural way to determine these bounds is adapting the proof techniques used to show termination to deduce the complexity naturally induced by the method. There is a myriad of works following this program. To mention a few, see [3, 4, 6, 13, 20, 14] for interpretation methods, [5, 12, 24] for lexicographic and path orders, and [11, 21] for dependency pairs. In this paper, we follow the same idea and concentrate on investigating the existence of upper bounds to the innermost runtime complexity for applicative systems. The termination method on which we base our complexity analysis framework upon is tuple interpretations [16]. Tuple interpretations are an instance of the interpretation method. Thus, we seek to interpret terms in such a way that the rewrite relation can be embedded in a well-founded ordering. More precisely, we choose an interpretation domain \(A\) which is a set together with a well-founded order \(>\) over \(A\) and interpret terms as elements of \(A\) compositionally. This interpretation of terms is such that whenever a rewriting is fired, i.e., \(s\to t\), the interpretations \(\llbracket s\rrbracket\) and \(\llbracket t\rrbracket\) of \(s\) and \(t\) satisfy \(\llbracket s\rrbracket>\llbracket t\rrbracket\). Hence, a rewriting step on terms implies a strict decrease on \(A\). The well-foundedness of such domains together with this compatibility requirement on reduction guarantee that no infinite reduction sequence on terms exists. The defining characteristic of tuple interpretations is to allow for a split of the complexity measure into abstract notions of cost and size. When distilled into its essence, the ingredient we need to express the concepts of cost and size is a product \(\mathcal{C}\times\mathcal{S}\) of a well-founded set \(\mathcal{C}\) -- the cost set -- and a quasi-ordered set \(\mathcal{S}\) -- the size set. Intuitively, the cost tuples in \(\mathcal{C}\) bound the number of rewriting steps needed to reach normal forms, which is in line with the aforementioned rewriting cost model. Meanwhile, the size tuples in \(\mathcal{S}\) are more general. We can use integers, reals, and terms themselves as size. Following the treatment in [16], the construction of cost-size products is done inductively on the structure of types. So we map each type \(\sigma\) to a cost-size product \(\mathcal{C}_{\sigma}\times\mathcal{S}_{\sigma}\). Hence, in this paper, our first-order term formalism follows a type discipline. In order to extend the usability of our techniques, we would like to not only exhibit bounds to the runtime complexity function but also determine sufficient conditions for its feasibility, that is, the existence of polynomial upper bounds. In the eighties Huet and Oppen [15] conjectured that polynomial interpretations are sufficient to evince feasibility, which was disproved by Lautemann [17] in the same decade. Indeed, polynomial interpretations induce a double exponential upper bound on the derivation length, as shown by the seminal work of Hofbauer and Lautemann [14]. Feasibility can be recovered by imposing additional conditions on interpretations. To the best of our knowledge, Cichon and Lescanne [6] were the first to propose such conditions even though their setting is restricted to number theoretic functions only. 
Similar results are proved in [4], where the authors provide rewriting characterizations of complexity classes using bounds for the interpretation of data constructors. These same conditions appear in the higher-order setting, see [2, 16]. In the present paper, we follow a similar approach to that in [4] and show that we can recover those classical results by bounding size tuples in interpretations. Tuple interpretations do not provide a complete termination proof method: there are terminating systems for which interpretations cannot be found. Consequently, it does not induce a complete complexity analysis framework either. Notwithstanding, it has the potential to be very powerful if we choose the cost-size sets wisely. A second limitation is that the search for interpretations is undecidable in general, which is expected already in the polynomial case [18]. Undecidability never hindered computer scientists' efforts on mechanizing difficult problems, however. Indeed, several proof search methods have been developed over the years to find interpretations automatically [3, 7, 8, 13, 25]. Contribution.We provide a formal definition of cost-size products (Definition 1) and use it to interpret types in Definition 3. Cost-size products provide an interpretation domain for cost-size tuple algebras, Definition 6. In Lemmas 2 and 4 we show the soundness of this approach. In Definition 5 we introduce a type-safe application operator on cost-size products and prove its strong monotonicity, an important ingredient to show the Compatibility Theorem 1. We establish the termination of Toyama's system in Example 3, showing that Theorem 1 correctly captures innermost termination in our setting. We provide sufficient conditions so that feasible bounds on innermost runtime complexity can be achieved in Lemmas 7 and 8. Outline.In Section 2, we fix notation and recall basic notions of rewriting syntax, basic terminology on the complexity of rewriting, and review our notation for sets, orders, and functions. In Section 3, we tailor tuple interpretations to the innermost setting and prove the innermost version of the compatibility theorem. We proceed to establish complexity bounds to the innermost runtime complexity in Section 4. In Section 5, we present preliminary work on automation techniques to find cost-size tuple interpretations. We conclude the paper in Section 6. ## 2 Preliminaries TRSs and Innermost Rewriting.We consider simply typed first-order term rewriting systems in curried notation. Fix a set \(\mathcal{B}\), whose elements are called _sorts_. The set \(\mathcal{T}_{\mathcal{B}}\) of _types_ is generated by the grammar \(\mathcal{T}_{\mathcal{B}}\mathrel{\mathop{:}}=\mathcal{B}\mid\mathcal{B} \Rightarrow\mathcal{T}_{\mathcal{B}}\). Each type is written as \(\iota_{1}\Rightarrow\cdots\Rightarrow\iota_{m}\Rightarrow\kappa\) where all \(\iota_{i}\) and \(\kappa\) are sorts. A _signature_ is a set \(\mathcal{F}\) of symbols together with an arity function \(\mathsf{ar}\) which associates to each \(\mathsf{f}\in\mathcal{F}\) a type \(\sigma\in\mathcal{T}_{\mathcal{B}}\). We call the triple \((\mathcal{B},\mathcal{F},\mathsf{ar})\) a _syntax signature_. For each sort \(\iota\), we postulate a set \(\mathcal{X}_{\iota}\) of countably many variables and assume that \(\mathcal{X}_{\iota}\cap\mathcal{X}_{\iota^{\prime}}=\emptyset\) if \(\iota\neq\iota^{\prime}\). Let \(\mathcal{X}\) denote \(\bigcup_{\iota}\mathcal{X}_{\iota}\) and assume that \(\mathcal{F}\cap\mathcal{X}=\emptyset\). 
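To make the preceding definitions concrete, the following minimal sketch (ours, not part of the paper) represents sorts, simple types of the form \(\iota_{1}\Rightarrow\cdots\Rightarrow\iota_{m}\Rightarrow\kappa\), and a signature assigning an arity to each function symbol; the sample symbols anticipate Example 1 below, and Python 3.10+ is assumed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sort:
    name: str                      # an element of the set B of sorts

@dataclass(frozen=True)
class Arrow:
    dom: Sort                      # every type is iota_1 => ... => iota_m => kappa,
    cod: "Type"                    # so the domain of an arrow is always a sort

Type = Sort | Arrow

def arrow(*sorts: Sort) -> "Type":
    """Build iota_1 => ... => iota_m => kappa from a non-empty sequence of sorts."""
    *doms, cod = sorts
    ty: Type = cod
    for d in reversed(doms):
        ty = Arrow(d, ty)
    return ty

nat, lst = Sort("nat"), Sort("list")
signature = {                      # the arity function ar : F -> T_B
    "0": nat, "s": arrow(nat, nat),
    "nil": lst, "cons": arrow(nat, lst, lst),
    "add": arrow(nat, nat, nat), "sum": arrow(lst, nat),
}
```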
The set \(\mathbb{T}\) of _pre-terms_ is generated by the grammar \(\mathbb{T}\mathrel{\mathop{:}}=\mathcal{F}\mid\mathcal{X}\mid(\mathbb{T}\, \mathbb{T})\). The set \(T(\mathcal{F},\mathcal{X})\) of _terms_ consists of pre-terms which can be typed as follows: (i) \(\mathsf{f}\colon\sigma\) if \(\mathsf{ar}(\mathsf{f})=\sigma\), (ii) \(x\colon\iota\) if \(x\in\mathcal{X}_{\iota}\), and (iii) \((s\,t)\colon\tau\) if \(s\colon\iota\Rightarrow\tau\) and \(t\colon\iota\). Application of terms is left-associative, so we write \(s\,t\,u\) for \(((s\,t)\,\,u)\). Let \(\mathsf{vars}(s)\) be the set of variables occurring in \(s\). A _ground term_ is a term \(s\) such that \(\mathsf{vars}(s)=\emptyset\). A symbol \(\mathsf{f}\in\mathcal{F}\) is called the _head symbol_ of \(s\) if \(s=\mathsf{f}\,\,s_{1}\ldots s_{k}\). A _subterm_ of \(s\) is a term \(t\) (we write \(s\unlhd t\)) such that (i) \(s=t\), or (ii) \(t\) is a subterm of \(s^{\prime}\) or \(s^{\prime\prime}\) when \(s=s^{\prime}\,\,s^{\prime\prime}\). A _proper subterm_ of \(s\) is a subterm of \(s\) which is not equal to \(s\). A _substitution_\(\gamma\) is a type-preserving map from variables to terms such that the set \(\mathsf{dom}(\gamma)=\{x\in\mathcal{X}\mid\gamma(x)\neq x\}\) is finite. Every substitution \(\gamma\) extends to a type-preserving map from terms to terms, whose image on \(s\) is written as \(s\gamma\), as follows: (i) \(\mathsf{f}\gamma=\mathsf{f}\), (ii) \(x\gamma=\gamma(x)\), and (iii) \((s\,t)\gamma=(s\gamma)\,\,(t\gamma)\). A relation \(\rightarrow\) on terms is _monotonic_ if \(s\to s^{\prime}\) implies \(t\,\,s\to t\,\,s^{\prime}\) and \(s\,\,u\to s^{\prime}\,\,u\) for all terms \(t\) and \(u\) of appropriate types. A _rewrite rule_\(\ell\to r\) is a pair of terms of the same type such that \(\ell=\mathsf{f}\,\,\ell_{1}\ldots\ell_{k}\) and \(\mathsf{vars}(\ell)\supseteq\mathsf{vars}(r)\). A _term rewriting system_ (TRS) \(\mathcal{R}\) is a set of rewrite rules. The _rewrite relation_\(\rightarrow_{\mathcal{R}}\) induced by \(\mathcal{R}\) is the smallest monotonic relation on terms such that \(\ell\gamma\rightarrow_{\mathcal{R}}r\gamma\) for all rules \(\ell\to r\in\mathcal{R}\) and substitutions \(\gamma\). A _reducible expression_ (redex) is a term of form \(\ell\gamma\) for some rule \(\ell\to r\) and substitution \(\gamma\). A term is in _normal form_ if none of its subterms is a redex. A TRS \(\mathcal{R}\) is _terminating_ if no infinite rewrite sequence \(s\rightarrow_{\mathcal{R}}s^{\prime}\rightarrow_{\mathcal{R}}s^{\prime\prime} \rightarrow_{\mathcal{R}}\cdots\) exists. Every rewrite rule \(\ell\to r\)_defines_ a symbol \(\mathsf{f}\), namely, the head symbol of \(\ell\). For each \(\mathsf{f}\in\mathcal{F}\), let \(\mathcal{R}_{\mathsf{f}}\) denote the set of rewrite rules that define \(\mathsf{f}\) in \(\mathcal{R}\). A symbol \(\mathsf{f}\in\mathcal{F}\) is a _defined symbol_ if \(\mathcal{R}_{\mathsf{f}}\neq\emptyset\); otherwise, f is called a _constructor_. Let \(\mathcal{D}\) be the set of defined symbols and \(\mathcal{C}\) the set of constructors. So \(\mathcal{F}=\mathcal{D}\cup\mathcal{C}\). A _data term_ is a term of the form \(\mathsf{c}\ d_{1}\ \ldots\ d_{k}\) where \(\mathsf{c}\) is a constructor and each \(d_{i}\) is a data term. A _basic term_ is a term of type \(\mathsf{\iota}\) and of form \(\mathsf{f}\ d_{1}\ \ldots\ d_{m}\) where \(\mathsf{\iota}\) is a sort, \(\mathsf{f}\) is a defined symbol and all \(d_{1},\ldots,d_{m}\) are data terms. 
We let \(T_{b}(\mathcal{F})\) denote the set of all basic terms. **Example 1**: We fix \(\mathsf{nat}\) and \(\mathsf{list}\) for the sorts of natural numbers and lists of natural numbers, respectively. In the below TRS, \(\mathsf{0}\!:\!\mathsf{nat}\), \(\mathsf{s}\!:\!\mathsf{nat}\Rightarrow\mathsf{nat}\), \(\mathsf{nil}\!:\!\mathsf{list}\) and \(\mathsf{cons}\!:\!\mathsf{nat}\Rightarrow\mathsf{list}\Rightarrow\mathsf{list}\) are constructors while \(\mathsf{add},\mathsf{minus},\mathsf{quot}\!:\!\mathsf{nat}\Rightarrow\mathsf{nat}\Rightarrow\mathsf{nat}\), \(\mathsf{append}\!:\!\mathsf{list}\Rightarrow\mathsf{list}\Rightarrow\mathsf{list}\), \(\mathsf{sum}\!:\!\mathsf{list}\Rightarrow\mathsf{nat}\) and \(\mathsf{rev}\!:\!\mathsf{list}\Rightarrow\mathsf{list}\) are defined symbols. \[\mathsf{add}\ x\ \mathsf{0}\to x\qquad\mathsf{sum}\ \mathsf{nil}\to\mathsf{0}\] \[\mathsf{add}\ x\ (\mathsf{s}\ y)\to\mathsf{s}\ (\mathsf{add}\ x\ y)\qquad\mathsf{sum}\ (\mathsf{cons}\ x\ q)\to\mathsf{add}\ (\mathsf{sum}\ q)\ x\] \[\mathsf{append}\ \mathsf{nil}\ l\to l\qquad\mathsf{rev}\ \mathsf{nil}\to\mathsf{nil}\] \[\mathsf{append}\ (\mathsf{cons}\ x\ q)\ l\to\mathsf{cons}\ x\ (\mathsf{append}\ q\ l)\qquad\mathsf{rev}\ (\mathsf{cons}\ x\ q)\to\mathsf{append}\ (\mathsf{rev}\ q)\ (\mathsf{cons}\ x\ \mathsf{nil})\] \[\mathsf{minus}\ x\ \mathsf{0}\to x\qquad\mathsf{quot}\ \mathsf{0}\ (\mathsf{s}\ y)\to\mathsf{0}\] \[\mathsf{minus}\ \mathsf{0}\ y\to\mathsf{0}\qquad\mathsf{quot}\ (\mathsf{s}\ x)\ (\mathsf{s}\ y)\to\mathsf{s}\ (\mathsf{quot}\ (\mathsf{minus}\ x\ y)\ (\mathsf{s}\ y))\] \[\mathsf{minus}\ (\mathsf{s}\ x)\ (\mathsf{s}\ y)\to\mathsf{minus}\ x\ y\] We restrict our attention to innermost rewriting: only redexes with no reducible proper subterms may be reduced. More precisely, the _innermost rewrite relation_ \(\rightarrow_{\mathcal{R}}^{i}\) induced by \(\mathcal{R}\) is defined as follows: 1. \(\ell\gamma\rightarrow_{\mathcal{R}}^{i}r\gamma\) if \(\ell\to r\in\mathcal{R}\) and all proper subterms of \(\ell\gamma\) are in normal form, 2. \(s\ t\rightarrow_{\mathcal{R}}^{i}s^{\prime}\ t\) if \(s\rightarrow_{\mathcal{R}}^{i}s^{\prime}\), and 3. \(s\ t\rightarrow_{\mathcal{R}}^{i}s\ t^{\prime}\) if \(t\rightarrow_{\mathcal{R}}^{i}t^{\prime}\). In this paper we only analyze innermost rewriting. So we write \(\to\) for \(\rightarrow_{\mathcal{R}}^{i}\) whenever no ambiguity arises. Derivation Height and Complexity. Given a well-founded and finitely branching relation \(\to\) on terms, we write \(s\xrightarrow{n}t\) if there is a sequence \(s=s_{0}\to\dots\to s_{n}=t\) of length \(n\). The _derivation height_ \(\mathsf{dh}(s,\to)\) of a term \(s\) with respect to \(\to\) is the length of the longest \(\to\)-sequence starting with \(s\), i.e., \(\mathsf{dh}(s,\to)=\max\{n\mid\exists t\in T(\mathcal{F},\mathcal{X}):s\xrightarrow{n}t\}\). The _absolute size_ of a term \(s\), denoted by \(|s|\), is \(1\) if \(s\) is a symbol in \(\mathcal{F}\) or a variable, and \(|s_{1}|+|s_{2}|\) if \(s=s_{1}\ s_{2}\). In order to express various complexity notions in the rewriting setting, we define the _complexity function_ as follows: \(\mathsf{comp}(n,\to,\mathcal{T})=\max\{\mathsf{dh}(s,\to)\,|\,s\in\mathcal{T}\ \text{and}\ |s|\leq n\}\). Intuitively, \(\mathsf{comp}(n,\to,\mathcal{T})\) is the length of the longest \(\to\)-sequence starting with a term from \(\mathcal{T}\) whose absolute size is at most \(n\). 
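To make innermost rewriting and derivation height concrete, here is a small self-contained interpreter (our illustration, not the authors' tool) for the \(\mathsf{add}\)/\(\mathsf{minus}\)/\(\mathsf{quot}\) fragment of Example 1. Terms are encoded as nested tuples, and the derivation height is computed by repeatedly firing innermost steps; for ground terms of this fragment, every innermost reduction to normal form has the same length, so any innermost strategy realizes \(\mathsf{dh}\). Python 3.10+ is assumed.

```python
def num(n):
    """Peano numeral s^n(0), encoded as nested tuples."""
    t = ("0",)
    for _ in range(n):
        t = ("s", t)
    return t

def apply_rule(t):
    """Contract t at the root with a rule of Example 1, or return None if no rule applies."""
    match t:
        case ("add", x, ("0",)):            return x
        case ("add", x, ("s", y)):          return ("s", ("add", x, y))
        case ("minus", x, ("0",)):          return x
        case ("minus", ("0",), _):          return ("0",)
        case ("minus", ("s", x), ("s", y)): return ("minus", x, y)
        case ("quot", ("0",), ("s", _)):    return ("0",)
        case ("quot", ("s", x), ("s", y)):  return ("s", ("quot", ("minus", x, y), ("s", y)))
        case _:                             return None

def innermost_step(t):
    """One innermost step from t, or None if t is a normal form."""
    head, *args = t
    for i, a in enumerate(args):                       # reduce arguments first
        a2 = innermost_step(a)
        if a2 is not None:
            return (head, *args[:i], a2, *args[i + 1:])
    return apply_rule(t)                               # all proper subterms are normal

def dh(t):
    """Number of innermost steps from t to its normal form."""
    steps = 0
    while (t := innermost_step(t)) is not None:
        steps += 1
    return steps

print(dh(("add", num(3), num(4))))     # 5: four uses of add x (s y), then one of add x 0
print(dh(("quot", num(7), num(2))))    # a basic term, the kind runtime complexity ranges over
```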
We summarize four particular instances in the following table:

| | derivational | runtime |
| --- | --- | --- |
| full | \(\mathsf{dc}_{\mathcal{R}}(n)=\mathsf{comp}(n,\to_{\mathcal{R}},T(\mathcal{F},\mathcal{X}))\) | \(\mathsf{rc}_{\mathcal{R}}(n)=\mathsf{comp}(n,\to_{\mathcal{R}},T_{b}(\mathcal{F}))\) |
| innermost | \(\mathsf{idc}_{\mathcal{R}}(n)=\mathsf{comp}(n,\to^{i}_{\mathcal{R}},T(\mathcal{F},\mathcal{X}))\) | \(\mathsf{irc}_{\mathcal{R}}(n)=\mathsf{comp}(n,\to^{i}_{\mathcal{R}},T_{b}(\mathcal{F}))\) |

Ordered Sets and Monotonic Functions. A _quasi-ordered set_ \((A,\sqsupseteq)\) consists of a nonempty set \(A\) and a quasi-order (reflexive and transitive) \(\sqsupseteq\) on \(A\). An _extended well-founded set_ \((A,>,\geq)\) is a nonempty set \(A\) together with a well-founded order \(>\) and a quasi-order \(\geq\) on \(A\) such that \(\geq\) is compatible with \(>\), i.e., \(x>y\) implies \(x\geq y\) and \(x>y\geq z\) implies \(x>z\). Below we refer to an extended well-founded set simply as a _well-founded set_. Given quasi-ordered sets \((A,\sqsupseteq)\) and \((B,\sqsupseteq)\), a function \(f:A\longrightarrow B\) is said to be _weakly monotonic_ if \(x\sqsupseteq y\) implies \(f(x)\sqsupseteq f(y)\). Let \(A\Longrightarrow B\) denote the set of weakly monotonic functions from \(A\) to \(B\). The comparison operator \(\sqsupseteq\) on \(B\) induces pointwise comparison on \(A\Longrightarrow B\) as follows: \(f\sqsupseteq g\) if \(f(x)\sqsupseteq g(x)\) for all \(x\in A\). This way \((A\Longrightarrow B,\sqsupseteq)\) is also a quasi-ordered set. Given well-founded sets \((A,>,\geq)\) and \((B,>,\geq)\), a function \(f:A\longrightarrow B\) is said to be _strongly monotonic_ if \(x>y\) implies \(f(x)>f(y)\) and \(x\geq y\) implies \(f(x)\geq f(y)\).

## 3 Tuple Interpretations

In this section, we introduce the notion of tuple algebras in the context of innermost rewriting. We start by interpreting types as cost-size products, then interpret terms as cost-size tuples, and finally prove the innermost version of the compatibility theorem.

### Types as Cost-Size Products

We start by constructing a cost-size denotational semantics for the types in \(\mathcal{T}_{\mathcal{B}}\). The goal is to define a function \((\!|\cdot|\!)\) that maps each type \(\sigma\in\mathcal{T}_{\mathcal{B}}\) to a well-founded set \((\!|\sigma|\!)\), the cost-size interpretation of \(\sigma\).

**Definition 1** (Cost-Size Products): Given a well-founded set \((\mathcal{C},>,\geq)\), called the _cost set_, and a quasi-ordered set \((\mathcal{S},\sqsupseteq)\), called the _size set_, we call \(\mathcal{C}\times\mathcal{S}\) the _cost-size product_ of \((\mathcal{C},>,\geq)\) and \((\mathcal{S},\sqsupseteq)\), and its elements _cost-size tuples_.

Given a cost-size product \(\mathcal{C}\times\mathcal{S}\), the well-foundedness of \(\mathcal{C}\) and the quasi-ordering on \(\mathcal{S}\) naturally induce an ordering structure on the cartesian product \(\mathcal{C}\times\mathcal{S}\) as follows.

**Definition 2** (Product Order): Let \((\mathcal{C},>,\geq)\times(\mathcal{S},\sqsupseteq)\) be a cost-size product. Then we define the relations \(\succ,\succeq\) over \(\mathcal{C}\times\mathcal{S}\) as follows: for all \(\langle x,y\rangle\) and \(\langle x^{\prime},y^{\prime}\rangle\) in \(\mathcal{C}\times\mathcal{S}\),
1. \(\langle x,y\rangle\succ\langle x^{\prime},y^{\prime}\rangle\) if \(x>x^{\prime}\) and \(y\sqsupseteq y^{\prime}\), and
2. \(\langle x,y\rangle\succeq\langle x^{\prime},y^{\prime}\rangle\) if \(x\geq x^{\prime}\) and \(y\sqsupseteq y^{\prime}\).

Next, we show that cost-size products ordered as above form a well-founded set.

**Lemma 1**: _The triple \((\mathcal{C}\times\mathcal{S},\succ,\succeq)\) is a well-founded set._

Proof.: It follows immediately from Definition 2 that \(\succ,\succeq\) are transitive and \(\succeq\) is reflexive. To prove that \(\succ\) is well-founded, note that the existence of \(\langle x_{1},y_{1}\rangle\succ\langle x_{2},y_{2}\rangle\succ\cdots\) would imply \(x_{1}>x_{2}>\cdots\), which cannot be the case since \(>\) is well-founded. We still need to check that \(\succeq\) is compatible with \(\succ\).
* Suppose \(\langle x,y\rangle\succ\langle x^{\prime},y^{\prime}\rangle\). Since \(x>x^{\prime}\) implies \(x\geq x^{\prime}\), we have \(\langle x,y\rangle\succeq\langle x^{\prime},y^{\prime}\rangle\).
* Suppose \(\langle x,y\rangle\succ\langle x^{\prime},y^{\prime}\rangle\succeq\langle x^{\prime\prime},y^{\prime\prime}\rangle\). Since \(x>x^{\prime}\geq x^{\prime\prime}\) implies \(x>x^{\prime\prime}\) and \(\sqsupseteq\) is transitive, we have \(\langle x,y\rangle\succ\langle x^{\prime\prime},y^{\prime\prime}\rangle\).

Now we interpret types as a particular kind of cost-size products.

**Definition 3** (Interpretation of Types): Let \(\mathcal{B}\) denote the set of sorts. An _interpretation key_ \(\mathcal{J}_{\mathcal{B}}\) for \(\mathcal{B}\) maps each sort \(\iota\) to a quasi-ordered set \((\mathcal{J}_{\mathcal{B}}(\iota),\sqsupseteq)\) with a minimum. For each type \(\sigma\in\mathcal{T}_{\mathcal{B}}\), we define the cost-size interpretation of \(\sigma\) as the product \((\!|\sigma|\!)=\mathcal{C}_{\sigma}\times\mathcal{S}_{\sigma}\) with
\[\mathcal{C}_{\sigma}=\mathbb{N}\times\mathcal{F}_{\sigma}^{\mathsf{c}}\]
\[\mathcal{F}_{\iota}^{\mathsf{c}}=\mathtt{unit}\qquad\qquad\mathcal{S}_{\iota}=\mathcal{J}_{\mathcal{B}}(\iota)\]
\[\mathcal{F}_{\iota\Rightarrow\tau}^{\mathsf{c}}=\mathcal{S}_{\iota}\Longrightarrow\mathcal{C}_{\tau}\qquad\qquad\mathcal{S}_{\iota\Rightarrow\tau}=\mathcal{S}_{\iota}\Longrightarrow\mathcal{S}_{\tau}\]
where \(\mathtt{unit}=\{\mathfrak{u}\}\) is quasi-ordered by \(\geq\) with \(\mathfrak{u}\geq\mathfrak{u}\). All \(\mathcal{F}^{\mathsf{c}}_{\iota\Rightarrow\tau}\) and \(\mathcal{S}_{\iota\Rightarrow\tau}\) are ordered by pointwise comparison. The set \(\mathcal{C}_{\sigma}\) is ordered as follows: \((n,f)>(m,g)\) if \(n>m\) and \(f\geq g\), and \((n,f)\geq(m,g)\) if \(n\geq m\) and \(f\geq g\). This definition requires that all \((\mathcal{C}_{\sigma},\geq)\) and \((\mathcal{S}_{\sigma},\sqsupseteq)\) are quasi-ordered sets, which is guaranteed by the following lemma.

**Lemma 2**: _For any type \(\sigma\), \((\mathcal{C}_{\sigma},>,\geq)\) is a well-founded set and \((\mathcal{S}_{\sigma},\sqsupseteq)\) is a quasi-ordered set with a minimum. Therefore, \((\!|\sigma|\!)\) is a cost-size product._

Proof. When \(\sigma\) is a sort, \(\mathcal{C}_{\sigma}=\mathbb{N}\times\mathtt{unit}\cong\mathbb{N}\) and \(\mathcal{S}_{\sigma}=\mathcal{J}_{\mathcal{B}}(\sigma)\), so the statement is trivially true. When \(\sigma=\iota\Rightarrow\tau\), we have \(\mathcal{C}_{\sigma}=\mathbb{N}\times\mathcal{F}^{\mathsf{c}}_{\iota\Rightarrow\tau}\), \(\mathcal{F}^{\mathsf{c}}_{\iota\Rightarrow\tau}=\mathcal{J}_{\mathcal{B}}(\iota)\Longrightarrow\mathcal{C}_{\tau}\) and \(\mathcal{S}_{\sigma}=\mathcal{J}_{\mathcal{B}}(\iota)\Longrightarrow\mathcal{S}_{\tau}\). By induction, \((\mathcal{C}_{\tau},\geq)\) and \((\mathcal{S}_{\tau},\sqsupseteq)\) are quasi-ordered sets.
So are \((\mathcal{F}^{\mathsf{c}}_{\iota\Rightarrow\tau},\geq)\) and \((\mathcal{S}_{\sigma},\sqsupseteq)\), which are ordered by pointwise comparison. By Lemma 1, \((\mathcal{C}_{\sigma},>,\geq)\) is a well-founded set. A minimum of \((\mathcal{S}_{\sigma},\sqsupseteq)\) is the constant function \(\boldsymbol{\lambda}x.\bot\) where \(\bot\) is a minimum of \((\mathcal{S}_{\tau},\sqsupseteq)\).

The cost component \(\mathcal{C}_{\sigma}\) of \((\!|\sigma|\!)\) holds information about the cost of reducing a term of type \(\sigma\) to its normal form. It has two parts: one is numeric; the other is functional. The functional part \(\mathcal{F}^{\mathsf{c}}_{\sigma}\) degenerates to \(\mathtt{unit}\) when \(\sigma\) is just a sort and is indeed a functional space when \(\sigma=\iota\Rightarrow\tau\) is a function type. In the latter case, \(\mathcal{F}^{\mathsf{c}}_{\sigma}=\mathcal{S}_{\iota}\Longrightarrow\mathcal{C}_{\tau}\) consists of weakly monotonic functions with domain \(\mathcal{S}_{\iota}\), the size component of \((\!|\iota|\!)\). This is very much in line with the standard complexity notion based on Turing Machines, in which time complexity is parametrized by the input's size.

We need a concrete interpretation key in order to use Definition 3 to interpret types. In our examples, a particular kind of interpretation key maps each sort \(\iota\) to size sets of the form \((\mathbb{N}^{K[\iota]},\sqsupseteq)\), with \(K[\iota]\geq 1\), ordered as follows: \((x_{1},\ldots,x_{K[\iota]})\sqsupseteq(y_{1},\ldots,y_{K[\iota]})\) if \(x_{i}\geq y_{i}\) for all \(i\). This class of interpretation key is used unless stated otherwise. We take a semantic approach (cf. [16]) to determine the number \(K[\iota]\) for each sort \(\iota\). For instance, \(\mathtt{nat}\) is the sort of natural numbers in unary format, so a number \(n\in\mathbb{N}\) is represented as the data term \(\mathtt{s}\ (\ldots(\mathtt{s}\ 0))\), that is, \(n\) successive applications of \(\mathtt{s}\) to \(\mathtt{0}\). With that in mind, the number of occurrences of \(\mathtt{s}\) in such terms is a reasonable measure for their size, so we let \(K[\mathtt{nat}]=1\). A second example is that of \(\mathtt{list}\). To characterize the size of a list we may need information about the individual elements in addition to the length of the list. So we keep track of the length as well as the maximum size of its elements. This way \(K[\mathtt{list}]=2\). In Example 2 we interpret the \(\mathtt{nat}\) and \(\mathtt{list}\) constructors following this intuition.

**Definition 4**: Cost-size tuples in \((\!|\sigma|\!)\) are written as \(\langle(n,f^{\mathsf{c}}),f^{\mathsf{s}}\rangle\) where \(n\in\mathbb{N}\), \(f^{\mathsf{c}}\in\mathcal{F}^{\mathsf{c}}_{\sigma}\), and \(f^{\mathsf{s}}\in\mathcal{S}_{\sigma}\). When \(\sigma\) is a function type, we refer to \(f^{\mathsf{c}}\) as the _cost function_ and \(f^{\mathsf{s}}\) as the _size function_.

In order to define the interpretation of terms (Definition 7), we need a notion of application for cost-size tuples. More precisely, given \(\boldsymbol{f}\in(\!|\iota\Rightarrow\tau|\!)\) and \(\boldsymbol{x}\in(\!|\iota|\!)\), our goal is to define \(\boldsymbol{f}\cdot\boldsymbol{x}\in(\!|\tau|\!)\), the application of \(\boldsymbol{f}\) to \(\boldsymbol{x}\). Let us illustrate how such an application should work with a concrete example. Consider the function \(\mathtt{append}\colon\mathtt{list}\Rightarrow\mathtt{list}\Rightarrow\mathtt{list}\) from Example 1. It takes two lists \(q\) and \(l\) as input.
The intended cost-size denotational semantics for \(\mathtt{append}\) is a tuple \(\boldsymbol{f}=\langle(n,f^{\mathsf{c}}),f^{\mathsf{s}}\rangle\in(\!|\mathtt{list}\Rightarrow\mathtt{list}\Rightarrow\mathtt{list}|\!)\), where
\[n\in\mathbb{N},\]
\[f^{\mathsf{c}}\in\overbrace{\mathcal{S}_{\mathtt{list}}}^{\text{size of }q}\Longrightarrow(\mathbb{N}\times(\overbrace{\mathcal{S}_{\mathtt{list}}}^{\text{size of }l}\Longrightarrow(\mathbb{N}\times\mathtt{unit})))\text{, and}\]
\[f^{\mathsf{s}}\in\overbrace{\mathcal{S}_{\mathtt{list}}}^{\text{size of }q}\Longrightarrow(\overbrace{\mathcal{S}_{\mathtt{list}}}^{\text{size of }l}\Longrightarrow\mathcal{S}_{\mathtt{list}}).\]
For the first list \(q\), take a cost-size tuple \(\boldsymbol{x}=\langle(m,\mathfrak{u}),x^{\mathsf{s}}\rangle\) from \((\!|\mathtt{list}|\!)\). We apply \(f^{\mathsf{c}}\) and \(f^{\mathsf{s}}\) to \(x^{\mathsf{s}}\), and get \(f^{\mathsf{c}}(x^{\mathsf{s}})=(k,h)\in\mathbb{N}\times(\mathcal{S}_{\mathtt{list}}\Longrightarrow(\mathbb{N}\times\mathtt{unit}))\) and \(f^{\mathsf{s}}(x^{\mathsf{s}})\in\mathcal{S}_{\mathtt{list}}\Longrightarrow\mathcal{S}_{\mathtt{list}}\), respectively. Then we sum the numeric parts and collect all the data in the new cost-size tuple \(\langle(n+m+k,h),f^{\mathsf{s}}(x^{\mathsf{s}})\rangle\). This process is summarized in the following definition.

**Definition 5** (Semantic Application): Given \(\boldsymbol{f}=\langle(n,f^{\mathsf{c}}),f^{\mathsf{s}}\rangle\in(\!|\iota\Rightarrow\tau|\!)\) and \(\boldsymbol{x}=\langle(m,\mathfrak{u}),x^{\mathsf{s}}\rangle\in(\!|\iota|\!)\), the _semantic application_ of \(\boldsymbol{f}\) to \(\boldsymbol{x}\), denoted by \(\boldsymbol{f}\cdot\boldsymbol{x}\), is \(\langle(n+m+k,h),f^{\mathsf{s}}(x^{\mathsf{s}})\rangle\) where \(f^{\mathsf{c}}(x^{\mathsf{s}})=(k,h)\). Semantic application is left-associative, so \(\boldsymbol{f}\cdot\boldsymbol{g}\cdot\boldsymbol{h}\) stands for \((\boldsymbol{f}\cdot\boldsymbol{g})\cdot\boldsymbol{h}\).

This definition conforms to the types, which is stated in the following lemma.

**Lemma 3**: _If \(\boldsymbol{f}\in(\!|\iota\Rightarrow\tau|\!)\) and \(\boldsymbol{x}\in(\!|\iota|\!)\), then \(\boldsymbol{f}\cdot\boldsymbol{x}\in(\!|\tau|\!)\)._

**Remark 1**: Because \(\mathbb{N}\times\mathtt{unit}\) is order-isomorphic to \(\mathbb{N}\), we identify \(\mathbb{N}\times\mathtt{unit}\) with \(\mathbb{N}\) and \((m,\mathfrak{u})\) with \(m\) unless otherwise stated. So we write \(\langle m,x^{\mathsf{s}}\rangle\) for cost-size tuples in \((\!|\iota|\!)\) where \(\iota\) is a sort.

### Cost-Size Tuple Algebras

An interpretation of a syntax signature \((\mathcal{B},\mathcal{F},\mathtt{ar})\) interprets the types in \(\mathcal{T}_{\mathcal{B}}\) and maps each \(\mathsf{f}\colon\sigma\in\mathcal{F}\) to an element of \((\!|\sigma|\!)\). This is formally stated in the definition below.

**Definition 6**: A _cost-size tuple algebra_ \(((\!|\cdot|\!),\mathcal{J})\) over a syntax signature \((\mathcal{B},\mathcal{F},\mathtt{ar})\) consists of:
1. a family of cost-size products \(\{(\!|\sigma|\!)\}_{\sigma\in\mathcal{T}_{\mathcal{B}}}\), and
2. an interpretation function \(\mathcal{J}\colon\mathcal{F}\longrightarrow\biguplus_{\sigma}(\!|\sigma|\!)\) that associates to each \(\mathsf{f}\colon\sigma\) an element \(\mathcal{J}_{\mathsf{f}}\in(\!|\sigma|\!)\).

We extend the notion of interpretation to terms, where we use a valuation to map variables of type \(\iota\) to elements of \((\!|\iota|\!)\). With innermost rewriting we assume that variables have no cost.
**Definition 7**: Fix a cost-size tuple algebra \(((\!|\cdot|\!),\mathcal{J})\). A _valuation_ \(\alpha:\mathcal{X}\longrightarrow\biguplus_{\iota}(\!|\iota|\!)\) is a function which maps each variable \(x\colon\iota\) to a zero-cost tuple \(\langle 0,x^{\mathsf{s}}\rangle\in(\!|\iota|\!)\). The interpretation of a term \(s\) under the valuation \(\alpha\), denoted by \([\![s]\!]^{\mathcal{J}}_{\alpha}\), is defined as follows:
\[[\![\mathsf{f}]\!]^{\mathcal{J}}_{\alpha}=\mathcal{J}_{\mathsf{f}}\qquad[\![x]\!]^{\mathcal{J}}_{\alpha}=\alpha(x)\qquad[\![s\,t]\!]^{\mathcal{J}}_{\alpha}=[\![s]\!]^{\mathcal{J}}_{\alpha}\cdot[\![t]\!]^{\mathcal{J}}_{\alpha}\]
We write \([\![s]\!]\) instead of \([\![s]\!]^{\mathcal{J}}_{\alpha}\) whenever \(\alpha\) and \(\mathcal{J}\) are universally quantified or clear from the context. In both cases we may write \([\![x]\!]=x\) instead of \([\![x]\!]^{\mathcal{J}}_{\alpha}=\alpha(x)\).

As a corollary of Lemma 3, the interpretation of terms conforms with types.

**Lemma 4**: _If \(s\colon\sigma\) then \([\![s]\!]\in(\!|\sigma|\!)\)._

Let \(\sigma\) be \(\iota_{1}\Rightarrow\ldots\Rightarrow\iota_{m}\Rightarrow\kappa\) where all \(\iota_{i}\) and \(\kappa\) are sorts. Elements of \(\mathcal{C}_{\sigma}\) can be written as
\[(e_{0},\boldsymbol{\lambda}x_{1}.(e_{1},\boldsymbol{\lambda}x_{2}.\ \ldots\ (e_{m-1},\boldsymbol{\lambda}x_{m}.(e_{m},\mathfrak{u}))\ldots)). \tag{1}\]
When \(e_{0}=e_{1}=\cdots=e_{m-1}=0\), we write \((\boldsymbol{\lambda}x_{1}\ldots x_{m}.\,e_{m})\) as a shorthand.

**Example 2**: Let \(\mathcal{S}_{\mathtt{nat}}\) and \(\mathcal{S}_{\mathtt{list}}\) be \(\mathbb{N}\) and \(\mathbb{N}\times\mathbb{N}\), respectively. Recall that the size of a natural number is the number of occurrences of \(\mathsf{s}\), and the size of a list is a pair \(q=(q_{\mathsf{l}},q_{\mathsf{m}})\) where \(q_{\mathsf{l}}\) is the length and \(q_{\mathsf{m}}\) is the maximum size of the elements. We interpret the constructors as follows:
\[\mathcal{J}_{\mathsf{0}}=\langle 0,0\rangle\qquad\mathcal{J}_{\mathsf{s}}=\langle(\boldsymbol{\lambda}x.0),\boldsymbol{\lambda}x.x+1\rangle\]
\[\mathcal{J}_{\mathsf{nil}}=\langle 0,(0,0)\rangle\qquad\mathcal{J}_{\mathsf{cons}}=\langle(\boldsymbol{\lambda}xq.0),\boldsymbol{\lambda}xq.(q_{\mathsf{l}}+1,\max(x,q_{\mathsf{m}}))\rangle\]
Both \(\mathsf{0}\) and \(\mathsf{nil}\) have no cost because they are constructors without a function type. With innermost rewriting, constructors with a function type, such as \(\mathsf{s}\) and \(\mathsf{cons}\), have \(e_{0}=\cdots=e_{m}=0\) for cost, of form (1).

**Remark 2**: In Definition 7 we require that valuations interpret variables as zero-cost tuples. This is an important but subtle requirement that only works when reductions are innermost. Indeed, if reduction is unrestricted we can instantiate variables on the left-hand side of rules to terms containing redexes for which the cost should be accounted. Hence, not accounting for the cost of variables in full rewriting would lead to unsound analysis. Additionally, zero-cost tuples allow us to prove the innermost termination of the TRS \(\mathcal{R}\) in Example 3, which is non-terminating in full rewriting.

### Compatibility Theorem

Roughly, the compatibility theorem (Theorem 1) states that if \(\mathcal{R}\) is compatible with a tuple algebra \(\mathcal{A}\), then the innermost rewrite relation \(\rightarrow^{i}_{\mathcal{R}}\) is embedded in the well-founded order on cost-size products.
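To make Definitions 5 and 7 and Example 2 concrete, here is a minimal Python sketch (our own rendering, not part of the paper) that models cost-size tuples, implements semantic application, and evaluates the interpretation of the data term \(\mathsf{cons}\ (\mathsf{s}\ \mathsf{0})\ \mathsf{nil}\).

```python
# Cost-size tuples: base-type tuples are (cost, size); arrow-type tuples are
# ((n, fc), fs), where fc maps an argument size to the cost component of the
# result type and fs maps it to the size component (the curried form (1)).
# Arguments below all have base type, as in the first-order Example 2.

def apply(f, x):
    """Semantic application  f . x  of Definition 5."""
    (n, fc), fs = f
    m, xs = x                      # by Remark 1, a base-type cost is a number
    k_h = fc(xs)
    if isinstance(k_h, tuple):     # the result is again of function type
        k, h = k_h
        return ((n + m + k, h), fs(xs))
    return (n + m + k_h, fs(xs))   # the result has base type

# Example 2: sizes of nat are numbers, sizes of list are pairs (length, max elem).
J_zero = (0, 0)
J_s    = ((0, lambda x: 0), lambda x: x + 1)
J_nil  = (0, (0, 0))
J_cons = ((0, lambda x: (0, lambda q: 0)),
          lambda x: (lambda q: (q[0] + 1, max(x, q[1]))))

one = apply(J_s, J_zero)                 # [[s 0]]            = (0, 1)
lst = apply(apply(J_cons, one), J_nil)   # [[cons (s 0) nil]] = (0, (1, 1))
print(one, lst)
```

As expected for data terms, the cost components stay at zero, while the size components record one occurrence of \(\mathsf{s}\) and a one-element list whose maximum element has size 1.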
The next two lemmas are technical results needed in order to prove the compatibility theorem. Lemma 5 states that interpretations are closed under substitution and Lemma 6 provides strong monotonicity of semantic application.

**Definition 8**: Fix a cost-size tuple algebra \(((\!|\cdot|\!),\mathcal{J})\). A substitution \(\gamma\) is _zero-cost_ under valuation \(\alpha\) if \(\llbracket\gamma(x)\rrbracket^{\mathcal{J}}_{\alpha}\) is a zero-cost tuple for each variable \(x\). Given a valuation \(\alpha\) and a zero-cost substitution \(\gamma\), the function \(\alpha^{\gamma}=\llbracket\cdot\rrbracket^{\mathcal{J}}_{\alpha}\circ\gamma=\llbracket\gamma(\cdot)\rrbracket^{\mathcal{J}}_{\alpha}\) is thus a valuation.

**Lemma 5** (Substitution): _If \(\gamma\) is a zero-cost substitution under valuation \(\alpha\), then \(\llbracket s\gamma\rrbracket^{\mathcal{J}}_{\alpha}=\llbracket s\rrbracket^{\mathcal{J}}_{\alpha^{\gamma}}\) for any term \(s\)._

**Lemma 6**: _The application functional \(\mathtt{App}(\boldsymbol{f},\boldsymbol{x})=\boldsymbol{f}\cdot\boldsymbol{x}\) is strongly monotonic in both arguments._

Proof.: We need to prove (i) if \(\boldsymbol{f}\succ\boldsymbol{g}\) and \(\boldsymbol{x}\succeq\boldsymbol{y}\), then \(\mathtt{App}(\boldsymbol{f},\boldsymbol{x})\succ\mathtt{App}(\boldsymbol{g},\boldsymbol{y})\); (ii) if \(\boldsymbol{f}\succeq\boldsymbol{g}\) and \(\boldsymbol{x}\succ\boldsymbol{y}\), then \(\mathtt{App}(\boldsymbol{f},\boldsymbol{x})\succ\mathtt{App}(\boldsymbol{g},\boldsymbol{y})\); and (iii) if \(\boldsymbol{f}\succeq\boldsymbol{g}\) and \(\boldsymbol{x}\succeq\boldsymbol{y}\), then \(\mathtt{App}(\boldsymbol{f},\boldsymbol{x})\succeq\mathtt{App}(\boldsymbol{g},\boldsymbol{y})\). Consider cost-size tuples \(\boldsymbol{f},\boldsymbol{g}\in(\!|\iota\Rightarrow\tau|\!)\) and \(\boldsymbol{x},\boldsymbol{y}\in(\!|\iota|\!)\). Let \(\boldsymbol{f}=\langle(n,f^{\mathsf{c}}),f^{\mathsf{s}}\rangle\), \(\boldsymbol{g}=\langle(m,g^{\mathsf{c}}),g^{\mathsf{s}}\rangle\), \(\boldsymbol{x}=\langle x^{\mathsf{c}},x^{\mathsf{s}}\rangle\), and \(\boldsymbol{y}=\langle y^{\mathsf{c}},y^{\mathsf{s}}\rangle\). We proceed to show (i) and observe that (ii) and (iii) follow similar reasoning. Indeed, if \(\boldsymbol{f}\succ\boldsymbol{g}\) and \(\boldsymbol{x}\succeq\boldsymbol{y}\) we have that \(n>m\), \(f^{\mathsf{c}}\geq g^{\mathsf{c}}\), \(f^{\mathsf{s}}\sqsupseteq g^{\mathsf{s}}\), \(x^{\mathsf{c}}\geq y^{\mathsf{c}}\), and \(x^{\mathsf{s}}\sqsupseteq y^{\mathsf{s}}\). Hence, by letting \(f^{\mathsf{c}}(x^{\mathsf{s}})=(k,h)\) and \(g^{\mathsf{c}}(y^{\mathsf{s}})=(k^{\prime},h^{\prime})\), we get:
\[\mathtt{App}(\boldsymbol{f},\boldsymbol{x})=\langle(n,f^{\mathsf{c}}),f^{\mathsf{s}}\rangle\cdot\langle x^{\mathsf{c}},x^{\mathsf{s}}\rangle=\langle(n+x^{\mathsf{c}}+k,h),f^{\mathsf{s}}(x^{\mathsf{s}})\rangle\succ\langle(m+y^{\mathsf{c}}+k^{\prime},h^{\prime}),g^{\mathsf{s}}(y^{\mathsf{s}})\rangle=\mathtt{App}(\boldsymbol{g},\boldsymbol{y})\]

**Definition 9**: A TRS \(\mathcal{R}\) is said to be _compatible_ with a cost-size tuple algebra \(((\!|\cdot|\!),\mathcal{J})\) if \(\llbracket\ell\rrbracket^{\mathcal{J}}_{\alpha}\succ\llbracket r\rrbracket^{\mathcal{J}}_{\alpha}\) for all rules \(\ell\to r\in\mathcal{R}\) and valuations \(\alpha\).

**Theorem 1** (Compatibility): _Let \(\mathcal{R}\) be a TRS compatible with a cost-size tuple algebra \(((\!|\cdot|\!),\mathcal{J})\)._
_Then, for any pair of terms \(s\) and \(t\) and any valuation \(\alpha\), whenever \(s\rightarrow^{i}_{\mathcal{R}}t\) we have \(\llbracket s\rrbracket^{\mathcal{J}}_{\alpha}\succ\llbracket t\rrbracket^{\mathcal{J}}_{\alpha}\)._

Proof. We proceed by induction on \(\to_{\mathcal{R}}^{i}\). For the base case, \(s\to_{\mathcal{R}}^{i}t\) by \(\ell\gamma\to r\gamma\) where all proper subterms of \(\ell\gamma\) are in \(\to_{\mathcal{R}}\) normal form. Therefore, since \([\![\ell]\!]_{\alpha}^{\mathcal{J}}\succ[\![r]\!]_{\alpha}^{\mathcal{J}}\) by hypothesis, Lemma 5 gives us that \([\![\ell\gamma]\!]_{\alpha}^{\mathcal{J}}\succ[\![r\gamma]\!]_{\alpha}^{\mathcal{J}}\). In the inductive step we use Lemma 6 combined with the induction hypothesis as follows. Suppose \(s\to_{\mathcal{R}}^{i}t\) by \(s=s^{\prime}\,u\to_{\mathcal{R}}^{i}s^{\prime\prime}\,u=t\) with \(s^{\prime}\to_{\mathcal{R}}^{i}s^{\prime\prime}\). We have \([\![s^{\prime}\,u]\!]_{\alpha}^{\mathcal{J}}=[\![s^{\prime}]\!]_{\alpha}^{\mathcal{J}}\cdot[\![u]\!]_{\alpha}^{\mathcal{J}}=\texttt{App}([\![s^{\prime}]\!]_{\alpha}^{\mathcal{J}},[\![u]\!]_{\alpha}^{\mathcal{J}})\), and the induction hypothesis gives \([\![s^{\prime}]\!]_{\alpha}^{\mathcal{J}}\succ[\![s^{\prime\prime}]\!]_{\alpha}^{\mathcal{J}}\), which combined with Lemma 6 implies \([\![s]\!]_{\alpha}^{\mathcal{J}}=\texttt{App}([\![s^{\prime}]\!]_{\alpha}^{\mathcal{J}},[\![u]\!]_{\alpha}^{\mathcal{J}})\succ\texttt{App}([\![s^{\prime\prime}]\!]_{\alpha}^{\mathcal{J}},[\![u]\!]_{\alpha}^{\mathcal{J}})=[\![t]\!]_{\alpha}^{\mathcal{J}}\). When \(s\to_{\mathcal{R}}^{i}t\) by \(s=u\,s^{\prime}\to_{\mathcal{R}}^{i}u\,s^{\prime\prime}=t\) with \(s^{\prime}\to_{\mathcal{R}}^{i}s^{\prime\prime}\), the proof is analogous.

**Example 3**: [Garbled in the source. As noted in Remark 2, this example presents a TRS that is innermost terminating, which follows from compatibility with a cost-size tuple algebra together with the zero-cost treatment of variables, but is non-terminating in full rewriting.]
[The remainder of Example 3 and the opening of the following section on complexity bounds, which introduces additively, linearly, and polynomially bounded functions, are garbled in the source.]

Notice that by this definition linearly bounded (or additive) size functions are not required to be linear (or additive) but only to be upper-bounded by a linear (additive) function. So this permits us to use, for instance, \(\min(x,2y)\), whereas \(xy\) cannot be used. Size interpretations do not necessarily bound the absolute size of data terms. For instance, we may interpret a data constructor \(\mathsf{c}\colon\iota\Rightarrow\kappa\) with \(\mathcal{J}_{\mathsf{c}}^{\mathsf{s}}=\boldsymbol{\lambda}x.\lfloor x/2\rfloor\), which would give us \(|d|\geq\llbracket d\rrbracket^{\mathsf{s}}\). This is especially useful when dealing with sublinear interpretations. The next lemma ensures that by interpreting constructors additively, the size interpretation of data terms is proportional to their absolute size:

**Lemma 7**: _Let \(\mathcal{R}\) be a TRS compatible with a cost-size tuple algebra \(((\!|\cdot|\!),\mathcal{J})\) in which the size interpretations of all constructors are additively bounded. Then the size interpretation of every data term is proportional to its absolute size, i.e., there is a constant \(a\) such that each component of \(\llbracket d\rrbracket^{\mathsf{s}}\) is at most \(a\,|d|\) for every data term \(d\)._

[The remainder of this passage, including Lemma 8 and Example 4, which gives cost-size interpretations for the defined symbols of Example 1 and is referenced below, is garbled in the source.]

Checking the compatibility of this interpretation is straightforward. Notice that in each set of rules defining a function \(\mathsf{f}\) in Example 1, the size components are additively bounded and the cost components are polynomially bounded.
By case (b) of Lemma 8, we have that \(\mathsf{irc}_{\mathcal{R}_{\mathsf{add}}}\), \(\mathsf{irc}_{\mathcal{R}_{\mathsf{append}}}\), and \(\mathsf{irc}_{\mathcal{R}_{\mathsf{minus}}}\) are linear. Quadratic bounds can be derived for \(\mathsf{irc}_{\mathcal{R}_{\mathsf{quot}}}\), \(\mathsf{irc}_{\mathcal{R}_{\mathsf{sum}}}\), and \(\mathsf{irc}_{\mathcal{R}_{\mathsf{rev}}}\). Recalling the semantic meaning given to size components (see Example 2), one can observe that the cost components of interpretations not only bound the innermost runtime complexity of \(\mathcal{R}_{\mathsf{f}}\) but also provide additional information on the role each size component plays in the rewriting cost. For instance: the cost of adding two numbers depends solely on the size of add's second argument; the cost of summing the elements of a list depends linearly on its length and non-linearly on the combination of its length and maximum element. This is particularly useful in program analysis since one can detect a possibly costly operation by analyzing the shape of the interpretations themselves.

## 5 Automation

In this section, we outline a procedure for finding cost-size tuple interpretations. Our goal is to find interpretations that guarantee polynomial bounds on the runtime complexity of the rewriting system at hand. Hence, we have the following conditions: (i) the interpretation key chosen is over \(\mathbb{N}\), (ii) the size interpretation of constructors is additively bounded, and (iii) the interpretation of function symbols is polynomially bounded.

Parametric Interpretations. Recall that previously in the paper we assigned an intuitive meaning to size components. In a fully automated setting, where no human guidance is allowed, all sorts \(\iota\) start with \(K[\iota]=1\) and go up to a predefined bound \(K\). This maximum bound \(K\) is needed to limit the search space and guarantee that the procedure terminates. Roughly, the procedure works as follows. The interpretation of data constructors is set to be additive. So if \(\mathsf{c}:\iota_{1}\Rightarrow\ldots\Rightarrow\iota_{m}\Rightarrow\kappa\) is a data constructor, its size interpretation is \(\boldsymbol{\lambda}x_{1}\ldots x_{m}.\,a+\sum_{i=1}^{m}\sum_{j=1}^{K[\iota_{i}]}x_{ij}\), where \(a\) is a parameter to be determined by the search procedure. We say such an _interpretation shape_ is parametrized by the coefficient \(a\). The next step is to choose (parametric) interpretations for defined symbols \(\mathsf{f}\in\mathcal{F}\). In contrast with constructors, where the cost components are zero-valued functions and the size components are additive, we can choose any polynomially bounded function for the cost and size components of a defined symbol \(\mathsf{f}\in\mathcal{F}\). However, the class of functions from which we can choose interpretations of defined symbols is too big. So we restrict our search space to a limited class of polynomially bounded functions: max-polynomials, i.e., functions that combine polynomial terms and the max function. For instance, the interpretations of cons in Example 2 and append in Example 4 are max-polynomials. We then choose generic max-polynomials for the cost and size components, which are parametrized by their coefficients.
Recall that we wish to find interpretations that satisfy the compatibility condition, i.e., \([\![\ell]\!]_{\alpha}\succ[\![r]\!]_{\alpha}\) for any \(\alpha\). Therefore, if we pick max-polynomials parametrized by their coefficients, those give rise to a set of constraints that must be solved in order to determine valid interpretations.

**Example 5**: Let us illustrate the ideas above with a simple system defining the function \(\mathsf{dbl}\) over natural numbers. So we consider the system with rules \(\mathsf{dbl}\ \mathsf{0}\to\mathsf{0}\) and \(\mathsf{dbl}\ (\mathsf{s}\,x)\to\mathsf{s}\ (\mathsf{s}\ (\mathsf{dbl}\,x))\). Let us choose the following parametric interpretation
\[\mathcal{J}_{\mathsf{0}}=\langle 0,a_{0}\rangle\qquad\mathcal{J}_{\mathsf{s}}=\langle(\boldsymbol{\lambda}x.0),\boldsymbol{\lambda}x.x+b_{0}\rangle\qquad\mathcal{J}_{\mathsf{dbl}}=\langle(\boldsymbol{\lambda}x.c_{1}x+c_{0}),\boldsymbol{\lambda}x.d_{1}x+d_{0}\rangle\,,\]
which satisfies conditions (i)-(iii) above. The interpretation above is _parametric_ in the sense that the coefficients \(a_{0},b_{0},c_{0},c_{1},d_{0},d_{1}\) are to be determined. The compatibility condition for the first rule gives:
\[\llbracket\mathsf{dbl}\ \mathsf{0}\rrbracket\succ\llbracket\mathsf{0}\rrbracket\implies\left\langle(c_{1}a_{0}+c_{0}),d_{1}a_{0}+d_{0}\right\rangle\succ\left\langle 0,a_{0}\right\rangle,\]
which in consequence requires the validity of \(C_{0}=(c_{1}a_{0}+c_{0}>0)\wedge(d_{1}a_{0}+d_{0}\geq a_{0})\). The compatibility condition for the second rule, on the other hand, gives us the following:
\[\llbracket\mathsf{dbl}\ (\mathsf{s}\,x)\rrbracket\succ\llbracket\mathsf{s}\ (\mathsf{s}\ (\mathsf{dbl}\ x))\rrbracket\implies\left\langle(c_{1}x+c_{1}b_{0}+c_{0}),d_{1}x+d_{1}b_{0}+d_{0}\right\rangle\succ\left\langle(c_{1}x+c_{0}),d_{1}x+d_{0}+2b_{0}\right\rangle,\]
which in consequence requires the validity of the formula
\[C_{1}=(c_{1}x+c_{1}b_{0}+c_{0}>c_{1}x+c_{0})\wedge(d_{1}x+d_{1}b_{0}+d_{0}\geq d_{1}x+d_{0}+2b_{0}).\]
Hence, we seek witnesses for the constraints \(C_{0},C_{1}\) over \(\mathbb{N}\), for which we can use an SMT solver.

The example above is very simple in nature but uses the main ideas of our procedure. Essentially, we choose parametric interpretations for function symbols in \(\mathcal{F}\) and solve the constraints that arise from the compatibility condition. As we have seen in Example 4, cost-size interpretations may become complicated, so more interpretation shapes are needed in the search procedure. We describe such a procedure below. It is modular in the sense that it is parametrized by a _selector strategy_ \(\mathcal{S}\) and a constraint solver. A selector strategy is an algorithm to choose a parametric interpretation for each function symbol in \(\mathcal{F}\). For instance, in the example above we have chosen _linear parametric shapes_ for all function symbols.
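The constraints \(C_{0}\) and \(C_{1}\) of Example 5 can be handed directly to such a solver; the snippet below is a minimal sketch using Z3's Python bindings (our choice of solver and encoding, the procedure only assumes some solver for non-linear integer arithmetic). Since the interpreted variable \(x\) occurs with the same coefficients on both sides of \(C_{1}\), it cancels, leaving constraints on the coefficients only.

```python
# A sketch (assuming Z3 as the QF_NIA back end): search for coefficients over N
# satisfying the compatibility constraints C0 and C1 of Example 5.
from z3 import Ints, Solver, sat

a0, b0, c0, c1, d0, d1 = Ints("a0 b0 c0 c1 d0 d1")
s = Solver()
s.add(*[v >= 0 for v in (a0, b0, c0, c1, d0, d1)])   # coefficients range over N

# C0, from  dbl 0 -> 0
s.add(c1 * a0 + c0 > 0, d1 * a0 + d0 >= a0)
# C1, from  dbl (s x) -> s (s (dbl x)); the summands c1*x and d1*x cancel
s.add(c1 * b0 > 0, d1 * b0 >= 2 * b0)

if s.check() == sat:
    print(s.model())
```

Any model of these constraints instantiates the parametric interpretation; for instance \(b_{0}=c_{0}=c_{1}=1\), \(d_{1}=2\) and \(a_{0}=d_{0}=0\) yields \(\mathcal{J}_{\mathsf{dbl}}=\langle(\boldsymbol{\lambda}x.x+1),\boldsymbol{\lambda}x.2x\rangle\), which is compatible with both rules.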
**Main Procedure**

**Parameter:** A selector algorithm \(\mathcal{S}\) and a constraint solver over non-linear integer arithmetic.

**Data Input:** A TRS \(\mathcal{R}\) over a syntax signature \((\mathcal{B},\mathcal{F},\mathtt{ar})\).

**Output:** YES, if a cost-size tuple interpretation satisfying compatibility can be found, and MAYBE, if all steps below were executed and no interpretation could be found.¹

Footnote 1: Notice that in our setting we cannot possibly return NO.

1. Split \(\mathcal{F}\) into two disjoint sets of constructors and defined symbols, i.e., \(\mathcal{F}=\mathcal{C}\uplus\mathcal{D}\).
2. For each constructor \(\mathsf{c}\!:\!\iota_{1}\Rightarrow\ldots\Rightarrow\iota_{m}\Rightarrow\kappa\), choose its cost interpretation as the zero-valued cost function; size interpretations are additive.
3. Split \(\mathcal{D}\) into sets \(\mathcal{D}_{1},\ldots,\mathcal{D}_{n}\) such that for each \(\mathsf{f}\in\mathcal{D}_{i}\), with \(1\leq i\leq n\), all function symbols occurring in the rules defining \(\mathsf{f}\) are either constructors or in \(\mathcal{D}_{1}\cup\cdots\cup\mathcal{D}_{i}\).
4. For each \(1\leq i\leq n\), choose an _interpretation shape_ for the symbols in \(\mathcal{D}_{i}\) based on the selector strategy \(\mathcal{S}\) (to be defined below).
   * Mark the chosen interpretation shape in \(\mathcal{S}\), so we do not choose the same one again in case this step fails.
   * If no choice can be made by \(\mathcal{S}\), stop and return MAYBE.
5. For each rule \(\mathsf{f}\ \ell_{1}\ \ldots\ \ell_{k}\to r\) of type \(\iota\) with \(\mathsf{f}\in\mathcal{D}_{1}\cup\cdots\cup\mathcal{D}_{i}\), _simplify_ \(\llbracket\mathsf{f}\ \ell_{1}\ \ldots\ \ell_{k}\rrbracket\succ\llbracket r\rrbracket\) so that the result is a set of inequality constraints \(C\) that does not depend on any interpreted variable (we define this simplification step below).
   * If this simplification step fails, then we return to step 4 to choose another interpretation shape.
6. _Check_ whether \(C\) holds.
   * If all constraints in \(C\) hold and \(i<n\), then we could orient all rules headed by function symbols in \(\mathcal{D}_{i}\), so we go to step 4 with \(i:=i+1\).
   * If all constraints in \(C\) hold and \(i=n\), then we could orient all rules in \(\mathcal{R}\); stop and return YES.
   * Otherwise, increase \(K[\iota]\) by one, update the additive size interpretations of the constructors, and return to step 4, choosing another interpretation shape.

Two key aspects of the procedure above remain to be defined: the strategy \(\mathcal{S}\) for selecting interpretation shapes and the constraint solver of Step 6.

Strategy-based Search for Tuple Interpretations. Intuitively, a selector strategy \(\mathcal{S}\) is an algorithm for choosing parametric interpretations for the defined symbols in \(\mathcal{D}_{i}\). For instance, we could randomly pick an interpretation shape from a list (the **blind** strategy); we could incrementally select interpretations from a list of possible attempts (the **progressive** strategy); or we could select interpretations based on their syntax patterns (the **pattern** strategy). The definition below lists some interpretation shapes we consider. They are based on the classes studied in [10, 30]. Parametric interpretations are built by considering the type of defined symbols.

**Definition 13** (Interpretation Shapes): Let \(\sigma=\iota_{1}\Rightarrow\ldots\Rightarrow\iota_{m}\Rightarrow\kappa\) and let each \(f_{ij}\) appearing in the shapes below be an additively bounded weakly monotonic function over \(\mathcal{S}_{\sigma}\). We write \(f(\vec{x})\) for the application of \(f\) to the arguments \(x_{1},\ldots,x_{m}\).
* The _additive class_ contains additively bounded cost-size functionals of the following form: \[\boldsymbol{\lambda}x_{1}\ldots x_{m}.\sum_{i=1}^{m}\sum_{j=1}^{K[\iota_{i}]}x_{ij}+b_{0}+f(\vec{x})\]
* The _linear class_ contains cost-size functionals written as: \[\boldsymbol{\lambda}x_{1}\ldots x_{m}.\sum_{i=1}^{m}\sum_{j=1}^{K[\iota_{i}]}a_{ij}x_{ij}f_{ij}(\vec{x})\]
* The _simple class_ contains cost-size functionals written as: \[\boldsymbol{\lambda}x_{1}\ldots x_{m}.\sum_{i=1}^{m}\sum_{j=1}^{K[\iota_{i}]}a_{ij}x_{ij}^{k_{ij}}f_{ij}(\vec{x}),\text{ such that each }k_{ij}\in\{0,1\}\]
* The _quadratic class_ contains cost-size functionals where we allow general products of variables with degree at most 2: \[\boldsymbol{\lambda}x_{1}\ldots x_{m}.\sum_{i=1}^{m}\sum_{j=1}^{K[\iota_{i}]}a_{ij}x_{ij}^{k_{ij}}f_{ij}(\vec{x}),\text{ such that each }k_{ij}\in\{0,1,2\}\]
* The _simple quadratic class_ contains cost-size functionals built as a sum of a simple functional plus a quadratic component: \[\boldsymbol{\lambda}x_{1}\ldots x_{m}.\sum_{i=1}^{m}\sum_{j=1}^{K[\iota_{i}]}a_{ij}x_{ij}^{k_{ij}}f_{ij}(\vec{x})+\sum_{i=1}^{m}\sum_{j=1}^{K[\iota_{i}]}a^{\prime}_{ij}x_{ij}^{l_{ij}}f^{\prime}_{ij}(\vec{x}),\] with \(k_{ij}\in\{0,1\}\) and \(l_{ij}\in\{0,1,2\}\).

Hence, the blind strategy randomly selects one of the shapes above. The progressive strategy chooses interpretations in order, from additive ones to quadratic ones. The pattern strategy is slightly more difficult to realize since we need heuristic analysis of the shape of the rules. For instance, every rule of the form \(\mathsf{f}\ x_{1}\ldots x_{m}\to x_{i}\) has a constant cost function \((\boldsymbol{\lambda}x_{1}\ldots x_{m}.1)\) and additive size components. Rules that duplicate variables, as in the pattern \(C[x]\to D[x,x]\), induce at least a quadratic bound on cost. Notice that this is the case for all quadratic complexities in this paper. The concrete implementation of a selector algorithm determines the efficiency of the main procedure for finding interpretations.

In order to simplify constraints \([\![\ell]\!]\succ[\![r]\!]\) we have to simplify inequalities between polynomials (max-polynomials). To simplify polynomial (max-polynomial) shapes, we need to compare polynomials \(P_{\ell}^{\mathsf{c}}>P_{r}^{\mathsf{c}}\) and \(P_{\ell_{1}}^{\mathsf{s}}\sqsupseteq P_{r_{1}}^{\mathsf{s}}\wedge\cdots\wedge P_{\ell_{K[\iota]}}^{\mathsf{s}}\sqsupseteq P_{r_{K[\iota]}}^{\mathsf{s}}\). These conditions are then reduced to formulas in QF_NIA (Quantifier-Free Non-Linear Integer Arithmetic) and sent to an SMT solver, see [11]. Max-polynomials are simplified using the rules \(\max(x,y)+z\leadsto\max(x+z,y+z)\) and \(\max(x,y)z\leadsto\max(xz,yz)\). The result has the form \(\max_{l}P_{l}\) where each \(P_{l}\) is a polynomial without max occurrences [8].
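To illustrate these two simplification rules, the following sketch (our own toy representation, not the tool described here) keeps a max-polynomial as the set of its polynomial alternatives, on which addition and multiplication act pairwise, so the result is again of the form \(\max_{l}P_{l}\).

```python
# Max-polynomials over N as lists of polynomials; polynomials as {monomial: coefficient}.
from itertools import product

def p_add(p, q):
    r = dict(p)
    for mono, c in q.items():
        r[mono] = r.get(mono, 0) + c
    return r

def p_mul(p, q):
    r = {}
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        mono = tuple(sorted(m1 + m2))      # monomials as sorted tuples of variable names
        r[mono] = r.get(mono, 0) + c1 * c2
    return r

def mp_add(P, Q):                          # max(P) + max(Q) ~> max of pairwise sums
    return [p_add(p, q) for p, q in product(P, Q)]

def mp_mul(P, Q):                          # max(P) * max(Q) ~> max of pairwise products
    return [p_mul(p, q) for p, q in product(P, Q)]

x = [{("x",): 1}]
y = [{("y",): 1}]
m = x + [{("y",): 2}]                      # the max-polynomial  max(x, 2y)
print(mp_add(mp_mul(m, y), x))             # max(x, 2y)*y + x  ~>  max(x*y + x, 2*y^2 + x)
```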
## 6 Conclusion

In this paper we showed that cost-size tuple pairs can be adapted to handle innermost rewriting. The type-aware algebraic interpretation style provided the machinery necessary to deal with innermost termination and a mechanism to establish upper bounds on the innermost runtime complexity of compatible TRSs. We presented sufficient conditions for feasible (polynomial) bounds on \(\mathsf{irc}_{\mathcal{R}}\) of compatible systems, which are in line with related works in the literature. This line of investigation is far from over. Since searching for interpretations can be cumbersome, our immediate future work is to develop new strategies and interpretation shapes. For instance, we seek to expand the class of interpretations beyond max-polynomials, for example to logarithmic functionals. This has the potential to drastically improve the efficiency of our tooling.

Acknowledgments. We wish to thank Cynthia Kop for the valuable discussions and guidance during the production of this paper; Niels van der Weide, Marcos Bueno, and Edna Gomes for carefully proofreading the various manuscript versions of the paper; and the anonymous referees for their valuable comments that helped us improve the paper.
2308.11733
Demand-driven provisioning of Kubernetes-like resources in OSG
The OSG-operated Open Science Pool is an HTCondor-based virtual cluster that aggregates resources from compute clusters provided by several organizations. Most of the resources are not owned by OSG, so demand-based dynamic provisioning is important for maximizing usage without incurring excessive waste. OSG has long relied on GlideinWMS for most of its resource provisioning needs but is limited to resources that provide a Grid-compliant Compute Entrypoint. To work around this limitation, the OSG Software Team has developed a glidein container that resource providers could use to directly contribute to the OSPool. The problem of that approach is that it is not demand-driven, relegating it to backfill scenarios only. To address this limitation, a demand-driven direct provisioner of Kubernetes resources has been developed and successfully used on the NRP. The setup still relies on the OSG-maintained backfill container image but automates the provisioning matchmaking and successive requests. That provisioner has also been extended to support Lancium, a green computing cloud provider with a Kubernetes-like proprietary interface. The provisioner logic has been intentionally kept very simple, making this extension a low-cost project. Both NRP and Lancium resources have been provisioned exclusively using this mechanism for many months.
Igor Sfiligoi, Frank Würthwein, Jeff Dost, Brian Lin, David Schultz
2023-08-22T18:44:42Z
http://arxiv.org/abs/2308.11733v1
# Demand-driven provisioning of Kubernetes-like resources in OSG ###### Abstract The OSG-operated Open Science Pool is an HTCondor-based virtual cluster that aggregates resources from compute clusters provided by several organizations. Most of the resources are not owned by OSG, so demand-based dynamic provisioning is important for maximizing usage without incurring excessive waste. OSG has long relied on GlideinWMS for most of its resource provisioning needs but is limited to resources that provide a Grid-compliant Compute Entrypoint. To work around this limitation, the OSG Software Team has developed a glidein container that resource providers could use to directly contribute to the OSPool. The problem of that approach is that it is not demand-driven, relegating it to backfill scenarios only. To address this limitation, a demand-driven direct provisioner of Kubernetes resources has been developed and successfully used on the NRP. The setup still relies on the OSG-maintained backfill container image but automates the provisioning matchmaking and successive requests. That provisioner has also been extended to support Lancium, a green computing cloud provider with a Kubernetes-like proprietary interface. The provisioner logic has been intentionally kept very simple, making this extension a low-cost project. Both NRP and Lancium resources have been provisioned exclusively using this mechanism for many months. ## 1 Introduction The HTCondor batch workload management system [1; 2] has long been used to aggregate resources from many independent resource providers and is the core technology enabling the Open Science Grid (OSG) [3] operated Open Science Pool (OSPool) [4]. HTCondor architecture has very few hard requirements, allowing its services to operate in virtually any environment, e.g., both with and without elevated privileges, and in restricted network environments. Resource provisioning was, however, never a core competency of the HTCondor stack, delegating that aspect to other software providers. OSPool currently mostly relies on GlideinWMS [5] for its dynamic resource provisioning needs. That said, GlideinWMS specializes in provisioning Grid computing resources, i.e. compute resources managed by independent batch workload management systems behind a Grid-compliant Compute Entrypoint (OSG CE) [6]. In this work we address the provisioning of distributed compute resources that are not managed by a traditional batch system, with a focus on container-based systems. It should be noted that the OSG Software Team has already developed a glidein container image [7] that can be used to contribute resources to the OSPool, which has been successfully used by some resource providers as a backfill solution. Our solution extends that by adding demand-driven provisioning logic on top of that, allowing for OSPool provisioning to work both as backfill and at regular priorities. ## 2 Provisioning Kubernetes-managed resources Kubernetes [8] is a popular container-based resource management system that is getting significant traction in both on-prem and Cloud environments. In particular, the US National Science Foundation (NSF) has funded several on-prem systems that are at least partially managed by Kubernetes, including the Pacific Research Platform (PRP) [9], Expanse, Voyager and the Prototype National Research Platform (PNRP) [10]. Proper integration of such resources in the OSG ecosystem is thus highly desirable. 
An initial Kubernetes provisioner has been developed in PRP [11], but it was mostly focused on supporting a few large communities, e.g., the IceCube and LIGO experiments. In particular, the container image providing the HTCondor worker processes was built starting from a base Operating System (OS) image and had to be customized for each and every target user community. This work [12] extends that implementation by adding support for the OSG glidein container image. The big advantage of this approach is that the OSG-provided image comes fully pre-configured, avoiding the image customization and maintenance effort needed by the original implementation. The downside of this approach is slightly reduced provisioner flexibility, but we found no showstoppers.

### Provisioning logic

The provisioning logic is based on an asynchronous polling mechanism and is demand-driven. The provisioning process periodically queries both HTCondor and Kubernetes for their relevant state, and if there are HTCondor jobs waiting for resources and no relevant queued Kubernetes requests, additional Kubernetes resources are requested. Once a Kubernetes-managed container starts, it contacts the HTCondor scheduling infrastructure, which in turn sends the user job to be executed, as outlined in Figure 1. It should be noted that conceptually this is very similar to the logic used by GlideinWMS, just optimized for provisioning from a single Kubernetes pool.

Figure 1: Summary overview of the Kubernetes Provisioner architecture

The Kubernetes provisioner effectively only manages the up-scaling part of the auto-scaling logic, by queuing more pods as needed. The provisioner never actively removes any pods. All pods are configured with total lifetime and maximum idle time limits, autonomously auto-terminating and thus implicitly managing the down-scaling. This logic avoids race conditions inherent to the asynchronous nature of the setup and was observed to work reasonably well in production environments.

Since HTCondor jobs in the OSPool are virtually never homogeneous, the Kubernetes provisioner groups them by their requirements and requests dedicated Kubernetes resources for each of the groups. This minimizes the waste incurred by the running Kubernetes pods and allows the Kubernetes scheduler to optimally allocate its resources. An overview of the logic is available in Figure 2.

Figure 2: Overview of the clustered provisioning logic

Moreover, not all HTCondor jobs can or want to run on a specific Kubernetes cluster. Neither are all the Kubernetes-managed resources suitable or available to the OSG community. The provisioner thus allows for filtering of HTCondor jobs during its query phase and for adding additional requirements during Kubernetes pod submission. As with most software, those additional restrictions are controlled through an admin-provided configuration file. A simplified example of such a configuration can be seen in Figure 3.

### Integration with the OSG maintained container image

The container image used by the Kubernetes pods has three main functions:
1. provide the necessary software needed by the user jobs,
2. provide the HTCondor software distribution, and
3. properly configure HTCondor on startup.

By using the OSG-maintained glidein container image, the Kubernetes provisioner software stack does not have to maintain the first two anymore.
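As a rough illustration of the demand-driven matchmaking described in the provisioning-logic subsection above, one polling cycle boils down to comparing groups of idle jobs with already-queued pods; the self-contained Python sketch below is our own simplification, and the attribute names and quota value are illustrative, not the provisioner's actual schema.

```python
# One simplified matchmaking cycle: group idle HTCondor jobs by requirements and
# request pods only for groups that do not already have enough pending pods.
from collections import Counter

def pods_to_submit(idle_jobs, pending_pods, max_submit_pods_per_cluster=10):
    want = Counter((j["cpus"], j["memory_gb"]) for j in idle_jobs)
    have = Counter((p["cpus"], p["memory_gb"]) for p in pending_pods)
    requests = []
    for group, n_jobs in want.items():
        missing = n_jobs - have.get(group, 0)
        for _ in range(min(missing, max_submit_pods_per_cluster - len(requests))):
            requests.append({"cpus": group[0], "memory_gb": group[1]})
    return requests

# Mock query results standing in for the HTCondor and Kubernetes state.
idle_jobs = [{"cpus": 1, "memory_gb": 2}, {"cpus": 1, "memory_gb": 2},
             {"cpus": 8, "memory_gb": 32}]
pending_pods = [{"cpus": 1, "memory_gb": 2}]
print(pods_to_submit(idle_jobs, pending_pods))
# -> one more 1-core/2 GB pod and one 8-core/32 GB pod are requested
```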
Most of the configuration is also maintained by the OSG container image but must be dynamically patched at runtime to inject the additional bits and pieces the Kubernetes provisioner relies on. This adds the potential of the two getting out of sync, but so far it has not been a problem yet. The major requirement of the dynamic patching is the propagation of provisioner-specific attributes used for querying and matchmaking. On top of that, the provisioner also uses a slightly different approach at passing secrets used in authentication, so the HTCondor configuration in the image has to be patched accordingly; while it would have been in principle possible to alter the provisioner secret handling, we decided it was less disruptive to just patch the existing configuration. ### Interaction between multiple Kubernetes users The Kubernetes provisioner is generally oblivious about the activities of other Kubernetes users in the system. The task of sharing the resources between those users is mostly offloaded to the Kubernetes scheduler. The major knob used by the provisioner is the _priorityClassName_ of the submitted Kubernetes pods, which regulates how those pods will be scheduled in relation to other pods in the system. For example, in the National Research Platform (NRP) Nautilus cluster, which contains both the PRP and PNRP nodes, the _opportunistic2_ class has the lowest priority number of all the defined classes and allows for preemption, and is thus used for backfill pods. On the other hand, regular-priority pods simply do not explicitly specify any _priorityClassName_ at all. Additionally, the provisioner allows for setting of a quota through the configuration setting _max_submit_pods_per_cluster_. This is especially useful for regular-priority, non-preemptable pods, so a single user does not take over the whole cluster in the absence of cluster-wide quota settings. Figure 3: A simplified example configuration file showcasing the provisioning restrictions ## 3 Extending the provisioner to Lancium cloud compute Lancium is a green computing company, who offered a significant amount of compute resources to OSPool through its cloud computing platform. Unfortunately, at that time its cloud offering used a custom interface, so one of the OSG resource provisioning tools had to be extended to make use of it. The Lancium cloud interface was container based and provided the usual batch-like actions, e.g., query and submit. After close examination, we determined that the Lancium semantics was rich enough to support our Kubernetes-focused provisioner, even though the syntax was significantly different. Our Kubernetes-focused provisioner is written in Python language, with Python classes abstracting away the Kubernetes details from the provisioning logic. It was thus relatively easy to implement an alternative class [13] that exposed the same interface but interacted with Lancium instead, due to the fact that we used only the basic Kubernetes capabilities in the original code. ## 4 Summary and conclusions The Kubernetes-focused provisioner described in this work has allowed the OSG communities, and in particular the OSPool, to successfully and effectively make use of the NSF-funded Kubernetes-managed compute resources, e.g., the NRP. At the time of writing, this provisioner was the only solution available to the OSG Consortium for dynamically provisioning multi-tenant Kubernetes systems. 
The work builds on top of the Kubernetes provisioner built as part of the PRP project, but further integrates it with the OSG Software Stack. This both reduces code maintenance and minimizes OSPool job failures due to configuration errors. Nevertheless, other user communities are still supported using the original container image approach. Additionally, the provisioner has been shown to be easily extendible to support other platforms with a Kubernetes-like interface, due to its minimalistic interface requirements. This was proven by adding support for the Lancium cloud platform, which has delivered a significant amount of resources to the OSPool. ## Acknowledgments This work was partially funded by the U.S. National Science Foundation (NSF) under grants OAC-2030508, OAC-2112167, OAC-1826967, CNS-1925001, OAC-1841530, CNS-1730158, CNS-2100237 and CNS-2120019.
2301.12731
GRASIAN: Towards the first demonstration of gravitational quantum states of atoms with a cryogenic hydrogen beam
At very low energies, a light neutral particle above a horizontal surface can experience quantum reflection. The quantum reflection holds the particle against gravity and leads to gravitational quantum states (GQS). So far, GQS were only observed with neutrons as pioneered by Nesvizhevsky and his collaborators at ILL. However, the existence of GQS is predicted also for atoms. The GRASIAN-collaboration pursues the first observation and studies of GQS of atomic hydrogen. We propose to use atoms in order to exploit the fact that orders of magnitude larger fluxes compared to those of neutrons are available. Moreover, recently the qBounce collaboration, performing GQS spectroscopy with neutrons, reported a discrepancy between theoretical calculations and experiment which deserves further investigations. For this purpose, we set up a cryogenic hydrogen beam at 6 K. We report on our preliminary results, characterizing the hydrogen beam with pulsed laser ionization diagnostics at 243 nm.
Carina Killian, Zakary Burkley, Philipp Blumer, Paolo Crivelli, Fredrik Gustafsson, Otto Hanski, Amit Nanda, Francois Nez, Valery Nesvizhevsky, Serge Reynaud, Katharina Schreiner, Martin Simon, Sergey Vasiliev, Eberhard Widmann, Pauline Yzombard
2023-01-30T08:55:59Z
http://arxiv.org/abs/2301.12731v2
Grasian: Towards the first demonstration of gravitational quantum states of atoms with a cryogenic hydrogen beam ###### Abstract At very low energies, a light neutral particle above a horizontal surface can experience quantum reflection. The quantum reflection holds the particle against gravity and leads to gravitational quantum states (gqs). So far, gqs were only observed with neutrons as pioneered by Nesvizhevsky and his collaborators at ill. However, the existence of gqs is predicted also for atoms. The Grasian collaboration pursues the first observation and studies of gqs of atomic hydrogen. We propose to use atoms in order to exploit the fact that orders of magnitude larger fluxes compared to those of neutrons are available. Moreover, recently the \(q\)-Bounce collaboration, performing gqs spectroscopy with neutrons, reported a discrepancy between theoretical calculations and experiment which deserves further investigations. For this purpose, we set up a cryogenic hydrogen beam at 6 K. We report on our preliminary results, characterizing the hydrogen beam with pulsed laser ionization diagnostics at 243 nm. *Corresponding author(s). E-mail(s): [email protected]; Contributing authors: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; ## 1 Introduction Quantum bouncers were first predicted in 1928 [1]. Nearly 75 years later, this phenomenon was demonstrated through the observation of neutron (\(n\)) gravitational quantum states (gqs) [2, 3, 4, 5, 6, 7, 8]. Confined by the gravitational- and the mirror potential, the \(n\) are settled in gravitationally bound quantum states. Studies of \(n\) gqs have a broad impact on fundamental and applied physics. They serve as a unique method to study the interaction of a particle in a quantum state with a gravitational field. For example, paired with more recent measurements of \(n\) whispering gallery states (wgs) - quantum states trapped by the centrifugal- and the mirror potential [9], they result in the first direct demonstration of the validity of the weak equivalence principle for a particle in a pure quantum state. The observation of gqs initiated active analysis of the pecularities of this phenomenon [10, 11, 12, 13, 14, 15, 16, 17] and their application to the search for new physics, such as the searches for extra fundamental short-range interactions [18, 19, 20, 21, 22, 23, 24], verification of the weak equivalence principle in the quantum regime [25, 26, 27], extensions of quantum mechanics [28, 29], extensions of gravity and space theories [30, 31, 32, 33] or tests of Lorentz invariance [34, 35, 36]. New fundamental short-range interactions are predicted in extensions of the Standard Model such as grand unified, supersymmetric and string theories that could alter the weak gravitational potential. They also appear in certain models explaining dark matter and dark energy [23]. Additionally, studies on gqs can provide extremely sensitive measurements of the mirror's surface potential and shape, which is of high interest to the surface physics community. Spectroscopy and interferometry methods of observation of gqs of \(n\) have been analyzed theoretically and implemented experimentally over the previous two decades [37, 38, 39, 40, 41]. 
However, the existence of gqs is predicted also for atoms and antiatoms [42, 43, 44, 45, 46, 47, 48, 49, 50]. Those are expected to have essentially identical properties for particles of almost equal mass such as \(n\), atomic hydrogen (\(H\)) or even antihydrogen (\(\bar{H}\)). A major constraint to improve the precision of the current measurements of gqs of \(n\) is the limited density of ultracold \(n\) (ucns). It looks natural to exploit the much higher fluxes available for atoms, namely the high densities of existing \(H\)-beams [51]. However, all the projects concerning the use and study of gqs of atoms are currently based only on theoretical estimations since those, in contrast to \(n\), have never been observed experimentally. Only a direct experiment can prove the existence of gqs of atoms, evaluate the systematic and statistical uncertainties of such experiments and develop the experimental techniques needed for more precise measurements in the future. In section 2, a theoretical derivation of gqs is given. The method used earlier for the observation of \(n\) gqs and the planned implementation of a measurement with \(H\) is presented in 3. A detailed description of the Grasian experimental setup and the recent measurements is given in 4. ## 2 Theoretical framework A sufficiently slow particle trapped by the gravitational field on one side and a horizontal reflective surface ("mirror") on the other side settles in gqs. The particle's wave function \(\psi(z)\) in the Earth's gravitational field above a mirror is governed by the Schrodinger equation \(\frac{\hbar^{2}}{2m}\frac{d^{2}\psi(z)}{dz^{2}}+(E-mgz)\psi(z)=0\), where \(\hbar\) is the reduced Planck constant, \(m\) is the particles mass, \(z\) is the height, \(E\) is the energy of the vertical motion of the particle, and \(g\) is the acceleration in the Earth's gravitational field. The only constant related to the particle's identity is it's mass, which is nearly exactly the same for \(n\), \(H\) or \(\bar{H}\). For simplicity, \(n\) over an ideal mirror will be considered in the following derivation. An ideal horizontal mirror at the height \(z=0\) can be approximated as an infinitely high and abrupt potential step. This approximation is justified by the characteristic values of energies and lengths in our problem. The energy of neutrons in low quantum states \(\sim\)\(10^{-12}\) eV, is much smaller than the optical potential of the mirror material \(\sim\)\(10^{-7}\) eV [52], and the characteristic range of increase in the optical potential for a polished mirror \(\sim\)\(10^{-9}\) m, is much smaller than the wavelength of neutrons in low quantum states \(\sim\)\(10^{-5}\) m. Such an infinitely high and abrupt optical potential corresponds to the zero boundary condition for the wave function, \(\psi(z=0)=0\). A solution of the Schrodinger equation can be written in terms of the Airy function Ai, \(\psi(z)=C\)Ai(\(z/z_{0}\)), where \(z_{0}=[\hbar^{2}/(2m^{2}g)]^{1/3}=5.87\) um is the characteristic length scale of the problem and \(C\) is a normalization constant. The Airy function zeros \(\lambda_{k}\) define the quantum state energies \(E_{k}=mgz_{0}\lambda_{k}\), where \(\varepsilon_{0}=mgz_{0}=\) 0.602 peV is the characteristic energy of the problem and \(f_{0}=\varepsilon_{0}/(2\pi\hbar)=\) 145 Hz is its characteristic frequency. The five lowest zeros of the Airy function Ai are \(\lambda_{k}=\{2.34,4.09,5.52,6.79,7.94...\}\). 
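As a quick numerical cross-check of the scales just quoted, the short sketch below (ours, not part of the original text) reproduces \(z_{0}\), \(\varepsilon_{0}\), \(f_{0}\) and the Airy zeros with SciPy; the neutron mass, \(g\) and \(\hbar\) are standard constants, not values taken from the paper.

```python
# Minimal numerical check of the characteristic gqs scales quoted above.
# The constants below are standard values, not numbers taken from the paper.
import numpy as np
from scipy.special import ai_zeros

hbar = 1.054571817e-34    # J s
m_n  = 1.67492749804e-27  # neutron mass, kg
g    = 9.80665            # m s^-2
eV   = 1.602176634e-19    # J per eV

z0   = (hbar**2 / (2 * m_n**2 * g)) ** (1 / 3)   # characteristic length
eps0 = m_n * g * z0                              # characteristic energy
f0   = eps0 / (2 * np.pi * hbar)                 # characteristic frequency

lam = -ai_zeros(5)[0]        # first five zeros of Ai, made positive
E_k = eps0 * lam             # E_k = m g z0 lambda_k
z_k = z0 * lam               # classical turning heights z_k = E_k / (m g)

print(f"z0   = {z0 * 1e6:.2f} um")           # ~5.87 um
print(f"eps0 = {eps0 / eV * 1e12:.3f} peV")  # ~0.602 peV
print(f"f0   = {f0:.0f} Hz")                 # ~145 Hz
print("lambda_k =", np.round(lam, 2))        # ~[2.34 4.09 5.52 6.79 7.94]
print("z_k [um] =", np.round(z_k * 1e6, 1))
```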
The eigenfunctions of the quantum states are \(\psi_{k}(\xi(z))\sim C_{k}\)Ai\((\xi_{k}(z))\), where \(\xi_{k}(z)=z/z_{0}-\lambda_{k}\), and \(C_{k}\) are normalization constants. The energy eigenvalues \(E_{k}\) depend only on \(m\), \(g\) and \(\hbar\), and are independent of the ideal mirror properties. Within the classical description, a neutron with the energy \(E_{k}\) can rise in the gravitational field up to the height \(z_{k}=E_{k}/mg\). In quantum mechanics, the probability of observing a neutron with the energy \(E_{k}\) in the \(k^{th}\) quantum state at a height \(z\) is equal to the squared modulus of its wave function (see Fig. 1). ## 3 Measurement method ### gqs measurement with \(n\) In this section, the methods developed for the first observation of gqs of \(n\) at lll will be described. The experimental installation is a one component gravitational uncn spectrometer with a high energy and spatial resolution [3]. The principle of its operation, illustrated in Fig. 2, is the measurement of the \(n\) flux through a slit between the mirror on bottom and the flat scatterer on top as a function of the slit height \(\Delta z\) which can be changed and precisely measured. The scatterer surface is smooth on a large scale but rough on the um scale. The roughness amplitude is about a few um, and is comparable to the characteristic scale \(z_{0}\) of the problem. The scatterer's surface reflects \(n\) which reach it non specularly, mixing the vertical and horizontal velocity components of the \(n\). Because the \(n\) horizontal velocity components are much larger than their vertical velocity components, such mixing causes numerous collisions of the \(n\) with the scatterer, thus resulting in a prompt loss of those \(n\). The length of the bottom mirror is chosen based on the energy time uncertainty relation \(\Delta\tau\Delta E\geq\hbar/2\). The observation of the \(k^{th}\) quantum state is possible if the difference between the Eigenenergies of state \(k+1\) and state \(k\), \(\Delta E_{k+1,k}\) is bigger than the width of the \(k^{th}\) quantum state, \(\delta E_{k}\): \(\Delta E_{k+1,k}>\delta E_{k}\). As the state number \(k\) increases, \(\Delta E_{k+1,k}\sim k^{-1/3}\) decreases until the levels pass into the classical continuum. Evidently, measurements of low quantum states are easier and more convenient. \(\delta E_{k}\) is defined by the time of flight of \(n\) above the mirror. Therefore, the mirror length is determined by the time interval needed to observe a \(n\) in a gqs: \(\Delta\tau\sim\) 0.5 ms. It follows, that the mirror length should be \(L\sim\) 10 cm for low states and for \(n\) velocities \(v_{\rm hor}\sim\) 5\(-\)10 m s\({}^{-1}\). The vertical scale in the problem is defined by the relation between momentum \(p\), velocity \(v\) and wavelength \(\lambda\): \(p=mv=h/\lambda\) and the momentum position uncertainty relation \(\Delta p\Delta z\geq\hbar/2\) Figure 1: Squared modules of the neutron wavefunctions \(|\psi_{k}(z)|^{2}\) as a function of the height \(z\) for the five lowest quantum states; they correspond to the probabilities of observing neutrons at a height \(z\). Figure 2: Schematic of the experimental setup in the flow through mode. 1 are the bottom and top entrance collimator plates, arrows 2 correspond to neutron classical trajectories between the collimator and the entrance to the slit between mirror 3 and scatterer 4. Dotted horizontal arrows 5 illustrate neutron quantum motion above the mirror. 6 is the neutron detector. 
The height of the slit between the mirror and the scatterer can be varied and precisely measured. The smaller the \(n\) vertical velocity component, the larger the \(n\) wavelength associated with this velocity component. But the classical height up to which a \(n\) can rise in the gravitational field cannot be smaller than the quantum mechanical uncertainty of its vertical coordinate, i.e., the \(n\) wavelength. This relation determines the lowest bound state of \(n\) in the Earth's gravitational field. The height uncertainty is then \(\Delta z\sim z_{0}\), and the vertical velocity uncertainty is \(\Delta v_{z}\sim v_{0}=\sqrt{2\varepsilon_{0}/m}=\)1.07 cm s\({}^{-1}\), the characteristic velocity in the problem. The method used in the first observation of \(n\) gqs[4] consisted in measuring \(n\) transmission through the narrow slit \(\Delta z\) between the horizontal mirror and the scatterer above it. If \(\Delta z\gg z_{k}\), neutrons in the \(k^{th}\) quantum state pass through the slit with no significant loss. But as \(\Delta z\) decreases, the neutron wave function \(\psi_{k}(z)\) starts penetrating the scatterer, and the \(n\) loss probability increases. If \(\Delta z\leq z_{k}\), the slit is practically non-transparent to neutrons in the \(k^{th}\) quantum state. In an "ideal" experiment with an infinitely high energy resolution, the \(n\) flux \(N_{\rm QM}(\Delta z)\) through the slit would sharply change at the height \(\Delta z\sim z_{k}\). In reality, the idealized step like dependence is smoothed due to two factors: the spectrometer experimental resolution and the smooth shape of \(n\) wave functions. The latter is due to the tunneling of \(n\) through the gravitational barrier separating the classically allowed heights and the scatterer height. An example of the experimental data is shown in Fig. 3. ### gqs measurement with \(H\) The method, developed for the observation of gqs of \(n\), will be used to demonstrate gqs of \(H\). In the following section, the feasibility of such an experiment will be analysed and the characteristic parameters of the two experiments will be compared. The observation time is a key parameter. The time of observation is defined by the mean particle velocity and the mirror length. For the characteristic velocity of \(n\), \(v_{n}\sim\) 10 m s\({}^{-1}\) and the mirror length \(L_{n}=\) 10 cm, the observation time was \(\tau_{n}\sim 10\) ms. \(\tau_{n}/\tau_{0}\sim 20\), i.e. the observation time is much larger than the formation time (\(\tau_{n}\gg\tau_{0}\)) and the gqs were well resolved. In order to provide the same conditions for the resolution of gqs, the time of observation of \(H\) has to be the same \(\tau_{H}\sim\tau_{n}\sim\) 10 ms. For the planned gqs mirror length \(L_{H}\sim\) 30 cm, the mean velocity of \(H\) has to be \(v_{H}\sim L_{H}/10\) ms \(\sim\) 30 m s\({}^{-1}\). All \(H\) with significantly higher velocities do not settle in gqs, they only increase the background and have to be eliminated. Velocities of up to \(v_{H}\sim 100\) m s\({}^{-1}\) can still be tolerated, at the cost of a worse energy resolution of the experiment. This is still acceptable for the first observation of gqs of \(H\). However, low velocities, good control over the velocity selection and sufficient background suppression is the key condition for the observation of gqs of \(H\). A major difference in the behavior of \(n\) and \(H\), is the mechanism of their interaction with the rough surface of the scatterer. 
Scattered \(n\) are lost in the bulk of the mirror or scatterer after several reflections from their surfaces. Scattered \(H\) atoms have higher chances to leak through the slit and produce background: they escape from the slit into a broad angular distribution, in contrast to the \(H\) atoms that pass the slit settled in a gqs. \(H\) atoms escaping to larger angles have to be eliminated to decrease the background. Therefore, it is very important to implement a proper background suppression. The expected count rate of \(H\) is much higher than that of \(n\). A simple comparison of the "brightness" of the \(n\) source at ill and the \(H\) source at the Grasian experiment at eth Zurich is the following: the total number of particles produced at the \(H\) source is \(\sim 10^{17}\,\mathrm{s}^{-1}\), orders of magnitude more than the ucn flux available at ill.
Figure 3: The data points correspond to the measured \(n\) flux through the slit between mirror and absorber versus the slit width at low width values. The dashed curve is a fit using the quantum mechanical calculation, where all level populations and the height resolution are extracted from the experimental data. The solid curve is the full classical treatment. The dotted line is a truncated fit in which it is assumed that only the lowest quantum state - which leads to the first step - exists. Fig. taken from [4].
## 4 Experimental setup
A schematic of the laser system is shown in Fig. 5. It consists of two lasers (a continuous wave1 (CW) laser and a pulsed laser2), a pulsed dye amplifier3 (pda) and a second harmonic generation (shg) unit. The fundamental of the CW laser has a wavelength of 972 nm. An internal shg cavity creates the output CW laser beam with a wavelength of 486 nm and an average power of \(\sim\)250 mW. At exactly double the desired wavelength for the two photon 1S-2S transition, the CW laser acts as the seeder laser. The pulsed laser is the 355 nm third harmonic of a 10 Hz pulsed Nd:YAG laser. In the Q-switch mode, the pulses are 10 ns long and have a pulse energy of \(\sim\)100 mJ. Footnote 1: Toptica TA-SHG pro Footnote 2: Spectra-Physics Quanta-Ray Lab 190 Within the pda, three quartz cuvettes are circulated with the dye, Coumarin 102, dissolved in ethanol. The absorption and fluorescence of Coumarin 102 fit the application well: when pumped with 355 nm light, the emission is centered around 473 nm. The pulsed laser is split up and focused onto the three cuvettes, and pumps the Coumarin molecules. The CW beam seeds the pda by passing through the three cuvettes, overlapping with the pumped dye molecules. This stimulates the emission of 486 nm photons with every pulse that pumps the dye. After three cuvettes, \(\sim\)10-12 mJ of 486 nm pulsed laser light is generated. Like the pulsed pump laser, it runs at 10 Hz with a pulse length of \(\sim\)10 ns. In the shg unit, the output of the pda is frequency doubled with a barium borate (bbo) crystal to generate \(\sim\)1-2 mJ of pulsed UV radiation at 243 nm. The 243 nm beam is then sent into the detection chamber. A mirror is mounted behind the photo ionization region to produce counter-propagating beams for the Doppler-free two photon excitation.
The 243 nm photons efficiently ionize the \(H\) and the \(H^{+}\) ions are detected by an MCP. The MCP creates a voltage signal, which is read out by an oscilloscope. In Fig. 6, such a waveform of \(H^{+}\) signal is shown. As indicated in Fig. 5, the frequency of the fundamental of the seeder laser is determined and controlled by a wavelength meter4. While the wavelength meter is reliable for relative measurements, the absolute values are shifted by \(\sim 230\) MHz, due to outdated calibration. The CW fundamental frequency corresponds to 1/8 of the 1S-2S transition frequency, due to two shg processes and the two photon excitation. Footnote 4: HighFinesse WS7-60 Scanning the laser frequency around the resonance, shows that we are capable to resolve the hyperfine splitting (HFS) of \(H\). The two peaks, shown in Fig. 7, correspond to the difference of the HFS of the 1S and the 2S state. The measured value is \(\Delta\nu_{\mathrm{meas.}}=1.23(2)\) GHz, which agrees with Figure 5: Schematic of the laser system. The pda is pumped with a pulsed Nd:YAG laser and seeded with a CW diode laser. The generated 486 nm pulsed beam is frequency doubled in the shg unit and afterwards sent into the detection chamber to ionize \(H\). The wavelength of the fundamental of the seeder laser, is determined and controlled by a wavelength meter. Figure 6: Waveform captured by the oscilloscope showing the \(H^{+}\) signal. The purple region determines the offset in the signal strength evaluation. The magenta curve corresponds to the signal induced by the UV light in the detection chamber. The red curve corresponds to the \(H^{+}\) induced voltage change in the expected time-of-flight window between 640 ns to 1.15 μs. the literature value \(\Delta\nu_{\rm lit.}=1.24\,\)GHz (which can be calculated from [57]) within \(1\sigma\). This proves, that we detect \(H\) atoms, and not any other potential pollutant in the detection chamber. A scan of the laser pulse intensity dependency of the signal resulted in the same conclusion, showing the expected behavior. The intensity \(I\) is determined by the pulse energy \(E\) and the beam waist \(\omega_{0}\sim 0.75\,\)mm: \(I=E/(\omega_{0}^{2}\pi)\). The data can be taken from Fig. 8. The signal shows an \(I^{3}\) dependence as expected for a three photon process. The two photon 1S-2S excitation is \(I^{2}\) dependent and the ionization of the 2S state adds another \(I\) dependency to the overall process. At \(\sim 10^{7}\,\)W cm\({}^{-2}\), the 2S ionization process starts to saturate, as the ionization rate \(\Gamma_{i}=(I/h\nu)\sigma\), where \(\sigma\) is the 2S \(H\) ionization cross section and \(h\nu\) the photon energy, reaches \(1/\tau_{\rm 2S}\), where \(\tau_{\rm 2S}\) is the lifetime of the 2S state. From here, the overall process follows an \(I^{2}\) dependency. At \(\sim 3\times 10^{7}\,\)W cm\({}^{-2}\), also the 1S-2S excitation process will start to saturate and the overall process becomes \(I\) independent [58]. It would be ideal to run the \(H\) detection on saturation, because the ionization process would become independent of energy- or frequency instabilities of the laser. In order to reach the point of saturation, the laser beam size has to be decreased. Different combinations of convex and concave lenses were already used to compress the beam. But, the mirrors did not withstand the increased intensity for long and were damaged. It seemed like the point of saturation overlapped with the damage threshold of the UV mirrors, which were used at that point. 
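As a rough cross-check of the intensity scale discussed above (a sketch of ours, not the collaboration's analysis): to obtain the quoted values in W cm\({}^{-2}\), the \(\sim\)10 ns pulse length mentioned earlier has to enter together with \(E/(\omega_{0}^{2}\pi)\). The snippet below evaluates this peak-intensity estimate for the current and a compressed beam waist.

```python
# Rough peak-intensity estimate for the pulsed 243 nm beam.  The ~10 ns pulse
# length quoted earlier is included here to convert pulse energy to W/cm^2;
# this is our reading of the quoted numbers, not a formula stated in the text.
import numpy as np

def peak_intensity(pulse_energy_J, waist_cm, pulse_length_s=10e-9):
    """I ~ E / (pi * w0^2 * tau), a top-hat beam/pulse approximation."""
    return pulse_energy_J / (np.pi * waist_cm**2 * pulse_length_s)

for w0_mm in (0.75, 0.3):
    I = peak_intensity(1e-3, w0_mm / 10.0)   # assume a 1 mJ UV pulse
    print(f"w0 = {w0_mm:.2f} mm  ->  I ~ {I:.1e} W/cm^2")
# ~5.7e6 W/cm^2 at 0.75 mm and ~3.5e7 W/cm^2 at 0.3 mm, i.e. the compressed
# beam reaches the ~1e7 W/cm^2 saturation regime discussed above.
```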
We replaced the mirrors and will hopefully be able to increase the intensity until saturation is reached. ### Hydrogen beam characterization and rate estimation To characterise the velocity distribution of the \(H\)-beam, a time of flight (ToF) measurement was performed. The delay between the opening of the chopper and the firing of the laser was varied while the \(H\) count rate was measured. The expected signal \(S(t)\) is a convolution of the chopper kernel \(h(t)\) and the atomic ToF distribution \(P_{t}(t)\) (assumed to follow a Maxwellian distribution) and is given by \[S(t) =h(t)*P_{t}(t)\,, \tag{1}\] \[P_{v}(t) \propto v^{3}\exp\left(-\frac{mv^{2}}{2kT}\right),\] (2) \[P_{t}(t) =P_{v}\left(\frac{\Delta x}{t}\right)\frac{\Delta x}{t^{2}}\,, \tag{3}\] where \(\Delta x\) is the distance between chopper and detection region, \(m\) is the \(H\) mass, \(k\) is the Boltzmann constant and \(T\) the temperature. The data taken in 2021 and a fit are shown in Fig. 9. The fit resulted in a temperature of \(T=6.07(74)\,\)K, meaning, that the \(H\) gas thermalizes well with the cryogenic nozzle at 6 K. The measurement shows that the maximum of the atom flux appears after around 5 ms delay, relating to an atomic velocity of 250 m s\({}^{-1}\) with a significant fraction of atoms below 100 m s\({}^{-1}\). Figure 8: Laser intensity scan - Observation of the \(I^{3}\) and \(I^{2}\) dependency of \(H\) ionization. Figure 7: Laser frequency sweep - Observation of the 1S and 2S HFS of \(H\). The frequency values on the abscissa correspond to the measured frequency of the seeder laser fundamental, shifted by 230 MHz (outdated wavelength meter calibration) and multiplied by a factor of 8 (two SHG processes, two photon excitation). This was done to to match the absolute literature values [57]. As mentioned in section 3.2, \(H\) velocities of up to \(100\,\mathrm{m}\,\mathrm{s}^{-1}\) can be tolerated. For the upcoming gqs measurement, a certain velocity interval will be selected by setting the delay between chopper opening and firing of the laser to a certain value. The width of this interval is determined by the duration of the chopper opening \(t_{\mathrm{open}}\sim 6.1\,\mathrm{ms}\). With the current velocity distribution of the \(H\) beam, it makes sense, to set the upper bound of the velocity interval to \(v_{\mathrm{max}}=100\,\mathrm{m}\,\mathrm{s}^{-1}\) which leads to a lower bound of \(v_{\mathrm{min}}=\Delta x/(t_{100}+t_{\mathrm{open}})=62\,\mathrm{m}\,\mathrm{ s}^{-1}\). The mean velocity of this interval is \(\bar{v}=81\,\mathrm{m}\,\mathrm{s}^{-1}\) with a corresponding ToF of \(t_{\bar{v}}=12.3\,\mathrm{ms}\). With the fit result, it is possible to estimate the rate of \(H\) atoms passing through the future gqs region. It is composed of the \(H\) input rate \(R_{\mathrm{in}}\sim 10^{17}\,\mathrm{s}^{-1}\), the chopper duty cycle \(d_{c}\sim 0.061\), the form of the distribution after the chopper, assuming a cone like distribution with an opening angle of \(\theta\sim\frac{3}{4}\pi\), the cross sectional area of the gqs region \(A\sim 0.5\,\mathrm{mm}^{2}\) (\(\Delta z\sim 20\,\mathrm{\SIUnitSymbolMicro m}\), \(\Delta y\sim 25\,\mathrm{mm}\)), the beam waist of the laser \(\omega_{0}\sim 0.75\,\mathrm{mm}\) and the probability of the atoms with the given velocity distribution to have a velocity within the selected velocity interval, \(P_{v}(v_{\mathrm{min}}\geq v_{x}\geq v_{\mathrm{max}})=3.9\times 10^{-3}\). 
These parameters yield an estimated rate of \(H\) passing through the gqs chamber of \[R=R_{\mathrm{in}}dP_{v}(v_{\mathrm{min}}\geq v_{x}\geq v_{ \mathrm{max}})\] \[\frac{A}{2\pi\Delta x^{2}\,(1-\cos\theta/2)}\frac{2\omega_{0}}{( v_{\mathrm{max}}-v_{\mathrm{min}})t_{\bar{v}}}\cong 10^{4}\,\mathrm{s}^{-1}\,. \tag{4}\] Multiplying \(R\) by the ionization efficiency of \(\epsilon_{\mathrm{ion}}\sim 8\%\) (determined by \(\omega_{0}\) and an assumed laser energy of \(1\,\mathrm{mJ}\)) and the MCP efficiency \(\epsilon_{\mathrm{MCP}}\sim 50\%\) yields the signal rate \(R_{\mathrm{sig}}\cong 400\,\mathrm{s}^{-1}\). This is \(4\times 10^{3}\) times more \(H\) signal, as compared to the unc signal. ## 5 Outlook There are currently three major improvements being implemented and tested. New UV mirrors with a higher damage threshold were installed. It can be expected, that the laser intensity will be improved by an order of magnitude: when the beam size is compressed to \(\omega_{0}\sim 0.3\,\mathrm{mm}\), with a UV laser energy of \(\sim 1\,\mathrm{mJ}\), the intensity becomes \(\sim 3.6\times 10^{7}\,\mathrm{W}\,\mathrm{cm}^{-2}\). At this level, saturation is reached and the signal becomes independent of laser energy- or frequency fluctuations. Furthermore the ionization efficiency will be improved dramatically. A beam size of \(0.3\,\mathrm{mm}\) yields an ionization efficiency of \(\epsilon_{\mathrm{ion}}=98.22\%\), which would improve the estimated rate by a factor of 12. The estimated rate will further be improved, by the installation of a new coldhead and an additional heatshield. This is currently being implemented, and first measurements show, that temperatures around \(\sim 4\,\mathrm{K}\) can be expected. This would improve our estimated rate for the velocity interval \([62\,\mathrm{m}\,\mathrm{s}^{-1},\ 100\,\mathrm{m}\,\mathrm{s}^{-1}]\) by a factor of \(\sim 2.2\). It would alternatively be possible to select slower velocities in the interval \([55\,\mathrm{m}\,\mathrm{s}^{-1},\ 83\,\mathrm{m}\,\mathrm{s}^{-1}]\) while maintaining the same countrate as with the old cryo system. It would be preferable to go to even lower velocities. But, as can be seen in Fig. 9 at around \(20\,\mathrm{ms}\), the residual hydrogen gas in the chamber prevents the measurements to be sensitive to atoms with lower velocities. This could be improved by an aperture system between the source and the chamber where the gqs region will be installed. Such a system is currently being Figure 9: Upper figure: ToF data and fit. The fit results in a temperature of \(T=6.07(74)\,\mathrm{K}\). \(t_{100}\) and \(t_{50}\) correspond to the ToFs of \(H\) with velocities of \(100\,\mathrm{m}\,\mathrm{s}^{-1}\) and \(50\,\mathrm{m}\,\mathrm{s}^{-1}\), respectively. In the small figure, the corresponding cumulative velocity distribution for \(v\in[0,200]\,\mathrm{m}\,\mathrm{s}^{-1}\) is shown. Lower figure: Standardized residuals of the fit and corresponding histogram. installed and tested. It consists of three height adjustable, vertical slits with a width of \(200\,\mathrm{\SIUnitSymbolMicro m}\) for the first slit and \(1\,\mathrm{mm}\) for the second and third. Two more vacuum pumps will be installed in between the first and the second and the second and the third slit. This system has two purposes: It will decrease the background, due to the separation of the different vacuum regions of the cryogenic chamber, the beamline and the detection chamber. It will also act as a velocity selecting aperture. 
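Returning to the rate estimate of Eq. (4): the sketch below (illustrative only, not the collaboration's code) re-derives the quoted velocity-interval probability from the fitted 6.07 K Maxwellian and then evaluates Eq. (4). The chopper-to-detector distance \(\Delta x\) is not stated explicitly in the text; \(\Delta x\approx 1\) m is an assumption chosen to be roughly consistent with the quoted \(v_{\mathrm{min}}\).

```python
# Numerical check of two numbers quoted above: the probability of an H atom
# falling in the selected 62-100 m/s interval for the fitted 6.07 K Maxwellian,
# and the rate estimate of Eq. (4).  The chopper-to-detector distance dx is not
# stated in the text; dx ~ 1 m is an assumption used for illustration only.
import numpy as np
from scipy.integrate import quad

k_B, m_H = 1.380649e-23, 1.6735575e-27      # SI units
T = 6.07                                    # K, fitted beam temperature

P_v = lambda v: v**3 * np.exp(-m_H * v**2 / (2 * k_B * T))   # Eq. (2), unnormalised
norm = quad(P_v, 0.0, 2000.0)[0]            # 2 km/s is effectively infinite at 6 K
frac = quad(P_v, 62.0, 100.0)[0] / norm
print(f"P(62 <= v <= 100 m/s) ~ {frac:.1e}")                 # ~3.9e-3, as quoted

R_in, d_c = 1e17, 0.061                     # H input rate, chopper duty cycle
A, w0     = 0.5e-6, 0.75e-3                 # gqs cross section (m^2), beam waist (m)
theta     = 0.75 * np.pi                    # cone opening angle from the text
dx        = 1.0                             # m (assumed)
v_min, v_max, t_vbar = 62.0, 100.0, 12.3e-3

R = (R_in * d_c * frac
     * A / (2 * np.pi * dx**2 * (1 - np.cos(theta / 2)))
     * 2 * w0 / ((v_max - v_min) * t_vbar))
print(f"R ~ {R:.1e} s^-1, R_sig ~ {R * 0.08 * 0.5:.0f} s^-1")  # ~1e4 and ~400 s^-1
```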
As the slit height is adjustable, different trajectories of the atoms can be selected. With the three slits, it will be possible to select the low energy tail of the \(H\) atoms with a vertical velocity component \(v_{z}\sim 0\) at the entrance of the gqs spectrometer as described in section 3.1. As soon as those new implementations are completed and characterized, the gqs chamber will be installed at the end of the beamline replacing the detection chamber. It will contain the gqs spectrometer and the viewports for the UV laser. In this way, the atoms passing through the spectrometer will be photo ionized at the end of the mirror and the \(H^{+}\) detected in the MCP. ## 6 Conclusions We conclude that a gqs measurement with \(H\) is a very promising but challenging endeavor. The expected count rate exceeds the count rates accessible with ucns by orders of magnitudes. An extension and improvement of the existing gqs measurements is highly interesting for multiple fields. In the course of realizing a gqs measurement with \(H\), we set up a cryogenic \(H\)-beam. A highly efficient \(H\) detection system was developed. New UV mirrors, an improved cryogenic system and an aperture system which will reduce the background and select ideal velocity components are currently being implemented and tested. We aim to demonstrate the existence of gqs of \(H\) within the next measurement campaign, starting in 2023. ## Acknowledgments This project was supported by the Austrian Science Fund (FWF) [W1252-N27] (Doktoratskolleg Particles and Interactions) and the ETH Zurich Career Seed Grant [SEED-17 20-1]. Francois Nez and Pauline Yzombard acknowledge support from CNRS (IEA 2021-2022 QRECH). Paolo Crivelli acknowledges the support of the European Research Council (grant 818053-Mu-MASS) and the Swiss National Science Foundation (grant 197346). ## Author contributions All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by CK, ZB, PB, OH, KS and PY. The first draft of the manuscript was written by CK and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. ### Data Availability Statement The datasets generated during and/or analysed during the current study are available from the corresponding author on request.
2306.10181
Catastrophic Forgetting in the Context of Model Updates
A large obstacle to deploying deep learning models in practice is the process of updating models post-deployment (ideally, frequently). Deep neural networks can cost many thousands of dollars to train. When new data comes in the pipeline, you can train a new model from scratch (randomly initialized weights) on all existing data. Instead, you can take an existing model and fine-tune (continue to train) it on new data. The former is costly and slow. The latter is cheap and fast, but catastrophic forgetting generally causes the new model to 'forget' how to classify older data well. There are a plethora of complicated techniques to keep models from forgetting their past learnings. Arguably the most basic is to mix in a small amount of past data into the new data during fine-tuning: also known as 'data rehearsal'. In this paper, we compare various methods of limiting catastrophic forgetting and conclude that if you can maintain access to a portion of your past data (or tasks), data rehearsal is ideal in terms of overall accuracy across all time periods, and performs even better when combined with methods like Elastic Weight Consolidation (EWC). Especially when the amount of past data (past 'tasks') is large compared to new data, the cost of updating an existing model is far cheaper and faster than training a new model from scratch.
Rich Harang, Hillary Sanders
2023-06-16T21:21:41Z
http://arxiv.org/abs/2306.10181v1
# Catastrophic Forgetting in the Context of Model Updates ###### Abstract A large obstacle to deploying deep learning models in practice is the process of updating models post-deployment (ideally, frequently). Deep neural networks can cost many thousands of dollars to train. When new data comes in the pipeline, you can train a new model from scratch (randomly initialized weights) on all existing data. Instead, you can take an existing model and fine-tune (continue to train) it on new data. The former is costly and slow. The latter is cheap and fast, but catastrophic forgetting generally causes the new model to 'forget' how to classify older data well. There are a plethora of complicated techniques to keep models from forgetting their past learnings. Arguably the most basic is to mix in a small amount of past data into the new data during fine-tuning: also known as 'data rehearsal'. In this paper, we compare various methods of limiting catastrophic forgetting and conclude that if you can maintain access to a portion of your past data (or tasks), data rehearsal is ideal in terms of overall accuracy across all time periods, and performs even better when combined with methods like Elastic Weight Consolidation (EWC). Especially when the amount of past data (past 'tasks') is large compared to new data, the cost of updating an existing model is far cheaper and faster than training a new model from scratch. ## 1 Introduction Catastrophic Forgetting - the tendency of deep neural networks to 'forget' previously learned information upon learning new information - has been studied since 1989[3]. This is most clearly demonstrated when models are given separate tasks to learn sequentially, but the effect is also at play whenever a model is learning any type of information sequentially, where that information changes in distribution over time. Real-world applications of machine learning very often have new (training) data coming in over time. In order to release new models trained on this new information, machine learning developers can retrain an entire model from scratch (randomly initialized weights) using all existing training data, but this is computationally very costly. Another option is to take an existing model trained on past data, and simply fine-tune it on the new data that has come in. But new data is coming from (generally) a slightly different 'distribution' than old data, and so especially when this change in distribution is large, the catastrophic forgetting effect becomes clear upon fine-tuning. In the malware detection space, the distribution of malicious and benign content is always changing. New malware is always being produced, as well as new benign content. As a result, a model trained on data up until time \(t\) will generally perform extremely well on a holdout set from before \(t\) (coming from the same parent distribution as the training data), but it's accuracy will, on average, decay validated against \(t+n\) holdout sets with increasing \(n\). Simply fine-tuning existing models with new data, however, results in models that perform well on data similar to the fine-tuned data, but perform poorly on past data (which we still want our models to be able to classify well). Figure 2 contrasts a model that has been sequentially fine-tuned on monthly data (colored lines) vs a model that has been fully retrained from scratch. Note how the red line (sequentially fine-tuned) does more poorly on past months, than the black line (retrained from scratch). 
If you can get sequentially fine-tuned models (red line) to match or beat the performance of a model trained all at once (black line) by minimizing catastrophic forgetting, then you have a path to deploying accurate models very quickly and very cheaply. In this paper, we compare various methods of minimizing this catastrophic forgetting effect. Arguably the most basic approach, suggested in the 1990s by Robins [4], is to simply mix in data associated with past 'tasks' (older data) into the fine-tuning dataset. We also tested approaches that don't require access to past data: model averaging, L2 regularization on weight movement, and Elastic Weight Consolidation (EWC) regularization [2]. Our results showed that by far the most effective approach is data rehearsal.
Figure 1: Model performance on holdout validation data over time (timestamp value is based on the first-seen time of a particular sample). Model was trained on training data up until 2019-03. While performance on holdout validation data is high in the months overlapping with training data, the model's accuracy quickly decays on holdout validation data coming from after 2019-03.
## 2 Related Work Catastrophic Forgetting was introduced as an issue in 1989 [3] by McCloskey and Cohen. In 1995, Robins [4] discussed a model training setup where each new item (task) is a single sample, but instead of training a model by sequentially learning on each item individually, items were learned alongside 3 past items during each epoch. This was termed 'sweep rehearsal', and helped reduce forgetting. This concept generalizes to situations in which each item (new 'task') is composed of many samples, and data associated with old tasks is mixed in with new task data during model training. Data rehearsal requires access to past data, which isn't always possible. Much work has been done on the subject of limiting catastrophic forgetting when you don't have permanent access to past task data. Atkinson et al. [1] (among others) use a pseudo-rehearsal system to generate (via a deep generative neural network) items representative of previous tasks. ## 3 Experiments Our experimental setup was as follows. We experimented with our PE malware detection model, which is a deep neural network consisting of five large fully connected layers (sizes 1024, 768, 512, 512, 512), followed by output layers (primarily the 'is_malware' output). We grabbed data and extracted features for 12 separate months (each treated as a 'task' in our setup), consisting of about four million training samples each (with about a million left over for validation and testing). The first five months were used to train an initial 'base' model. To test a method, the base model was fine-tuned on data from the sixth month (using the chosen method, e.g. L2 regularization, EWC, etc.), the resulting model was fine-tuned on the seventh month, that resulting model on the eighth month, and that resulting model on the ninth month. The final three months were kept as a time-split validation holdout. We chose to apply the methods sequentially on four separate months in order to better simulate real-world model updates (if a method cannot be applied iteratively and still work, it is of _much_ less use).
Unless otherwise stated, all models were trained for ten epochs during each training or fine-tuning session.
Figure 2: Here, a base model trained on data up until 2019-03 (the dark teal line) is sequentially fine-tuned on data from the following months: 2019-04 (light teal), then 2019-05 (grey), then 2019-06 (light pink), and finally 2019-07 (the red line). The black line represents a model that was retrained from scratch (with randomly initialized weights) on all data up until 2019-07. While the black line is far more computationally costly to produce each month, note how it performs better on older data than its sequentially fine-tuned comparison model, the red line.
As a comparison, we also trained a model on the first nine months of data all at once. Although theoretically a sequentially updated model could perform better, the basic idea is that a sequentially updated model with no catastrophic forgetting will perform the same as a model that was trained all at once (the latter being far more cost-prohibitive to train for each new model update). How best to evaluate a model depends on what the model is to be used for. We chose to use average AUC on holdout test sets across all twelve months as our main metric (using the final sequentially updated model that has seen the first nine months of data). We also separately show (see Table 1): average AUC across the first five 'base' months, the four 'update' months, and the final three 'future' months. ### Model Averaging Simply training multiple models per time period and averaging the results seems like a potentially reasonable approach. It would result in a much larger and harder-to-deploy ensemble model, but each additional model would be fast to train. Our model averaging results, however, were very poor. Averaging the predictions of multiple separate models, each trained on one month's worth of data (for the same number of epochs as the sequentially fine-tuned model, making training costs equal), performed much worse than a sequentially fine-tuned model.
Figure 3: Model averaging results are shown in blue. (Note that the y-limits in the Experiments figures are different than those shown earlier in the paper.)
### Regularization One option to limit 'forgetting' is to suppress the movement of parameters during fine-tuning. You can do this naively, like via \(L2\) regularization, or you can use more complicated methods like Elastic Weight Consolidation (EWC)[2] to penalize the movement of parameters that are estimated to be important to past tasks. #### 3.2.1 L2 Regularization Figure 4 shows \(L2\) regularization: an \(L2\) loss penalty is added to each parameter with respect to the current parameter's value \(\theta_{c}\) during fine-tuning, and the previous model's parameter value \(\theta_{p}\). The further away you move from the previous model's parameter value, the higher the penalty: \((\theta_{c}-\theta_{p})^{2}\). The fine-tuning loss (denoted \(L_{c}^{{}^{\prime}}(\theta)\) in the equation below) is simply the standard training loss of the current model, \(L_{c}(\theta)\), plus the \(L2\) regularization penalty: \[L_{c}^{{}^{\prime}}(\theta)=L_{c}(\theta)+\sum_{i}^{n}{(\theta_{i,c}-\theta_{i,p})^{2}} \tag{1}\] As expected, forgetting is reduced but learning is reduced too. There's a very clear trade-off, with no happy medium that attains the performance of the trained-from-scratch model. #### 3.2.2 EWC Regularization Elastic Weight Consolidation (EWC)[2] is a regularization method developed by researchers at Deep Mind in 2017.
EWC is similar to \(L2\) regularization, but each parameter's movement penalty is scaled by the parameter movement squared (\((\theta_{c}-\theta_{p})^{2}\): standard \(L2\) regularization), multiplied by EWC's estimate of parameter importance to past tasks (in our case, the ability to classify past data). The idea is to think about your base model as a Bayesian prior of the parameter estimates - then fine tuning is just applying more data to approximate the posterior. Essentially, you apply regularization that simulates the prior distribution. EWC assumes that the prior is normally distributed: with mean given by the base model's parameters, and variance given by the inverse of the Fisher Information matrix diagonal \(F\). Instead of minimizing loss during fine-tuning with respect to your new training data, you minimize an estimate of your loss with respect to both your past \(p\) and current \(c\) data: \[L_{p,c}(\theta)=L_{c}(\theta)+\lambda L_{p}(\theta)=L_{c}(\theta)+\sum_{i}^{n} \frac{\lambda}{2}*F_{i,p}*(\theta_{i,c}-\theta_{i,p})^{2} \tag{2}\] Figure 4: L2 regularization. The colored lines represent the original base model trained up until 2019-03, and then sequentially fine-tuned using different levels of \(L2\) regularization. So what we have here is \(L2\) regularization, scaled by a scaling parameter \(\frac{\lambda}{2}\) and by this \(F_{i}\) value for each parameter \(\theta_{i}\). Elastic Weight Consolidation (EWC)[2] is similar to \(L2\) regularization, but each parameter's movement penalty is scaled by the movement squared (standard \(L2\) regularization), multiplied by EWC's estimate of parameter importance to classifying past data. EWC estimates this using the diagonal of the Hessian of the negative log likelihood with respect to parameters (where this loss corresponds to accuracy on past task(s)). This requires saving second partial derivatives of your model's loss with respect to its parameters during each training round, but doesn't require saving the data used to train the model each round. The specifics of the formula appear to come from using Taylor Series / Laplace Approximation. Loss with current parameters on current data (fine-tuning data) is known, but estimating loss with current parameters on past data is not known. Without access to past data during fine-tuning, but _with_ access to saved partial derivatives of loss on past data with previous parameters \(\theta_{p}\), you can estimate the loss \(L_{p}\) on past data with current parameters \(\theta_{c}\) via Taylor Series: \[L_{p}(\theta_{c})=L_{p}(\theta_{p})+\frac{\partial L_{p}}{\partial\theta_{p}} (\theta_{c}-\theta_{p})+\frac{1}{2}\frac{\partial^{2}L_{p}}{\partial\theta_{p }^{2}}(\theta_{c}-\theta_{p})^{2}+... \tag{3}\] Because we're minimizing loss through gradient descent, we only care about terms that aren't constant with respect to \(\theta_{c}\). So the first term on the right hand side of of the equation we can drop. The second term we can assume is 0 because \(L_{p}\) was at a local minimum with previous parameters \(\theta_{p}\) (before fine-tuning), and the fourth term (...) we're choosing to drop as "small change". 
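The derivation of the Fisher scaling continues below. As a practical summary of Eqs. (1) and (2), here is a minimal sketch assuming PyTorch (the paper does not name a framework), with the Fisher diagonal estimated from squared gradients, the variant the text below reports using in practice; all function and variable names are ours, not the authors' code.

```python
# Sketch of the parameter-anchoring penalties of Eqs. (1) and (2), assuming
# PyTorch (the paper does not name a framework).  The Fisher diagonal is
# estimated from squared gradients, E[(dL/dtheta)^2], the variant the text
# below reports using in practice.  All names here are ours.
import torch

def fisher_diagonal(model, past_loader, loss_fn, n_batches=100):
    """Per-parameter mean squared gradient of the loss on past-task data,
    evaluated at the previous (base) model's parameters."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    count = 0
    for x, y in past_loader:
        if count >= n_batches:
            break
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        count += 1
    return {n: f / max(count, 1) for n, f in fisher.items()}

def consolidation_penalty(model, prev_params, fisher=None, lam=1.0):
    """(lam/2) * sum_i w_i * (theta_i - theta_i_prev)^2.  With fisher=None the
    weights w_i are 1, giving the plain L2-to-previous-weights penalty of
    Eq. (1) (up to the lam/2 scale); with the Fisher diagonal it is Eq. (2)."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for n, p in model.named_parameters():
        w = fisher[n] if fisher is not None else 1.0
        penalty = penalty + (w * (p - prev_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# During fine-tuning on the new month's data:
#   prev_params = {n: p.detach().clone() for n, p in base_model.named_parameters()}
#   fisher      = fisher_diagonal(base_model, past_loader, loss_fn)
#   loss        = loss_fn(model(x), y) + consolidation_penalty(model, prev_params, fisher)
```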
What we really want to estimate is the loss of both current and previous data on current parameters \(\theta_{c}\), so adding \(L_{c}(\theta_{c})\) to both sides of our equation leads us to: \[L_{p,c}(\theta_{c})=L_{c}(\theta_{c})+\frac{1}{2}\frac{\partial^{2}L_{p}}{ \partial\theta_{p}^{2}}(\theta_{c}-\theta_{p})^{2}+constant \tag{4}\] Replacing \(\frac{1}{2}\) with the scaling term \(\frac{\lambda}{2}\), leads us to an equation that looks very similar to the one the EWC paper shows. The formula we arrived at represents fisher information \(F\) as \(\frac{\partial^{2}L_{p}}{\partial\theta_{p}^{2}}\). But is that what Fisher Information is? Well, with your model's (negative log likelihood) loss as \(L_{p}\), the Fisher diagonal is defined as \(E(-\frac{\partial L_{p}}{\partial\theta}*-\frac{\partial L_{p}}{\partial \theta})=E(\frac{\partial L_{p}}{\partial\theta}*\frac{\partial L_{p}}{ \partial\theta})\): the negatives coming from the 'negative' log likelihood and then cancelling out.1. Footnote 1: It’s interesting to note that when loss is at a local minimum and thus \(E(\frac{\partial L_{p}}{\partial\theta})=0\), \(E(\frac{\partial L_{p}}{\partial\theta}*\frac{\partial L_{p}}{\partial\theta})\) is the same as the \(variance(\frac{\partial L_{p}}{\partial\theta})\) Under certain regularity constraints (which don't hold particularly true in this case), the Fisher diagonal value _is_ equal to the negative expected second partial derivative of the Loss \(L\) with respect to \(\theta\): \(-E(-\frac{\partial^{2}L}{\partial\theta^{2}})=E(\frac{\partial^{2}L}{\partial \theta^{2}})\) - i.e., the diagonal of the Hessian of the negative log-likelihood (loss) with respect to parameters: what we had in our derivation formula. If you switch out \(E(\frac{\partial^{2}L}{\partial\theta^{2}})\) for \(E(\frac{\partial L_{p}}{\partial\theta}*\frac{\partial L_{p}}{\partial\theta})\), you arrive at the formula Deep Mind proposed.23 Footnote 2: the derivation was not published alongside the paper, so this is our guess at how they derived their formula! Footnote 3: A more intuitive way to think about this is that \(variance(\frac{\partial L}{\partial\theta})\) might be a nice way to estimate how sensitive your model is to changes in \(\theta\). When loss is greatly affected by small changes to a given parameter \(\theta_{i}\) (i.e. \(variance(\frac{\partial L}{\partial\theta_{i}})\)) is high), then the ‘confidence’ that \(\theta_{i}\) should not be changed is high, so the variance in the prior distribution should be small (tight). The added regularization term simulates this confidence, so that parameters the model is very sensitive to don’t end up being changed much. While the derivation uses the Hessian form of Fisher Information, in practice, we found that we had to use the \(E(\frac{\partial L}{\partial\theta}*\frac{\partial L}{\partial\theta})\) version to attain any sort of good performance. The reason is that if, for any parameter \(i\), your \(F_{i,p}\) term is less than zero, then the resulting regularization term will just keep on pushing \(\theta_{i}\) further and further away from \(\theta_{i,p}\) (the previous value), which will tend to wreck any model (including ours). The \(E(\frac{\partial L}{\partial\theta}*\frac{\partial L}{\partial\theta})\) form of fisher \(F\) is guaranteed to be positive, while the \(E(\frac{\partial^{2}L}{\partial\theta^{2}})\) form, not so much. 
In practice, we found that about a third of our \(E(\frac{\partial^{2}L}{\partial\theta^{2}})\) estimates tended to be negative, making that approach unusable. EWC regularization performed better than basic L2 regularization, but not by an outstanding amount. We theorize that the larger and more complicated a model is, the more error is introduced by the taylor-series estimations and approximations involved in EWC. ### Data Rehearsal Robins [4] discussed the concept of rehearsal in 1995, shortly after the concept of catastrophic forgetting was introduced. Simply put: you mix in data from past tasks while training on new ones. It works really well, but it does mean you need to maintain access to your older data (or at least an _iid_ (independent and identically distributed, i.e. random) subsample of it), which isn't always possible. Mixing in old data also increases the amount of training data you have, so it takes longer to fine-tune a model one epoch. Initially, to make comparisons to other methods equal, we fixed the fine-tuning epoch sizes Figure 5: Elastic Weight Consolidation to 4 million, and tested out various proportions from past data (see Figure 6). For example, a 50% rehearsal indicates that during fine-tuning, 2 million samples come from the new month's data, and 2 million samples come from past data (sampled uniformly unless otherwise stated) - meaning that the model misses out on much new data each epoch. Not fixing the fine-tuning epoch size (i.e. allow it to increase with added past data) yielded far better results (see Figure 7). However, doing so increases the computational cost of each model update - for example, if you rehearse old data at a rate of 50%, epoch size jumps to 8 million, doubling the computational cost of training. This is still generally much cheaper computationally than retraining the model from scratch on all available data, but is no longer a computationally fair comparison to the other methods presented above. Figure 6: Various forms of data rehearsal with fixed number of training samples each epoch. Each form mixes in varying amounts of past data, but all train for the same number of epochs on the same total number of samples. Figure 7: Rehearsal without epoch size limits (i.e. fine-tuning updates are slower amd more costly). 50% rehearsal (2x time to train, light-pink) achieved the best average AUC. ### Conclusion Our experiments concluded that if you do have access to past data, rehearsal is an excellent way to minimize catastrophic forgetting. Furthermore, if you're willing to increase model fine-tuning time, it is much more effective (and easy to implement). It also can be combined with regularization approaches, like EWC (which resulted in the best results overall). Figure 8: Various forms of model updates compared. For each method category (e.g. L2 regularization), the version with the best validation accuracy was selected (e.g. L2 with a certain Lambda value). ## Acknowledgements Thanks to Sophos for supporting this research.
2302.02072
An Inexact Deflected Subgradient Algorithm in Infinite Dimensional spaces
We propose a duality scheme for solving constrained nonsmooth and nonconvex optimization problems in a reflexive Banach space. We establish strong duality for a very general type of augmented Lagrangian, in which we assume a less restrictive type of coercivity on the augmenting function. We solve the dual problem (in a Hilbert space) using a deflected subgradient method via this general augmented Lagrangian. We provide two choices of step-size for the method. For both choices, we prove that every weak accumulation point of the primal sequence is a primal solution. We also prove strong convergence of the dual sequence.
Regina S. Burachik, Xuemei Liu
2023-02-04T02:50:21Z
http://arxiv.org/abs/2302.02072v1
# An Inexact Deflected Subgradient Algorithm in Infinite Dimensional spaces ###### Abstract We propose a duality scheme for solving constrained nonsmooth and nonconvex optimization problems in a reflexive Banach space. We establish strong duality for a very general type of augmented Lagrangian, in which we assume a less restrictive type of coercivity on the augmenting function. We solve the dual problem (in a Hilbert space) using a deflected subgradient method via this general augmented Lagrangian. We provide two choices of step-size for the method. For both choices, we prove that every weak accumulation point of the primal sequence is a primal solution. We also prove strong convergence of the dual sequence. _Key words_: Augmented Lagrangian; Banach space; Nonconvex optimization; Nonsmooth optimization; Subgradient methods; Duality scheme; Penalty function methods. **AMS subject classifications.** 49M29; 90C25; 90C26; 90C46; 65K10. ## 1 Introduction The (generalized) augmented Lagrangian duality theory is a powerful tool for solving nonconvex constrained optimization problems. Instead of tackling directly the constrained (primal) problem, we can recover the primal solution by solving the dual problem. This is particularly useful when the dual problem is easier to solve than the primal one. This is the case in the augmented Lagrangian duality framework. The dual problem is obtained from the Lagrangian function, which is a function that incorporates both the objective function and the information on the constraints. _Strong duality_ (i.e., when the primal and dual problems have the same optimal value) is a basic requirement when using a duality framework. For nonconvex problems, however, a positive gap may exist between the primal and dual optimal values when the classical Lagrangian is used. The augmented Lagrangian duality [31, 33], on the other hand, will have zero duality gap even in the nonconvex case, and will allow us to recover the solutions of the original (nonconvex) problem. We describe next the origins and up-to-date development of augmented Lagrangians. The _linear augmented Lagrangian_ as introduced in [31, Chapter 11] is the sum of the classical Lagrangian and an augmenting term; in other words, it is the sum of the objective function, a linear term and an augmenting term. The _sharp Lagrangian_, introduced in [31, Example 11.58], is a linear augmented Lagrangian which adds to the classical linear term any norm function. The theory of Lagrangian duality is an active area of research, see, e.g., [34, 35, 36, 37, 38, 39, 3, 4, 12, 24, 25, 28, 34, 37, 38]. In particular, [37] and [38] are for infinite dimensional settings. Dolgopolik in [21] and [22] gives an excellent introduction on different types of Lagrangians and their applications in solving various kinds of problems. More general types of Lagrangian function have been studied in [18] and more recently in [16]. Our aim is to provide a primal-dual framework for the infinite dimensional setting using a general, although simple enough, Lagrangian. Our duality scheme is paired with an algorithmic framework, the Deflected Subgradient Method (DSG). Several works exist that use DSG algorithm within a similar primal-dual framework. Hence we make a comparison among these works and ours in terms of the range of applications (in finite or infinite dimensions), the Lagrangian, and the convergence results. Since some level of dual convergence is studied in all these works, we will rather focus on primal convergence results. 
Gasimov [23] proposed a deflected subgradient algorithm which uses the sharp Lagrangian, where the augmenting function is a norm, to solve finite dimensional optimization problems. This method has the desirable property that it generates a dual sequence with a strict improvement of the dual values in each iteration (dual strict improvement). It uses a Polyak-type step-size, which requires knowledge of the optimal dual value. One should note, however, that this knowledge can be difficult to obtain in practice, especially for nonconvex problems. The analysis in Gasimov [23] establishes only convergence of the sequence of dual values to the optimal value. Primal convergence is arguably the most important feature of any primal-dual scheme, but unfortunately it is not studied in [23]. Indeed, the example given by Burachik et al. in [5, Example 1] shows that the primal sequence in [23] may not converge to the primal solution.

Later on, using the same primal-dual framework as [23], the works [5, 7, 15] developed further step-size choices and convergence results. In [5], Burachik, Gasimov, Ismayilova and Kaya establish the convergence of an _auxiliary primal sequence_ for the Polyak-type step-sizes. In [7], Burachik, Iusem and Melo propose an _inexact version_ of the DSG algorithm and prove _auxiliary primal convergence_ for inexact iterations. Burachik, Kaya and Mammadov [15] devise an inexact version of the methods in [5, 23] and show that the same convergence properties are preserved when there is a level of inexactness in the solution of the subproblems. Burachik, Iusem and Melo [7] use the sharp Lagrangian and propose two choices of step-sizes which are independent of the optimal value. They establish primal convergence with these step-sizes, even in the case when the dual solution set is empty. Hence we adopt these types of step-sizes in our work, so that our algorithm inherits their primal convergence properties. The four works above, namely [5, 7, 15, 23], are for the finite dimensional setting.

Let us now recall the ones that apply to infinite dimensions, which constitute the main motivation of the present paper. Burachik, Iusem and Melo [9] use a general type of augmented Lagrangian, which includes the Lagrangians used by the four previous works as particular cases. They extend the analysis in [7] to the infinite dimensional setting. Namely, the primal problems are defined in a reflexive Banach space and the constraint functions are defined in a Hilbert space. They also use an inexact version of the DSG algorithm as in [7], and establish both primal and dual convergence by adopting the types of step-sizes in [7]. Burachik and Kaya [13], and later Burachik, Freire and Kaya [11], incorporated a scaling symmetric matrix \(A\) in the linear term of the Lagrangian in finite dimensions. This general type of augmented Lagrangian will be the focus of the present paper, since it provides a level of generality that allows, e.g., the full theoretical analysis of the penalty case, i.e., when \(A\) is taken as the zero matrix.

The type of Lagrangian we focus on is an extension of the one in [11, 13] to infinite dimensions. It is associated with the following infinite dimensional equality constrained problem: \[\min_{x\in X}\varphi(x)\quad\text{s.t.}\quad h(x)=0\,, \tag{1}\] where \(X\) is a reflexive Banach space, \(\varphi:X\to\mathbb{R}\cup\{\infty\}\) is lower semi-continuous, and \(h:X\to\mathbb{R}^{m}\) is continuous. 
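To make the duality gap phenomenon, and the way an augmenting term removes it, concrete, consider the following standard one-dimensional illustration (a toy example added here for exposition; it is not taken from the paper or its references). Take \(X=\mathbb{R}\), \(m=1\), \(h(x)=x\), and \(\varphi(x)=-x^{2}\) for \(|x|\leq 1\), \(\varphi(x)=+\infty\) otherwise, so the only feasible point is \(x=0\) and the primal optimal value is \(0\). The classical Lagrangian dual function is \[q_{\mathrm{cl}}(y)=\inf_{|x|\leq 1}\{-x^{2}-yx\}=-1-|y|,\] since the infimum of a concave quadratic over \([-1,1]\) is attained at an endpoint; hence \(\sup_{y}q_{\mathrm{cl}}(y)=-1<0\) and the classical dual leaves a gap of size one. With the sharp choice in the Lagrangian \(l\) of (2) below (that is, \(A=I\) and \(\sigma=|\cdot|\)) one has \(l(x,y,c)=-x^{2}-yx+c|x|\geq 0=l(0,y,c)\) for all \(|x|\leq 1\) whenever \(c\geq 1+|y|\), so the augmented dual value equals \(0\) and the gap closes. The same happens in the pure penalty case \(A=0\), since \(-x^{2}+c|x|\geq|x|(c-1)\geq 0\) on \([-1,1]\) for \(c\geq 1\).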
The Lagrangian \(l:X\times\mathbb{R}^{m}\times\mathbb{R}_{+}\to\mathbb{R}\) is defined as \[l(x,y,c):=\varphi(x)-\langle Ay,h(x)\rangle+c\,\sigma(h(x))\,, \tag{2}\] where \(x\in X\), \(y\in\mathbb{R}^{m}\), \(c\in[0,\infty)\), \(A:\mathbb{R}^{m}\to\mathbb{R}^{m}\), is a continuous map, and \(\sigma:\mathbb{R}^{m}\to\mathbb{R}_{+}\) verifies \(\sigma(x)=0\) if and only if \(x=0\) (see Definition 3.5 for more details). The sharp Lagrangian is a particular case of the general Lagrangian defined in (2) when \(A\) is the identity map and the penalty function \(\sigma\) is any norm in \(\mathbb{R}^{m}\). Moreover, the linear term reduces to the classical penalty function when \(A\) is zero [20, 13]. The numerical experiments in [13] demonstrate that choosing suitable \(A\) or \(\sigma\) for various classes of problems can improve the computational performance. In [13], the exact version of the DSG algorithm is utilized and both auxiliary and primal convergence results for the Polyak-type step-size are obtained. Our analysis is inspired by [15] in designing our inexact DSG algorithm and establishing the convergence results. Our results extend those in [15] in the following ways. 1. Our Lagrangian (2) has a general map \(A\), which includes the identity and the zero map as particular cases. The case \(A\) the identity matrix is studied in [15], while the case \(A=0\) is new. This opens the way for the analysis and implementation of penalty methods. 2. We establish strong duality for our very general type of Lagrangian. In particular, the function \(\sigma\) we consider may not be coercive (see Definition 3.4(a') and Theorem 3.1). Regarding the study of the theoretical properties of our primal-dual setting, we point out that the proof of strong duality provided in [17] would cover our case. However, we provide our own proof here, because our type of Lagrangian allows us to provide a result which requires weaker assumptions (see more details in the paragraph right before Lemma 3.1). In Section 3 we show that our infinite dimensional framework (i) has no duality gap, and (ii) has a dual problem which is convex hence we can solve it using the techniques in convex analysis [30, 1, 29]. In particular, the algorithm we introduce here can be seen as an epsilon-subgradient algorithm applied to the maximization of the dual function (see Remark 4.3). The paper is organized as follows. In Section 2, we give the preliminaries mainly on functional analysis, which help in building our primal-dual framework and establishing our convergence results. In Section 3, we give our primal-dual framework and its important assumptions. In particular, we will respond to the two questions above in this section, where we show the properties of this framework and give the related proofs. In Section 4, we state the DSG algorithm. We provide two choices of the step-size and establish the convergence results for both of the choices. Our conclusion is given in Section 6. ## 2 Preliminaries We provide in this chapter some functional analysis tools for future use. Most of these results can be found in the text books of functional analysis such as those by Brezis [2] and Kreyszig [26]. We use Brezis's book on Functional Analysis [2] as our main reference, and provide our own proof for results which are either not included in [2] or hard to track down elsewhere. The results we list here will be used in proving the properties of the primal-dual framework, as well as in establishing convergence results of the DSG algorithm. 
Let \(X\) be a reflexive Banach space, \(X^{*}\) its topological dual (i.e., the set all continuous linear functionals from \(X\) to \(\mathbb{R}\)), and \(H\) a Hilbert space. We denote by \(\langle\cdot,\cdot\rangle\) both the duality product in \(X\times X^{*}\) and the scalar product in \(H\). We denote by \(\|\cdot\|\) the norm, where the same notation will be used for the norm both in \(X\) and \(H\). We use the notation \(\mathbb{R}_{++}\) for the positive real numbers, \(\mathbb{R}_{+\infty}:=\mathbb{R}\cup\{+\infty\}\) (sometimes \(\mathbb{R}_{\infty}\) for short) and \(\overline{\mathbb{R}}:=\mathbb{R}_{+\infty}\cup\{-\infty\}\). Given a function \(g:X\to\overline{\mathbb{R}}\), define the _effective domain_ of \(g\) as \(\operatorname{dom}g:=\{x\in X\;:\;g(x)<+\infty\}\). We say that \(g\) is _proper_ if \(g(x)>-\infty\) and \(\operatorname{dom}g\neq\emptyset\). Recall that the set \(\operatorname{epi}g:=\{(x,t)\in X\times\mathbb{R}\;:\;g(x)\leq t\}\) is the _epigraph of \(g\)_, and that the set \(\mathit{lev}_{g}(\alpha):=\{x\in X\;:\;g(x)\leq\alpha\}\) is the \(\alpha\)-_level set of \(g\)_. Let \(Y\) be a Banach space and consider a map \(F:X\to Y\), the _graph of \(F\)_ is the set \(G(F):=\{(x,v)\in X\times Y\;:\;v=F(x)\}\). Given \(C\subset X\), the _indicator function of \(C\)_ is defined as \(\delta_{C}(v):=0\) if \(v\in C\) and \(+\infty\) otherwise. If \(C=\{z\}\) is a singleton, we denote \(\delta_{\{z\}}=:\delta_{z}\). ### Functional Analysis Tools The topology induced by the norm (in \(X\) or \(H\)), is called the _strong_ topology. The weak topology in \(X\) (weak topology in \(H\)) is the coarsest topology that makes all elements of \(X^{*}\) (all elements of \(H^{*}=H\)) continuous. **Definition 2.1** (definitions related with the weak topology): _Let \(X\) be a Banach space, and \(H\) a Hilbert space._ * _Let_ \(K\subset X\)_. We say that_ \(K\) _is_ weakly closed in \(X\) _when it is closed w.r.t. the weak topology in_ \(X\)_._ * _We say that a function_ \(h:X\to H\) _is_ weak-weak continuous _when_ \(h^{-1}(U)\subset X\) _is weakly open in_ \(X\) _for every_ \(U\subset H\) _weakly open in_ \(H\)_._ * _We say that a function_ \(h:X\to H\) _is_ weak-strong continuous _when_ \(h^{-1}(U)\subset X\) _is weakly open in_ \(X\) _for every_ \(U\subset H\) _strongly open in_ \(H\)_. Strong-strong and strong-weak continuity are defined similarly._ * _We say that a function_ \(\varphi:X\to\mathbb{R}_{+\infty}\) _is_ weakly lower semi-continuous _(_w-lsc_) when it is lsc w.r.t. the weak topology in_ \(X\)_. Namely, when epi_ \(\varphi\) _is w-closed._ We recall next some well-known facts from functional analysis. **Fact 2.1**: _Let \(X\) be a Banach space, \(H\) be a Hilbert space. Assume that \(K\subset X\) is nonempty. The following hold. If \(K\subset X\) is weakly compact, then it is weakly closed._ In most of what follows, when a topological property is mentioned by its own, this means that the property holds w.r.t. the strong (i.e., the norm) topology. For instance, if we write "\(A\) is closed", we mean "\(A\) is strongly closed". If a property holds w.r.t. the weak topology, we will mention the term "weak" (or "weakly") explicitly (e.g., weakly closed, weakly compact, etc.). It is well-known that, in any metric space, compactness is equivalent to sequential-compactness. To clarify what the situation is for the case of the weak topology in a Banach space \(X\), we recall the following definitions. 
**Definition 2.2** (weak compactness; sequential compactness; coercive): _Let \(X\) be a Banach space, \(A\subset X\) and \(\varphi:X\to\mathbb{R}_{+\infty}\)._ * _The set_ \(A\) _is_ weakly-compact _when its weak closure, denoted as_ \(\overline{A}^{w}\)_, is compact w.r.t the weak topology._ * _A set_ \(A\subset X\) _is_ sequentially-compact _(respectively,_ weakly sequentially-compact_) when every sequence_ \(\{x_{n}\}\subset A\) _has a subsequence converging strongly (respectively, weakly) to a limit in_ \(A\) _._ 3. _The function_ \(\varphi:X\to\mathbb{R}_{+\infty}\) _is_ coercive _when_ \(\lim_{\|x\|\to\infty}\varphi(x)=+\infty\)_._ The equivalence between compactness and sequential-compactness in normed spaces allows the use of sequences when dealing with compact sets in \(X\). To be able to deal with weakly compact sets in \(X\) in terms of sequences, we recall the following classical well-known result, which is [2, Problem 10(3), p. 448]. **Theorem 2.1** (Eberlein-Smulian): _Let \(E\) be a Banach space and let \(A\subset E\). Set \(B:=\overline{A}^{w}\) (i.e., \(B\) is the weak closure of \(A\)). The following statements are equivalent._ 1. \(B\) _is weakly compact._ 2. \(B\) _is weakly sequentially-compact._ Next we quote results that connect boundedness, closedness and compactness both in strong and weak topologies. The next result, a corollary of Bourbaki-Alaoglu's theorem, is [2, Corollary 3.22]. This result is a consequence of a separation result for convex sets, together with Bourbaki-Alaoglu's theorem. **Theorem 2.2**: _Let \(E\) be a reflexive Banach space. Let \(K\subset E\) be a bounded, closed, and convex subset of \(E\). Then \(K\) is weakly compact._ **Corollary 2.1**: _If \(X\) is a Banach space, then every weakly compact set is closed and bounded._ We quote next a result on sequential compactness that holds in reflexive Banach spaces. **Theorem 2.3**: _Assume that \(X\) is a reflexive Banach space and let \(\{x_{n}\}\) be a bounded sequence in \(X\). Then there exists a subsequence \(\{x_{n_{k}}\}\subset\{x_{n}\}\) that converges in the weak topology._ In our analysis, we will consider level sets of w-lsc functions. For future use, we prove below a property that directly follows from the results quoted above. This property is well-known, but hard to track down as stated below. So we provide the proof here for convenience of the reader. **Corollary 2.2**: _Let \(X\) be a reflexive Banach space and assume that \(\varphi:X\to\mathbb{R}_{+\infty}\) is w-lsc. The function \(\varphi\) is coercive if and only if all its level sets are weakly compact. In this situation, all the level sets are closed and bounded._ Proof. Assume first that \(\varphi\) is coercive and fix \(\alpha\in\mathbb{R}\). We need to show that \(lev_{\varphi}(\alpha)\) is weakly compact. By Eberlein-Smulian theorem, which is Theorem 2.1, it is enough to show that \(lev_{\varphi}(\alpha)\) is weakly sequentially compact. This means that every sequence in \(lev_{\varphi}(\alpha)\) contains a subsequence weakly convergent to a limit, and this limit belongs to \(lev_{\varphi}(\alpha)\). Indeed, take a sequence \(\{x_{k}\}\subset lev_{\varphi}(\alpha)\). Since \(\varphi\) is coercive this sequence is bounded, and by Theorem 2.3 there exists a subsequence \(\{x_{n_{k}}\}\) converging weakly to some \(x\in X\). Since \(\varphi\) is w-lsc, we can write \[\varphi(x)\leq\liminf_{k\to\infty}\varphi(x_{n_{k}})\leq\alpha,\] where the last inequality holds because \(x_{n_{k}}\in lev_{\varphi}(\alpha)\) for all \(k\). 
This implies that \(x\in lev_{\varphi}(\alpha)\). Hence, the level sets \(lev_{\varphi}(\alpha)\) are weakly sequentially compact. By Eberlein-Smulian theorem, they are weakly compact. Conversely, assume that \(lev_{\varphi}(\alpha)\) is weakly compact for every \(\alpha\in\mathbb{R}\). To show that \(\varphi\) is coercive, it is enough to show that \(lev_{\varphi}(\alpha)\) is bounded. This follows directly from Corollary 2.1, which states that any weakly compact set must be bounded. Hence every level set \(lev_{\varphi}(\alpha)\) is bounded and \(\varphi\) is coercive. The last statement in the corollary is a direct consequence of Corollary 2.1. \(\Box\) The next result is crucial in establishing the well-definedness of the algorithm we will present in section 4. The result we quote below is [6, Proposition 3.1.15]. **Theorem 2.4**: _Let \(E\) be any topological space, and let \(\varphi:E\to\mathbb{R}_{+\infty}\) be a proper function which is lsc (w.r.t. the topology of \(E\)). If \(A\subset E\) is compact and such that \(\operatorname{dom}\varphi\cap A\neq\emptyset\), then \(\varphi\) is bounded below on \(A\) and it attains its minimum on \(A\), which is finite._ ## 3 Primal and Dual Problems ### Theoretical Framework The primal-dual framework we present here extends the one studied in [13, 11] to the infinite dimensional setting. A particular case of our duality framework is the sharp Lagrangian in finite dimensions, as studied in [23, 5, 7, 15]. The sharp Lagrangian has as augmenting term any norm, which motivates the terminology "sharp". In infinite dimensions, Burachik, Iusem and Melo [9] propose a related Lagrangian framework. The Lagrangian proposed in [9] uses a penalty function \(\sigma(\cdot)\) with the same properties we study here, but the difference with our type of Lagrangian is in the linear term. Namely, our Lagrangian includes a map \(A\) in the linear term, opening the way to the consideration of penalty methods for the particular case in which \(A=0\). The framework in [9] deals with the case in which \(A\) equals the identity map. Even though some of our proofs are similar to those in [9], extra care is needed due to the presence of a general map \(A\). Since the identity map satisfies all the assumptions we make on the map \(A\), all the results in [9] can be deduced from our analysis. We use a different and more involved method of proof for obtaining the main result in this chapter, namely, the strong duality property. Let \(X\) be a reflexive Banach space, and \(\varphi:X\to\mathbb{R}_{\infty}\) be a proper function. We consider the primal optimization problem \[(P)\qquad\min\ \varphi(x)\ \ \text{s.t.}\ x\ \text{in}\ X.\] Following [17, Section 2.2], we embed problem \((P)\) into a family of parametrized problems by means of a function that coincides with the objective function when the parameter is zero. The tool we use is defined next. **Definition 3.1**: _A dualizing parameterization for \((P)\) is a function \(f:X\times H\to\bar{\mathbb{R}}\) that verifies \(f(x,0)=\varphi(x)\) for all \(x\in X\). The perturbation function induced by this dualizing parameterization is defined as \(\beta:H\to\bar{\mathbb{R}}\) such that_ \[\beta(z):=\inf_{x\in X}f(x,z). \tag{3}\] The next definition, which is [17, Definition 5.1], will be a basic assumption for the dualizing parametrizations. It uses the concepts of weakly open and weakly compact sets. We recalled the latter concept in Definition 2.2(a). 
Recall also that a set is weakly open when its complement is weakly closed (see Definition 2.1(a)).

**Definition 3.2**: _A function \(f:X\times H\to\bar{\mathbb{R}}\) is said to be weakly level-compact if for each \(\bar{z}\in H\) and \(\alpha\in\mathbb{R}\) there exist a weakly open neighbourhood \(U\subset H\) of \(\bar{z}\), and a weakly compact set \(B\subset X\), such that_ \[lev_{z,f}(\alpha):=\{x\in X:f(x,z)\leq\alpha\}\subset B\ \ \text{for all}\ z\in U.\] _In other words, there exist a weakly open set \(U\subset H\) and a weakly compact set \(B\subset X\), such that \(\bar{z}\in U\) and_ \[\bigcup_{z\in U}lev_{z,f}(\alpha)\subset B.\]

If the dualizing parameterization is weakly level-compact, the corresponding perturbation function is _sequentially weak-lsc_. Before establishing this fact, we recall next the definition.

**Definition 3.3** (sequentially weak-lsc function): _Let \(\theta:H\to\mathbb{R}_{+\infty}\). We say that \(\theta\) is sequentially weakly lsc if the following property holds._ \[\text{If }u_{n}\rightharpoonup u,\text{ then }\theta(u)\leq\liminf_{n\to\infty} \theta(u_{n}).\]

**Remark 3.1** (weak-lsc vs. sequentially weak-lsc): In finite dimensions, or more generally in any metric space, there is no difference between semicontinuity and its sequential version. In an infinite dimensional Hilbert space, however, weak lsc as given in Definition 2.1(d) is more restrictive than its sequential version. In the former, the liminf inequality in Definition 3.3 must hold for any net weakly converging to a limit. Since a sequence is a particular case of a net, weak lsc implies sequential weak-lsc, and the converse, in general, does not hold. Indeed, while Definition 2.1(d) corresponds to weak closedness of the epigraph, Definition 3.3 corresponds to the latter set merely being sequentially weakly closed.

We will use the following type of functions for constructing our Lagrangian function.

**Definition 3.4** (augmenting function): _A function \(\sigma:H\to\mathbb{R}_{+\infty}\) is an augmenting function if the following properties hold._
* (a) _The function \(\sigma\) is proper, w-lsc and coercive (see Definitions 2.1(d) and 2.2(c))._
* (a') _The function \(\sigma\) is proper, w-lsc, and satisfies the following condition: there exists \(K_{\sigma}>0\) such that the set_ \[lev_{\sigma}(K_{\sigma})=\{z\in H\::\:\sigma(z)\leq K_{\sigma}\}\] _is bounded. We call this type of \(\sigma\)_ conditionally coercive_._
* (b) _It holds that \(\sigma(0)=0\) and \(\operatorname*{argmin}_{z}\sigma(z)=\{0\}\)._

In what follows, we always assume that the function \(\sigma\) used in the Lagrangian satisfies the assumptions of Definition 3.4, either with (a) or (a'). Note that condition (a) (coercivity) is strictly stronger than (a') (conditional coercivity). If we are able to relax the requirements on \(\sigma\) and just require conditional coercivity, we will make it clear in our proofs. Otherwise, we may simply say that \(\sigma\) is as in Definition 3.4(a). It is this less restrictive assumption on \(\sigma\) that we will use in our proof of strong duality. Before doing this, we proceed to establish the announced sequential w-lsc of the perturbation function.

**Proposition 3.1** (sequential w-lsc of \(\beta\)): _Let \(f:X\times H\to\overline{\mathbb{R}}\) be weakly lower semicontinuous and weakly level-compact. Then the function \(\beta\) defined by (3) is sequentially weakly lower semicontinuous._ Proof. 
Assume that \(\beta\) is not sequentially weakly lower semicontinuous. This implies that there is a point \(u\), a sequence \(\{u_{n}\}_{n\in\mathbb{N}}\), and \(\varepsilon>0\) such that 1. \(u_{n}\rightharpoonup u\), 2. \(\liminf_{n}\beta(u_{n})<\beta(u)-\varepsilon\). By weak-level compactness of \(f\), there exists a weak neighborhood \(W\) of \(u\) such that the set \[\{x\in X\::\:f(x,z)\leq\beta(u)-\varepsilon\}\subset B\text{ for all }z\in W,\] where \(B\) is weakly compact in \(X\). By (i), there exists \(n_{0}\in\mathbb{N}\) such that \(u_{n}\in W\) for all \(n\geq n_{0}\). Therefore, for all \(n\geq n_{0}\) we have that \[\{x\in X\::\:f(x,u_{n})\leq\beta(u)-\varepsilon\}\subset B.\] Calling \(\tilde{F}(x):=\sup_{n\geq n_{0}}f(x,u_{n})\), this implies that \[L:=lev_{\tilde{F}}(\beta(u)-\varepsilon)=\{x\in X\,:\,\tilde{F}(x)\leq\beta(u)- \varepsilon\}\subset B. \tag{4}\] Since \(f(\cdot,v)\) is w-lsc for all \(v\in H\), we deduce that \(\tilde{F}\) is w-lsc too. Hence, the set \(L\) is weakly closed. Since \(L\) is a weakly closed subset of a weakly compact set, it is weakly compact. We can apply now Eberlein-Smulian Theorem (Theorem 2.1), to deduce that \(L\) is weakly sequentially compact. By (ii) and the definition of liminf we can write \[\beta(u)-\varepsilon>\sup_{k\in\mathbb{N}}\inf_{n\geq k}\beta(u_{n})\geq\inf _{n\geq n_{0}}\beta(u_{n}).\] Take now \(n_{1}\geq n_{0}\) such that for all \(k\geq n_{1}\) we have \[\beta(u)-\varepsilon-1/k>\inf_{n\geq n_{0}}\beta(u_{n})=\inf_{n\geq n_{0}} \inf_{x\in X}f(x,u_{n}),\] where we used the definition of \(\beta\) in the equality. Define now \(\tilde{G}(x):=\inf_{n\geq n_{0}}f(x,u_{n})\), so the above expression becomes \[\beta(u)-\varepsilon-1/k>\inf_{n\geq n_{0}}\beta(u_{n})=\inf_{x\in X}\inf_{n \geq n_{0}}f(x,u_{n})=\inf_{x\in X}\tilde{G}(x),\] which holds for all \(k\geq n_{1}\). Fix now an index \(k\geq n_{1}\). By definition of infimum we can find \(x_{k}\in X\) such that \(\tilde{G}(x_{k})=\inf_{n\geq n_{0}}f(x_{k},u_{n})<\beta(u)-\varepsilon-1/k\). Using a similar argument again for this fixed \(k\geq n_{1}\), we can find \(n_{k}\geq n_{0}\) and \(u_{n_{k}}\) s.t. \(f(x_{k},u_{n_{k}})<\beta(u)-\varepsilon-1/k\). Doing this for every \(k\geq n_{1}\) and using the fact that \(u_{n_{k}}\in W\) for all \(k\) we deduce that the obtained sequence \((x_{k})\) is contained in the sequentially compact set \(L\). Thus there exists a subsequence of \((x_{k})\) which is weakly convergent to a limit \(\hat{x}\in L\). For simplicity, we still denote this weakly convergent sequence by \((x_{k})\). So we can assume that \(x_{k}\rightharpoonup\hat{x}\in L\). Since \((u_{n_{k}})\subset(u_{n})\) we have that \(u_{n_{k}}\rightharpoonup u\). By w-lsc of \(f\) and the definition of \(\beta\) we obtain \[\beta(u)\leq f(\hat{x},u)\leq\liminf_{k}f(x_{k},u_{n_{k}})\leq\liminf_{k} \beta(u)-\varepsilon-1/k=\beta(u)-\varepsilon,\] a contradiction. Therefore, \(\beta\) must be sequentially weakly lsc. \(\Box\) We present next the basic assumptions we need for the map \(A\) (one of these involves the augmenting function \(\sigma\) as in Definition 3.4). Assume that map \(A:H\to H\) verifies the following properties: * \(\sigma(z)\geq\|A(z)\|\), for all \(z\in H\). * For every \(y\in H\), the function \(\langle A(\cdot),y\rangle\) is w-usc (i.e., \(-\langle A(\cdot),y\rangle\) is w-lsc for every \(y\in H\)). **Remark 3.2** (Assumptions \(\mathbf{(A_{0})}\)-\(\mathbf{(A_{1})}\)): __ * By Definition 3.4 (b), \((A_{0})\) trivially implies that \(A(0)=0\). 
This assumption will be used in Proposition 3.2(iii) to derive the weak duality property. Assumption \((A_{0})\) is also important in obtaining a non-decreasing property of the dual function, as we will see in Proposition 3.2(ii). * \((A_{1})\) plays a role in obtaining strong duality for our primal-dual framework in Lemma 3.1 and Theorem 3.1.

We next list more assumptions on our primal-dual framework.
* (H0) The objective function \(\varphi:X\to\mathbb{R}_{+\infty}\) is proper and w-lsc.
* (H1) The function \(\varphi\) has weakly compact level sets.
* (H2) The dualizing parameterization \(f\) is proper (i.e., \(\operatorname{dom}f\neq\emptyset\) and \(f(x,z)>-\infty,\ \forall\,(x,z)\in X\times H\)), w-lsc and weakly level-compact (see Definition 3.2).

We define next the problem dual to \((P)\), via our augmented Lagrangian function.

**Definition 3.5** (augmented Lagrangian and associated dual problem): _With the notation of Problem \((P)\), let_
* \(f\) _be a dualizing parameterization as in Definition 3.1, satisfying assumption (H2),_
* \(A:H\to H\) _be a function verifying the assumptions (A0)-(A1),_
* \(\sigma\) _be an augmenting function as in Definition 3.4, with (a') instead of (a)._

_The augmented Lagrangian for Problem \((P)\) is defined as_ \[\ell(x,y,c):=\inf_{z\in H}\{f(x,z)-\langle A(z),y\rangle+c\sigma(z)\}. \tag{5}\] _The dual function \(q:H\times\mathbb{R}_{+}\to\mathbb{R}_{-\infty}\) is defined as_ \[q(y,c):=\inf_{x\in X}\ell(x,y,c). \tag{6}\] _The dual problem of \((P)\) is given by_ \[(D)\qquad\text{maximize }q(y,c)\ \text{ s.t. }\ (y,c)\in H\times \mathbb{R}_{+}.\] _Denote by \(M_{P}:=\inf_{x\in X}\varphi(x)\) and by \(M_{D}:=\sup_{(y,c)\in H\times\mathbb{R}_{+}}q(y,c)\) the optimal values of the primal and dual problem, respectively. The primal and dual solution sets are denoted by \(S(P)\) and \(S(D)\), respectively._

**Remark 3.3** (finite primal value for Problem \((P)\)): _By definition of dualizing parameterization and assumption (H0), we have that \(\varphi\) is proper, so by Definition 3.1, the following holds_ \[M_{P}=\beta(0)<+\infty. \tag{7}\]

### Properties of the Primal-Dual Setting

We next present some basic properties of the dual function given in Definition 3.5.

**Proposition 3.2** (properties of the dual function): _Let \(q\) be the dual function defined in (6). The following facts hold._
* (i) _The dual function \(q\) is concave and weakly upper-semicontinuous (w-usc)._
* (ii) _If \(c\geq c_{1}\) then \(q(y,c)\geq q(y,c_{1})\) for all \(y\in H\). In particular, if \((y,c_{1})\) is a dual solution, then also \((y,c)\) is a dual solution for all \(c\geq c_{1}\)._
* (iii) _Weak duality holds for the primal-dual framework, i.e._ \[M_{D}=\sup_{(y,c)\in H\times\mathbb{R}_{+}}q(y,c)\leq\inf_{x\in X}\varphi(x)=M _{P},\] _where \(\varphi\) verifies (H0), and \(q\) is as in Definition 3.5._

Proof. (i) We show that \(q\) is w-usc and concave simultaneously. By (6), \(q\) is the infimum of a family of w-usc and concave functions. Indeed, define \(\psi_{xz}:H\times\mathbb{R}\to\mathbb{R}\) as \(\psi_{xz}(y,c):=f(x,z)-\langle A(z),y\rangle+c\sigma(z)\). Then \(\psi_{xz}\) is w-continuous and concave (actually affine) w.r.t. the variable \((y,c)\). Now the concavity and weak-upper semicontinuity of \(q\) follow from (6). We now proceed to show (ii). The fact that \(q(y,\cdot)\) is non-decreasing follows directly from the definition and the fact that \(\sigma(z)\geq 0\). Let now \((y,c_{1})\in S(D)\). So \(q(y,c_{1})=M_{D}\). 
For every \(c\geq c_{1}\) we use the non-decreasing property to write \[M_{D}\geq q(y,c)\geq q(y,c_{1})=M_{D},\] so \(q(y,c)=M_{D}\) for all \(c\geq c_{1}\). Let us now show (iii). Using equations (5) and (6), we have that \[\begin{array}{rcl}M_{D}=\sup_{(y,c)\in H\times\mathbb{R}_{+}}q(y,c)&=&\sup_ {(y,c)\in H\times\mathbb{R}_{+}}\inf_{x\in X}\inf_{z\in H}\{f(x,z)-\langle A( z),y\rangle+c\sigma(z)\}\\ &=&\sup_{(y,c)\in H\times\mathbb{R}_{+}}\inf_{z\in H}\{\inf_{x\in X}f(x,z)\} -\langle A(z),y\rangle+c\sigma(z)\\ &=&\sup_{(y,c)\in H\times\mathbb{R}_{+}}\inf_{z\in H}\{\beta(z)-\langle A(z),y\rangle+c\sigma(z)\}\\ &\leq&\sup_{(y,c)\in H\times\mathbb{R}_{+}}\{\beta(0)-\langle A(0),y\rangle+ c\sigma(0)\}=\beta(0)\\ \\ &=&\inf_{x\in X}\varphi(x)=M_{P},\end{array} \tag{8}\] where we used the definition of \(\beta\) (see Definition 3.1) in the fourth equality. We also used the fact that \(A(0)=0\) (which holds by \((A_{0})\)) and the fact that \(\sigma(0)=0\) (see Remark 3.2(i) and Definition 3.4(b)). \(\Box\) We note that the result above only requires for \(\sigma\) to verify property (b) in Definition 3.4. We now proceed to establish the zero duality gap property for our primal dual setting. Burachik and Rubinov show strong duality for very general primal - dual frameworks in [17]. In their analysis, they use _abstract convexity_ tools [32]. Even though we can deduce the zero duality gap property as a consequence of their analysis, we prefer to prove this fact directly here. We do this because one of the assumptions used in [17] can be relaxed in our setting. Namely, we can replace the assumption of w-lsc of \(\beta\) (used in [17]) by just _sequential_-w-lsc of \(\beta\). Recall that the latter property holds for our function \(\beta\), as established in Proposition 3.1. **Lemma 3.1** (properties of the Lagrangian approximation): _Consider the primal problem (P) and its dual problem (D). Assume that (H0)-(H2), \((A_{0})\) and \((A_{1})\) hold. Assume that the augmenting function \(\sigma\) verifies Definition 3.4(a')(b). Suppose that there exists some \((\bar{y},\tilde{c})\in H\times\mathbb{R}_{+}\) such that \(q(\bar{y},\tilde{c})>-\infty\). For \(n\in\mathbb{N}\), define \(\gamma_{n}:H\to\mathbb{R}_{-\infty}\) as_ \[\gamma_{n}(z):=\beta(z)-\langle A(z),\bar{y}\rangle+n\sigma(z).\] _There exists a sequence \((v_{n})\subset H\) with the following properties._ * _There exists_ \(n_{0}\in\mathbb{N}\) _such that_ \[\gamma_{n}(v_{n})=\inf_{z\in H}\gamma_{n}(z),\ \forall\,n\geq n_{0}.\] (9) * _The sequence_ \((v_{n})\subset H\) _verifying (_9_) is bounded and converges weakly to zero._ Proof. Take \((\bar{y},\bar{c})\in H\times\mathbb{R}_{+}\) as given in the assumption of the theorem. Let \(\bar{n}:=[\tilde{c}]+1\) where \([\cdot]\) denotes the integer part (or floor) of a real number. Then \(\bar{n}\in\mathbb{N}\) and since \(q(\bar{y},\cdot)\) is increasing we have that \(q(\bar{y},\bar{n})\geq q(\bar{y},\bar{c})>-\infty\). For any \(n\in\mathbb{N}\), denote by \(s(n):=\inf_{z\in H}\gamma_{n}(z)\). Since \(q(\bar{y},\bar{n})>-\infty\) there exists \(r_{0}\in\mathbb{R}\) such that \[r_{0}<q(\bar{y},\bar{n})=\inf_{z\in H}\beta(z)-\langle A(z),\bar{y}\rangle+ \bar{n}\sigma(z)=s(\bar{n}),\] which, together with the fact that \((s(n))\) is an increasing sequence, yields \(s(n)>r_{0}\) for all \(n\geq\bar{n}\). From now on, we consider the sequence \((s(n))\) for \(n\geq\bar{n}\). 
Observe that this sequence \((s(n))\) is monotone increasing, bounded below by \(r_{0}\) and bounded above by \(\beta(0)=M_{P}\). Indeed, the monotonicity property follows from Proposition 3.2(ii). The statement on the upper bound follows directly from the definition of \(s(\cdot)\). Namely, for every \(n\geq\bar{n}\) we have that \(s(n)\leq\gamma_{n}(0)=\beta(0)\), where we used assumption \((\mathbf{A_{0}})\) and the fact that \(\sigma(0)=0\). Altogether, the latter properties imply that the sequence \((s(n))\) converges (increasingly) to a limit \(\bar{s}\leq\beta(0)\). Proof of (i). Fix \(n\geq\bar{n}\), we have that \[s(n)+1/k>s(n)=\inf_{z\in H}\gamma_{n}(z),\] for every \(k>\bar{n}\). The definition of infimum allows us to find \(v_{n}^{k}\in H\) such that \[0\leq\gamma_{n}(v_{n}^{k})-s(n)<1/k, \tag{10}\] where we used the definition of \(s(\cdot)\) in the first inequality. For every \(k>\bar{n}\), we use the definition of \(s(\bar{n})\) to write \[s(\bar{n})\leq\gamma_{\bar{n}}(v_{n}^{k})=\gamma_{k}(v_{n}^{k})+(\bar{n}-k) \sigma(v_{n}^{k}), \tag{11}\] which re-arranges as \[(k-\bar{n})\sigma(v_{n}^{k})\leq\gamma_{k}(v_{n}^{k})-s(\bar{n}). \tag{12}\] By (10) and the established properties of the sequence \((s(n))\), we know that \[\gamma_{n}(v_{n}^{k})<s(n)+1/k\leq\bar{s}+1/k<\bar{s}+1, \tag{13}\] for all \(k>\bar{n}\). Using (13) in (12), and re-arranging the resulting expression we obtain \[\sigma(v_{n}^{k})\leq\frac{(\bar{s}-s(\bar{n}))+1}{(k-\bar{n})}, \tag{14}\] where we also used the fact that \(k>\bar{n}\). Take now \(k_{0}:=\bar{n}+\frac{(\bar{s}-s(\bar{n}))+1}{K_{\sigma}}\), where \(K_{\sigma}>0\) is as in Definition 3.4(a'). Then it is direct to check that \[\sigma(v_{n}^{k})\leq\frac{(\bar{s}-s(\bar{n}))+1}{(k-\bar{n})}<K_{\sigma},\] for all \(k\geq k_{0}\). Now Definition 3.4(a') implies that the set \[T:=\{v_{n}^{k}\,:\,n\geq\bar{n},k\geq k_{0}\},\] is bounded. In particular, for a fixed \(n\geq\bar{n}\) the sequence \((v_{n}^{k})_{k\geq k_{0}}\) is bounded and hence it has a subsequence that converges weakly to some \(v_{n}\in H\). To keep notation simple, we still denote the weakly convergent subsequence by \((v_{n}^{k})_{k\geq k_{0}}\). By Proposition 3.1, we know that \(\beta\) is sequentially-w-lsc. By \((\mathbf{A_{1}})\) and Definition 3.4(a'), \(-\langle A(\cdot),\bar{y}\rangle\) and \(\sigma\) are w-lsc (and hence sequentially-w-lsc), we deduce that \(\gamma_{n}\) is sequentially-w-lsc. Using the sequential w-lsc of \(\gamma_{n}\) and the first inequality in (13) we can write \[\gamma_{n}(v_{n})\leq\liminf_{k\to\infty}\gamma_{k}(v_{n}^{k})\leq\liminf_{k \to\infty}s(n)+1/k=s(n)\leq\gamma_{n}(z),\] for every \(z\in H\) and every fixed \(n\geq\bar{n}\). The last inequality in the expression above follows from the definition of \(s(\cdot)\). Statement (i) now follows with \(n_{0}:=\bar{n}\), by taking \(z=v_{n}\) in the above expression. Proof of (ii). Take now the sequence \((v_{n})\) defined in part (i). Note that the set \(T\) defined in part (i) is bounded, so there exists a closed ball \(B_{0}\) such that \(T\subset B_{0}\). By Theorem 2.2, \(B_{0}\) is weakly compact (and by Fact 2.1 weakly closed). This implies that the weak closure of \(T\) must be contained in \(B_{0}\). Namely, \[\overline{T}^{w}\subset\overline{B_{0}}^{w}=B_{0},\] showing that \(\overline{T}^{w}\)is bounded. 
By construction (see proof of (i)), every \(v_{n}\) is a weak limit of a sequence in \(T\), so we deduce that \[(v_{n})\subset\overline{T}^{w}\subset B_{0},\] showing that \((v_{n})\) is bounded. Thus the boundedness statement in (ii) holds. Let us proceed to show now that the sequence \((v_{n})\) converges weakly to zero. To prove this fact, we will show that every weakly convergent subsequence must converge to zero. If the latter is true, zero is the only weak accumulation point, so the whole sequence must weakly converge to zero. We have just established that \((v_{n})\) is bounded, so by Theorem 2.3, it has weakly convergent subsequences. Take any such subsequence, denoted by \((v_{n_{j}})_{j\in\mathbb{N}}\), converging weakly to some \(v\). Since \((v_{n_{j}})\subset(v_{n})\) and \(n\geq\bar{n}\), we can take \(n_{j}>\bar{n}\). Following the same steps as in (11)-(14) with \(k=n_{j}>\bar{n}\) and \(v_{n_{j}}\) in place of \(v_{n}^{k}\) we have that \[\sigma(v_{n_{j}})\leq\frac{(\bar{s}-s(\bar{n}))+1}{(n_{j}-\bar{n})},\] with \(n_{j}\to\infty\). By the w-lsc of \(\sigma\) we can write \[0\leq\sigma(v)\leq\liminf_{j\to\infty}\sigma(v_{n_{j}})\leq\liminf_{j\to \infty}\frac{(\bar{s}-s(\bar{n}))+1}{(n_{j}-\bar{n})}=0,\] so \(\sigma(v)=0\) and the assumptions on \(\sigma\) yield \(v=0\). This shows that every weak accumulation point of \((v_{n})\) must be equal to zero, and hence the whole sequence converges weakly to zero, completing the proof of (ii). \(\Box\) We are now ready to establish the strong duality property of our primal-dual framework. **Theorem 3.1** (strong duality for \((p)\)-\((d)\) framework): _Consider the primal problem (P) and its dual problem (D). Assume that (H0)-(H2), \((\mathbb{A}_{0})\) and \((\mathbb{A}_{1})\) hold. Assume that the augmenting function \(\sigma\) verifies Definition 3.4(a')(b). Suppose that there exists some \((\bar{y},\bar{c})\in H\times\mathbb{R}_{+}\) such that \(q(\bar{y},\bar{c})>-\infty\). Then the zero-duality-gap property holds, i.e. \(M_{P}=M_{D}\)._ Proof. Recall that weak duality (i.e., that \(M_{D}\leq M_{P}\)) holds in our setting, as established earlier in Proposition 3.2(iii). Hence, we only need to prove that \(M_{D}\geq M_{P}\). By Lemma 3.1(i), we can take a sequence \((v_{n})\) verifying (9). Our first step is to show the following inequality. \[M_{D}\geq\liminf_{n}\gamma_{n}(v_{n}), \tag{15}\] where \(\gamma_{n}\) and \(v_{n}\) are as in Lemma 3.1. Using \(n_{0}\) as in Lemma 3.1(i) and the definition of \(M_{D}\), we have that \[\begin{array}{rcl}M_{D}&=&\sup_{(y,c)\in H\times\mathbb{R}_{+}}\inf_{z\in H} \{\beta(z)-\langle A(z),y\rangle+c\sigma(z)\}\\ &\geq&\inf_{z\in H}\{\beta(z)-\langle A(z),\bar{y}\rangle+n\sigma(z)\}\\ &=&\{\beta(v_{n})-\langle A(v_{n}),\bar{y}\rangle+n\sigma(v_{n})\}=\gamma_{n }(v_{n}),\end{array} \tag{16}\] where we used the fixed choice of \((y,c):=(\bar{y},n)\) with \(n>n_{0}\) in the inequality, fact (9) in the second equality, and the definition of \(\gamma_{n}\) in the last one. Inequality (15) now follows by taking \(\liminf\) in (16). 
Using (15), the definition of \(\gamma_{n}\) and the properties of \(\liminf\) we deduce that \[\begin{array}{rcl}M_{D}&\geq&\liminf_{n}\gamma_{n}(v_{n})\\ &=&\liminf_{n}\left\{\beta(v_{n})+\left[-\langle A(v_{n}),\bar{y}\rangle+n\sigma(v_{n})\right]\right\}\\ &\geq&\liminf_{n}\beta(v_{n})+\liminf_{n}\left[-\langle A(v_{n}),\bar{y}\rangle+n\sigma(v_{n})\right]\\ &\geq&\liminf_{n}\beta(v_{n})+\liminf_{n}\left[-\langle A(v_{n}),\bar{y}\rangle\right]\\ &\geq&\beta(0)+0=\beta(0)=M_{P},\end{array} \tag{17}\] where we used the fact that \(n\sigma(v_{n})\geq 0\) in the third inequality. In the last inequality we used Lemma 3.1(ii), namely the fact that \((v_{n})\) converges weakly to zero and the fact that \(A(0)=0\). More precisely, using the (sequential) w-lsc of the functions \(\beta\) and \(-\langle A(\cdot),\bar{y}\rangle\), we obtain \[\liminf_{n}\left[-\langle A(v_{n}),\bar{y}\rangle\right]\geq-\langle A(0), \bar{y}\rangle=0,\] and \[\liminf_{n}\beta(v_{n})\geq\beta(0).\] Both facts were used in the last inequality of (17). Since we already have that \(M_{D}\leq M_{P}\), we have thus established that \(M_{P}=M_{D}\). \(\Box\)

**Definition 3.6** (superdifferential of a concave function): _Let \(H\) be a Hilbert space and \(g:H\to\mathbb{R}_{-\infty}\) be a concave function. Take \(r\geq 0\). The \(r\)-superdifferential of \(g\) at \(w_{0}\in\mathrm{dom}(g):=\{w\in H\;:\;g(w)>-\infty\}\) is the set \(\partial_{r}g(w_{0})\) defined by_ \[\partial_{r}g(w_{0}):=\{v\in H:g(w)\leq g(w_{0})+\langle v,w-w_{0}\rangle+r, \;\;\forall w\in H\}.\]

**Definition 3.7** (approximations for the primal-dual and Lagrangian): _We say that_
1. \(x_{*}\in X\) _is an_ \(\epsilon\)_-optimal primal solution of_ \((P)\) _if_ \(\varphi(x_{*})\leq M_{P}+\epsilon\)_._
2. \((y_{*},c_{*})\in H\times\mathbb{R}_{+}\) _is an_ \(\epsilon\)_-optimal dual solution if_ \(q(y_{*},c_{*})\geq M_{D}-\epsilon\)_._
3. _For_ \(r\geq 0\) _define the set_ \[X_{r}(y,c):=\{(x,z)\in X\times H:f(x,z)-\langle A(z),y\rangle+c\sigma(z)\leq q (y,c)+r\}, \tag{18}\] _which contains all_ \(r\)_-minimizers of the augmented Lagrangian._
4. _Fix_ \((w,c)\in H\times\mathbb{R}_{+}\) _and define_ \(\Phi_{(w,c)}:X\times H\to\;\mathbb{R}\) _as_ \[\Phi_{(w,c)}(x,z):=f(x,z)-\langle A(z),w\rangle+c\sigma(z). \tag{19}\]

**Remark 3.4** (the dual set of the approximation for the Lagrangian): By definition of \(q\) as an infimum, for every \(r>0\) and every \((y,c)\) such that \(q(y,c)>-\infty\), there exists \((x,z)\) such that \(f(x,z)-\langle A(z),y\rangle+c\sigma(z)<q(y,c)+r\). Therefore, for every \(r>0\) and every \((y,c)\) such that \(q(y,c)>-\infty\), we have that \(X_{r}(y,c)\) is nonempty.

The result below extends [9, Proposition 3.1, parts (i) and (iii)], where the particular case in which \(A=I\), the identity map in \(H\), is considered. Since the proof follows, mutatis mutandis, the same steps as those in [9, Proposition 3.1, parts (i) and (iii)], we omit it.

**Proposition 3.3**: _If \((\hat{x},\hat{z})\in X_{r}(\hat{y},\hat{c})\), then the following facts hold. i) For all \(r\geq 0\), \((-A(\hat{z}),\sigma(\hat{z}))\in\partial_{r}q(\hat{y},\hat{c})\). ii) If \(M_{D}\leq M_{P}\) and \(\hat{z}=0\), then \(\hat{x}\) is an \(r\)-optimal primal solution, and \((\hat{y},\hat{c})\) is an \(r\)-optimal dual solution. 
In particular, if \(r\leq\epsilon\), then \(\hat{x}\) is a \(\epsilon\)-optimal primal solution, and \((\hat{y},\hat{c})\) is a \(\epsilon\)-optimal dual solution._ **From now on we assume that the hypotheses of Theorem 3.1 are verified, and hence we have \(M_{P}=M_{D}\).** The following result establishes several properties of the primal-dual solution sets, as well as compactness properties of the level sets of the function \(\Phi_{(y,c)}\) defined in (19). The techniques of the proof for parts (i), (ii), the non-emptiness of the set in (20), and (iiiA) are standard, and can be found in [9, Lemma 3.1]. Hence we will omit their proofs. The proof of part (iiiB), however, is new because the coercivity assumption on \(\sigma\), which is used in [9], is relaxed to the weaker version of Definition 3.4 with condition (a'). Hence we present here the proof of this part. **Theorem 3.2** (the compact level set of the Lagrangian): _Consider the primal problem (P) and its dual problem (D). Suppose that_ (H0)-(H2)_,_ (\(A_{0}\)) _and_ (\(A_{1}\)) _hold. The following statements hold._ * _The set_ \(S(P)\neq\emptyset\) _and_ \(M_{P}\in\mathbb{R}\)_._ * _Let_ \((\hat{y},\hat{c})\in H\times\mathbb{R}_{+}\) _be such that_ \(q(\hat{y},\hat{c})>-\infty\) _and consider the set_ \[T:=\{(w,c)\in H\times\mathbb{R}_{+}\,:\,c>\hat{c}+\|w-\hat{y}\|\}.\] _Then,_ * \(T\subset\operatorname{dom}q\)_, i.e.,_ \(q(w,c)>-\infty\) _for every_ \((w,c)\in T\)_._ * _If_ \((\hat{y},\hat{c})\in S(D)\) _then_ \(T\subset S(D)\)_._ * _For every_ \(s\geq M_{P}\)_, and every_ \((w,c)\in H\times\mathbb{R}_{+}\)_, the level set_ \[\text{lev}_{\Phi_{(w,c)}}(s)=\{(x,z)\in X\times H:\Phi_{(w,c)}(x,z)=f(x,z)- \langle A(z),w\rangle+c\sigma(z)\leq s\},\] (20) _is not empty._ * _Let_ \((\hat{y},\hat{c})\) _be as in (ii) and assume that_ \(\sigma\) _verifies Definition_ 3.4 _with condition (a). Then the level set in (_20_) is weakly-compact for every_ \((w,c)\in T\)_. In this situation, there exists_ \((\tilde{x},\tilde{z})\) _such that_ \[q(w,c)=f(\tilde{x},\tilde{z})-\langle A(\tilde{z}),w\rangle+c\sigma(\tilde{z}).\] (21) * _Let_ \((\hat{y},\hat{c})\) _be as in (ii) and assume that_ \(\sigma\) _verifies Definition_ 3.4 _with condition (a'). Define the set_ \[\hat{T}(s):=\{(w,c)\in H\times\mathbb{R}_{+}\,:\,c>\hat{c}+\left(\frac{s-q( \hat{y},\hat{c})}{K_{\sigma}}\right)+\|w-\hat{y}\|\}\subset T,\] _where_ \(K_{\sigma}>0\) _is as in Definition_ 3.4_(a'). Then the level set in (_20_) is weakly-compact for every_ \((w,c)\in\hat{T}(s)\)_. In this situation, there exists_ \((\tilde{x},\tilde{z})\) _such that (_21_) holds._ Proof. The proof of parts (i), (ii), the non-emptiness of the set in (20), and (iiiA) are similar to [9, Lemma 3.1]. We proceed to establish (iiiB). Note first that \(\tilde{T}(s)\subset T\) because \(\left(\frac{s-q(\hat{y},\hat{c})}{K_{\sigma}}\right)\geq 0\). Indeed, note that \(s\geq M_{P}=M_{D}\geq q(\hat{y},\hat{c})\) and \(K_{\sigma}>0\). It remains to show that \(lev_{\Phi_{(w,c)}}(s)\) is weakly compact under the assumptions given in (iiiB). Namely, we need to show that the level set in (20) is weakly compact for every \((w,c)\in\tilde{T}(s)\). By Theorem 2.1, it is enough to show that the set \(lev_{\Phi_{(w,c)}}(s)\) is weekly sequentially compact. The latter means that every sequence contained in \(lev_{\Phi_{(w,c)}}(s)\) has a weakly convergent subsequence, and that the limit of the weakly convergent subsequence belongs to \(lev_{\Phi_{(w,c)}}(s)\). 
Take a sequence \(\{(x_{k},z_{k})\}\subset lev_{\Phi_{(w,c)}}(s)\). We start by showing that \(\{z_{k}\}\) has a weakly convergent subsequence. Indeed, by definition of \(lev_{\Phi_{(w,c)}}(s)\) we have \[s \geq\ f(x_{k},z_{k})-\langle A(z_{k}),w\rangle+c\sigma(z_{k})\] \[=\ f(x_{k},z_{k})-\langle A(z_{k}),\hat{y}\rangle+\hat{c}\sigma(z _{k})+\langle A(z_{k}),\hat{y}-w\rangle+(c-\hat{c})\sigma(z_{k})\] \[\geq\ f(x_{k},z_{k})-\langle A(z_{k}),\hat{y}\rangle+\hat{c} \sigma(z_{k})-\|A(z_{k})\|\|\hat{y}-w\|+(c-\hat{c})\sigma(z_{k})\] \[\geq\ q(\hat{y},\hat{c})+(c-\hat{c}-\|w-\hat{y}\|)\sigma(z_{k}),\] where we used Cauchy-Schwarz in the second inequality and the definition of \(q\) and \((A_{0})\) in the third one. Since \(\tilde{T}(s)\subset T\) and \((w,c)\in\tilde{T}(s)\) we have that \((w,c)\in T\) and hence \((c-\hat{c}-\|w-\hat{y}\|)>0\). The fact that \(q(\hat{y},\hat{c})>-\infty\), together with the properness of \(\varphi\) imply that \(q(\hat{y},\hat{c})\in\mathbb{R}\). Altogether, we can re-arrange the last expression to obtain \[\sigma(z_{k})\leq\frac{s-q(\hat{y},\hat{c})}{c-\hat{c}-\|w-\hat{y}\|}=:M(c).\] Since \(s\geq M_{P}=M_{D}\geq q(\hat{y},\hat{c})\) we have that \(M(c)\geq 0\). We will use now the fact that \((w,c)\in\tilde{T}(s)\). Indeed, this assumption implies that \[c>\frac{s-q(\hat{y},\hat{c})}{K_{\sigma}}+\hat{c}+\|w-\hat{y}\|.\] Under this assumption on \((w,c)\) it direct to check that \(M(c)<K_{\sigma}\). By Definition 3.4(a'), the sequence \(\{z_{k}\}\) is bounded and hence it has a weakly convergent subsequence. Without loss of generality, we can assume that the whole sequence \(\{z_{k}\}\) converges weakly to some \(\bar{z}\). Now we proceed to find a subsequence of \(\{x_{k}\}\) which is weakly convergent. Indeed, using the fact that \(\{(x_{k},z_{k})\}\subset lev_{\Phi_{(w,c)}}(s)\) we can write \[f(x_{k},z_{k})\leq s+\langle A(z_{k}),w\rangle-c\sigma(z_{k})\leq s+\|w\| \|A(z_{k})\|\leq s+\|w\|\sigma(z_{k})\leq s+\|w\|M(c)=:\tilde{\alpha} \tag{22}\] for some \(\tilde{\alpha}\in\mathbb{R}\) (note that \((w,c)\in\tilde{T}(s)\) is fixed). By weak level compactness of \(f\) (see Definition 3.2), there exists a weakly compact set \(B\subset X\) and a weakly open neighbourhood \(U\) of \(\bar{z}\) such that \[\bigcup_{z\in U}lev_{z,f}(\tilde{\alpha})\subset B,\] where \(lev_{z,f}(\tilde{\alpha}):=\{x\in X\,:\,f(x,z)\leq\tilde{\alpha}\}\). Since \(\{z_{k}\}\) converges weakly to \(\bar{z}\) and \(U\) is weakly open, there exists a \(k_{0}\) such that \(z_{k}\in U\) for all \(k>k_{0}\). Using (22) we deduce that \[\{x_{k}\}_{k>k_{0}}\subset\bigcup_{k>k_{0}}\{x\in X\,:\,f(x,z_{k})\leq\tilde{ \alpha}\}\subset B.\] Consequently, \(\{x_{k}\}_{k>k_{0}}\subset B\) and since \(B\) is weakly compact, there exists a subsequence of \(\{x_{k}\}_{k>k_{0}}\) which converges weakly to some \(\bar{x}\). Altogether, we have established that \(\{(x_{k},z_{k})\}\) has a weakly convergent subsequence \(\{(x_{k_{j}},z_{k_{j}})\}\), with limit \((\bar{x},\bar{z})\). Recall that \(f\) and \(\sigma\) are w-lsc, and the function \(\langle A(\cdot),w\rangle\) is w-lsc. Therefore, \(-\langle A(\cdot),w\rangle\) is w-lsc. Altogether, the function \(\Phi_{(w,c)}\) given by (19) is w-lsc. For the weakly convergent subsequence \(\{(x_{k_{j}},z_{k_{j}})\}\) we can write \[\Phi_{(w,c)}(\bar{x},\bar{z})\leq\liminf_{j\to\infty}\Phi_{(w,c)}(x_{k_{j}},z_{ k_{j}})\leq s,\] where the last inequality follows from the assumption that \(\{(x_{k},z_{k})\}\subset lev_{\Phi_{(w,c)}}(s)\). 
Hence, we have proved that the weak limit \((\bar{x},\bar{z})\) belongs to \(lev_{\Phi_{(w,c)}}(s)\), and so the latter set is weakly compact, as claimed. We proceed now to prove the last statement in (iiiB), which requires the existence of \((\tilde{x},\tilde{z})\) as in (21). We note first that, by (ii) and the properness of \(\varphi\), \(q(w,c)\in\mathbb{R}\) for every \((w,c)\in T\), and hence the same holds for every \((w,c)\in\tilde{T}(s)\). To establish the equality in (21), we need to show that the infimum corresponding to the value \(q(w,c)\) is actually attained. We claim that the equality in (21) follows from the fact that \[\operatorname*{argmin}_{(x,z)\in X\times H}\Phi_{(w,c)}(x,z)\neq\emptyset. \tag{23}\] Indeed, assume that (23) holds and take \((\tilde{x},\tilde{z})\in\operatorname*{argmin}_{(x,z)\in X\times H}\Phi_{(w,c )}(x,z)\). Thus, \[q(w,c)=\inf_{(x,z)\in X\times H}\Phi_{(w,c)}(x,z)=\Phi_{(w,c)}(\tilde{x}, \tilde{z})=f(\tilde{x},\tilde{z})-\langle A(\tilde{z}),w\rangle+c\sigma( \tilde{z}), \tag{24}\] where we used the definitions of \(q\) and \(\Phi_{(w,c)}\), and the assumption on \((\tilde{x},\tilde{z})\). Therefore, the equality in (21) will hold if we prove (23). We know that the set in (20) is nonempty, and we proved already that it is weakly compact. With the notation of Theorem 2.4, set \(E:=X\times H\), and consider in \(E\) the weak topology, set \(\varphi:=\Phi_{(w,c)}\) and \(K:=lev_{\Phi_{(w,c)}}(s)\). It holds by definition of level set that \(lev_{\Phi_{(w,c)}}(s)\subset\operatorname*{dom}\Phi_{(w,c)}\). Altogether, we have that \[\operatorname*{dom}\Phi_{(w,c)}\cap lev_{\Phi_{(w,c)}}(s)=lev_{\Phi_{(w,c)}}( s)\neq\emptyset,\] where the non-emptiness follows from the first statement in part (iii). Since \(\Phi_{(w,c)}\) is w-lsc, all the assumptions of Theorem 2.4 hold and therefore \(\Phi_{(w,c)}\) is bounded below over the set \(lev_{\Phi_{(w,c)}}(s)\) and attains its minimum over this set. This establishes (23), and the proof of the theorem is complete. \(\Box\) ## 4 Deflected Subgradient Algorithm (DSG) The following notation will be used throughout the paper. \[\begin{array}{l}q_{k}:=q(y_{k},c_{k}),\\ \bar{q}:=q(\bar{y},\bar{c}),\end{array}\] where \((\bar{y},\tilde{c})\) represents a dual solution, so \(\bar{q}=q(\bar{y},\bar{c})=M_{D}=M_{P}\geq q_{k}\) for every \(k\). ### Definition and Convergence Analysis In this section, we define the (DSG) algorithm and establish its convergence properties. We start by defining the Deflected Subgradient Algorithm (DSG). **Algorithm 4.1**: **Deflected Subgradient Algorithm (DSG)**__ **Step \(0\).** Choose \((y_{0},c_{0})\in H\times\mathbb{R}_{+}\) such that \(q(y_{0},c_{0})>-\infty\), and exogenous parameters \(\epsilon>0\) (a prescribed tolerance), \(\delta<1\), \(\{\alpha_{k}\}\subset(0,\alpha)\) for some \(\alpha>0\), and \(\{r_{k}\}\subset\mathbb{R}_{+}\) such that \(r_{k}\to 0\). Let \(k:=0\). **Step \(1\).** (Subproblem and Stopping Criterion) \(a)\) Find \((x_{k},z_{k})\in X_{r_{k}}(y_{k},c_{k})\), \(b)\) if \(z_{k}=0\) and \(r_{k}\leq\epsilon\) stop, \(c)\) if \(z_{k}=0\) and \(r_{k}>\epsilon\), then \(r_{k}:=\delta r_{k}\) and go to \((a)\), \(d)\) if \(z_{k}\neq 0\) go to Step \(2\). **Step \(2\).** (Selection of the stepsize and Updating the Variables) Consider \(s_{k}>0\) a stepsize and define \[y_{k+1} :=y_{k}-s_{k}A(z_{k}),\] \[c_{k+1} :=c_{k}+(\alpha_{k}+1)s_{k}\sigma(z_{k}),\] \[k :=k+1,\,\text{go to Step 1}.\] Note that, when \(A=0\), DSG becomes a classical penalty method. 
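To see the whole loop in action, the following is a minimal numerical sketch of DSG (our own illustration; the problem data, parameter values and the brute-force subproblem solver are choices made here, not taken from the paper). It uses \(X=\mathbb{R}^{2}\), \(H=\mathbb{R}\), the canonical parameterization \(f(x,z)=\varphi(x)\) if \(h(x)=z\) (and \(+\infty\) otherwise), the sharp choice \(A=I\) and \(\sigma(z)=|z|\), a constant step-size, and it replaces the exact test \(z_{k}=0\) by a small tolerance.

```python
import numpy as np

# Toy instance of (P): minimize phi(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 2 = 0.
# Its solution is x* = (1, 1) with optimal value 2.
phi = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 2.0

# Sharp-Lagrangian ingredients: A = I (identity on H = R) and sigma(z) = |z|.
A = lambda z: z
sigma = lambda z: abs(z)

# Crude stand-in for Step 1(a): minimize l(x, y, c) = phi(x) - A(h(x))*y + c*sigma(h(x))
# by enumerating a finite grid of candidate points x.
grid = np.linspace(-3.0, 3.0, 121)
cands = np.array([(a, b) for a in grid for b in grid])

def solve_subproblem(y, c):
    vals = np.array([phi(x) - A(h(x)) * y + c * sigma(h(x)) for x in cands])
    x_best = cands[np.argmin(vals)]
    return x_best, h(x_best)  # returns (x_k, z_k) with z_k = h(x_k)

# DSG loop (Steps 0-2) with constant step-size s_k and constant alpha_k.
y, c = 0.0, 1.0                  # (y_0, c_0)
s, alpha, tol = 0.5, 0.5, 1e-8   # illustrative parameters; tol replaces the exact test z_k = 0
for k in range(50):
    x_k, z_k = solve_subproblem(y, c)      # Step 1(a), inexact via the grid
    if sigma(z_k) <= tol:                  # stopping test, as in Step 1(b)
        break
    y = y - s * A(z_k)                     # Step 2: deflected subgradient update of y
    c = c + (alpha + 1.0) * s * sigma(z_k) # Step 2: update of the penalty parameter c

print("iterations:", k, " x_k:", x_k, " h(x_k):", h(x_k), " phi(x_k):", phi(x_k))
```

On this toy instance the loop typically stops after a couple of dual updates at a grid point satisfying the constraint. The constant step-size is only for illustration; the two step-size choices analysed in the paper, which drive the convergence theory, are introduced later in this section.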
For \(A:=I\) the identity map in \(H\), we recover the IMSg algorithm defined in [9, Section 3]. By Remark 3.4, when \(r_{k}>0\), there exists \((x_{k},z_{k})\in X_{r_{k}}(y_{k},c_{k})\) as in Step 1(a), showing that the inexact version of the algorithm is always well defined. When \(r_{k}=0\) for all \(k\) we obtain the exact version, which stops at the first \(k\) for which \(z_{k}=0\). The well-definedness of the exact version is shown below in Proposition 4.2. In the latter result, we give conditions under which Step 1(a) of the exact version can be performed. Our analysis includes a choice of \(\sigma\) either as in part (a) or as in part (a') of Definition 3.4. Using Proposition 3.2, we see that Step 2 in Algorithm 4.1 is nothing but an epsilon-subgradient step for the maximization of the dual function. If \(z_{k}\in N(A):=\{z\in H\,:\,A(z)=0\}\), then \(y_{k+1}=y_{k}\) and \(c_{k+1}>c_{k}\). Therefore \(y_{k}\) is not updated in this case. The situation in which \(z_{k}\in N(A)\) does not pose a problem in terms of convergence. Indeed, our results hold for \(A=0\), so that \(N(A)=H\). We establish next properties that hold for every stepsize \(s_{k}>0\) and for every \(\alpha_{k}\in(0,\alpha)\). The proof of the equivalence between statements (a) and (b) is standard, and, with minimal changes, follows the same steps as those in [9, Proposition 3.1(ii)]. We include its short proof here, however, because the expressions involved in the proof will often be used in later results.

**Proposition 4.1** (Characterization of dual convergence): _Let \(\{(x_{k},z_{k})\}\) and \(\{(y_{k},c_{k})\}\) be the sequences generated by \(\mathrm{DSG}\), and assume that \((\mathbf{A_{0}})\) holds. The following statements are equivalent._
* (a) _The dual sequence_ \(\{(y_{k},c_{k})\}\) _is bounded._
* (b) \(\sum_{k}s_{k}\sigma(z_{k})<+\infty\)_._
* (c) _The dual sequence_ \(\{(y_{k},c_{k})\}\) _converges strongly to a limit._
* (d) _The sequence_ \(\{c_{k}\}\) _is Cauchy._
* (e) _The sequence_ \(\{c_{k}\}\) _is bounded._

_Furthermore, if \(\{c_{k}\}\) is bounded, then \(\{y_{k}\}\) is also bounded._ Proof. Using \((\mathbf{A_{0}})\) and the definition of \(\{y_{k}\}\), we obtain \[\|y_{k+1}-y_{0}\|\leq\sum_{j=0}^{k}\|y_{j+1}-y_{j}\|=\sum_{j=0}^{k}s_{j}\|A(z _{j})\|\leq\sum_{j=0}^{k}s_{j}\sigma(z_{j}). \tag{25}\] On the other hand, by definition of \(\{c_{k}\}\) we have \[c_{k+1}-c_{0}=\sum_{j=0}^{k}c_{j+1}-c_{j}=\sum_{j=0}^{k}(\alpha_{j}+1)s_{j} \sigma(z_{j})\leq(\alpha+1)\sum_{j=0}^{k}s_{j}\sigma(z_{j}), \tag{26}\] where we used the fact that \(\alpha_{k}<\alpha\) for every \(k\). We prove first the equivalence between (a) and (b). If (b) holds, then (25) and (26) readily yield (a). Conversely, assume that (a) holds. Using the left hand side of (26) and (a) gives the existence of \(M>0\) such that \[M\geq c_{k+1}-c_{0}=\sum_{j=0}^{k}c_{j+1}-c_{j}=\sum_{j=0}^{k}(\alpha_{j}+1)s_{ j}\sigma(z_{j})\geq\sum_{j=0}^{k}s_{j}\sigma(z_{j}), \tag{27}\] where we used the fact that \(\alpha_{j}>0\) for all \(j\). Since the inequality above holds for all \(k\), we must have \(\sum_{k=0}^{\infty}s_{k}\sigma(z_{k})\leq M\) and we deduce (b). Let us now show that (a) is equivalent to (c). Clearly (c) implies (a), so it is enough to show that (a) implies (c). Assume that (a) holds. Then the sequence \(\{c_{k}\}\) is bounded. Since it is strictly increasing, it must be convergent. In particular, this implies that the sequence \(\{c_{k}\}\) is Cauchy. We will show now that \(\{y_{k}\}\) is also Cauchy with respect to the norm. 
Indeed, for every \(k,j\in\mathbb{N}\) we can write \[c_{k+j}-c_{k}=\sum_{l=k}^{k+j-1}c_{l+1}-c_{l}=\sum_{l=k}^{k+j-1}(\alpha_{l}+1) s_{l}\sigma(z_{l})\geq\sum_{l=k}^{k+j-1}s_{l}\sigma(z_{l})\geq\|y_{k+j}-y_{k}\|, \tag{28}\] where we used again the fact that \(\alpha_{j}>0\) for all \(j\) in the first inequality. The last inequality is obtained as in (25), but with \(k+j-1\) in place of \(k\) and \(k\) in place of \(0\). Since \(\{c_{k}\}\) is Cauchy, then for all \(j\in\mathbb{N}\) we have \[0=\lim_{k\to\infty}c_{k+j}-c_{k}\geq\lim_{k\to\infty}\|y_{k+j}-y_{k}\|\geq 0, \tag{29}\] so \(\{y_{k}\}\) is also Cauchy as claimed. Our claim is true and since \(H\) is complete, the sequence \(\{y_{k}\}\) strongly converges to a limit \(\bar{y}\). Altogether, \(\{(y_{k},c_{k})\}\) is strongly convergent, so (a) implies (c). Since (c) implies (d), to complete the proof it is enough to show that (d) implies (c). This is achieved in a similar way as in (a) implies (c). Indeed, if \(\{c_{k}\}\) is Cauchy, then by (29) we deduce that \(\{y_{k}\}\) is also Cauchy, and hence by completeness of \(H\times\mathbb{R}\), we deduce that (c) holds. We clearly have that (d) implies (e). If (e) holds, then by (26) and (25) we must have \(\{y_{k}\}\) also bounded, hence (a) holds. The proof is complete. \(\Box\) The next result establishes the well-definedness of the algorithm, namely that the minimization performed in Step 1 has a solution. We establish this fact either when \(\sigma\) is coercive or when it is as in Definition 3.4(a'). The proof of part (i) in the next result follows the steps of [9, Proposition 3.2], so we omit its proof. Part (ii) uses the weaker assumption on \(\sigma\), namely conditional coercivity. **Proposition 4.2** (Well-definedness of DSG): _Consider \(s\geq M_{P}\), \((\hat{y},\hat{c})\), and the sets \(T,\tilde{T}(s)\) as in Theorem 3.2._ * _Assume that_ \(\sigma\) _verifies Definition_ 3.4_(a). Take_ \(y_{0}:=\hat{y}\) _and_ \(c_{0}>\hat{c}\)_. Then, the dual sequence_ \(\{(y_{k},c_{k})\}\) _generated by the exact version of DSG with_ \((y_{0},c_{0})\) _is well-defined. Namely, the set_ \[X(y_{k},c_{k}):=\{(x,z)\in X\times H:f(x,z)-\langle A(z),y_{k}\rangle+c_{k} \sigma(z)=q(y_{k},c_{k})\},\] _is nonempty for all_ \(k\geq 0\)_._ * _Assume that_ \(\sigma\) _verifies Definition_ 3.4_(a'). Take_ \(y_{0}:=\hat{y}\) _and_ \(c_{0}>\hat{c}+\frac{s-q(\hat{y},\hat{c})}{K_{\sigma}}\)_. Then the same conclusion in (i) holds._ _In particular, as long as \((y_{0},c_{0})\) is chosen as in (i) or (ii) for the corresponding type of \(\sigma\), we will have that the exact version of DSG is well defined; that is to say, for all \(k\geq 1\), there exists \((x_{k},z_{k})\in X\times H\) satisfying \(q(y_{k},c_{k})=[f(x_{k},z_{k})-\langle A(z_{k}),y_{k}\rangle+c_{k}\sigma(z_{k })]\in\mathbb{R}\) for every \(k\geq 1\)._ Proof. (i) Similar to [9, Proposition 3.2]. (ii) By assumption \((y_{0},c_{0})\in\tilde{T}(s)\), we know by Theorem 3.2(iiiB) that there exists \((x_{0},z_{0})\in X(y_{0},c_{0})\). If \(z_{0}=0\) the algorithm stops at \(k=0\) and the claim in (ii) holds for the single iterate \((y_{0},c_{0})\). Assume that \(k\geq 1\) (i.e., the algorithm does not stop at \(k=0\) and hence \(z_{0}\neq 0\)). Let us show that \((y_{k},c_{k})\in\tilde{T}(s)\) for every \(k\geq 1\). From (25), we have \(\sum_{j=0}^{k-1}s_{j}\sigma(z_{j})\geq\|y_{k}-y_{0}\|\) and from (26) we have \(c_{k}-c_{0}=\sum_{j=0}^{k-1}(\alpha_{j}+1)s_{j}\sigma(z_{j})\). 
Altogether, we have \[c_{k}\;=\;c_{0}+\sum_{j=0}^{k-1}(\alpha_{j}+1)s_{j}\sigma(z_{j})\geq c_{0}+\|y_{k}-y_{0}\|+\sum_{j=0}^{k-1}\alpha_{j}s_{j}\sigma(z_{j})\] \[>c_{0}+\|y_{k}-y_{0}\|>\left(\hat{c}+\frac{s-q(\hat{y},\hat{c})}{K_{\sigma}}\right)+\|y_{k}-y_{0}\|,\] where the first strict inequality follows from the fact that \(z_{j}\neq 0\) (equivalently, \(\sigma(z_{j})\neq 0\)) for every \(0\leq j\leq k-1\) (otherwise the algorithm would have stopped at some \(j<k\)), and the second strict inequality uses the definition of \(c_{0}\). Therefore, \((y_{k},c_{k})\in\tilde{T}(s)\) for every \(k\geq 1\) and the result follows from Theorem 3.2(iiiB). The last assertion of the proposition follows directly from (i) and (ii). \(\Box\) Part (a) of the following result is proved for \(A=I\) and \(\sigma\) coercive in [9, Lemma 3.2], and establishes the boundedness of the sequences \(\{z_{k}\}\) and \(\{\sigma(z_{k})\}\) without any additional assumptions on the parameters of DSG. Since the proof for the case involving the map \(A\) and \(\sigma\) as in Definition 3.4(a) follows the same steps as the ones in [9, Lemma 3.2], we omit it, and analyze below the case of \(\sigma\) conditionally coercive. **Proposition 4.3**: _Assume that \((\mathbf{A_{0}})\) holds and that \(z_{0}\neq 0\). Fix an upper bound \(\tilde{r}\) of \(\{r_{k}\}\)._ (a) _If \(\sigma\) is as in Definition 3.4(a) and \((y_{0},c_{0})\) in the DSG Algorithm is taken as in Proposition 4.2(i), then the sequences \(\{z_{k}\}\) and \(\{\sigma(z_{k})\}\) are bounded._ (b) _If \(\sigma\) is conditionally coercive with constant \(K_{\sigma}\) and \((y_{0},c_{0})\) in the DSG Algorithm is taken as in Proposition 4.2(ii), then the sequence \(\{\sigma(z_{k})\}\) is bounded. Furthermore, if the parameters \(\alpha_{0},\,s_{0}\) in DSG are chosen such that_ \[\alpha_{0}s_{0}>\frac{M_{P}-q(y_{0},c_{0})+\tilde{r}}{K_{\sigma}\,\sigma(z_{0})}, \tag{30}\] _then \(\{z_{k}\}\) is bounded._ Proof. (a) Similar to [9, Lemma 3.2]. Let us prove the first statement in (b), namely the boundedness of \(\{\sigma(z_{k})\}\). If the algorithm stops at iteration \(k_{0}\), then the sequences \(\{\sigma(z_{k})\}\) and \(\{z_{k}\}\) are finite and therefore bounded. Indeed, in the latter case, the sequence either stops (if \(r_{k_{0}}\leq\epsilon\)), or it goes into a finite inner loop until \(r_{k_{0}}\leq\epsilon\). In either case, the sequences \(\{z_{k}\}\) and \(\{\sigma(z_{k})\}\) are finite and their boundedness trivially holds. Therefore, it is enough to assume that Step 2 is visited at every \(k\geq 0\), and hence \(z_{k}\neq 0\) for every \(k\geq 0\). Call \(a_{0}:=\alpha_{0}s_{0}\sigma(z_{0})>0\). From (26) we deduce, for all \(k\geq 1\), \[c_{k}-c_{0}\;=\;\sum_{l=0}^{k-1}(\alpha_{l}+1)s_{l}\sigma(z_{l})=\sum_{l=0}^{k-1}s_{l}\sigma(z_{l})+\sum_{l=0}^{k-1}\alpha_{l}s_{l}\sigma(z_{l})\] \[\geq\sum_{l=0}^{k-1}s_{l}\sigma(z_{l})+\alpha_{0}s_{0}\sigma(z_{0})\geq\sum_{l=0}^{k-1}s_{l}\|Az_{l}\|+\alpha_{0}s_{0}\sigma(z_{0})\] \[\geq\|y_{k}-y_{0}\|+\alpha_{0}s_{0}\sigma(z_{0})=\|y_{k}-y_{0}\|+a_{0},\] where we used \((\mathbf{A_{0}})\) in the second inequality, (25) in the last one, and the definition of \(a_{0}\) in the last equality. Rearranging this expression, we obtain \[c_{k}-c_{0}-\|y_{k}-y_{0}\|\geq a_{0}. \tag{31}\] By Proposition 3.3(i), we know that \((-A(z_{k}),\sigma(z_{k}))\in\partial_{r_{k}}q(y_{k},c_{k})\).
Use the subgradient inequality to write, for every \(k\), \[-\infty<q_{0}=q(y_{0},c_{0}) \leq q(y_{k},c_{k})+\langle-A(z_{k}),y_{0}-y_{k}\rangle+(c_{0}-c_{k}) \sigma(z_{k})+r_{k}\] \[\leq q_{k}+\|A(z_{k})\|\|y_{k}-y_{0}\|+(c_{0}-c_{k})\sigma(z_{k})+r_{k}\] \[\leq q_{k}+\sigma(z_{k})\left(\|y_{k}-y_{0}\|+c_{0}-c_{k}\right)+\tilde {r}\] \[\leq q_{k}-a_{0}\sigma(z_{k})+\tilde{r}\leq q_{k}+\tilde{r},\] where we used Cauchy-Schwarz inequality, \((\mathbf{A_{0}})\), and (31). The above expression yields the boundedness of \(\{\sigma(z_{k})\}\). Indeed, it re-arranges to \[\sigma(z_{k})\leq\frac{M_{D}-q_{0}+\tilde{r}}{a_{0}}:=b.\] Hence \(\sigma(z_{k})\leq b\) for all \(k\) and the proof of the first statement is complete. Let us prove now that, if (30) holds, then we also have that \(\{z_{k}\}\) is bounded. Indeed, (30) directly implies that \(a_{0}=\alpha_{0}s_{0}\sigma(z_{0})>\frac{M_{D}-q_{0}+\tilde{r}}{K_{\sigma}}\) and hence the above expression becomes \[\sigma(z_{k})\leq\frac{M_{D}-q_{0}+\tilde{r}}{a_{0}}<K_{\sigma},\] which implies that \(\{z_{k}\}\) is bounded by definition of \(K_{\sigma}\). The proof is complete. \(\Box\) We show next that, if an iterate generated by DSG is a dual solution, then the exact version of DSG must stop, either at the current iteration or at the next one. This result holds for either type of \(\sigma\). **Proposition 4.4**: _Assume that \((\mathbf{A_{0}})\) holds and assume DSG has \(r_{k}=0\) for all \(k\). If the \(k\)th DSG iterate is a dual solution, then either \(z_{k}=0\) or \(z_{k+1}=0\). Consequently, in this situation DSG will stop at iteration \(k\) or \(k+1\)._ Proof. Assume that, at iteration \(k\), we have that \((y_{k},c_{k})\in S(D)\). This means that \(q_{k}=q(y_{k},c_{k})=M_{D}\). It is enough to prove that, if \(z_{k}\neq 0\), then \(z_{k+1}=0\). Assume that \(z_{k}\neq 0\), by \((\mathbf{A_{0}})\), we clearly have that \[\sigma(z_{k})\sigma(z_{k+1})-\|Az_{k}\|\,\|Az_{k+1}\|\geq 0. \tag{32}\] Take \((x_{k+1},z_{k+1})\in X(y_{k+1},c_{k+1})\). With the notation of DSG, denote \(\varepsilon_{k}:=\alpha_{k}s_{k}\). Using the fact that \((y_{k},c_{k})\in S(D)\) and the definitions of \(q\) and \((x_{k+1},z_{k+1})\), we can write \[M_{D}\geq q_{k+1} = f(x_{k+1},z_{k+1})-\langle Az_{k+1},y_{k+1}\rangle+c_{k+1}\sigma (z_{k+1})\] \[= f(x_{k+1},z_{k+1})-\langle Az_{k+1},y_{k}-s_{k}Az_{k}\rangle+(c_ {k}+(s_{k}+\varepsilon_{k})\sigma(z_{k}))\sigma(z_{k+1})\] \[= f(x_{k+1},z_{k+1})-\langle Az_{k+1},y_{k}\rangle+s_{k}\langle Az _{k+1},Az_{k}\rangle+(c_{k}+(s_{k}+\varepsilon_{k})\sigma(z_{k}))\sigma(z_{k +1})\] \[\geq [f(x_{k+1},z_{k+1})-\langle Az_{k+1},y_{k}\rangle+c_{k}\sigma(z_{ k+1})]+s_{k}\left(\sigma(z_{k})\sigma(z_{k+1})-\|Az_{k}\|\,\|Az_{k+1}\|\right)\] \[+\varepsilon_{k}\sigma(z_{k})\sigma(z_{k+1})\geq q_{k}+ \varepsilon_{k}\sigma(z_{k})\sigma(z_{k+1})=M_{D}+\varepsilon_{k}\sigma(z_{k}) \sigma(z_{k+1}),\] where we used the definition of DSG in the third equality. We also used (32) and the definition of \(q_{k}\) in the second to last inequality. This shows that \(\varepsilon_{k}\sigma(z_{k})\sigma(z_{k+1})\leq 0\). Since both \(\varepsilon\) and \(\sigma(z_{k})\) are assumed to be positive, we must have \(\sigma(z_{k+1})=0\) and hence \(z_{k+1}=0\). \(\Box\) The following theorem states that DSG guarantees a monotonic increase of the dual function. If the initial iterate is taken as in Theorem 3.2, we know that the algorithm is well defined for either type of \(\sigma\) (coercive or conditionally coercive). 
Assuming this is the case, the proof of the result below follows similar steps to those in [9, Theorem 3.1] and hence are omitted. **Theorem 4.1**: _Assume that \(\mathrm{DSG}\) generates an infinite sequence \(\{(y_{k},c_{k})\}\) and that for every \(k\), \((y_{k},c_{k})\) is not a dual solution. Then \(q(y_{k+1},c_{k+1})>q(y_{k},c_{k})\)._ Proof. Similar to [9, Theorem 3.1]. \(\Box\) From now on, we assume that \(z_{k}\neq 0\) for all \(k\). In other words, we assume that the method generates an infinite sequence. We will also assume that the initial iterate and parameters are chosen so that, for either type of \(\sigma\), the previous results and properties hold. The technical result below has a proof similar to the one in [9, Lemma 3.3] and hence is omitted. **Lemma 4.1**: _Consider the sequences \(\{(x_{k},z_{k})\}\), \(\{(y_{k},c_{k})\}\) generated by \(\mathrm{DSG}\) algorithm._ * _The following estimates hold for all_ \(k\geq 1\)__ \[f(x_{k},z_{k})-\langle A(z_{k}),y_{0}\rangle \leq\,q_{k}+r_{k},\text{ and}\] (33) \[\sigma(z_{k})\sum_{j=0}^{k-1}\alpha_{j}s_{j}\sigma(z_{j}) \leq\,q_{k}-q_{0}+r_{k}.\] (34) * _Assume that the dual solution set_ \(S(D)\) _is nonempty. If_ \((\bar{y},\bar{c})\in S(D)\) _then for all_ \(k\)_,_ \[\|y_{k+1}-\bar{y}\|^{2}\leq\|y_{k}-\bar{y}\|^{2}+2s_{k}\sigma(z_{k})\left[ \frac{s_{k}\sigma(z_{k})}{2}+\frac{q_{k}-\bar{q}+r_{k}}{\sigma(z_{k})}+\bar{c} -c_{k}\right].\] (35) Proof. Similar to [9, Lemma 3.3]. \(\Box\) The following result holds for either type of \(\sigma\). The only new result involved in its proof is the fact that, for \(\sigma\) conditionally coercive, strong duality holds. Again, due to the similarity of the proof techniques with [9, Lemma 3.4], we omit its proof here. **Lemma 4.2**: _If the sequence \(\{z_{k}\}\) converges weakly to \(0\), then \(\{q_{k}\}\) converges to \(\bar{q}\), the primal sequence \(\{x_{k}\}\) is bounded, and all its weak accumulation points are primal solutions._ Proof. Similar to [9, Lemma 3.4]. \(\Box\) ### Algorithm DSG-1 We consider in this section the stepsize similar to the one given in [9, Algorithm 1], and use it for our particular scheme. The difference is the use of the function \(A\) in the choice of the stepsize (see the definition of \(\eta_{k}\) below). Take two parameters \(\beta>\eta>0\). We consider the step size \[s_{k}\in[\eta_{k},\beta_{k}], \tag{36}\] where \(\eta_{k}:=\min\{\eta,\|A(z_{k})\|+\|z_{k}\|\}\) and \(\beta_{k}:=\max\{\beta,\sigma(z_{k})+\|z_{k}\|\}\). With this choice of \(s_{k}\), we denote the DSG algorithm as DSG-1. **Remark 4.5**: If \((\mathbf{A_{0}})\) holds, then \[\eta_{k}\leq\|A(z_{k})\|+\|z_{k}\|\leq\sigma(z_{k})+\|z_{k}\|\leq\beta_{k},\] where first and last inequalities use the definition of \(\eta_{k},\beta_{k}\). The second inequality holds by \((\mathbf{A_{0}})\). Note that, a constant stepsize for all iterations is admissible. The next theorem only requires a \(\sigma\) which satisfies the following property: \[\text{if }\sigma(w_{k})\downarrow 0\text{ then }\{w_{k}\}\text{ bounded.} \tag{37}\] Its proof considers two possible cases, according to whether the dual sequence \((y_{k},c_{k})\) is bounded or not. The case of an unbounded sequence has a proof similar to the one [9, Theorem 3.2], and hence is omitted. The case of bounded dual sequence \((y_{k},c_{k})\) is slightly different because of our different type of stepsize, so we provide it here. 
**Theorem 4.2**: _Assume that \(\sigma\) is an augmenting function that verifies (37), i.e., if \(\sigma(w_{k})\downarrow 0\) then \(\{w_{k}\}\) is bounded, and assume that \(M_{P}=M_{D}\). Consider the primal sequence \(\{x_{k}\}\) generated by DSG-1. Take the parameter sequence \(\{\alpha_{k}\}\) satisfying \(\alpha_{k}\geq\bar{\alpha}\) for all \(k\) and some \(\bar{\alpha}>0\). Then \(\{x_{k}\}\) is bounded, all its weak accumulation points are primal solutions, and \(\{q_{k}\}\) converges to the optimal value \(M_{P}\)._ Proof. Take the dual sequence \(\{(y_{k},c_{k})\}\) generated by DSG-1. If \(\{(y_{k},c_{k})\}\) is unbounded, then the proof is similar to the corresponding part of [9, Theorem 3.2]. We proceed to consider the case in which \(\{(y_{k},c_{k})\}\) is bounded. By Proposition 4.1, \(\sum_{k}s_{k}\sigma(z_{k})<\infty\). In particular, \(\{s_{k}\sigma(z_{k})\}\) converges to \(0\). On the other hand, from \(s_{k}\geq\min\{\eta,\|A(z_{k})\|+\|z_{k}\|\}\), we obtain \[s_{k}\sigma(z_{k})\geq\min\{\eta\sigma(z_{k}),(\|A(z_{k})\|+\|z_{k}\|)\sigma(z_{k})\}\geq\min\{\eta\sigma(z_{k}),\|z_{k}\|\sigma(z_{k})\}>0,\] because \(z_{k}\neq 0\) for all \(k\). Since \(\eta>0\), we conclude that \(\{\|z_{k}\|\}\) converges to \(0\) or \(\{\sigma(z_{k})\}\) converges to \(0\). We will show that either case implies that \(\{z_{k}\}\) weakly converges to \(0\). If \(\{\|z_{k}\|\}\) converges to \(0\), then \(\{z_{k}\}\) converges strongly to \(0\), and hence weakly to \(0\). Alternatively, if \(\{\sigma(z_{k})\}\) converges to \(0\), then \(\{z_{k}\}\) is bounded by assumption. Then, there exists a subsequence \(\{z_{k_{j}}\}\) weakly converging to some \(\tilde{z}\). From the weak lower semicontinuity of \(\sigma(\cdot)\), we have \(0\leq\sigma(\tilde{z})\leq\liminf_{j\to\infty}\sigma(z_{k_{j}})=0\). Hence \(\sigma(\tilde{z})=0\). The properties of \(\sigma\) now imply that \(\tilde{z}=0\). Therefore, the whole sequence \(\{z_{k}\}\) weakly converges to \(0\). Thus, in the case that \(\{(y_{k},c_{k})\}\) is bounded, the result follows from Lemma 4.2 and the zero duality gap property \(\bar{q}=M_{P}\). \(\Box\) The following corollary holds because a conditionally coercive \(\sigma\) induces strong duality and also verifies (37). **Corollary 4.1**: _If \(\sigma\) verifies Definition 3.4\((b)\) with either (a) or (a'), then the conclusion of Theorem 4.2 holds._ Proof. A conditionally coercive \(\sigma\) verifies (37), and the same holds for a coercive \(\sigma\). By Theorem 3.1, strong duality holds, so we are in the conditions of Theorem 4.2. \(\Box\) Theorem 4.2 above establishes primal convergence results for DSG-1; the following theorem establishes a dual convergence result. Its proof is identical to [9, Theorem 3.3] and hence omitted. **Theorem 4.3**: _If DSG-1 generates an infinite sequence \(\{(y_{k},c_{k})\}\), then every weak accumulation point of \(\{(y_{k},c_{k})\}\), if any, is a dual solution._ Proof. See [9, Theorem 3.3]. \(\Box\) We know from Proposition 4.1 that, when \(\{c_{k}\}\) is bounded, \(\{y_{k}\}\) is also bounded. The converse is not necessarily true; it holds under an additional assumption, which requires the sequences \(\{r_{k}\}\) and \(\{\sigma(z_{k})\}\) to decrease at a similar rate. \((\mathbf{\Gamma_{0}})\): There exists \(R>0\) such that \(r_{k}\leq R\sigma(z_{k})\) for all \(k\), that is, \(r_{k}=O(\sigma(z_{k}))\). The proof of the next result is similar to [9, Lemma 3.5] and hence omitted.
**Lemma 4.3**: _Assume that \((\mathbf{A_{0}})\) and \((\mathbf{\Gamma_{0}})\) hold. If the dual solution set is nonempty and \(\{y_{k}\}\) is bounded, then \(\{c_{k}\}\) is bounded too._ Proof. Similar to [9, Lemma 3.5]. \(\Box\) **Remark 4.6**: Lemma 4.3 holds under assumptions \((\mathbf{A_{0}})\) and \((\mathbf{\Gamma_{0}})\) in the general framework of DSG, regardless of the choice of the stepsize \(s_{k}\). The following result extends [9, Proposition 3.3] to our general case; since our function \(\sigma\) and our stepsize are different, it requires a slightly different proof. **Proposition 4.5**: _Assume that \((\mathbf{\Gamma_{0}})\) holds, and that we are in the conditions of Proposition 4.3(a) or (b). Assume also that DSG-\(1\) generates an infinite dual sequence \(\{(y_{k},c_{k})\}\). If the dual optimal set is nonempty, then \(\{(y_{k},c_{k})\}\) is bounded._ Proof. Under the conditions of Proposition 4.3(a) or (b), we have that \(\{\sigma(z_{k})\}\) and \(\{z_{k}\}\) are bounded, so take \(b>0\) such that \(\sigma(z_{k})+\|z_{k}\|<b\) for all \(k\). By definition, \(s_{k}\leq\beta_{k}\leq\max\{\beta,b\}=:\hat{b}\). In particular, \(s_{k}\sigma(z_{k})\leq b\hat{b}=:\bar{b}\) for all \(k\). Let \(R\) be as in \((\mathbf{\Gamma_{0}})\) and take \((\bar{y},\bar{c})\in S(D)\). We claim that \(\{(y_{k},c_{k})\}\) is bounded. We will first show that \(\{c_{k}\}\) is bounded. Suppose by contradiction that \(\{c_{k}\}\) is unbounded. Then there exists \(k_{0}\) such that \(c_{k}\geq M:=\frac{\bar{b}}{2}+R+\bar{c}\) for all \(k\geq k_{0}\). Observing that \(q_{k}\leq\bar{q}\) and using the estimates in (35), we obtain \[\|y_{k+1}-\bar{y}\|^{2} \leq\ \|y_{k}-\bar{y}\|^{2}+2s_{k}\sigma(z_{k})\left[\frac{s_{k}\sigma(z_{k})}{2}+\frac{q_{k}-\bar{q}+r_{k}}{\sigma(z_{k})}+\bar{c}-c_{k}\right] \tag{38}\] \[\leq\ \|y_{k}-\bar{y}\|^{2}+2s_{k}\sigma(z_{k})\left[\frac{\bar{b}}{2}+\frac{r_{k}}{\sigma(z_{k})}+\bar{c}-c_{k}\right]\] \[\leq\ \|y_{k}-\bar{y}\|^{2}+2s_{k}\sigma(z_{k})\left[\frac{\bar{b}}{2}+R+\bar{c}-c_{k}\right]\] \[\leq\ \|y_{k}-\bar{y}\|^{2}\] for all \(k\geq k_{0}\). It follows that \(\{\|y_{k}-\bar{y}\|\}\) is nonincreasing for \(k\geq k_{0}\), and hence \(\{y_{k}\}\) is bounded. By Lemma 4.3, \(\{c_{k}\}\) is then bounded, which is a contradiction. Hence \(\{c_{k}\}\) is bounded and, by Proposition 4.1, the whole dual sequence is bounded. \(\Box\) Theorem 4.4, which we prove next, establishes strong convergence of the whole dual sequence generated by DSG-\(1\) to a dual solution. Theorem 3.4 in [9] relies on Fejér convergence properties and establishes only weak convergence of the dual sequence. Our proof is inspired by Theorem 5.1 in [10] and uses the properties of \(q\). **Theorem 4.4**: _If \((\mathbf{A_{0}})\) holds and the parameter sequence \(\{\alpha_{k}\}\) satisfies \(\alpha_{k}\geq\bar{\alpha}\) for all \(k\) and some \(\bar{\alpha}>0\), then the following hold._ (i) _Assume that \(\sigma\) is an augmenting function that verifies (37), and assume that \(M_{P}=M_{D}\). If the dual sequence generated by DSG-\(1\) is bounded, then \(S(D)\neq\varnothing\) and the dual sequence converges strongly to a dual solution._ (ii) _Assume that \((\mathbf{\Gamma_{0}})\) holds, and that we are in the conditions of Proposition 4.3(a) or (b). If \(S(D)\neq\varnothing\), then the dual sequence generated by DSG-\(1\) is strongly convergent to some dual solution._ Proof. (i) Since the dual sequence \(\{(y_{k},c_{k})\}\) is bounded, it converges strongly to some \((\tilde{y},\tilde{c})\) by the equivalence of (a) and (c) in Proposition 4.1.
We now only need to prove that the limit \((\tilde{y},\tilde{c})\in S(D)\). Indeed, we can write \[M_{P}=\lim_{k}q(y_{k},c_{k})\leq q(\tilde{y},\tilde{c})\leq M_{P}, \tag{39}\] where the equality follows from Theorem 4.2 and strong duality (see Theorem 3.1); note that the limit exists because the sequence \(\{q_{k}\}\) is increasing. The first inequality follows from the fact that \(q\) is weakly (and hence strongly) upper semicontinuous, and the last inequality is a consequence of the fact that \(q(\tilde{y},\tilde{c})\leq M_{D}\leq M_{P}\). Therefore, we have shown that \((\tilde{y},\tilde{c})\in S(D)\) and the proof of (i) is complete. We now show (ii). Since the dual solution set is nonempty, by Proposition 4.5 the dual sequence \(\{(y_{k},c_{k})\}\) is bounded. By (i), \(\{(y_{k},c_{k})\}\) converges strongly to some \((\tilde{y},\tilde{c})\in S(D)\). \(\Box\) The following straightforward corollary characterizes the existence of dual solutions. **Corollary 4.2**: _Assume that \((\mathbf{A_{0}})\) and \((\mathbf{\Gamma_{0}})\) hold, that we are in the conditions of Proposition 4.3(a) or (b), and that \(\alpha_{k}\geq\bar{\alpha}>0\) for all \(k\). The following statements are equivalent._ * _The dual sequence generated by_ DSG-\(1\) _is bounded._ * _The dual solution set is not empty._ Proof. The proof follows directly from the fact that the assumptions ensure that we are in the conditions of both parts (i) and (ii) of Theorem 4.4. \(\Box\) ### Algorithm DSG-2 In this section we adopt the stepsize proposed in [9, Algorithm 2], which ensures that DSG converges in a finite number of steps. We show in this section that these convergence results are preserved when using the map \(A\) in the Lagrangian. Take \(\beta>0\) and a sequence \(\{\theta_{k}\}\subset\mathbb{R}_{+}\) such that \(\sum_{j}\theta_{j}=\infty\), and \(\theta_{k}\leq\beta\) for all \(k\). Consider the stepsize \[s_{k}\in[\eta_{k},\beta_{k}], \tag{40}\] where \(\eta_{k}:=\theta_{k}/\sigma(z_{k})\) and \(\beta_{k}:=\beta/\sigma(z_{k})\). DSG with this stepsize selection is denoted by DSG-2. The following result extends [9, Theorem 3.5]. A careful inspection of the proof there shows that the assumptions can be weakened by only requiring that \(\sigma\) verifies (37). Since the proof is similar to that in [9, Theorem 3.5], we omit it. **Theorem 4.5**: _Let \(\{(x_{k},z_{k})\}\) and \(\{(y_{k},c_{k})\}\) be the sequences generated by_ DSG-\(2\)_. Suppose that the parameter sequence satisfies \(\alpha_{k}\geq\bar{\alpha}>0\) for all \(k\), and assume that \(\sigma\) verifies (37). Then only one of the following cases occurs: (a) There exists a \(\bar{k}\) such that_ DSG-\(2\) _stops at iteration \(\bar{k}\). As a consequence, \(x_{\bar{k}}\) and \((y_{\bar{k}},c_{\bar{k}})\) are \(\epsilon\)-optimal primal and \(\epsilon\)-optimal dual solutions, respectively. In this situation \(\{(y_{k},c_{k})\}\) must be bounded. (b) The dual sequence \(\{(y_{k},c_{k})\}\) is unbounded. In this case, \(\{z_{k}\}\) converges weakly to \(0\), \(\{q_{k}\}\) converges to \(M_{P}\), the primal sequence \(\{x_{k}\}\) is bounded, and all its weak accumulation points are primal solutions._ Proof. Similar to [9, Theorem 3.5]. \(\Box\) This directly gives the following result. **Corollary 4.3**: _Let \(\{(x_{k},z_{k})\}\) and \(\{(y_{k},c_{k})\}\) be the sequences generated by_ DSG-\(2\)_. Suppose that the parameter sequence satisfies \(\alpha_{k}\geq\bar{\alpha}>0\) for all \(k\), and assume that \(\sigma\) is either coercive or conditionally coercive.
Then the conclusion of Theorem 4.5 holds._ Proof. The claim follows from the fact that both coerciveness and conditional coerciveness imply condition (37). \(\Box\) ## 5 Acknowledgements We thank C. Yalcin Kaya for useful discussions on various aspects of the DSG algorithm. ## 6 Concluding Remarks We provide a detailed analysis of primal-dual problems in infinite dimensions, when the dual problem is obtained by using a Lagrangian that involves a map \(A\) in the linear term (see (5) in Definition 3.5), and an augmenting function which, in some cases, does not need to be coercive (see Definition 3.4(a')). For such primal-dual pairs, we establish strong duality. Moreover, we show that the inexact DSG method has the same primal and dual convergence properties as in the case when \(A\) is the identity map. Namely, we show that every weak accumulation point of the primal sequence is a primal solution and that, under certain technical assumptions, the dual sequence converges strongly to a dual solution (see Theorem 4.4). Our analysis opens the way for the application of DSG to challenging optimal control problems, which are infinite-dimensional optimization problems. It is particularly interesting to apply the results of the present paper to the pure penalty method, i.e., when \(A=0\). The latter case is not covered by the results in [15].
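To make the mechanics of Steps 1 and 2 concrete, the following is a minimal, finite-dimensional numerical sketch of one admissible instance of the DSG iteration. Everything here is a toy stand-in rather than the paper's infinite-dimensional setting: we take \(X=\mathbb{R}^2\), \(H=\mathbb{R}\), \(A:=I\), the coercive augmenting function \(\sigma(z)=|z|\), and the perturbation \(z=h(x)\) of a small equality-constrained problem; the inner minimization of Step 1 is done by a crude grid search, which simply plays the role of the inexact oracle with a small tolerance \(r_k\), and the constant stepsize is admissible as noted in Remark 4.5.

```python
# Minimal finite-dimensional sketch of the DSG iteration (Steps 1-2). Toy
# stand-ins (not the paper's setting): X = R^2, H = R, A := I, sigma(z) = |z|,
# and the perturbation z = h(x) of the toy problem
#   minimize f0(x) = ||x||^2  subject to  h(x) := x1 + x2 - 1 = 0,
# so the dual function is q(y, c) = inf_x [ f0(x) - y*h(x) + c*|h(x)| ].
import numpy as np

grid = np.linspace(-2.0, 2.0, 201)                 # crude discretization of X
Xg = np.array([(a, b) for a in grid for b in grid])
hX = Xg.sum(axis=1) - 1.0                          # h(x) at every grid point
f0X = (Xg ** 2).sum(axis=1)                        # f0(x) at every grid point

def step1(y, c):
    """Inexact Step 1: approximate minimizer of f0(x) - y*h(x) + c*|h(x)|.
    The grid error plays the role of the tolerance r_k."""
    vals = f0X - y * hX + c * np.abs(hX)
    i = int(np.argmin(vals))
    return Xg[i], float(hX[i]), float(vals[i])     # (x_k, z_k, approx. q(y_k, c_k))

y, c = 0.0, 0.3           # (y_0, c_0)
alpha, s = 0.5, 0.2       # alpha_k and s_k held constant (cf. Remark 4.5)
for k in range(50):
    x, z, q = step1(y, c)
    if abs(z) <= 1e-12:                            # exact stopping rule z_k = 0
        break
    y = y - s * z                                  # Step 2: y_{k+1} = y_k - s_k A(z_k)
    c = c + (alpha + 1.0) * s * abs(z)             # c_{k+1} = c_k + (alpha_k+1) s_k sigma(z_k)

print(f"k = {k}, x ~ {x}, y ~ {y:.3f}, c ~ {c:.3f}, q ~ {q:.3f}")
# x_k approaches (0.5, 0.5), and q_k increases (cf. Theorem 4.1) toward the
# optimal value 0.5, with (y_k, c_k) approaching the (non-unique) dual solution set.
```

For this toy instance the printed dual values increase monotonically, in line with Theorem 4.1; the choice of problem, grid, and parameter values is purely illustrative.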
2310.12070
The era of the ARG: an empiricist's guide to ancestral recombination graphs
In the presence of recombination, the evolutionary relationships between a set of sampled genomes cannot be described by a single genealogical tree. Instead, the genomes are related by a complex, interwoven collection of genealogies formalized in a structure called an ancestral recombination graph (ARG). An ARG extensively encodes the ancestry of the genome(s) and thus is replete with valuable information for addressing diverse questions in evolutionary biology. Despite its potential utility, technological and methodological limitations, along with a lack of approachable literature, have severely restricted awareness and application of ARGs in empirical evolution research. Excitingly, recent progress in ARG reconstruction and simulation have made ARG-based approaches feasible for many questions and systems. In this review, we provide an accessible introduction and exploration of ARGs, survey recent methodological breakthroughs, and describe the potential for ARGs to further existing goals and open avenues of inquiry that were previously inaccessible in evolutionary genomics. Through this discussion, we aim to more widely disseminate the promise of ARGs in evolutionary genomics and encourage the broader development and adoption of ARG-based inference.
Alexander L. Lewanski, Michael C. Grundler, Gideon S. Bradburd
2023-10-18T16:04:51Z
http://arxiv.org/abs/2310.12070v1
# The era of the ARG: an empiricist's guide to ancestral recombination graphs ###### Abstract In the presence of recombination, the evolutionary relationships between a set of sampled genomes cannot be described by a single genealogical tree. Instead, the genomes are related by a complex, interwoven collection of genealogies formalized in a structure called an _ancestral recombination graph_ (ARG). An ARG extensively encodes the ancestry of the genome(s) and thus is replete with valuable information for addressing diverse questions in evolutionary biology. Despite its potential utility, technological and methodological limitations, along with a lack of approachable literature, have severely restricted awareness and application of ARGs in empirical evolution research. Excitingly, recent progress in ARG reconstruction and simulation have made ARG-based approaches feasible for many questions and systems. In this review, we provide an accessible introduction and exploration of ARGs, survey recent methodological breakthroughs, and describe the potential for ARGs to further existing goals and open avenues of inquiry that were previously inaccessible in evolutionary genomics. Through this discussion, we aim to more widely disseminate the promise of ARGs in evolutionary genomics and encourage the broader development and adoption of ARG-based inference. keywords: ancestral recombination graph, ARG, succinct tree sequence, genealogy, pedigree, genomics, ancestry ## 1 Introduction Many of the principal pursuits in evolutionary genomics can be recast as questions about the transmission of genetic material from ancestors to descendants. For example, in the study of speciation and hybridization, we may be interested in identifying which sections of a hybrid genome were derived from which parental species (Marques et al., 2019; Moran et al., 2021). As another example, we often want to know about the nature of selection on a genetic variant (e.g., Martinez-Jimenez et al., 2020; Schluter and Rieseberg, 2022; Henn et al., 2015; Barrett and Schluter, 2008), which is, in essence, asking whether the variant has displayed a particular pattern of transmission. For instance, a positively selected variant confers a fitness advantage and thus would be preferentially transmitted between generations. In applied settings, we may want to understand whether a human-made structure such as a road or dam (e.g., Epps et al., 2005; Machado et al., 2022) reduces connectivity between populations, which is implicitly asking how often ancestor-descendant relationships span the potential barrier (e.g., Jasper et al., 2019). So far, direct knowledge of how genetic material is transmitted from ancestors to descendants is extremely limited in nearly all systems, save those with extensive pedigree and genomic information [e.g., Florida Scrub-jays (Chen et al., 2016; Aguillon et al., 2017; Chen et al., 2019), economically important livestock like dairy cattle (Larkin et al., 2012; Ma et al., 2015)]. However, access to this information could revolutionize the study of numerous topics across evolutionary genomics. In population genetics, the central structure that describes how genetic material is passed from ancestors to descendants is called an _ancestral recombination graph_ (ARG). Building on earlier developments in coalescent theory (Kingman, 1982a,b; Tajima, 1983; Hudson, 1983), ARGs were conceptualized in the 1990s by R.C. Griffiths and P.
Marjoram (Griffiths, 1991; Griffiths and Marjoram, 1996, 1997) to describe ancestry in the presence of coalescence and recombination. ARGs have subsequently featured prominently in the theoretical and statistical realms of population genetics where they have been extensively studied for their biological, mathematical, and computational properties and utility. In contrast, ARGs remain much less known and appreciated in empirical evolutionary genomics. This inattention can at least partially be ascribed to pragmatism--until recently, ARGs have been purely theoretical constructs, impractical to reconstruct in empirical systems or even simulate at biologically realistic scales. Additionally, although an expansive literature already exists on ARGs, much of this content is targeted at an audience with an extensive theoretical or statistical background in population genetics and thus may be unapproachable for some empirical biologists. Excitingly, recent methodological advances in reconstructing (Box 1) and simulating (Box 2) ARGs together with concurrent progress in genome sequencing and increasingly available high-performance computation means that ARG-based inference is rapidly becoming attainable in empirical- and simulation-based evolutionary genomics research. To help usher in this imminent "era of the ARG," we view now as an opportune moment to provide a widely accessible resource for comprehending ARGs and their potential in evolutionary genomics. We have two primary objectives for this paper. First, we provide a concise and gentle primer on ARGs, including an introduction to what an ARG is, what information can be encoded within it, and an exploration of some of its basic properties. Second, we discuss the current and future potential for ARGs to benefit evolutionary genomics research. Our aim for the second objective is not to exhaustively review existing ARG-based research, but rather to articulate the promise of ARGs to advance diverse topics across evolutionary genomics. We supplement these two main objectives with an overview of recent methodological developments in inferring, simulating, and analyzing ARGs. This discussion will demonstrate the current or impending feasibility of ARG-based inference for many evolutionary genomics questions and systems. ## 2 An ARG primer In the following section, we will incrementally develop an intuition for what ARGs are by starting with the fundamentals of sexual reproduction and genealogical relatedness, which will help clarify how ARGs emerge from these first principles of biology. To simplify our discussion, we will focus on the nuclear genome of sexual, diploid organisms and meiotic recombination throughout the paper. However, the ideas covered here are relevant to any organism across the tree of life as well as viruses whose genomes undergo any type of recombination (e.g., gene conversion, bacterial conjugation). For more technical treatments of ARGs, we direct interested readers to Griffiths and Marjoram (1997), Wiuf and Hein (1999), Hein et al. (2005), and Wong et al. (unpublished). ### Background In sexual, diploid organisms, haploid gametes are generated by the sampling of a single DNA copy of every position in the genome during meiosis. During reproduction, the parents' gametes fuse, which leads to a diploid offspring. The relationships between a set of individuals can be represented by a genealogical pedigree (Figure 1A), in which each individual has two parents, from each of whom it has inherited exactly half of its genome. 
The pedigree consists of nodes, which represent individual organisms, and edges, which connect a subset of the nodes and signify parent-offspring relationships. By itself, the pedigree can provide coarse estimates of genetic ancestry, such as the expected genetic relatedness between individuals (e.g., 0.50 between full siblings; 0.125 between first cousins), or the expected proportion of the genome inherited from a particular genealogical ancestor. However, for any region of the genome, we are unable to ascertain from the pedigree alone whether it is the parent's maternal or paternal copy that has been transmitted. Thus, we are restricted to calculating expected quantities. We could therefore gain more in-depth knowledge of ancestry in the genome by explicitly tracking the transmission of DNA sequences down the pedigree from specific parental to offspring chromosomes. This discussion of the pedigree highlights multiple key ideas in our build-up to ARGs. First, because each parent contributes only one DNA copy at a particular genomic position to its offspring, each copy (including copies contained within an individual) experiences its own unique history of inheritance through the pedigree. Second, because a parent only contributes half of its genome to each offspring and not all individuals reproduce, only a subset of the genetic material possessed by historical individuals in the pedigree ends up in contemporary individuals. As you travel further back in the pedigree, despite the geometric increase in the number of expected genealogical ancestors [\(2^{n}\) ancestors (assuming no inbreeding) where \(n\) equals the number of generations back in time], an increasing proportion of these ancestors contributes no genetic material to their contemporary descendants (Donnelly, 1983; Chang, 1999). Figure 1 _(previous page)_: Overview of _ancestral recombination graphs_ (ARG). In all ARG depictions (A, B, D), nodes are indicated by small circles and each represents a single set of one or more chromosomes (a haploid genome) of an individual. The node coloration indicates whether or not it is involved in recombination, and the specific pattern (shading and outline) of the node indicates its type: nonsample, unary (nonsample), sample. The genome is divided into three non-recombining regions (blue, orange, and green). (A) The relationships of multiple individuals can be organized into a pedigree. An ARG is embedded in a pedigree and represents the set of pedigree paths through which genetic material is transmitted. (B) The graphical representation of an ARG. Edges (the connections between nodes) are colored and annotated with the non-recombining region(s) that they transmit. (C) A plot recording the lineage count through time in the ARG. Backward in time, coalescent events, which occur at the dark gray points, merge lineages and thus reduce the lineage count. The red points highlight the times at which recombination occurs, which splits lineages backward in time and therefore increases the lineage count. (D) An ARG can be formulated as a series of local trees that share nodes and edges. Each non-recombining region possesses its own local tree. The regions are separated by a recombination event, which, when moving between regions, prunes a portion of the tree and regrafts it to another node. This action means that nearby trees are generally quite similar in structure.
The arrows in the left two trees show how recombination relocates a branch in the tree (reconnecting to the small, light gray node) to form the tree of the region immediately to the right. The dashed lines on the second and third trees highlight each tree's shared structure with its leftward neighbor. If we concentrate on a particular position in an individual's genome, we see that each DNA copy traverses just one of the manifold possible paths (i.e., series of connected nodes and edges) in the pedigree. The specific pedigree paths through which copies at a particular position in contemporary individuals were transmitted from their ancestors represent the genetic genealogy at that position (Hudson, 1991; Mathieson and Scally, 2020). Similar to a pedigree, each edge in the genealogy represents a transmission event of genetic material from parent to offspring. However, in a pedigree, each node is a diploid individual, while in a genetic genealogy, each node represents one of two haploid sequences _within_ a diploid individual--the specific genomic copy sampled to create a gamete that passes genetic material from a parent to the current individual. This genetic genealogy is embedded in the pedigree (Figure 1A). The sequence of relationships defined by the pedigree constrains the possible nodes and edges that can exist in the genealogy, but does not fully dictate the identity of these nodes and edges. The structure of a genetic genealogy is determined by both the pedigree structure and the outcome of the gametogenic genome sampling at each reproduction event in the pedigree. The genetic perspective of relatedness is further complicated by another feature of meiosis: recombination. Meiotic recombination, the shuffling of genetic material in the genome during meiosis, occurs via two processes: (1) exchange of genetic material between homologous chromosomes via crossing over during prophase I; (2) random assortment of homologous chromosomes during anaphase I. These recombinational processes can produce a mosaic of genetic ancestry across the haploid genome of the gamete so that a particular gamete genome potentially contains genetic material inherited from different parents both between non-homologous chromosomes and within chromosomes. Recombination therefore results in different histories of inheritance (and thus different genealogies) across the genome, with topological changes to the genealogy associated with recombination breakpoints and different chromosomes (Rosenberg and Nordborg, 2002). ### Ancestral recombination graphs The complex web of genetic genealogies across the genome is recorded in a graphical structure known as an _ancestral recombination graph_ (ARG), which provides extensive information regarding the history of inheritance for a set of sampled genomes. Each node in an ARG represents a haploid genome (a _haplotype_) in a real individual that exists now or in the past (Wong et al., unpublished). Each diploid individual therefore contains two haploid genomes and is represented by two nodes. We refer to nodes corresponding to sampled genomes [often, though not necessarily (e.g., Schaefer et al., 2021; Speidel et al., 2021; Wohns et al., 2022), sampled in the present] as _sample nodes_ and all other nodes as _nonsample nodes_. If sample nodes have no sampled descendants, they constitute the tips of an ARG. Sample nodes are particularly salient because ARGs are generally specified in terms of the genetic ancestry of these genomes. 
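As a concrete illustration of these node types, the short sketch below uses the msprime and tskit Python libraries (the same tools used for the simulations later in this review) to simulate a small ARG and report which nodes are sample nodes and which are ancestral, nonsample nodes; all parameter values here are arbitrary choices for the example, not values used in the review.

```python
# A small sketch, assuming the msprime and tskit Python libraries; the parameter
# values are arbitrary and chosen only to keep the example tiny.
import msprime

# Simulate the ancestry of 3 diploid individuals: each contributes 2 sample nodes.
ts = msprime.sim_ancestry(samples=3, population_size=100,
                          sequence_length=1_000, recombination_rate=1e-5,
                          random_seed=7)

sample_ids = set(ts.samples())
print(f"{ts.num_samples} sample nodes:", sorted(sample_ids))
print(f"{ts.num_nodes - ts.num_samples} nonsample (ancestral haplotype) nodes:")
for node in ts.nodes():
    if node.id not in sample_ids:
        print(f"  node {node.id}: ancestor living {node.time:.1f} generations ago")
```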
Edges in an ARG indicate paths of inheritance between nodes. ARGs are technically described as "directed graphs" because genetic material flows unidirectionally from ancestors to descendants. Assuming that sample nodes are sourced from contemporary individuals, the present time in an ARG (the bottom of the vertical axes in Figures 1B and D) contains a lineage (i.e., sets of one or more edges connected by nodes forming continuous paths of inheritance) for each sample. Tracing the lineages back in time, some nodes have two edges enter on the future-facing side but only a single outbound edge on the past-facing side (e.g., node R in Figure 1B). These nodes represent haplotypes in which two lineages find common ancestry and thus merge into a single lineage, which reduces the lineage count by one (the dark gray points in Figure 1C). Common ancestry events additionally represent _coalescence_ when (backward in time) the two merging edges contain the same portion of the genome [note that all nodes corresponding to common ancestry events in Figure 1 (P, Q, R, W, and X) also correspond to coalescence]. From an organismal perspective, nodes corresponding to coalescence represent an instance in which a parent provides the same (portion of a) haploid genome to multiple offspring and thus splits a lineage into multiple lineages forward in time. Conversely, other nodes have a single edge enter on the future-facing side but two edges exit the past-facing side (e.g., node Q in Figure 1B), which represents the outcome of recombination (Griffiths and Marjoram, 1997). Backward in time, the node with two outbound edges on the past-facing side is the recombinant offspring node whose genome is inherited from two parental nodes (e.g., node C in Figure 1). The two nodes that each receive one of the outbound edges are the parental nodes whose genomes are recombined in the offspring node. For example, in Figure 1, G and H are the parental nodes of C. From an organismal perspective, these nodes occur when an offspring receives one of its haploid genomes from a parent, and that haploid genome represents the outcome of recombination between the parent's two haploid genomes. Recombination splits the genome into separate lineages, and thus each portion of the genome experiences a distinct history of inheritance between (traversing an ARG from present to past) the recombination event at which it splits and the coalescence event at which it joins back up. Consequently, each recombination event increases the number of lineages in an ARG by one (Nordborg 2001; the red points in Figure 1C). From a forward-in-time perspective, recombination fuses portions of two parental genomes into a single haplotype (in the recombinant offspring), and thus unites separate lineages into a single lineage. Nodes through which ancestral material is transmitted but that are involved in neither coalescence nor recombination for genomic material that is ancestral to the samples do not determine the topology of an ARG and thus are frequently omitted (we retain several of these nodes in Figure 1 to highlight the effects of recombination). More generally, nodes with only one descendant (_unary_ nodes; e.g., node $ in Figure 1) do not directly influence genealogical relationships between the sample nodes. In simulations, unary nodes are often removed via a process called _simplification_ (Kelleher et al., 2018), and in empirical ARGs, these are not even inferred.
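The effect of simplification can be seen directly in simulated output. The sketch below (again assuming msprime v1.x and tskit, with arbitrary example parameters) records an ARG with the additional recombination and common-ancestor nodes and then simplifies it, discarding nodes that are not needed to describe the samples' genealogies.

```python
# A sketch assuming msprime v1.x, whose record_full_arg option retains the extra
# recombination/common-ancestor nodes; parameters are arbitrary example values.
import msprime

full = msprime.sim_ancestry(samples=5, population_size=100,
                            sequence_length=5_000, recombination_rate=1e-5,
                            record_full_arg=True, random_seed=11)
simplified = full.simplify()   # keep only what is needed to describe the samples

for label, ts in [("full ARG", full), ("simplified", simplified)]:
    print(f"{label}: {ts.num_nodes} nodes, {ts.num_edges} edges, "
          f"{ts.num_trees} local trees")
```

Simplification typically removes a large fraction of nodes and edges, and it can also merge adjacent genomic intervals whose sample genealogies are identical, so the simplified output may contain fewer local trees as well.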
ARGs record the timing of each node and the portion of the genome that each edge transmits between ancestors and descendants. To trace the genealogy for a particular position in the genome, you follow the edges through the ARG that contain the focal position (Griffiths and Marjoram, 1997). For example, in Figure 1B, if you want to extract the genealogy for a position in the orange region (between positions 0 and 1) of sample node $, you would follow the edges that transmit the orange region between nodes (i.e., $ in $ and $ in $). If the entire genome finds common ancestry, the first common ancestor is called the _most recent common ancestor_ (MRCA) of the genome or the _Grand MRCA_ (GMRCA; Griffiths and Marjoram 1997). The fact that each genomic region bracketed by recombination breakpoints (hereafter _non-recombining region_) possesses its own genealogy and that a non-recombining region in a single sample node traces only one path back to the MRCA of the entire sample suggests an alternative representation of an ARG: an ordered set of genealogical trees along the genome with labeled sample and nonsample nodes to specify how nodes are shared between trees (Griffiths and Marjoram 1997; Figure 1D). Considering this representation of an ARG as a set of trees (which we refer to as the _tree representation_) is worthwhile because ARGs are often formulated (see Box 3) and operationalized in inference (e.g., Stern et al., 2019; Hejase et al., 2022) based on this representation. In this tree representation, each non-recombining region has its own local tree that represents the region's evolutionary history. If each recombination breakpoint occurs at a unique position in the genome, as you shift from one local tree to the next (amounting to traversing one recombination breakpoint), the structure of the new tree is identical to its neighbor except for a single edge that is removed and then affixed to a (potentially new) node (Figure 1D). In computational parlance, this action is called a _subtree-prune-and-regraft_ operation (Song, 2003). When all recombination events occur at unique locations and each event involves only one breakpoint, the total number of local trees will equal one more than the number of recombination events defining the evolutionary relationships in the genome. For example, in Figure 1, two recombination events generate three trees. If recombination events occur at the same location (a breakpoint represents \(>\)1 recombination event), then moving between adjacent trees will involve a corresponding number of subtree-prune-and-regraft operations (one representing each recombination event), and the tree count will be less than one plus the number of recombination events. With inclusion of all nodes involved in recombination and coalescence relevant to the sample nodes, it is straightforward to switch between the two representations. As previously discussed, the local tree for a particular non-recombining region can be extracted from the graphical representation of an ARG by starting at each sample node and tracing the lineages that transmit the region through the ARG until all lineages meet in the MRCA.
Conversely, you can recover the graphical representation of an ARG from the local trees by starting with the tree at one end of the set and then sequentially working across the trees, combining the shared nodes and edges, adding the nodes and edges that are not yet included in the graphical structure, and annotating each edge with the non-recombining region(s) that it transmits. As a brief illustration, in Figure 1D, the first two trees both contain nodes \(\$\) and \(\$\) with a connecting edge. In the graphical representation, these shared components would be merged and the edge would be annotated with the transmission of the regions between positions 0 and 2 (as shown in Figure 1B). A recombination event can have several consequences for the structure of adjacent trees. First, it could alter the topology (i.e., the specific branching structure) if the new edge joins to a node on a different edge (e.g., the first and second trees in Figure 1D). However, if the new edge joins to a different node on the same edge, the topology will remain unchanged, and only the edge lengths (i.e., coalescent times) will be modified (e.g., the second and third trees in Figure 1D). It is also possible for the lineage to coalesce back into the same node, which would result in no change to the tree structure. Each local tree contains every sample node because all samples possess the entire genome (and thus every non-recombining region represented by each tree). However, the collection of nonsample nodes can differ across trees. If an ARG includes all nodes (i.e., every nonsample node is retained), the absence of a node in a local tree signals that it does not represent a genetic ancestor for that region. If an ARG has been simplified (unary nodes removed), the absence of a node either means that it is not a genetic ancestor or that the node does not represent a genome in which coalescence occurred that involved the sample nodes. There are several key characteristics of an ARG's tree representation. First, the subtree-prune-and-regraft operations that differentiate adjacent trees highlights that nearby trees are generally quite similar and frequently share many nodes and edges (Hudson, 1991; Rosenberg and Nordborg, 2002). A series of shared nodes and edges between trees indicates that the corresponding non-recombining regions were found in the same lineage in that portion of the ARG. The correlated nature of the trees can be exploited for highly efficient tree storage and computation (Kelleher et al. 2016, 2018; Figure 3A,B; see Box 3 for further details). Second, although local trees can overlap in structure, a tree can contain components that are not universally found across the entire set of trees (e.g., in Figure 1D, node S in the first tree is not found in the third tree). The different histories of inheritance mean that each non-recombining region may coalesce in different ancestors that potentially existed at different times in the past and that differ from the GMRCA. For example, in Figure 1D, node S is the MRCA of the first two trees (the same node as the GMRCA) while the third tree's MRCA is node W. If the local trees' MRCAs existed at different times in the past, this will manifest as variation in tree height (Hudson, 1991). Although the information contained in the graphical and tree representations of an ARG is the same, many readers, especially those with a background in phylogenetics, may prefer to think about ARGs via their tree representations. 
Unlike the graphical representation, each local tree is a familiar object: it is strictly bi- or multi-furcating, meaning that each node has exactly one ancestor and two or more descendants, and that therefore the tree contains no loops (i.e., it is non-reticulate), and is the desired result of a phylogenetic analysis run on a multiple sequence alignment of the DNA in the tree's non-recombining region. Building off this intuition, a phylogeneticist may draw on experience and imagine the set of local trees as analogous to a Bayesian posterior distribution of phylogenies. However, although this intuition may be initially useful, it is important to remember that each local tree is not independent of the others, both because each is generally separated from its neighbors by a small number of recombination events (so is therefore highly correlated), and because the same nodes and edges may appear across multiple local trees. The shared structure of trees imbues the nodes and edges with different properties relative to the analogous components in a standard phylogeny. For example, in a standard phylogeny, branches depict ancestor-descendant relationships through time and thus are one-dimensional. In contrast, edges in an ARG exist both through time and across the genome, and thus can be conceptualized as two-dimensional (Shipilina et al., 2023). This two-dimensionality can be seen in Figure 1B where edges extend along the vertical, time dimension and also along different extents of the genome (edges contain different sets of genomic regions). Equivalently, the genome dimension of edges manifests in an ARG's tree representation (Figure 1D) through edges persisting across different sets of local trees. The overlapping nature of local trees (i.e., shared nodes and edges) underlies much of an ARG's utility and facilitates the power of ARG-based inference, which we discuss later in the review. ### Modeling coalescence with recombination In population genetics, ARGs are commonly generated by simulating under Hudson's (1983) model of coalescent with recombination, which is closely connected to the original conception of ARGs (Griffiths and Marjoram, 1997). Under this model, a set of genomes exists in the present and the lineages describing each genome's ancestry are traced backward in time. Either coalescence or recombination can occur, which represent competing events with exponentially distributed waiting times. With coalescence, two lineages find common ancestry and merge into one. With recombination, a genomic position is selected uniformly as the breakpoint location. The offspring chromosome is inherited from one parental chromosome on one side of the breakpoint and the other parental chromosome on the other side. Recombination splits a lineage into two backwards in time. This process produces a series of genealogies across the genome that describes the ancestry of each genomic position. One question that may arise here is whether recombination could preclude the lineages from finding common ancestry because it increases the lineage count. However, backwards in time, the lineage count grows via recombination at a linear rate (\(kR/2\) where \(k=\) lineage count and \(R=\) recombination rate) whereas lineages coalesce at a quadratic rate \([k(k-1)/2]\), and thus finding common ancestry is guaranteed (Griffiths and Marjoram, 1997). Later in the review, we will be simulating under this model to explore various features of ARGs. 
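For readers who want to see this model in action before the simulations presented below, here is a minimal sketch assuming msprime, whose default ancestry model implements the standard (Hudson) coalescent with recombination; it simulates a short chromosome and reports the genealogy associated with each non-recombining region. All parameter values are illustrative.

```python
# A minimal sketch assuming msprime, whose default ancestry model is the standard
# (Hudson) coalescent with recombination; all parameter values are illustrative.
import msprime

ts = msprime.sim_ancestry(samples=4, population_size=1_000,
                          sequence_length=10_000, recombination_rate=1e-7,
                          model="hudson", random_seed=3)

print(f"{ts.num_trees} local trees separated by recombination breakpoints")
for tree in ts.trees():
    print(f"tree {tree.index}: interval [{tree.interval.left:.0f}, "
          f"{tree.interval.right:.0f}), "
          f"TMRCA = {tree.time(tree.root):.1f} generations ago")
```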
### ARGs in practice In our introduction of ARGs, we mainly focus on the ancestors that are involved in coalescence and recombination. However, when navigating the literature, it is important to recognize that the term _ancestral recombination graph_ is frequently applied to structures that differ in various ways from each other and potentially from how we describe ARGs here. This variation stems from both terminological imprecision and inferential limitations. The degree of completeness in which genetic inheritance from ancestors to descendants is documented can vary extensively. At the most comprehensive extreme, one could record all the genomic material that is passed between ancestors and descendants regardless of whether the material is ancestral or non-ancestral to the samples. Alternatively, one could render an ARG comprehensive to only the focal samples by only keeping track of the material that is ancestral to them (sometimes referred to as a _full ARG_). This structure could be further simplified in various ways such as removing nodes that are unary in one or more local trees. Although these descriptions of ancestry vary in the information that they include, they have all been referred to as ARGs in the literature (Wong et al., unpublished). Although ARGs may fully document genetic ancestry in theory, we rarely work with such a comprehensive structure in practice. First, in empirical settings, it is not possible to infer all of this information. The sample space of possible structures for a comprehensive ARG quickly becomes impractically vast with increasing genome and sample sizes. Hence, assumptions and shortcuts (e.g., the sequentially Markovian coalescent (SMC; McVean and Cardin, 2005)) are often employed (Rasmussen et al., 2014), which sacrifices a capacity to infer a comprehensive and fully accurate ARG for the sake of computational tractability. There are also many components of ARGs that are largely unidentifiable and thus are necessarily omitted. For instance, contemporary samples can provide only limited information on unary nodes, and certain features may be imperceptible in contemporary samples. An example of this is a "diamond" structure (Rasmussen et al., 2014), where (going backward in time) recombination splits a lineage but then the lineages immediately coalesce again. Additionally, many sites in the genome are uninformative regarding the local tree topologies (e.g., invariant and singleton sites), which frequently precludes the identification of precise recombination breakpoint locations and other ARG features. More generally, patterns of shared variants represent the information from which ARGs are inferred, while recombination reduces the informative sites per genealogy by dividing the genome into smaller regions. ARG inference will therefore tend to decline in accuracy when the ratio of mutations to recombination is low (Hubisz and Siepel, 2020). This tension between mutation and recombination imposes a theoretical limit on ARG recoverability from sequencing data (Hayman et al., 2023). As a consequence of these obstacles, in practice, we are restricted in what we can infer about genetic ancestry from genomic data. For example, tsinfer(Kelleher et al., 2019) infers the collection of local trees and their shared structure (i.e., how nodes and edges overlap across trees) by first estimating ancestral haplotypes and then deducing the tree topologies by inferring how haplotypes relate to each other. 
This output can be thought of as representing the outcome of coalescence and recombination rather than completely encoding the events that generated the relationships (Kelleher et al., 2019). That is, we are inferring the relationships across the genome produced by recombination and coalescence, but we lack detail on the recombination events that determine how these genealogies exactly knit together in an ARG. Importantly, even if we can acquire comprehensive information on genetic ancestry (e.g., in a simulation), many questions may only require certain subsets of this information, such as the structure of local trees. To accommodate both the existing terminological ambiguity and the realities of how well we can infer genetic ancestry, we permissively apply the term _ancestral recombination graph_ to encompass structures that document genetic ancestry in the presence of recombination at varying levels of completeness. ## 3 Deepening ARG intuition with simulations To further develop a foundational intuition for ARGs and reinforce content covered in the primer section, we implemented a series of simulations in msprime v1.2.0 (Baumdicker et al., 2022) using the classical coalescent with recombination model. We completed post-simulation processing, analysis, and visualization using tskit(Kelleher et al., 2018), numpy(Harris et al., 2020), and pandas(McKinney, 2010) in Python 3.11.2 (Python Software Foundation, 2023) and the following packages in R 4.2.3 (R Core Team, 2023): TreeDist(Smith, 2023), ape(Paradis and Schliep, 2018), ggtree(Yu et al., 2017), dplyr(Wickham et al., 2019), ggplot2(Wickham et al., 2023), ggforce(Pedersen, 2022), and ggridges(Wilke, 2022). We include all code in the paper's associated repository and on github ([https://github.com/AlexLewanski/arg_review](https://github.com/AlexLewanski/arg_review)). First, to illustrate several general features of ARGs, we focus on a single simulation involving one population with an effective population size of 100 diploid individuals, a genome size of 10 kilobases (kb), a sample size of 10 diploid individuals, and a uniform recombination rate of \(5\times 10^{-5}\) per base per generation. In the simulation, we recorded the full ARG, in which all nodes involved in common ancestry and recombination are retained. We then simplified the ARG structure, which involves removing unary nodes so that remaining nodes represent those that correspond to at least one coalescence event in the genome. Across the 593 local trees generated from this simulation, tree height (TMRCA of each non-recombining region) varied between 57.29 and 1,214.71 generations (non-integer generations are possible here because simulations involved a continuous time model) with a mean\(\pm\)standard deviation of 448.87\(\pm\)209.38 generations. The step-like pattern of tree height along the genome, in which height is constant for a stretch, then suddenly jumps to another value, appears because each tree (with a single height) applies to all sites in each non-recombining region (Figure 2A). As discussed in the primer section, another ubiquitous feature of the ARG is that nearby local trees are often highly similar. 
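A sketch along the lines of the single-population simulation just described is shown below (assuming msprime v1.x and tskit). Because the random seed used for the analysis is not reported here, the tree count and tree-height summaries it prints will differ from the 593 trees and the heights quoted above; the full analysis code is available in the paper's repository.

```python
# A sketch of the simulation described above, assuming msprime v1.x and tskit;
# without the original random seed, the printed numbers will differ from the
# 593 trees and the tree-height summaries reported in the text.
import numpy as np
import msprime

full_arg = msprime.sim_ancestry(samples=10, population_size=100,
                                sequence_length=10_000, recombination_rate=5e-5,
                                record_full_arg=True, random_seed=42)
ts = full_arg.simplify()   # remove unary nodes, as described above

heights = np.array([tree.time(tree.root) for tree in ts.trees()])
print(f"{ts.num_trees} local trees across the 10 kb genome")
print(f"tree height (TMRCA): min {heights.min():.2f}, mean {heights.mean():.2f}, "
      f"max {heights.max():.2f} generations")
```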
As a simple illustration of this, we quantified the dissimilarity of all pairwise combinations of local trees using the (approximate) subtree-prune-and-regraft (SPR) distance (Hein et al., 1996; de Oliveira Martins et al., 2008), which is the minimum number of subtree moves required to convert one tree to another only based on tip identities (ignoring identities of internal nodes). The topologies of nearby trees were highly similar, with similarity rapidly attenuating with increasing breakpoint separation between trees (Figure 2B). This can also be seen in the matrix of SPR distance values (Figure 2C), with lower values clustered around the diagonal (trees with similar indices and few intervening non-recombining regions) and values rapidly increasing away from this region. The attenuating similarity can also be qualitatively observed in the example trees included in Figure 2A, where the second and third trees, which are adjacent (the 437th and 438th trees, respectively), appear highly similar and are both clearly different in structure compared to the more distant first (45th) and fourth (576th) trees. Next, using the same simulation, we tracked genetic material found in the contemporary sample nodes (hereafter _ancestral material_) back in time through the samples' ancestors. Because we simplified the ARG, tracts of ancestral material identified for a particular sample node also represent tracts of common ancestry (i.e., the material is ancestral to at least one other sample node). For three sample nodes, Figure 2D displays the location of ancestral material (horizontal axis) and the timing of the ancestors carrying that material (vertical axis). At the contemporary time point (\(\mathrm{time}=0\)), the tracts of ancestral material span the entire genome because these represent the sample nodes that by definition possess their entire genome as a single haplotype. Traveling back in time (up the vertical axis in Figure 2D), the tracts of ancestral material are broken up into small pieces. Consequently, the average tract length of ancestral material peaks in the contemporary time period and rapidly declines back in time (Figure 2E). This pattern emerges because the cumulative number of recombination events that have occurred in the transmission of ancestral material grows through time (Figure 2F), resulting in the fragmentation of ancestral material into progressively smaller pieces. This pattern can also be understood through the lens of node-sharing across the local trees. At the present, every node is shared across all trees because all regions of the genome are found in each sample node. However, moving back in time, the tracts of ancestral material become progressively smaller and thus span fewer non-recombining regions. This results in a decline in node-sharing across trees further back in time; any particular node is carrying ancestral material for a decreasing number of non-recombining regions. Figure 2G depicts this phenomenon. Nodes with the highest proportion of sharing between trees are exclusively located near the present, while nodes located further back in time (higher up the vertical axis) show low proportions of sharing.

Figure 2 _(previous page)_: Exploration of ARGs via coalescent simulations. Panels (A)-(C) visualize summaries for a single population simulation. (A) Plot of tree height (TMRCA) along the genome with several example trees plotted along this sequence.
(B) The topological dissimilarity of all pairwise combinations of trees was quantified with subtree-prune-and-regraft (SPR) distance. The plot shows SPR distance vs. the number of non-recombining regions separating each tree. The different shaded bands correspond to different percentiles of SPR distance values at each tree separation count: 0-100 (lightest gray), 10-90, 20-80, 30-70, 40-60, 50 (black line). (C) Matrix of SPR distances for all combinations of trees organized by tree index (e.g., the 30th tree in the genome has an index of 30). (B) and (C) illustrate how nearby local trees are highly similar with similarity rapidly declining with growing number of breakpoints separating the trees. (D) Tracking the genomic material for three sample nodes back in time through their genetic ancestors (each node's ancestral material is shown in a different shade of gray). Continuous tracts of ancestral material get progressively smaller back in time as recombination repeatedly breaks the tracts into smaller pieces. (E) The size of tracts of ancestral material swiftly declines going back in time. The plot shows the mean (points) and 25th/75th percentiles of tract size (gray bars) for 20 generation bins. (F) The cumulative number of recombination events occurring backwards in time. (G) The number of nodes and node sharing across local trees in an ARG quickly decline backward in time. The plot shows the location of each node in time (vertical axis) versus the proportion of local trees that contains each node (horizontal axis). The marginal density plot along the vertical axis shows the distribution of nodes through time. (H) A series of simulations with all conditions held constant except for population size. (I) A series of simulations with all conditions held constant except for gene flow rate. The left plots in (H) and (I) show the distribution of tree height for each population size or migration value with the purple points representing the mean value per single simulation run. The right plots in (H) and (I) show the mean tree count per simulation run with each point shaded with its mean non-recombining region size. A variety of variables can systematically modify features of an ARG. As a brief illustration, we examined how effective population size and gene flow, which frequently vary across studies and systems, influence three fundamental features: tree height, the number of local trees, and the size of non-recombining regions in an ARG. For the population size demonstration, we completed a set of simulations that kept all variables constant (sequence length = 10 kb, recombination rate = \(3\times 10^{-5}\) per base per generation, sample size = 10 diploid individuals) except for population size, which varied between 50 and 1,000 in increments of 50 (a total of 20 population sizes with 30 replicates per size). Tree height and local tree count both increased while mean region size decreased at greater population sizes (Figure 2H). The correlations between population size and the three variables emerge because, with higher effective population sizes, coalescent times will tend to increase (Coop, 2020) because more individuals exist that act as possible ancestors and thus there is a lower probability of any two lineages finding common ancestry in a particular generation. Because of the deeper coalescent times (which result in greater tree heights), more opportunities exist for recombination to occur, which results on average in more local trees and smaller non-recombining regions. 
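A compressed version of the population-size sweep described above can be written as follows; far fewer replicates and population sizes are used than in the full analysis, purely to illustrate the qualitative trend of deeper trees and more non-recombining regions at larger population sizes.

```python
# Abbreviated sketch of the population-size sweep (fewer sizes/replicates for brevity).
import msprime
import numpy as np

for pop_size in [50, 250, 500, 1000]:
    heights, tree_counts = [], []
    for rep in range(5):
        ts = msprime.sim_ancestry(
            samples=10,
            population_size=pop_size,
            sequence_length=10_000,
            recombination_rate=3e-5,
            random_seed=100 * pop_size + rep + 1,
        )
        tree_counts.append(ts.num_trees)
        heights.extend(tree.time(tree.root) for tree in ts.trees())
    print(
        f"N = {pop_size:4d}: mean TMRCA = {np.mean(heights):8.1f} generations, "
        f"mean local tree count = {np.mean(tree_counts):6.1f}"
    )
```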
We generated another set of simulations for the gene flow demonstration where we kept all variables constant (sequence length \(=10\) kb, recombination rate \(=3\times 10^{-5}\) per base per generation) except for migration. We simulated two populations of 500 individuals each that merged (backwards in time) after 5,000 generations. While the populations were separated, one of the populations (the _recipient population_) experienced continuous, unidirectional gene flow from the second population (the _donor population_) forward in time. We varied the migration rate between 0 and \(1\times 10^{-4}\) in increments of \(5\times 10^{-6}\) (a total of 20 different migration rates with 30 replicates per rate). We then sampled 10 diploid individuals from the recipient population. With increasing gene flow, trees tended to increase in height on average, which was associated with increasing bimodality in the distribution of tree heights. This bimodality phenomenon emerges because the presence of two populations along with gene flow result in two distinct time periods during which lineages can coalesce (Maruyama, 1970; Rosenberg and Feldman, 2002). The left mode of the distribution corresponds to non-recombining regions whose entire history postdating the population split occurred within the recipient population, and thus coalescence for that region could occur fairly rapidly (small TMRCA values). However, with gene flow, part of a region's history can occur in the donor population. Consequently, a region whose ancestry involves the donor population must wait until the two populations merge in the ancestral population before finding its MRCA. This results in the second, later mode in tree heights. The slight trends of increasing tree count and decreasing region size at greater migration rates occur because the tree heights are increasing on average, which provides opportunities for more recombination events. Note that the ARG summaries we have reported here--tree height, number of local trees, length of non-recombining regions, similarity and node-sharing between local trees--only represent a small glimpse into the innumerable ways that ARGs can be dissected and summarized. We chose this set to exemplify fundamental features of ARGs and illustrate how they reflect and can therefore be informative about demographic and evolutionary phenomena that are frequently of interest in evolutionary genomics. ## 4 ARGs in evolutionary genomics From a practical perspective, two questions logically ensue from the ARG introduction: what is the utility of ARGs in evolutionary genomics, and what advantages does it impart relative to existing approaches? As with many methodological advances, ARGs can offer multiple benefits, including strengthening our ability to answer existing questions and opening up entirely new fields of inquiry. To understand how ARGs facilitate empirical inferences that are equal or superior to existing approaches, it is helpful to consider two topics: (1) how ARGs are shaped by evolutionary phenomena, and (2) how ARGs juxtapose with the paradigm of inquiry that currently predominates evolutionary genomics. A critical idea is that the genealogies underlying the genome are the ultimate record of evolutionary history. The structure of an ARG is governed by processes, including selection, drift, and gene flow, that regulate the fitness and relatedness of haplotypes. 
The genomic composition of individuals is precisely reflected in an ARG's structure because ARGs encode the ancestral source(s) of samples' genomes, including how new mutations are propagated through time and across individuals (Figure 3A). Consequently, the genomes of sampled individuals and any summary of their content represent derivatives of the underlying ARG, and many of these genomic summaries can be reinterpreted as explicit descriptions of the ARG (Ralph, 2019; Ralph et al., 2020). Currently in evolutionary genomics, genomic data are typically stored as a genotype matrix [e.g., a VCF file (Danecek et al., 2011); Figure 3C]. The data are distilled down to a variety of summaries such as principal components (Menozzi et al., 1978; McVean, 2009), \(F\)-statistics (Reich et al., 2009; Patterson et al., 2012; Peter, 2016), or the site frequency spectrum (SFS) that each reflect particular attributes of the samples' genomes. From these measures, we attempt to infer past phenomena (e.g., selection, demographic changes) that gave rise to the observed data, under the premise that disparities in the generative process translate to corresponding differences in genomic summaries. Indeed, these summaries can often provide powerful and accurate insights into evolutionary processes, and the field of statistical population genetics has made extraordinary strides in divining evolutionary processes from summaries of genetic and genomic data in the six decades since the first empirical measurements of molecular genetic variation were made (Hubby and Lewontin, 1966). As previously discussed, each summary measure calculated from these data (e.g., the SFS, \(F_{ST}\), \(\pi\), \(\theta\), individual heterozygosity, identity-by-state, identity-by-descent, etc.) is a low-dimensional summary of an ARG, so, to the extent that we are able to accurately infer an ARG (Brandt et al., 2022), we can recover any of these quantities at least as accurately as they are estimated from the genomic data from which an ARG is inferred (Ralph et al., 2020). [See Ralph (2019) and Ralph et al. (2020) for instructive discussions of the ways common summaries of genomic data (and many other quantities) can be calculated and interpreted with ARGs.] And, because ARGs can offer computational efficiencies over traditional methods of storing genomic data, in many cases these quantities can be calculated more easily, and with less computational overhead, from ARGs (Ralph et al., 2020; Nowbandegani et al., 2023). In some cases, summaries of genomic data made from ARGs can outperform those made from the data directly. For instance, Nowbandegani et al. (2023) devised a method to efficiently represent linkage disequilibrium (LD) based on genomic genealogies (_LD graphical models_). These LD graphical models enable orders-of-magnitude reductions in computation time and memory usage for LD matrix computations and facilitate better polygenic prediction compared to a similar method using the LD correlation matrix. As another example, Link et al. (2023) found that an expected genetic relatedness matrix calculated from an ARG in a given genomic region more accurately captures relationships than the empirical genetic relatedness matrix calculated in the same region. The higher accuracy may seem counterintuitive; after all, empirical ARGs are estimated from genomic data, so how could statistical inferences conducted on an ARG be _more_ accurate than those made directly from the genotype matrix? 
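As a brief concrete aside on this point, the sketch below computes a few classical summaries directly from a (simulated) tree sequence with tskit; the simulation parameters are illustrative only. The "branch" mode of each statistic is computed from genealogical branch lengths rather than from observed mutations, which is one way that working with the genealogies directly can side-step mutational noise.

```python
# Illustrative sketch: classical population genetic summaries computed from a
# tree sequence with tskit. Parameter values are arbitrary.
import msprime

ts = msprime.sim_ancestry(
    samples=10, population_size=1_000, sequence_length=100_000,
    recombination_rate=1e-8, random_seed=11,
)
ts = msprime.sim_mutations(ts, rate=1e-7, random_seed=12)

print("segregating sites     :", ts.segregating_sites(span_normalise=False))
print("pi (site mode)        :", ts.diversity(mode="site"))    # uses the mutations
print("pi (branch mode)      :", ts.diversity(mode="branch"))  # uses branch lengths only
print("Tajima's D            :", ts.Tajimas_D())
print("site frequency spectrum:", ts.allele_frequency_spectrum(polarised=True))
```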
To see how this can be the case, consider the structure of the genealogies that comprise an ARG. Each local tree is usually separated from that of the adjacent non-recombining region by a small number of recombination events, leading to high correlation in the genealogical relationships contained in nearby trees (e.g., Figures 1E; 2B,C). Because of this correlation, the other trees contain information about relatedness between samples in a focal tree. The mutational process is intrinsically random, so that the true genealogical relationships between a set of samples may not be apparent in patterns of shared variation associated with any particular region. By leveraging the information about relationships between samples contained across the entire set of trees, we can, in principle, side-step some of the "noise" in the data that exists due to the randomness of the mutational process (Ralph et al., 2020).

Figure 3: The encoding of local trees and genotype data in the succinct tree sequence format. (A) Depiction of the local trees shown in Figure 1 with timing and location of mutation events mapped onto the branches and the location of each site shown on the genome. The black, dashed lines represent the invariant sites and the thicker, solid lines represent variant sites corresponding to each mutation. The trees are annotated with horizontal, dashed lines (labelled \(T_{0}\)–\(T_{IX}\)) that denote either the timing of coalescence or mutation events. (B) The trees and genotype data in the succinct tree sequence format. The trees are specified with the nodes and edges tables. The nodes table contains an ID and age for each node. The edges table contains the left (_Genome start_) and right (_Genome end_) positions of the genome over which each edge persists, while the _Parent_ column contains the nodes that transmit material to the nodes in the _Child_ column. The genotypic information is included in the sites [genomic position of each site (_Position_), ancestral state (_Ancestral_)] and mutations [derived state (_Derived_), mutation timing (_Age_)] tables. (C) The equivalent genotype data for the four sample nodes stored in a more conventional matrix format with the rows representing each sample node and the columns representing each genomic site. Note that with small amounts of genetic data such as this simple example, the tree sequence may require more storage space than a standard genotype matrix format. However, when considering realistic genomes, the tree sequence rapidly becomes much more efficient at storing genetic data with growing sample sizes (Kelleher et al., 2019).

Beyond facilitating more efficient and accurate population genetic inferences, the increasing availability of empirical ARGs will foster entirely new fields of ARG-based inquiry. A useful analogy here is the way in which the field of phylogenetics opened up the associated field of phylogenetic comparative methods. For example, the question of whether diversification rates vary across a phylogeny (Ricklefs, 2007; Rabosky, 2014) is impossible to pose, let alone answer, without a phylogeny. It is difficult to guess what form the "comparative methods" field of ARGs (i.e., not just asking existing questions better or faster, but entirely new questions that are predicated on ARGs) will take, especially as empirical ARG inference is still in its infancy. However, we can highlight one particularly exciting direction that has already begun to materialize: geographic inference with ARGs.
The recent advances in the reconstruction of genomic genealogies have sparked a revolution in spatial population genetics. In particular, several recent approaches (Osmond and Coop, 2021; Wohns et al., 2022) have begun to explore the feasibility of inferring the locations of the genetic ancestors of sampled individuals across space and through time. Although similar geographic inference has been done using non-recombining gene regions (e.g., Neigel and Avise, 1993; Barton and Wilson, 1995; Avise, 2009) or a single phylogenetic tree ["phylogeography" (Knowles, 2009)], it is only with an ARG in hand that it has become feasible to infer locations for _all_ the genetic ancestors of a sample. This power, in turn, has facilitated massively more detailed and nuanced understanding of how organisms move across space and through time. For example, Osmond and Coop (2021) inferred the mean effective dispersal distance of _Arabidopsis thaliana_, and Wohns et al. (2022) recovered the broad strokes of human dispersal history over the last 800,000 years. In the future, this type of inference of ancestral locations could empower specific and biologically principled definitions of "admixture" (e.g., 12.5% of the genetic ancestors of a focal individual are estimated to have lived inside a particular geographic region at a particular slice of time) (Bradburd and Ralph, 2019). The exciting enterprise of geographic inference of ancestor locations (more precisely, of the geographic locations of nodes in an ARG) and of the concomitant historical patterns of dispersal and density described by a sample's georeferenced genealogy, is entirely predicated on the existence of an inferred ARG for a set of samples. An important qualifier to this discussion is that, despite the evident promise of ARG-based inference, it remains less clear the extent to which this promise is achievable in empirical biology. One of the main reasons for this uncertainty is, despite some awareness of empirical limits on ARG reconstruction, little is known regarding the degree of accuracy needed to make quality downstream inferences from ARGs. For example, do accurate inferences generally presuppose highly precise and accurate estimates of ARGs? Or perhaps some questions only require accuracy in specific properties of ARGs. For example, the distribution of local tree TMRCAs may need to be accurate (Brandt et al., 2022), while the accuracy of their topologies are less crucial. Understanding the sensitivities and requirements of downstream inferences will help uncover the particular facet(s) of ARG reconstruction whose improvements would be most beneficial and will also help delineate the limits that empirical ARG reconstruction will enforce on downstream inferences. ## 5 Conclusions In this review, we aimed to introduce ARGs, articulate the capacity of ARGs to enhance the study of evolutionary genomics, and describe the current and/or forthcoming practicability of using ARGs in empirical- and simulation-based research. Indeed, ARGs have the potential to advance empirical evolutionary genomics in both minor and profound ways ranging from improving implementation of existing approaches (e.g., faster calculation of traditional population genetics statistics) to inspiring novel and previously inaccessible avenues of study. 
The nature and extent to which ARGs will reshape the field remains unclear and will depend on fundamental limits regarding the information contained in empirical ARGs, the degree to which ARGs are integrated into the methods canon of evolutionary genomics, and our collective ingenuity. How do we fully capitalize on ARGs? First, a broader suite of inference methods and tools based on ARGs must (continue to) be developed, evaluated, and made readily accessible to the broader community. Until now, most ARG-based methods development has concentrated on ARG reconstruction and simulation. Although these topics will benefit from additional progress, we are reaching a stage where empirical- and simulation-based ARGs can be realistically acquired in many situations and readily stored and manipulated with an increasingly mature and powerful software infrastructure (e.g., tskit). A more expansive body of methods built on ARGs will enable wider adoption of ARG-based inference. The incipient nature of ARG methods presents an opportunity for more extensive synthesis and synergy between evolutionary genomics and both phylogenetic comparative methods and phylogeography. These fields have developed a sizeable assortment of phylogenetic methods that could be co-opted and modified for tree-based inference in the context of ARGs. Fully capitalizing on our growing ARG capabilities will clearly require a receptivity to new genealogically explicit approaches and ideas that have so far only featured sparingly in empirical evolutionary genomics. However, with a concerted embrace of ARGs, we are confident that this "holy grail of statistical population genetics" (Hubisz and Siepel, 2020) will further realize its potential for many questions in evolutionary biology. ## 6 Boxes Box 1: ARG reconstruction. A growing arsenal of methods is available to infer ARGs from genomic data. ARGweaver, which was introduced in 2014 by Rasmussen et al., represents a seminal achievement in ARG inference. ARGweaver and its extension (ARGweaver-D; Hubisz et al. 2020) leverage approximations of the coalescent [SMC or SMC' (McVean and Cardin, 2005; Marjoram and Wall, 2006)] and time discretization to simplify the space from which to sample candidate ARGs using Markov Chain Monte Carlo. These methods, along with other recent Bayesian approaches like Arbores (Heine et al., 2018) and ARGinfer (Mahmoudi et al., 2022), enable the rigorous treatment of uncertainty via the incorporation of an ARG's posterior distribution into downstream analyses. One general limitation of these methods is that, due to computational requirements, they can only handle fairly modest sample sizes. For example, ARGweaver can consider between two and about 100 samples (Hubisz and Siepel, 2020). Motivated by the extensive sequencing efforts in human genomics, several methods have been devised to accommodate large and complicated genomic datasets. For example, ARG-Needle (Zhang et al., 2023), tsinfer/tsdate (Kelleher et al., 2019; Wohns et al., 2022), and Relate (Speidel et al., 2019) can infer genomic genealogies for tens of thousands (Relate) to hundreds of thousands (tsinfer, ARG-Needle) of human samples. Relate and tsinfer can additionally incorporate samples from different time periods and have been used to reconstruct unified genomic genealogies for modern humans and ancient samples of humans, Neanderthals, and Denisovans (Speidel et al., 2021; Wohns et al., 2022).
This scalability is facilitated by various statistical simplifications, which result in several limitations in the inferences of these approaches. For example, Relate and tsinfer infer less information about recombination than methods like ARGweaver, which attempts to identify the specific recombination events associated with every breakpoint (Wong et al., unpublished). Additionally, they only provide point estimates for the tree topologies, which precludes comprehensive assessments of uncertainty in ARG structure. So far, most ARG inference development has focused on human and other eukaryotic genomes. However, there are also active efforts to create methods tailored to other types of genomes and systems. For example, Vaughan et al. (2017) developed a Bayesian approach dubbed Bacter, which is designed to infer ARGs for bacteria based on the ClonalOrigin model (Didelot et al., 2010). Spurred by the COVID-19 pandemic, Zhan et al. (2023) recently introduced a method (sc2ts) for ARG reconstruction that can involve millions or more of SARS-CoV-2 genomes. sc2ts is designed to construct and repeatedly update the ARG through time with new samples, which is relevant to ongoing surveillance during pandemics wherein pathogen samples are collected and sequenced in real time. In summary, there is a burgeoning assortment of methods that enable ARG reconstruction across a range of dataset and system characteristics including data types, sample sizes, and sampling regimes. ARG reconstruction remains a formidable statistical and computational challenge, and many improvements in the robustness and flexibility of ARG reconstruction are still needed (e.g., Deng et al., 2021; Brandt et al., 2022; Ignatieva et al., 2023). However, ARG inference has emerged as a nexus of methodological development in statistical population genetics, and ongoing efforts exist to address the limitations and combine the strengths of existing methods (e.g., Rasmussen and Guo, 2023). Readers should be prepared for continued innovation in this area. Concurrent with improvements in ARG reconstruction, revolutionary progress in population genomic simulation has occurred over the past decade. One of the most significant developments was msprime(Kelleher et al., 2016; Baumdicker et al., 2022), which can simulate the genomes and ancestry for a set of samples backwards in time using the coalescent. With the coalescent, only the ancestors of the samples (and not entire populations) need to be tracked. This approach is highly efficient but generally entails an assumption of neutral evolution [although it is possible for coalescent theory and simulation to incorporate selection (e.g., Kaplan et al., 1988; Hudson and Kaplan, 1988; Walczak et al., 2012; Spencer and Coop, 2004; Kern and Schrider, 2016; Baumdicker et al., 2022)]. The notable innovation of msprime relative to previous coalescent programs is the speed at which it can perform simulations at biologically realistic scales under a variety of models and with recombination. For example, msprime has been used to simulate realistic whole genome sequences based on genealogical information for \(\sim\)1.4 million people inhabiting Quebec, Canada (Anderson-Trocme et al., 2023). Another noteworthy development in population genomic simulation over the past decade is SLiM(Messer, 2013). In contrast to coalescent simulators, SLiM simulates forward in time using either Wright-Fisher or non-Wright-Fisher models (Haller and Messer, 2019). 
The forward-in-time nature of SLiM means that all individuals in each generation (including historical individuals that are not genetic ancestors to the contemporary population) must be tracked in the simulation. This elevates the computational burden compared to coalescent simulation. However, it enables substantially more flexibility in the scenarios that can be simulated, including complex selection and ecological interactions across multiple species (Haller and Messer, 2023). Relevant to this review, both SLiM and msprime can record ARGs during simulation (Haller et al., 2019; Baumdicker et al., 2022). These and other simulation programs [e.g., discoal (Kern and Schrider, 2016)] can be used for a variety of purposes in ARG-based research including exploration of biological phenomena, statistical and machine learning inference (e.g., Hejase et al., 2022; Campagna et al., 2022; Korfmann et al., 2023), and methods evaluation (Brandt et al., 2022). The correlated nature of an ARG's local trees can be exploited to compactly encode the trees in a data structure termed the _succinct tree sequence_ or _tree sequence_ for short (Figure 3A,B; Kelleher et al., 2016, 2018). The tree sequence defines the trees using two tables. The node table contains an identifier and the timing of each node (first table in Figure 3B). The edge table documents the edges shared between adjoining trees by recording the parent and offspring nodes of each edge and the contiguous extent of the genome that each edge covers (second table in Figure 3B). The key innovation here is that the data structure eliminates substantial redundancy. Instead of storing each tree independently, which would necessitate duplication of shared nodes and edges, the tree sequence records each shared component just once. The basic tree sequence technically does not encode the full ARG, which includes all coalescent and recombination events. The basic tree sequence only explicitly contains information on the coalescent events and does not detail the timing and specific changes that differentiate adjacent trees. Kelleher et al. (2019) explain this distinction as follows: the full ARG "encodes the events that occurred in the history of a sample" while the set of local trees recorded in the tree sequence "encodes the outcome of those events." Nonetheless, the tree sequence can be elaborated with recombination information to more exhaustively document genetic ancestry (e.g., Baumdicker et al., 2022; Mahmoudi et al., 2022). Several properties of the tree sequence have revolutionized ARG-based research. First, its concise nature means that an immensity of genealogical information can be stored in a highly compressed manner. The tree sequence is also a flexible format that can be augmented with additional tables to store other information such as location metadata and DNA data (e.g., third and fourth tables in Figure 3B; Figure 3A). Notably, relative to conventional genotype matrix formats (Figure 3C), DNA data can be represented much more efficiently using the tree sequence. For instance, Kelleher et al. (2019) estimated that the tree sequence format could store genetic variant data for 10 billion haploid human-like chromosomes in \(\sim\)1 TB, which is many orders of magnitude smaller than the \(\sim\)25 PB required to store these data in a VCF (Danecek et al., 2011). The efficiency of the tree sequence also permits significant speed-ups in computation (e.g., through the implementation of fast algorithms).
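As a small hands-on illustration of the node and edge tables described above, the following sketch simulates a short tree sequence and prints its tables; the parameter values and file name are arbitrary.

```python
# Sketch: inspecting the succinct tree sequence tables on a small simulated example.
import msprime

ts = msprime.sim_ancestry(
    samples=4, population_size=100, sequence_length=1_000,
    recombination_rate=1e-5, random_seed=3,
)

print(ts.tables.nodes)   # one row per node: ID, sample flag, time, ...
print(ts.tables.edges)   # one row per edge: left, right, parent, child
print(f"{ts.num_trees} local trees encoded in {ts.nbytes} bytes")

ts.dump("example.trees")  # the compact on-disk representation
```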
These features have enabled advancements in the scale and scope of ARG-based analyses and are increasingly accessible given that the tree sequence underpins a growing ecosystem of methods and software including tsinfer(Kelleher et al., 2019), sc2ts(Zhan et al., 2023), ARGinfer(Mahmoudi et al., 2022), msprime(Baumdicker et al., 2022), SLiM(Haller et al., 2019), and tskit(Kelleher et al., 2018) built to infer, simulate, and analyze ARGs. Further details on the tree sequence can be found in the papers introducing and expanding the tree sequence (Kelleher et al., 2016, 2018; Mahmoudi et al., 2022) and in the documentation of tskit(Kelleher et al., 2018). ## 7 Acknowledgements We thank Peter Ralph, Sarah Fitzpatrick, Yan Wong, Jerome Kelleher, and members of the Fitzpatrick and Bradburd labs for helpful discussions and manuscript feedback. This work was supported by a University Distinguished Fellowship from Michigan State University (awarded to ALL), the National Defense Science & Engineering Graduate (NDSEG) Fellowship from the Department of Defense (awarded to ALL). Research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R35GM137919 (awarded to GSB). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
2308.07151
Diffusion Based Augmentation for Captioning and Retrieval in Cultural Heritage
Cultural heritage applications and advanced machine learning models are creating a fruitful synergy to provide effective and accessible ways of interacting with artworks. Smart audio-guides, personalized art-related content and gamification approaches are just a few examples of how technology can be exploited to provide additional value to artists or exhibitions. Nonetheless, from a machine learning point of view, the amount of available artistic data is often not enough to train effective models. Off-the-shelf computer vision modules can still be exploited to some extent, yet a severe domain shift is present between art images and standard natural image datasets used to train such models. As a result, this can lead to degraded performance. This paper introduces a novel approach to address the challenges of limited annotated data and domain shifts in the cultural heritage domain. By leveraging generative vision-language models, we augment art datasets by generating diverse variations of artworks conditioned on their captions. This augmentation strategy enhances dataset diversity, bridging the gap between natural images and artworks, and improving the alignment of visual cues with knowledge from general-purpose datasets. The generated variations assist in training vision and language models with a deeper understanding of artistic characteristics and that are able to generate better captions with appropriate jargon.
Dario Cioni, Lorenzo Berlincioni, Federico Becattini, Alberto del Bimbo
2023-08-14T13:59:04Z
http://arxiv.org/abs/2308.07151v1
# Diffusion Based Augmentation for Captioning and Retrieval in Cultural Heritage ###### Abstract Cultural heritage applications and advanced machine learning models are creating a fruitful synergy to provide effective and accessible ways of interacting with artworks. Smart audio-guides, personalized art-related content and gamification approaches are just a few examples of how technology can be exploited to provide additional value to artists or exhibitions. Nonetheless, from a machine learning point of view, the amount of available artistic data is often not enough to train effective models. Off-the-shelf computer vision modules can still be exploited to some extent, yet a severe domain shift is present between art images and standard natural image datasets used to train such models. As a result, this can lead to degraded performance. This paper introduces a novel approach to address the challenges of limited annotated data and domain shifts in the cultural heritage domain. By leveraging generative vision-language models, we augment art datasets by generating diverse variations of artworks conditioned on their captions. This augmentation strategy enhances dataset diversity, bridging the gap between natural images and artworks, and improving the alignment of visual cues with knowledge from general-purpose datasets. The generated variations assist in training vision and language models with a deeper understanding of artistic characteristics and that are able to generate better captions with appropriate jargon. ## 1 Introduction Deep learning applications on fine art suffer from an obvious scarcity of data, because an artwork is usually a unique piece. In addition, tasks involving both vision and language require on the one hand the modeling of technical language with domain-specific jargon and on the other hand the understanding of difficult and underrepresented visual concepts, such as abstract or stylized drawings. These difficulties entail a challenge for learning algorithms, which would benefit from a large collection of annotated data. A simple solution is to leverage models pre-trained on general-purpose datasets to address relevant tasks for cultural heritage, such as retrieval, visual question answering or captioning. However, the effect is that such models tend to underperform when applied in the cultural heritage domain. In fact, when the scope moves from real-world images to paintings and other more abstract representations, there is a strong domain shift with respect to the standard training data, which consist of natural images. A standard approach to deal with data scarcity is to leverage data augmentation, slightly perturbing the training data to improve variability and let the trained model generalize better. In the vision domain, perturbations usually include adding noise, altering pixel values or changing the overall orientation or illumination of the scene. We argue that these augmentations may indeed alter the semantics of the artworks, where color and spatial distribution of objects can convey significant meanings that are necessary to interpret the painting. In this paper, we address the above-mentioned limitations by proposing a data augmentation strategy for paintings that has the twofold advantage of increasing the training data while preserving the content. In particular, we explore the benefits of augmenting artwork datasets for image captioning.
To this end, we leverage both textual descriptions of the paintings and a diffusion model to create several variations of the artworks. By conditioning the diffusion model on the captions, we generate variability in the visual domain that aids the grounding of objects and entities expressed in artistic form with the technical jargon that describes them. What we propose is therefore an image augmentation at a semantic level, generating multiple variations of artworks while retaining their content and style. By leveraging the expert knowledge of art critics contained in painting descriptions and the natural language understanding capabilities of state-of-the-art generative models, we aim to provide a sophisticated augmentation pipeline capable of generating a sufficient intra-class variability of depicted concepts to enable effective learning (Fig. 1). Our main contributions presented in this paper are:

* We propose a data augmentation technique for low-data-regime cultural heritage tasks that works at a semantic level rather than at the pixel-intensity level of standard data augmentations in vision.
* Thanks to our data augmentation strategy based on diffusion models we can favor a visual grounding of linguistic concepts, which in the cultural heritage domain are often expressed using technical and domain-specific jargon.
* We show the benefits of the proposed augmentation strategy in captioning tasks as well as cross-domain retrieval tasks.

## 2 Related Works

**Computer Vision for Cultural Heritage.** In the domain of cultural heritage, several computer vision approaches have been proposed in the literature. Artwork classification [34, 48, 15, 11, 35] and recognition [16, 49, 28] have often been placed at the center of such approaches, sometimes with the end goal of developing user-engagement applications [5, 33, 2, 8]. In this paper, we mostly deal with the task of image captioning, which involves the automatic generation of a natural language textual description of an image based only on the visual input. This has been an extensively addressed research topic in recent years [46, 54, 29], but not many contributions have been made in the domain of art historical data. In this particular domain, which differs from that of natural images, the complexity of the task increases due to a simultaneous lack of labeled data and an increased abstraction. Currently, available painting datasets with descriptions are constructed by downloading descriptions from online museums or annotating descriptions by crowdsourcing.

Figure 1: Schematic illustrating the data augmentation pipeline. The conditional generative model allows for both an image and a text input in its _Image&Text_→_Image_ configuration. We provide the model with both the original artwork and its detailed textual analysis from [47], and use the diffusion model's outputs as new datapoints for training other models for downstream tasks.

The Artpedia [47] dataset is composed of paintings paired with textual descriptions from Wikipedia. The dataset thus provides information about artworks and their context, and each sentence is categorized as either a visual sentence or a contextual sentence. Visual sentences describe the visual content of the painting, while contextual sentences provide information that cannot be inferred from raw pixels alone. Such information includes, for instance, the name of the painter, its artistic style, or the museum in which it is kept.
The dataset was originally introduced for cross-modal retrieval as well as captioning, and it has been further annotated for visual question answering purposes in [7]. Similarly, the AQUA dataset [21] has been proposed to train visual question answering models in the cultural heritage domain. More recently, ArtCap [32] provided an image captioning dataset containing 3,606 paintings, each one associated with five textual descriptions, with a mean length for each caption of 11 words. A larger example of an artwork dataset is presented in [16], consisting of more than 80K webly-supervised images from 3120 classes, and a subset of 200 classes with more than 1300 verified images. Text and metadata for each class are also provided, to support zero-shot learning and other multi-modality techniques in general. An ontological knowledge base has been exploited in [4] to create a large-scale cultural heritage dataset, annotated with visual and contextual data. The authors adopted ArCo [10], the Italian cultural heritage knowledge graph, to extract information about approximately 500K cultural assets and leveraged a semi-automatic annotation approach for generating 6.5M question-answer pairs. **Generative Models for Data Augmentation.** Data augmentation can be defined as the process through which data can be transformed without changing its semantics. By using this definition we can tie the efficacy of an augmentation method to a task and not to the type of data alone. In most of the computer vision tasks that work with natural images, the usual augmentation strategies involve flipping the image, adding random noise, and changing its brightness and colors. When it comes to fine art, though, such changes might be detrimental due to the strict relation between the semantics of the original art piece and its details (e.g., the relative position of characters in religious art, the use of strong light contrast in a Caravaggio painting, or the symbolic choice of a particular color). An attempt to augment training data for object detection in artworks has been recently proposed [27], where a style transfer model is applied to natural images to generate images that resemble paintings. A possible approach to obtain a larger, more diverse dataset has been explored in many works outside of the scope of cultural heritage. In these cases, the input data is used to train a generative model, which in turn will produce new data from the training domain [44, 3, 26, 25, 6]. Also in [18] the authors used a CycleGAN [56] for image-to-image translation of thermal to pseudo-RGB data. The use of these frameworks to perform data augmentation in order to improve the performance of a separate classifier has been studied in multiple previous works, such as [1], which focuses on improving one-shot learning, and [9], where segmentation of medical images is enhanced by GAN augmented data. In [38] synthetic data coming from a simulator is adapted and used to train an RL agent for autonomous driving. Recently, Diffusion Models (DMs) [24, 41] have reached impressive new levels compared to GANs, both in terms of output quality and in fidelity to conditioning inputs such as text or additional images, and they have been employed for data augmentation purposes, as in [51, 22]. Most of these applications focus on evaluating the ability of diffusion models to generate synthetic data for classification problems; we instead focus on different downstream tasks.
Latent Diffusion Models (LDM) [41] perform the diffusion process in a latent space learned by a convolutional auto-encoder. This greatly reduces the training and inference cost of the model compared to pixel-based DMs, while maintaining a high visual fidelity. By introducing cross-attention layers in the diffusion model architecture, the generation can be conditioned on a wide variety of sources, including text and images. The popular Stable Diffusion model is based on LDM [41], bringing further improvements thanks to internet-scale training. Motivated by the recent success of large generative models, we posed the research question of whether diffusion models can be used to augment visual recognition datasets with synthetic images, especially when working in underrepresented domains such as cultural heritage. Our findings show that using images generated by a diffusion model, conditioned on a textual description, leads to improved performance compared to vanilla training as well as training using standard computer vision data augmentation techniques. ## 3 Data Experiments were performed on both the Artpedia [47] and the ArtCap [32] datasets. Albeit similar in structure, these two datasets differ in multiple ways from one another. The [32] dataset contains a fixed set of 5 sentences per artwork, while [47] has, on average, 3.1 visual sentences and 6.5 contextual sentences per artwork. Artpedia contains a collection of 2,930 painting images, each associated with a variable number of textual descriptions, which are combined together into a single description. Overall, the dataset contains a total of 28,212 sentences. Out of these, 9,173 are labeled as visual sentences and the remaining 19,039 are categorized as contextual. In our work, we only consider visual sentences since we focus on augmenting images. The respective syntactic style is also quite different in the two datasets: where Artpedia chooses paragraph-long academic descriptions, ArtCap limits itself to shorter and simpler captions. The word count distribution of the captions for the two datasets is shown in Fig. 2. Note that, on average, a single visual sentence of Artpedia is composed of 22 words, and the combined caption of 70 words, which is considerably longer than most common Image Captioning datasets [12, 43, 45]. We also evaluated randomly sampling one of the visual sentences, but since each visual sentence describes only a small portion of the image, it led to worse results. Both Artpedia and ArtCap provide validation and test splits, composed of 10% and 10% validation samples, and 10% and 50% test samples, respectively. Samples from the two datasets are shown in Fig. 3. ## 4 Method In order to generate the augmented version of the datasets we employ an LDM (Latent Diffusion Model), Stable Diffusion1, to generate multiple versions of each image belonging to the original dataset, as illustrated in Fig. 1. Footnote 1: [https://stability.ai/blog/stable-diffusion-public-release](https://stability.ai/blog/stable-diffusion-public-release) In our work, we employed versions 1.4 and 1.5 of Stable Diffusion. In Stable Diffusion 1.4 the checkpoint was initialized with the weights of the Stable-Diffusion-v-1-2 checkpoint and subsequently fine-tuned for 225K steps at resolution 512x512 on the "laion-aesthetics v2 5+" subset of the LAION dataset [42], with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
In Stable Diffusion 1.5, the initialization checkpoint and the finetuning procedure are the same as for Stable-Diffusion-1-4, but the finetuning is performed for more steps (595K). We employed Stable Diffusion v1.4 to augment the Artpedia [47] dataset and Stable Diffusion v1.5 for the ArtCap [32] dataset. Given a dataset \(\mathcal{D}\) of \(N\) samples \((x_{i},\mathbf{y}_{i})\) formed by an image \(x_{i}\) and a set of captions \(\mathbf{y}_{i}\), we augment it by generating a set \(S_{i}=\{(\tilde{x}_{i1},\mathbf{y}_{i})\ldots(\tilde{x}_{iM},\mathbf{y}_{i})\}\) of synthetic variations for each image \(x_{i}\), obtaining a synthetic dataset \(\tilde{\mathcal{D}}\) of \(N\times M\) samples. Each variation was generated using both a textual prompt built by providing the caption and the original image to guide the generation. To obtain different images, the generation seed was changed for each variation (see Fig. 4).

Figure 2: Distribution of caption lengths in the Artpedia [47] and ArtCap [32] datasets.

Figure 3: Samples of images along with their textual descriptions from Artpedia (top) and ArtCap (bottom) datasets.

Figure 4: Samples of the augmented images. _Left:_ original image and its caption; _Right:_ multiple samples of the augmented images generated using the combination of the provided description and the original input image.

To gain an intuition of the quality of the synthetic dataset, we calculated an embedding of the text and images of each sample in \(\mathcal{D}\) and \(\tilde{\mathcal{D}}\) using a CLIP-ViT/B-16 model [40]. We can see from Fig. 5 (a) and (b) that the average cosine similarity between images and their captions maintains a similar value in the original dataset \(\mathcal{D}\) and the synthetic dataset \(\tilde{\mathcal{D}}\), suggesting that synthetic images preserve the relation with the caption. Moreover, we see that the variations maintain a high similarity with the original images (see Fig. 5 (c)). This can also be seen in Fig. 4. During training, we insert in each position of the training minibatch a sample \((x_{i},\mathbf{y}_{i})\in\mathcal{D}\) with probability \(\alpha\) or one of its synthetic variations \((\tilde{x}_{ij},\mathbf{y}_{i})\in\tilde{\mathcal{D}}\) with probability \((1-\alpha)\), where \(\tilde{x}_{ij}\) is sampled uniformly from \(S_{i}\). We use a value of \(\alpha=0.5\) to balance real and generated images during training, as suggested in [51].
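The generation step described above can be sketched with the diffusers image-to-image pipeline for Stable Diffusion. Note that this is an illustrative reconstruction rather than the authors' code: the checkpoint identifier, the `strength` and `guidance_scale` values, and the 512x512 resizing are assumptions introduced here, since the paper does not report these settings.

```python
# Hypothetical sketch of the caption-conditioned augmentation step (Section 4).
# Checkpoint ID, strength, guidance_scale and resizing are illustrative assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def make_variations(image_path: str, caption: str, num_variations: int = 4):
    """Generate M caption-conditioned variations of one artwork, one per seed."""
    init_image = Image.open(image_path).convert("RGB").resize((512, 512))
    variations = []
    for seed in range(num_variations):
        generator = torch.Generator("cuda").manual_seed(seed)
        out = pipe(
            prompt=caption,        # the artwork's textual description
            image=init_image,      # the original painting guides the generation
            strength=0.6,          # assumed: how far to deviate from the original
            guidance_scale=7.5,    # assumed: classifier-free guidance weight
            generator=generator,   # changing the seed yields a different variation
        )
        variations.append(out.images[0])
    return variations
```

Looping over seeds, as above, yields the \(M\) distinct variations per artwork; the \(\alpha=0.5\) real/synthetic mixing described earlier would then be applied when the training minibatches are assembled.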
## 5 Experiments

In order to test our augmentation technique we perform multiple experiments over different tasks. As a first downstream task, we train an image-captioning model using both augmented and non-augmented versions of the dataset. For this set of experiments, we selected medium-sized, Transformer-based Vision and Language models which can be trained end-to-end and can be employed for a variety of different tasks. In particular, we use the GIT-base [54] model and the BLIP-base [29] model. GIT [54] (Generative Image-to-text Transformer) is a Transformer [52] model which can be applied to many Vision and Language tasks. It leverages a CLIP ViT image encoder [40] and a single Transformer text decoder, which are jointly trained under a single language modeling task on large-scale pre-training data. It is publicly available in two sizes, GIT-base (129M parameters), which employs a CLIP/ViT-B/16 encoder, and GIT-large (347M parameters), with a CLIP/ViT-L/14 encoder. BLIP [29], instead, is a model that effectively uses noisy web data for pre-training by bootstrapping the captions, generating new synthetic captions and removing the noisy ones. It employs a multimodal mixture of encoder-decoder modules, which are jointly trained with three vision-language objectives: image-text contrastive learning, image-text matching, and image-conditioned language modeling. The architecture is composed of a ViT [20] encoder to encode images and a BERT [19] model to encode text. Both the GIT and BLIP models were initialized with the available pre-training weights2 and finetuned for 10 training epochs using the AdamW [31] optimizer with a \(5e^{-05}\) learning rate and 500 steps of warm-up, using batches of 8 images. Footnote 2: [https://huggingface.co/microsoft/git-base](https://huggingface.co/microsoft/git-base) The second task we adopt to prove the effectiveness of our proposed strategy is cross-domain retrieval. Here, we perform retrieval both of images given their textual description and vice versa. For this downstream application, we use the CLIP model [40], in its openCLIP3 implementation. To finetune the CLIP model we again used the AdamW optimizer with a learning rate of \(5e^{-04}\). Footnote 3: [https://github.com/mlfoundations/open_clip](https://github.com/mlfoundations/open_clip) ### Quantitative Results #### 5.1.1 Metrics To quantitatively assess the quality of the generated captions, standard language evaluation metrics are used. Those include BLEU [39], ROUGE [30] and METEOR [17], typically used for machine translation tasks, and CIDEr [53], specifically developed for the image captioning task. In addition, the semantic similarity between generated captions and references is measured with the BERTScore [55] metric. The BLEU score calculates \(n\)-gram precisions between a candidate sentence and a set of human-generated references, multiplied by a brevity penalty. Individual \(n\)-gram precisions are then combined via a geometric mean to obtain a final score. It is common practice to report BLEU scores with \(n\)-grams ranging from 1 to 4. ROUGE-L calculates an F-measure using the Longest Common Subsequence (LCS) between a candidate sentence and a set of references. METEOR computes a harmonic mean of precision and recall between unigrams of aligned candidate and reference sentences, where the mapping used for alignment follows various strategies, including exact match, synonyms and paraphrases.

Figure 5: Average cosine similarity between CLIP embeddings of: (a) real images and the associated captions; (b) synthetic images and the associated captions; (c) real images and their synthetic variations.

CIDEr measures the consensus among a candidate sentence and a set of references by computing the cosine similarity of TF-IDF weighted \(n\)-gram vectors. BERTScore uses the word embeddings computed by a pretrained Transformer model to measure the semantic similarity between a candidate sentence and a reference.
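A minimal sketch of computing several of these metrics with the Hugging Face evaluate library is shown below; the example captions are invented, and the paper does not state which metric implementations were used (CIDEr, in particular, is typically computed with the COCO caption evaluation toolkit and is omitted here).

```python
# Sketch: caption metrics with the `evaluate` library (toy predictions/references).
import evaluate

predictions = ["a painting of the virgin and child surrounded by angels"]
references = ["the virgin sits enthroned with the child, flanked by adoring angels"]

bleu = evaluate.load("bleu").compute(
    predictions=predictions, references=[[r] for r in references]  # multi-ref format
)
meteor = evaluate.load("meteor").compute(predictions=predictions, references=references)
rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
bertscore = evaluate.load("bertscore").compute(
    predictions=predictions, references=references, lang="en"
)

print(bleu["bleu"], meteor["meteor"], rouge["rougeL"], bertscore["f1"])
```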
Finally [37] improves on the previous strategies by further simplifying the search space. All of the previous models are tailored to image classification tasks, more recently a number of works focused on data augmentation specifically developed for detection problems. In Fig. 6 we provide a comparison of the augmentation operations performed by the aforementioned state of the art techniques and ours applied to the same image. #### 5.1.3 Results Image CaptioningWe present the results of the image captioning task using differently trained GIT models in Tab. 1 over the two chosen datasets. For the Artpedia dataset, the test results show a clear and consistent improvement using our augmentation technique over all the aforementioned metrics. Similarly, on ArtCap we report gains in most metrics, with only a slight decrease compared to standard training with no data augmentation. It is also easy to notice how the two datasets differ in terms of complexity. In fact, all the models struggle more with Artpedia [47], obtaining results that are much lower in absolute terms. This is due to the nature of the captions in Artpedia, which are composed of long sentences, with lots of details, as can be seen in Fig. 3 and Fig. 4. Therefore, n-gram-based metrics fail to effectively convey the quality of the captions. On the contrary, BERTScore, which captures semantic similarity between sentences rather than analyzing them from a structural point of view, achieves much higher results and confirms the improved quality of the captions generated with our data augmentation. On the contrary, ArtCap has shorter sentences so metrics such as BLEU, METEOR, ROUGE and CIDEr manage to obtain much higher results in absolute terms and can be used effectively in this comparison. In order to assess the quality of our data augmentation strategy we also present a comparison between our method and different state of the art methods for image data augmentation (from Section 5.1.2) in Tab. 2. It is important to note that while our augmentation approach is beneficial to the model, other augmentation techniques actually hurt performance. Intuitively we can infer that data augmentation strategies such as the one we compare our method against are engineered for classification tasks and might not might semantically invariant with reference to image captioning. Image RetrievalFor the retrieval task we test CLIP [40] using a similar setting to image captioning. We test first in a zero-shot configuration and then with and without data augmentation. The CLIP model is pre-trained on the YFCC dataset [50] when performing zero-shot retrieval. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} Dataset & Model & B@1 & B@2 & B@3 & B@4 & METEOR & ROUGE & CIDEr & BERTScore \\ \hline \multirow{4}{*}{**Artpedia**} & GIT\({}_{b}\) (zero-shot) & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0144 & 0.0749 & 0.0144 & 0.6905 \\ & GIT\({}_{b}\) w/o DA & 0.0179 & 0.0088 & 0.0046 & 0.0026 & 0.0385 & 0.1433 & 0.0505 & 0.7291 \\ & GIT\({}_{b}\) w/ DA & **0.0184** & **0.0092** & **0.0048** & **0.0027** & **0.0390** & **0.1479** & **0.0673** & **0.7316** \\ \cline{2-10} & BLIP\({}_{b}\) (zero-shot) & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0161 & 0.0830 & 0.0043 & 0.7112 \\ & BLIP\({}_{b}\) w/o DA & 0.0050 & 0.0026 & 0.0014 & 0.0009 & 0.0331 & 0.1568 & 0.0766 & 0.7262 \\ & BLIP\({}_{b}\) w/ DA & **0.0118** & **0.0062** & **0.0035** & **0.0020** & **0.0369** & **0.1658** & **0.0906** & **0.7291** \\ \hline \hline \multirow{4}{*}{**ArtCap**} & GIT\({}_{b}\) (zero-shot) & 0.3993 & 0.2541 & 0.1548 & 0.0888 & 0.1237 & 0.3128 & 0.2114 & 0.7877 \\ & GIT\({}_{b}\) w/o DA & 0.7311 & 0.5675 & 0.4263 & 0.3196 & 0.2360 & 0.5148 & 0.6263 & **0.8752** \\ \cline{1-1} & GIT\({}_{b}\) w/ DA & **0.7475** & **0.5825** & **0.4407** & **0.3321** & **0.2376** & **0.5166** & **0.6445** & 0.8737 \\ \cline{1-1} \cline{2-10} & BLIP\({}_{b}\) (zero-shot) & 0.6224 & 0.4007 & 0.2487 & 0.1512 & 0.1606 & 0.3951 & 0.3467 & 0.8098 \\ \cline{1-1} & BLIP\({}_{b}\) w/o DA & **0.7710** & **0.5972** & 0.4515 & 0.3343 & 0.2442 & 0.5128 & 0.6851 & **0.8759** \\ \cline{1-1} & BLIP\({}_{b}\) w/ DA & 0.7654 & 0.5909 & **0.4541** & **0.3491** & **0.2466** & **0.5170** & **0.6862** & 0.8748 \\ \end{tabular} \end{table} Table 1: Image Captioning results on Artpedia and ArtCap using the GIT [54] and BLIP models, measuring BLEU (\(n\)-grams 1 to 4), METEOR, ROUGE, CIDEr and BERTScore metrics. As in the previous task, the results shown in Tab. 3 present a clear indication of improved performance on the retrieval problem. Tab. 4 instead compares our results with the best ones proposed in [47] using the same experimental protocol as the authors, i.e. by fixing the maximum number of retrievable items to \(N=100\). ### Qualitative Results Due to the subjective nature of the task, it is necessary to perform a visual inspection to better understand how the models behave under different data regimes. In this section, we present a sample of qualitative results to better appreciate the effect of our method. The example presented in Fig. 7 shows the effectiveness of our approach in enriching the quality of the descriptions by comparing the output of the same captioning model trained under different settings. While the pre-trained model tends to offer vague but correct descriptions even in a zero-shot setting, it is necessary to fine-tune the model on the target dataset in order to match the language used in the Artpedia dataset. Our data-augmented finetuning helps the model obtain a better representation of fine visual details in the dataset, allowing it to produce richer captions using the task-related technical knowledge that a large internet-wide trained model might be missing. \begin{table} \begin{tabular}{l|c|c|c} Model & Task & R@1 & R@5 \\ \hline X-Attn GloVe [47] & im2t & 0.086 & 0.227 \\ CLIP - w/ DA & im2t & **0.090** & **0.230** \\ \hline X-Attn GloVe [47] & t2im & 0.041 & 0.136 \\ CLIP - w/ DA & t2im & **0.090** & **0.250** \\ \end{tabular} \end{table} Table 4: We compare our results using the same experimental protocol as in [47] using \(N=100\) retrievable items.
We report Recall @1 and @5, testing both in the image-to-text (im2t) setting and in the text-to-image (t2im) setting. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} & No aug & AutoAugment & RandAugment & AugMix & TrivialAugment & Ours \\ \hline Artpedia & 0.0505 & 0.0583 & 0.0533 & 0.0536 & 0.0510 & **0.0673** \\ ArtCap & 0.6263 & 0.5829 & 0.6239 & 0.5717 & 0.5849 & **0.6445** \\ \end{tabular} \end{table} Table 2: Comparison of CIDEr scores with the GIT [54] model trained using our proposed diffusion augmentation and other state-of-the-art augmentation techniques for the image captioning task: AutoAugment [13], RandAugment [14], AugMix [23], and TrivialAugment [36]. Figure 6: Sample of images produced by different augmentation methods: No augmentation, AutoAugment [13], RandAugment [14], AugMix [23], TrivialAugment [36], and Ours. \begin{table} \begin{tabular}{l|l|c|c|c} Model & Task & R@1 & R@5 & R@10 \\ \hline CLIP - (zero-shot) & im2t & 0.0853 & 0.1557 & 0.2096 \\ CLIP - w/o DA & im2t & 0.1048 & 0.2081 & 0.2665 \\ CLIP - w/ DA & im2t & **0.1108** & **0.2096** & **0.2740** \\ \hline CLIP - (zero-shot) & t2im & 0.0644 & 0.1751 & 0.2290 \\ CLIP - w/o DA & t2im & **0.0883** & 0.1751 & 0.2305 \\ CLIP - w/ DA & t2im & 0.0868 & **0.1976** & **0.2470** \\ \end{tabular} \end{table} Table 3: Test on Artpedia on the retrieval task with CLIP using a ResNet50 pretrained on YFCC [50]. We report Recall @1, @5, and @10. We test both in the image-to-text (im2t) setting and in the text-to-image (t2im) setting. ## 6 Conclusions This paper presented a technique for augmenting and better exploiting fine art datasets, with the intent of making semantically complex visual art easier to digitalise, to access, and to retrieve for the general public. In the field of cultural heritage, a feature such as the uniqueness of the artworks can become an obstacle for machine learning techniques that require large amounts of data. At the same time, the usual augmentation techniques, such as image flipping, random brightness change, and random hue change, do not suit the task, as they semantically alter the original data point by changing the small visual details that are actually meaningful. Therefore, our contribution aims at semantically enriching popular pretrained LLMs by leveraging expert knowledge to create a more sophisticated image data augmentation pipeline. ## Acknowledgements This work is partially supported by the European Commission under the European Horizon 2020 Programme, Grant No. 101004545-ReInHerit.
2306.00392
Coneheads: Hierarchy Aware Attention
Attention networks such as transformers have achieved state-of-the-art performance in many domains. These networks rely heavily on the dot product attention operator, which computes the similarity between two points by taking their inner product. However, the inner product does not explicitly model the complex structural properties of real world datasets, such as hierarchies between data points. To remedy this, we introduce cone attention, a drop-in replacement for dot product attention based on hyperbolic entailment cones. Cone attention associates two points by the depth of their lowest common ancestor in a hierarchy defined by hyperbolic cones, which intuitively measures the divergence of two points and gives a hierarchy aware similarity score. We test cone attention on a wide variety of models and tasks and show that it improves task-level performance over dot product attention and other baselines, and is able to match dot-product attention with significantly fewer parameters. Our results suggest that cone attention is an effective way to capture hierarchical relationships when calculating attention.
Albert Tseng, Tao Yu, Toni J. B. Liu, Christopher De Sa
2023-06-01T06:53:14Z
http://arxiv.org/abs/2306.00392v2
# Coneheads: Hierarchy Aware Attention ###### Abstract Attention networks such as transformers have achieved state-of-the-art performance in many domains. These networks rely heavily on the dot product attention operator, which computes the similarity between two points by taking their inner product. However, the inner product does not explicitly model the complex structural properties of real world datasets, such as hierarchies between data points. To remedy this, we introduce cone attention, a drop-in replacement for dot product attention based on hyperbolic entailment cones. Cone attention associates two points by the depth of their lowest common ancestor in a hierarchy defined by hyperbolic cones, which intuitively measures the divergence of two points and gives a _hierarchy aware_ similarity score. We test cone attention on a wide variety of models and tasks and show that it improves task-level performance over dot product attention and other baselines, and is able to match dot-product attention with significantly fewer parameters. Our results suggest that cone attention is an effective way to capture hierarchical relationships when calculating attention. ## 1 Introduction In recent years, attention networks have achieved highly competitive performance in a variety of settings, often outperforming highly-engineered deep neural networks [7; 10; 33]. The majority of these networks use dot product attention, which defines the similarity between two points \(u,v\in\mathbb{R}^{d}\) by their inner product \(u^{\top}v\)[33]. Although dot product attention empirically performs well, it also suffers from drawbacks that limit its ability to scale to and capture complex relationships in large datasets [35; 30]. The most well known of these issues is the quadratic time and memory cost of computing pairwise attention. While many works on attention mechanisms have focused on reducing the computational cost of dot product attention, few have considered the properties of the dot product operator itself [35; 8]. Many real world datasets exhibit complex structural patterns and relationships which may not be well captured by an inner product [19]. For example, NLP tasks often contain hierarchies over tokens, and images may contain clusters over pixels [36; 16]. Motivated by this, we propose a new framework based on hyperbolic entailment cones to compute attention between sets of points [12; 39]. Our attention mechanism, which we dub "cone attention", utilizes partial orderings defined by hyperbolic cones to better model hierarchical relationships between data points. More specifically, we associate two points by the depth of their lowest common ancestor (LCA) in the cone partial ordering, which is analogous to finding their LCA in a latent tree and captures how divergent two points are. Cone attention effectively relies on two components: hyperbolic embeddings and entailment cones. Hyperbolic embeddings, which use the underlying geometric properties of hyperbolic space, give low distortion embeddings of hierarchies that are not possible with Euclidean embeddings [28]. Entailment cones, which rely on geometric cones to define partial orders between points, allow us to calculate explicit relationships between points, such as their LCA [12; 39]. To the best of our knowledge, we are the first to define a hierarchy-aware attention operator with hyperbolic entailment cones. Functionally, cone attention is a drop-in replacement for dot product attention. 
We test cone attention in both "classical" attention networks and transformers, and empirically show that cone attention consistently improves end task performance across a variety of NLP, vision, and graph prediction tasks. Furthermore, we are able to match dot product attention with significantly fewer embedding dimensions, resulting in smaller models. To summarize, our contributions are: * We propose cone attention, a hierarchy aware attention operator that uses the lowest common ancestor of points in the partial ordering defined by hyperbolic entailment cones. * We evaluate cone attention on NLP, vision, and graph prediction tasks, and show that it consistently outperforms dot product attention and other baselines. For example, we achieve +1 BLEU and +2% ImageNet Top-1 Accuracy on the transformer_iwslt_de_en and DeiT-Ti models, respectively. * We test cone attention at low embedding dimensions and show that we can significantly reduce model size while maintaining performance relative to dot product attention. With cone attention, we can use 21% fewer parameters for the IWSLT'14 De-En NMT task. ## 2 Background In this section, we provide background on attention mechanisms, motivate the use of hyperbolic space to embed hierarchies, and describe our choice of entailment cones to encode partial orderings. ### Attention The attention operator has gained significant popularity as a way to model interactions between sets of tokens [33]. At its core, the attention operator \(A\) performs a "lookup" between a single query \(q_{i}\in\mathbb{R}^{d}\) and a set of keys \(k\in\mathbb{R}^{n\times d}\), and aggregates values \(v\in\mathbb{R}^{n\times d}\) associated with the keys to "read out" a single value for \(q_{i}\). Mathematically, this can be represented as \[A(q_{i},k)=C\sum_{j}\left(K(q_{i},k_{j})v_{j}\right) \tag{1}\] where \(C\sum_{j}K(q_{i},k_{j})=1\). In "traditional" dot product attention, which has generally superseded "older" attention methods such as Additive Attention and Multiplicative Attention [6; 18], \[K(q_{i},k_{j})=\exp\left(\frac{q_{i}k_{j}}{\sqrt{d}}\right)\qquad C=\frac{1}{ \sum_{j}K(q_{i},k_{j})}\qquad A(q_{i},k)=\text{softmax}\left(\frac{q_{i}k^{ \top}}{\sqrt{d}}\right)v \tag{2}\] similarity is scored with a combination of cosine similarity and embedding magnitudes. Figure 1: Overview of cone attention vs. dot product attention. In dot product attention (left), similarity scores are calculated with \(K=qk^{\top}\). In cone attention (right), \(q\) and \(k\) are first projected onto hyperbolic space. Then, pairwise similarity is calculated from the lowest common ancestor of points in the partial ordering defined by entailment cones. Cone attention allows us to explicitly encode notions of hierarchy in attention, and empirically gives better performance than dot product attention. Existing works have proposed replacing dot product attention to various degrees. A large body of works focus on efficiently computing dot product attention, such as with Random Fourier Features and low-rank methods [26; 35]. These methods generally perform worse than dot product attention, as they are approximations [40]. Some recent works, such as EVA attention, parameterize these approximations and effectively get a larger class of dot-product-esque attention methods [40]. EVA outperforms dot product attention, but at least on overlapping evaluation tasks, cone attention outperforms EVA [40]. Beyond this, Tsai et al. [32] replace \(K\) with compositions of classical kernels. 
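To make Eqs. (1)-(2) concrete, here is a minimal PyTorch sketch of scaled dot product attention (our own illustration, not code from the paper); cone attention will later swap out only the computation of the similarity scores \(K(q_{i},k_{j})\).

```python
import torch
import torch.nn.functional as F

def dot_product_attention(q, k, v):
    # q, k, v: (n, d) tensors of queries, keys, and values.
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # unnormalised similarities K(q_i, k_j) of Eq. (2)
    weights = F.softmax(scores, dim=-1)           # normalisation: each row of weights sums to 1
    return weights @ v                            # aggregate the values as in Eq. (1)
```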
Others extend dot product attention, such as by controlling the "width" of attention with a learned Gaussian distribution or by using an exponential moving average on inputs, although extensions do not usually depend on \(K\)[14; 19]. Closer to our work, Gulcehre et al. [13] introduced hyperbolic distance attention, which defines \(K(q_{i},k_{i})=\exp(-\beta d_{\mathbb{H}}(q_{i},k_{i})-c)\) where \(d_{\mathbb{H}}\) is the hyperbolic distance, \(\beta\in\mathbb{R}^{+}\), and \(c\in\mathbb{R}\). \(K\) can be interpreted as an analog of the distance from \(q_{i}\) to \(k_{i}\) on a latent tree. Finally, in an orthogonal direction, Tay et al. [29] ignore token-token interactions and synthesize attention maps directly from random alignment matrices. Whether token-token interactions are actually needed is outside the scope of this work, and we compare cone attention accordingly. ### Hyperbolic Space \(d\)-dimensional Hyperbolic space, denoted \(\mathbb{H}_{d}\), is a simply connected Riemannian manifold with constant negative sectional curvature [4]. This negative curvature results in geometric properties that makes hyperbolic space well-suited for embedding tree-like structures [28; 37]. For example, the volume of a hyperbolic ball grows exponentially with respect to its radius; in a tree, the number of leaves grows exponentially with respect to depth. Furthermore, \(d_{\mathbb{H}}(u,v)\approx d_{\mathbb{H}}(u,O)+d_{\mathbb{H}}(O,v)\), where \(O\) is the origin, which again mirrors a tree where \(d_{T}(u,v)=d_{T}(u,\text{LCA}(u,v))+d_{T}(\text{LCA}(u,v),v)\). Since hyperbolic space cannot be isometrically embedded into Euclidean space, it is usually represented on a subset of Euclidean space by a "model" of \(\mathbb{H}_{d}\)[13]. These models are isometric to each other, and the key differences between them lie in their different parameterizations, which allow for cleaner computations and visualizations for certain tasks. In this work, we primarily use the Poincare half-space model, which is the manifold \(\mathsf{H}^{d}=(\mathcal{U}^{d},g_{u})\) where \(\mathcal{U}^{d}=\{x\in\mathbb{R}^{d}:x_{d}>0\}\), \(g_{u}(x)=g_{e}/x_{d}^{2}\), and \(g_{e}\) is the Euclidean metric [4]. In the Poincare half-space, "special" points and curves have particularly nice Euclidean forms. Ideal points, or points at infinity, are the points where \(x_{d}=0\) (the "\(x\)-axis") and the single point \(x_{d}=\infty\) at which all lines orthogonal to the \(x\)-axis converge. Geodesics, the shortest path between two points, are Euclidean semicircles with the origin on the \(x\)-axis or vertical rays orthogonal to the \(x\)-axis. Horospheres, curves where all normal curves converge at the ideal point, are represented by either an Euclidean ball tangent to the \(x\)-axis or a horizontal hyperplane when the ideal point is \(x_{d}=\infty\). ### Entailment Cones Entailment cones in hyperbolic space were first introduced by Ganea et al. [12] to embed partial orders. The general concept of Ganea's entailment cones is to capture partial orders between points with membership relations between points and geodesically convex cones rooted at said points. That is, if \(u\in\text{the cone of }v\), then \(v\prec u\). Ganea's cones (figure 2 right) are defined on the Poincare ball by a radial angle function \(\psi(r)\), with an \(\epsilon\)-ball around the origin where cones are undefined [4; 12]. 
This makes learning complicated models with Ganea's cones difficult, as optimization on the Poincare ball is nontrivial and the \(\epsilon\)-ball negatively impacts embedding initializations [39; 12]. In this work, we instead use the shadow cone construction introduced in [39] and operate on the Poincare half-space, which makes computing our desired attention function numerically simpler. Shadow cones are defined by shadows cast by points and a single light source \(\mathcal{S}\), and consist of the penumbral and umbral settings (figure 2 left quadrant). In the penumbral setting, \(\mathcal{S}\) is a ball of fixed radius and points are points. The shadow and cone of \(u\) are both the region enclosed by geodesics through \(u\) tangent to \(\mathcal{S}\). In the umbral setting, \(\mathcal{S}\) is instead a point, and points are centers of balls of fixed radius. Here, the shadow of \(u\) is the region enclosed by geodesics tangent to the ball around \(u\) that intersect at \(\mathcal{S}\). However, to preserve transitivity, the cone of \(u\) is a subset of the shadow of \(u\) (see figure 2). The shadow cone formulation can also be achieved with subset relations between shadows instead of membership relations between points and cones, which may be conceptually clearer. **Infinite-setting Shadow Cones.** For penumbral cones, when \(\mathcal{S}\) is a ball with center at \(x_{d}=\infty\), \(\mathcal{S}\)'s boundary is a horosphere of user-defined height \(h\). Here, all shadows are defined by intersections of Euclidean semicircles of Euclidean radius \(h\). Notably, the infinite setting penumbral cone construction is similar to Ganea's cones under an isometry from the Poincare half-space to the Poincare ball where \(\mathcal{S}\) maps to the \(\epsilon\)-ball [39]. For umbral cones, when \(\mathcal{S}\) is \(x_{d}=\infty\), shadows are regions bounded by Euclidean lines perpendicular to the \(x\)-axis. Unlike Ganea's cones and penumbral cones, umbral cones are not geodesically convex [39]. That is, the shortest path between two points in an umbral cone may not necessarily lie in the cone. In a tree, this corresponds to the shortest path between two nodes not being in the subtree of their LCA, which is not possible. Empirically, while still better than dot product attention, umbral attention usually performs worse than penumbral attention. ### Lowest Common Ancestor (LCA) and Least Upper Bound The LCA of two nodes \(u,v\) in a directed acyclic graph is the lowest (deepest in hierarchy) node that is an ancestor of both \(u\) and \(v\). Although the two terms are similar, the LCA is _not_ the same as the least upper bound of a partial ordering. The least upper bound of two points \(x,y\) in a partial ordering, denoted \(\sup(x,y)\), is the point \(p\) such that \(p\preceq x,y\) and \(\forall q\) where \(q\preceq x,y\), \(q\preceq p\). The key difference is that all other upper bounds must precede the least upper bound, while not all ancestors of \(u\) and \(v\) must also be ancestors of LCA\((u,v)\). Furthermore, \(p\) may not actually exist. ## 3 Hierarchy Aware Attention Here, we describe cone attention using the shadow cone construction, and discuss projection functions onto \(\mathbb{H}^{d}\) that allow us to use cone attention within attention networks. All definitions use the infinite-setting shadow cones, and proofs and derivations are in the appendix. For clarity, we refer to Ganea's entailment cones as "Ganea's cones" and the general set of cones that captures entailment relations (e.g. 
Ganea's cones and shadow cones) as "entailment cones" [12; 39]. Our methods are agnostic to the choice of entailment cone, and can also be used with Ganea's cones. Figure 2: (Left Quadrant) Clockwise from top left: finite setting penumbral cone, infinite setting penumbral cone, infinite setting umbral cone, and finite setting umbral cone. All figures are in H\({}^{2}\). Shadows are represented by shaded regions, and cones are enclosed in green. In all figures, \(u\prec v\) since \(v\) is in the cone of \(u\). (Right) Ganea's entailment cones, as defined on the Poincaré ball. Note the \(\epsilon\)-ball where cones are not defined, which makes optimization nontrivial. Figure from [12]. ### Cone Attention We wish to associate \(u,v\in\mathsf{H}^{d}\) by their LCA in some latent tree \(T\), which is analogous to finding their LCA, denoted \(\sup_{2}(u,v)\), in the partial ordering defined by entailment cones. Formally, \[\sup_{2}(u,v)=r\left(\operatorname*{arg\,max}_{C:u,v\in C}d_{\mathbb{H}}(\mathcal{S},r(C))\right) \tag{3}\] where \(r(C)\) denotes the root of a cone \(C\). This corresponds to finding the cone that is farthest away from \(\mathcal{S}\), which is the root of all hierarchies in the shadow cones construction. When \(d=2\), \(\sup_{2}(u,v)\) also has the nice property of being the least upper bound \(\sup(u,v)\) in the partial ordering defined by shadow cones. Then, we define the similarity between \(u\) and \(v\) as \[K(u,v)=f(d_{\mathbb{H}}(\sup_{2}(u,v),\mathcal{S})) \tag{4}\] where \(f\) is a user-defined monotonically increasing function. If \(K(u,v)>K(u,w)\), then \(d_{\mathbb{H}}(\sup_{2}(u,v),\mathcal{S})>d_{\mathbb{H}}(\sup_{2}(u,w),\mathcal{S})\), which implies that \(u\) and \(v\) have a more recent "ancestor" than \(u\) and \(w\). Thus, \(K\) gives a higher similarity score to points that have a more recent "ancestor" in \(T\). In the infinite-setting shadow cone construction, \(\sup_{2}(u,v)\) is the root of the minimum-height (literally the lowest) cone that contains both \(u\) and \(v\), or \(\sup_{2}(u,v)=r\left(\operatorname*{arg\,min}_{C:u,v\in C}r(C)_{d}\right)\). Using this, we provide definitions for \(K(u,v)\) in the infinite-setting shadow cone construction. Both the umbral and penumbral definitions correspond to the Euclidean height of \(\sup_{2}(u,v)\). In the penumbral setting, when \(\sup_{2}(u,v)\) does not exist, we return the Euclidean height of the lowest light source for which \(\sup_{2}(u,v)\) exists. \(x_{d}\) denotes the last dimension of \(x\), and \(x_{:-1}\) denotes the first \(d-1\) dimensions of \(x\). \(\gamma\in\mathbb{R}^{+}\) corresponds to the softmax "temperature" in attention. In the penumbral setting, \(r\in\mathbb{R}^{+}\) is the user-defined height of the horosphere light source. In the umbral setting, \(r\in\mathbb{R}^{+}\) is the user-defined radius of the ball centered at each point. **Definition 1**.: _Penumbral Attention:_ \[K(u,v)=\exp\left(-\gamma\max\left(u_{d},v_{d},\sqrt{r^{2}-\left(\frac{\sqrt{r^{2}-u_{d}^{2}}+\sqrt{r^{2}-v_{d}^{2}}-\|u_{:-1}-v_{:-1}\|}{2}\right)^{2}}\right)\right) \tag{5}\] _when there exists a cone that contains \(u\) and \(v\), and_ \[K(u,v)=\exp\left(-\gamma\sqrt{\left(\frac{\|u_{:-1}-v_{:-1}\|^{2}+u_{d}^{2}-v_{d}^{2}}{2\|u_{:-1}-v_{:-1}\|}\right)^{2}+v_{d}^{2}}\right) \tag{6}\] _otherwise.
There exists a cone that contains \(u\) and \(v\) when_ \[\left(\|u_{:-1}-v_{:-1}\|-\sqrt{r^{2}-u_{d}^{2}}\right)^{2}+v_{d}^{2}<r^{2} \tag{7}\] Figure 3: In this example using penumbral cones in \(\mathsf{H}^{d}\), \(\sup_{2}(u,v)\) is the lowest common ancestor of \(u\) and \(v\). The red region is the set of points \(P\) s.t. \(p\in P\preceq u,v\). Of these, \(\sup_{2}(u,v)\) is the lowest point whose cone (light gray) contains both \(u\) and \(v\). Here, \(\mathcal{S}\) is the root of all hierarchies, and points closer to \(x_{d}=0\) are closer to the "leaves" of hierarchies. If \(d=2\), then \(\sup_{2}(u,v)\) is also the least upper bound of \(u\) and \(v\) in the partial ordering defined by entailment cones. **Definition 2**.: _Umbral Attention:_ \[K(u,v)=\exp\left(-\gamma\max\left(u_{d},v_{d},\frac{\|u_{:-1}-v_{:-1}\|}{2\sinh(r)}+\frac{u_{d}+v_{d}}{2}\right)\right) \tag{8}\] These definitions possess a rather interesting property: when \(u_{d}=v_{d}\) and \(K\) is normalized across a set of \(v\)s, cone attention reduces to the Laplacian kernel \(K(u_{:-1},v_{:-1})=\exp(-\gamma\|u_{:-1}-v_{:-1}\|)\). Since Euclidean space is isomorphic to a horosphere, and \(u\) and \(v\) are on the same horosphere if \(u_{d}=v_{d}\), cone attention can also be seen as an extension of the Laplacian kernel [22]. Cone attention and dot product attention both take \(O(n^{2}d)\) time to compute pairwise attention between two sets of \(n\) \(d\)-dimensional tokens \(q,k\in\mathbb{R}^{n\times d}\)[35]. However, cone attention takes more operations than dot product attention, which computes \(qk^{\top}\). In transformers, our PyTorch cone attention implementations with torch.compile were empirically 10-20% slower than dot product attention with torch.bmm (cuBLAS) [24]. torch.compile is not optimal, and a raw CUDA implementation of cone attention would likely be faster and narrow the speed gap between the two methods [24]. ### Mapping Functions Here, we discuss mappings from Euclidean space to the Poincare half-space. These mappings allow us to use cone attention within larger models. The canonical map in manifold learning is the exponential map \(\text{Exp}_{x}(v)\), which maps a vector \(v\) from the tangent space of a manifold \(\mathcal{M}\) at \(x\) onto \(\mathcal{M}\) by following the geodesic corresponding to \(v\)[27; 4]. In the Poincare half-space [38], \[\text{Exp}_{x}(v)_{:-1}=x_{:-1}+\frac{x_{d}}{\|v\|/\tanh(\|v\|)-v_{d}}v_{:-1}\qquad\text{Exp}_{x}(v)_{d}=\frac{x_{d}}{\cosh(\|v\|)-v_{d}\sinh(\|v\|)/\|v\|} \tag{9}\] While \(\text{Exp}_{x}(v)\) is geometrically well motivated, it suffers from numerical instabilities when \(\|v\|\) is very large or small. These instabilities make using the exponential map in complicated models such as transformers rather difficult. Using fp64 reduces the risk of numerical over/underflows, but fp64 significantly reduces performance on GPUs, which is highly undesirable [1]. Instead, we use maps of the form \((x_{:-1},x_{d})\rightarrow(x_{:-1}f(x_{d}),f(x_{d}))\) that preserve the exponential volume of hyperbolic space while offering better numerics for large-scale optimization. Our use of alternatives to \(\text{Exp}_{x}(v)\) follows Gulcehre et al. [13]'s pseudopolar map onto the Hyperboloid. Geometrically, since \(n\)-dimensional Euclidean space is isomorphic to a horosphere in \(\mathbb{H}^{n+1}\), these maps correspond to selecting a horosphere with \(f(x_{d})\) and then projecting \(x_{:-1}\) onto that horosphere [22].
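Before turning to the specific maps below, note that Definition 2 can be computed directly once queries and keys are expressed in half-space coordinates. The following is a minimal PyTorch sketch of the umbral score of Eq. (8); it is our own illustration, not the authors' released implementation.

```python
import math
import torch

def umbral_similarity(u, v, gamma=1.0, r=1.0):
    # u, v: (..., d) points in the Poincare half-space; the last coordinate is the height u_d > 0.
    dist = torch.norm(u[..., :-1] - v[..., :-1], dim=-1)   # ||u_{:-1} - v_{:-1}||
    ud, vd = u[..., -1], v[..., -1]
    # Euclidean height of the root of the lowest umbral cone containing both points, Eq. (8).
    lca_height = torch.maximum(torch.maximum(ud, vd),
                               dist / (2.0 * math.sinh(r)) + (ud + vd) / 2.0)
    return torch.exp(-gamma * lca_height)
```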
To achieve exponential volume as \(x_{d}\rightarrow-\infty\), we use functions \(f\) of the form \(\exp(\cdot)\). For the infinite-setting umbral construction, since there is no restriction on where points can go in the Poincare half-space, we map \(x\in\mathbb{R}^{d}\) to \(\mathbb{H}^{d}\) with \(\psi:\mathbb{R}^{d}\rightarrow\mathbb{H}^{d}\): \[\psi(x)_{:-1}=x_{:-1}\exp(x_{d})\qquad\psi(x)_{d}=\exp(x_{d}) \tag{10}\] For the infinite-setting penumbral construction, since the mapped point cannot enter the light source at height \(h\), we map \(x\in\mathbb{R}^{d}\) to the area below the light source with \(\xi:\mathbb{R}^{d}\rightarrow\mathbb{H}^{d}\): \[\xi(x)_{:-1}=x_{:-1}\frac{h}{1+\exp(-x_{d})}\qquad\xi(x)_{d}=\frac{h}{1+\exp(-x_{d})} \tag{11}\] While \(\xi\) is structurally the sigmoid operation, note that \(\text{sigmoid}(x)=\exp(-\text{softplus}(-x))\). Since \(\text{softplus}(x)\approx x\) for large values of \(x\), \(\xi\) preserves the exponential volume properties we seek. ## 4 Experiments Here, we present an empirical evaluation of cone attention in various attention networks. For each model we test, our experimental procedure consists of changing \(K\) in attention and training a new model from scratch. Unless otherwise noted in the appendix, we use the code and training scripts that the authors of each original model released. We assume released hyperparameters are tuned for dot product attention, as these models were state-of-the-art (SOTA) when new. ### Baselines Our main baseline is dot product attention, as it is the most commonly used form of attention in modern attention networks. Additionally, we compare cone attention against Gulcehre et al. [13]'s hyperbolic distance attention and the Laplacian kernel. To the best of our knowledge, few other works have studied direct replacements of the dot product for attention. Gulcehre et al. [13]'s original hyperbolic distance attention formulation not only computed the similarity matrix \(K\) in the Hyperboloid, but also aggregated values \(v\) in hyperbolic space by taking the Einstein midpoint with respect to weights \(\alpha\) in the Klein model (see appendix for definitions) [27]. However, the Einstein midpoint, defined as \[m(\alpha,v)=\sum_{i}\left(\frac{\alpha_{i}\gamma(v_{i})}{\sum_{j}\alpha_{j}\gamma(v_{j})}\right)v_{i} \tag{12}\] where \(\gamma(v)=1/\sqrt{1-\|v\|^{2}}\), does not actually depend on how \(\alpha\) is computed. That is, \(\alpha\) could be computed with Euclidean dot product attention or cone attention and \(m(\alpha,v)\) would still be valid. The focus of our work is on computing similarity scores, so we do not use hyperbolic aggregation in our experiments. We test hyperbolic distance attention using both Gulcehre et al. [13]'s original pseudopolar projection onto the Hyperboloid model and our \(\psi\) map onto the Poincare half-space. ### Models We use the following models to test cone attention and the various baselines. These models span graph prediction, NLP, and vision tasks, and range in size from a few thousand to almost 250 million parameters. While we would have liked to test larger models, our compute infrastructure prevented us from feasibly training billion-parameter models from scratch. **Graph Attention Networks.** Graph attention networks (GATs) were first introduced by Velickovic et al. [34] for graph prediction tasks.
GATs use self-attention layers to compute node-level attention maps over neighboring nodes. The original GAT used a concatenation-based attention mechanism and achieved SOTA performance on multiple transductive and inductive graph prediction tasks [34]. We test GATs on the transductive Cora and inductive multi-graph PPI datasets [20; 15]. **Neural Machine Translation (NMT) Transformers.** Transformers were first applied to NMT in Vaswani et al. [33]'s seminal transformer paper. We use the fairseq transformer_iwslt_de_en architecture to train a German to English translation model on the IWSLT'14 De-En dataset [23; 11]. This architecture contains 39.5 million parameters and achieves near-SOTA performance on the IWSLT'14 De-En task for vanilla transformers [23]. As this model is the fastest transformer to train out of the tested models, we use it for ablations. **Vision Transformers.** Vision transformers (ViT) use transformer-like architectures to perform image classification [10]. In a ViT, image patches are used as tokens in a transformer encoder to classify the image. We use the _Data Efficient_ Vision Transformer (DeiT) model proposed by FAIR, which uses a student-teacher setup to improve the data efficiency of ViTs [31]. DeiTs and ViTs share the same architecture, and the only differences are how they are trained and the distillation token. We train DeiT-Ti models with 5 million parameters on the ImageNet-1K dataset [31; 9]. **Adaptive Input Representations for Transformers.** Adaptive input representations were introduced by Baevski and Auli for transformers in language modeling tasks [5]. In adaptive inputs, rarer tokens use lower dimensional embeddings, which serves as a form of regularization. We use the fairseq transformer_lm_wiki103 architecture (246.9M parameters) and train models on the WikiText-103 language modeling dataset with a block size of 512 tokens [23; 21]. We also test the same architecture without adaptive inputs, which has 520M parameters. This version converges in significantly fewer iterations, allowing us to train such a large model. **Diffusion Transformers.** Diffusion transformers (DiT) are diffusion models that replace the U-Net backbone with a transformer [25]. DiTs operate on latent space patches, so we expect there to be less hierarchical information than when taking image patches. We use DiTs to test cone attention when the data is less hierarchical. We train DiT-B/4 models with 130M parameters on ImageNet-1K [25; 9]. ## 5 Results Table 1 summarizes the performance of cone attention across various models and tasks. Both penumbral and umbral attention significantly outperform baselines on the NMT IWSLT and DeiT-Ti Imagenet tasks. For DeiT-Ti, penumbral attention achieves 74.24% top-1 and 92.39% top-5 accuracy, which matches a distilled DeiT-Ti with more parameters (74.5% / 91.9%) [31]. On the GAT tasks, cone attention again outperforms baselines and achieves Graph Convolutional Network-level performance on the PPI dataset. Interestingly, almost all baselines, including dot product attention, outperform the concatenation-based attention in the original GAT paper. The two GAT tasks also reveal some interesting differences between penumbral and umbral attention. The Cora citation dataset is more tree-like with an average of 2 edges per node and chronological dependencies between nodes (a paper cannot cite a newer paper), while the PPI dataset is closer to a clustered graph with 14.3 edges per node [34; 15; 20].
As noted in [39], umbral cones appear to be better for strict hierarchies, while penumbral cones capture more complicated relationships such as those in the PPI dataset. The adaptive inputs and DiT models each take almost a week to train on 4 NVIDIA A6000 GPUs, so we only evaluate penumbral attention and dot product attention on these models [5; 25]. The adaptive inputs method regularizes models by reducing \(d\) for rare words. However, we expect rarer words to be closer to the leaves of a word hierarchy, so the adaptive inputs method also acts as a hierarchical prior on the data [36]. We use adaptive inputs to test cone attention when combined with other hierarchical priors. Penumbral attention outperforms dot product attention with or without adaptive inputs, but the gap between the two methods is larger without adaptive inputs. This matches our intuition that adaptive inputs capture some hierarchical information via regularization, and also indicates that cone attention is compatible with other hierarchical methods. In the DiT model, patches are taken at the latent-space level, which we suspect gives less hierarchical information than when patches are taken from an input image [25]. Here, cone attention still outperforms dot product attention, but to a lesser degree. This mirrors our expectation that the DiT model does not benefit as much from hierarchical attention, and verifies that in such cases, using cone attention does not hurt performance. ### Effect of Mapping Functions Table 2 compares \(\psi\) and \(\xi\) to the exponential map and the pseudopolar map (Section 3.2) on the NMT IWSLT task. Here, we use the exponential map at the origin of \(\mathsf{H}^{d}\), \(O=(0,0,...,1)\). To compare \(\text{Exp}_{O}(v)\) to \(\psi\), we first take \(\text{Exp}_{O}(v)\) and then project the point onto the boundary of \(L\) if \(\text{Exp}_{O}(v)\) is in \(L\). In the infinite penumbral setting, this corresponds to taking \(\text{Exp}_{O}(v)_{d}=\min(\text{Exp}_{O}(v)_{d},h)\). \(\psi\) and \(\xi\) generally perform much better than taking the exponential map at the origin \(O\), which suggests that \(\psi\) and \(\xi\) have better optimization properties. For \(d_{\mathbb{H}}\) attention, Gulcehre et al.'s pseudopolar map slightly outperforms \(\xi\). However, Table 1 indicates that outside of this specific task, using the pseudopolar map and the Hyperboloid is less numerically stable. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{NMT IWSLT (BLEU \(\uparrow\))} & DeiT-Ti Imagenet & \multirow{2}{*}{GAT Cora / PPI (Acc. \(\uparrow\))} \\ & & Top-1 / Top-5 (Acc. \(\uparrow\)) & \\ \hline Model Default & 34.56 & 72.202 / 91.124 & 0.831 / 0.977 \\ Dot Product & 34.56 & 72.202 / 91.124 & 0.834 / 0.985 \\ Penumbral\({}^{\dagger}\) & **35.56** & **74.240 / 92.388** & 0.835 / **0.990** \\ Umbral\({}^{\dagger}\) & 35.07 & 73.118 / 91.610 & **0.836** / 0.989 \\ \(d_{\mathbb{H}}\) H\({}^{n}\) \(\xi\) & 32.54 & 49.190\({}^{*}\) / 74.682\({}^{*}\) & 0.834 / 0.987 \\ \(d_{\mathbb{H}}\) Hyperboloid & 33.80 & 0.098\({}^{*}\) / 0.450\({}^{*}\) & 0.13\({}^{*}\) / 0.989 \\ Laplacian Kernel & 34.68 & 71.250 / 90.836 & 0.823 / 0.986 \\ \hline \hline \multirow{2}{*}{Method} & Adaptive Inputs WikiText-103 & Without Adaptive & DiT-B/4 @ 400K Steps \\ & 0 / 480 Context Window (Ppl.
\(\downarrow\)) & Inputs & (FID-50K \(\downarrow\)) \\ \hline Dot Product & 20.86 / 19.22 & 26.62 / 24.73 & 68.9 \\ Penumbral\({}^{\dagger}\) & **20.72 / 19.01** & **26.44 / 24.31** & **67.7** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of various attention methods across models and tasks. \({}^{*}\) indicates the model failed to converge or ran into NaN errors. \(\uparrow\) indicates higher is better, and \(\downarrow\) indicates lower is better. Cone attention methods (\({}^{\dagger}\)) generally outperform dot product attention and other baselines. Model default refers to a model's default attention method, which is dot product attention except for GATs. ### Attention Efficiency and Model Size Figure 4 shows the performance of cone attention vs. dot product attention at low _token_ (\(q,k\)) embedding dimensions (\(d\) from before) on the NMT IWSLT task. Both umbral and penumbral attention are able to achieve significantly better performance than dot product attention in this regime, matching dot product attention at \(d=128\) with only \(d=16\). For this model, using 16 dimensions reduces the number of parameters from 39.5M to 31.2M. Table 3 shows the performance of DeiT-Ti at \(d=16\). While not able to match dot product attention at \(d=64\), cone attention significantly outperforms dot product attention at \(d=16\). Furthermore, the performance gap is larger than at \(d=64\), which implies that cone attention is more parameter-efficient at capturing relationships in data. Table 1 mirrors this: cone attention gave a larger performance uplift over dot product attention in the two smaller architectures (transformer_iwslt_de_en and DeiT-Ti) than in the two larger architectures (transformer_lm_wiki103 and DiT-B/4). \begin{table} \begin{tabular}{l c c c} \hline \hline Method & \(\text{d}=64\) & \(\text{d}=16\) \\ \hline Dot Product & 72.202 / 91.124 & 64.470 / 86.516 \\ Penumbral & **74.240 / 92.388** & **69.758 / 89.858** \\ Umbral & 73.118 / 91.610 & 66.802 / 88.228 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of DeiT-Ti at 64 (default) and 16 dimensions. Figure 4: Performance of cone and dot product attention at low dimensions on NMT IWSLT. The base architecture uses 128 dimensions, and cone attention is able to match dot product attention with only 16 dimensions. For this model, this allows us to use 21% fewer parameters to reach performance parity, indicating that cone attention more efficiently captures hierarchical information. ## 6 Conclusion We introduce cone attention, a _hierarchy-aware_ method for calculating attention. Cone attention relies on entailment cones and the geometric properties of hyperbolic space to capture complex structural patterns that dot product attention does not explicitly model. We test cone attention in a variety of attention networks ranging from a few thousand to a few hundred million parameters, and achieve consistent performance improvements in NLP, vision, and graph prediction tasks over dot product attention and baselines. Cone attention also matches dot product attention with significantly fewer embedding dimensions, opening the potential for smaller models. These results suggest that cone attention is an effective way to encode hierarchical relationships in attention, and can potentially improve task-level performance in a wide variety of models and task domains. **Future Work.** It remains to be seen how cone attention scales to very large models.
Beyond this, [12] and [39] suggest that hyperbolic embeddings are sensitive to initialization, which implies that different transformer weight initializations may affect cone attention.
2303.05434
The Rosický Tangent Categories of Algebras over an Operad
Tangent categories provide a categorical axiomatization of the tangent bundle. There are many interesting examples and applications of tangent categories in a variety of areas such as differential geometry, algebraic geometry, algebra, and even computer science. The purpose of this paper is to expand the theory of tangent categories in a new direction: the theory of operads. The main result of this paper is that both the category of algebras of an operad and its opposite category are tangent categories. The tangent bundle for the category of algebras is given by the semi-direct product, while the tangent bundle for the opposite category of algebras is constructed using the module of K\"ahler differentials, and these tangent bundles are in fact adjoints of one another. To prove these results, we first prove that the category of algebras of a coCartesian differential monad is a tangent category. We then show that the monad associated to any operad is a coCartesian differential monad. This also implies that we can construct Cartesian differential categories from operads. Therefore, operads provide a bountiful source of examples of tangent categories and Cartesian differential categories, which both recaptures previously known examples and also yield new interesting examples. We also discuss how certain basic tangent category notions recapture well-known concepts in the theory of operads.
Sacha Ikonicoff, Marcello Lanfranchi, Jean-Simon Pacaud Lemay
2023-03-09T17:25:37Z
http://arxiv.org/abs/2303.05434v5
# The Rosicky Tangent Categories of Algebras over an Operad ###### Abstract Tangent categories provide a categorical axiomatization of the tangent bundle. There are many interesting examples and applications of tangent categories in a variety of areas such as differential geometry, algebraic geometry, algebra, and even computer science. The purpose of this paper is to expand the theory of tangent categories in a new direction: the theory of operads. The main result of this paper is that both the category of algebras of an operad and its opposite category are tangent categories. The tangent bundle for the category of algebras is given by the semi-direct product, while the tangent bundle for the opposite category of algebras is constructed using the module of Kahler differentials, and these tangent bundles are in fact adjoints of one another. To prove these results, we first prove that the category of algebras of a coCartesian differential monad is a tangent category. We then show that the monad associated to any operad is a coCartesian differential monad. This also implies that we can construct Cartesian differential categories from operads. Therefore, operads provide a bountiful source of examples of tangent categories and Cartesian differential categories, which both recaptures previously known examples and also yield new interesting examples. We also discuss how certain basic tangent category notions recapture well-known concepts in the theory of operads. **Acknowledgements.** The authors would first like to thank Martin Frankland for providing a very useful result about adjoint tangent structure. The authors would also like to thank Martin Frankland (again), Geoff Cruttwell and Dorette Pronk for very useful discussions. For this research, the first author was financially supported by a PIMS-CNRS Postdoctoral Fellowship, and the third author was financially supported by a JSPS Postdoctoral Fellowship, Award #: P21746. ## 1 Introduction Tangent categories provide a categorical description of the tangent bundle, one of the fundamental structures of differential geometry. Tangent categories were introduced by Rosicky in [31], and then, thirty years later, generalized and further developed by Cockett and Cruttwell in [5]. Briefly, a tangent category (Definition 2.1.1) is a category which comes equipped with an endofunctor \(\mathsf{T}\) where for every object \(A\), \(\mathsf{T}(A)\) is interpreted as a generalized version of a tangent bundle over \(A\). Furthermore, the functor \(\mathsf{T}\) also comes equipped with five (or six) natural transformations that satisfy various axioms that capture the basic properties of the classical tangent bundle over a smooth manifold including natural projection, being a vector bundle, local triviality, linearity of the derivative, etc. Nowadays, the theory of tangent categories is a well established area of research and fits into the larger theory of differential categories. As expected, the theory of tangent categories and its applications are fundamentally linked to differential geometry. Many important concepts from differential geometry can be generalized in a tangent category, including vector fields [6], Euclidean spaces [3], vector bundles [8, 27], connections [7], differential equations [9], differential forms and de Rham cohomology [12]. Most well known examples of tangent categories are based on differential geometry and synthetic differential geometry [15]. 
Indeed, the canonical example of a tangent category is the category of smooth manifolds, where the tangent structure is induced by the classical tangent bundle of a smooth manifold. Recently, there has been an upswing on new interesting examples and novel applications of tangent categories beyond differential geometry such as in commutative algebra and algebraic geometry [11], and even in computer science, in particular in relation to differential linear logic [10] and the differential lambda calculus [29]. The objective of this paper is to further expand the theory and applications of tangent categories into a new direction: the theory of operads. Operad theory is a firmly established field of mathematics. Operads first originated as a useful tool in algebraic topology back in the late 1960's/early 1970's, in particular to characterize iterated loop spaces [30]. The theory of operads went through a reinvention period in the 1990's, shifting from a topological point of view to a more algebraic one. Since then, operads have found applications in many mathematical domains, including homological algebra, category and higher category theory, combinatorics, and algebraic deformation theory. Operads have even found applications outside the realm of mathematics, where they appear notably in some aspects of mathematical physics, computer sciences, and biology. For an overview of applications of operads and a more detailed introduction, we invite the readers to see [26, 14, 24]. Naively, an operad \(\mathcal{P}\) (Section 4) is a device that encodes a type of algebra structure on modules over a ring \(R\) - these operads are sometimes also referred to as algebraic operads. Every operad \(\mathcal{P}\) has a canonical monad associated to it, and the algebras of said monad are what we call the algebras of the operad \(\mathcal{P}\), or simply just the \(\mathcal{P}\)-algebras. Together, the algebras of an operad \(\mathcal{P}\) form a category \(\mathsf{ALG}_{\mathcal{P}}\). For many sorts of algebraic objects, there is an operad whose algebras are precisely those algebraic objects. For example, there is an operad \(\mathrm{Com}\) where the \(\mathrm{Com}\)-algebras are the commutative \(R\)-algebras and so \(\mathsf{ALG}_{\mathrm{Com}}\) is the category of commutative \(R\)-algebras (Example 4.1.3). The main objective of this paper is to show that both \(\mathsf{ALG}_{\mathcal{P}}\) and the opposite category \(\mathsf{ALG}_{\mathcal{P}}^{op}\) are tangent categories (Theorem 4.3.3 and Theorem 4.4.4), whose tangent bundles are adjoints to one another (Lemma 4.4.2). This is a generalization of the fact that \(\mathsf{ALG}_{\mathrm{Com}}\) and \(\mathsf{ALG}_{\mathrm{Com}}^{op}\) are both well-known examples of tangent categories, see for example [11] for full details. Briefly, the tangent bundle of a commutative \(R\)-algebra \(A\) in \(\mathsf{ALG}_{\mathrm{Com}}\) is given by the algebra of dual numbers over \(A\) (Example 4.3.5): \[\mathsf{T}(A):=A[\epsilon]\] In fact, \(\mathsf{ALG}_{\mathrm{Com}}\) was one of the main examples in Rosicky's original paper [31, Example 2]. On the other hand, the tangent bundle of a commutative algebra \(A\) in \(\mathsf{ALG}_{\mathrm{Com}}^{op}\) is given by the free symmetric algebra over \(A\) of its module of Kahler differentials (Example 4.4.6): \[\mathsf{T}^{\circ}(A):=\mathsf{Sym}_{A}\left(\Omega_{A}\right)\] The tangent category \(\mathsf{ALG}_{\mathrm{Com}}^{op}\) is closely related to algebraic geometry, as explained in [11]. 
Indeed, it is famously known that \(\mathsf{ALG}_{\mathrm{Com}}^{op}\), the opposite category of commutative algebras, is equivalent to the category of affine schemes over \(R\), the building blocks in algebraic geometry. Furthermore, Grothendieck himself calls \(\mathsf{T}^{\circ}(A)\) the "fibre tangent" (french for tangent bundle) of \(A\)[18, Definition 16.5.12.I], while Juhin calls it the tangent algebra of \(A\)[22, Section 2.6]. These two tangent bundles are related since \(\mathsf{T}\) and \(\mathsf{T}^{\circ}\), viewed as endofunctors on \(\mathsf{ALG}_{\mathrm{Com}}\), are mutual adjoints. This means that \(\mathsf{ALG}_{\mathrm{Com}}\) is a tangent category with adjoint tangent structure (Section 2.2). For any operad \(\mathcal{P}\), by generalizing the constructions for commutative algebras, we are able to obtain tangent category structures for both \(\mathsf{ALG}_{\mathcal{P}}\) and \(\mathsf{ALG}_{\mathcal{P}}^{op}\). For a \(\mathcal{P}\)-algebra \(A\), its tangent bundle in \(\mathsf{ALG}_{\mathcal{P}}\) is given by the semi-direct product with itself (Section 4.3): \[\mathsf{T}(A):=A\ltimes A\] where the semi-direct product \(\ltimes\) is a generalization of the dual numbers construction for \(\mathcal{P}\)-algebras [26, Section 12.3.2]. On the other hand, the tangent bundle of a \(\mathcal{P}\)-algebra \(A\) in \(\mathsf{ALG}_{\mathcal{P}}^{op}\) requires more setup. Firstly, there is a notion of modules over a \(\mathcal{P}\)-algebra \(A\), referred to as \(A\)-modules [26, Section 12.3.1], and in particular, a generalization of a module of Kahler differentials over \(A\)[26, Section 12.3.8], denoted \(\Omega_{A}\). Secondly, there is also a notion of \(\mathcal{P}\)-algebras over \(A\), called simply \(A\)-algebras, and for any \(A\)-module \(M\), there is a free \(A\)-algebra over \(M\)[16, Lemma 5.2], denoted \(\mathsf{Free}_{A}(M)\). As such, the tangent bundle of a \(\mathcal{P}\)-algebra \(A\) in \(\mathsf{ALG}_{\mathcal{P}}^{op}\) is defined as the free \(A\)-algebra of its module of Kahler differentials (Section 4.4): \[\mathsf{T}^{\circ}(A):=\mathsf{Free}_{A}(\Omega_{A})\] Furthermore, we also have that \(\mathsf{ALG}_{\mathcal{P}}\left(\mathsf{Free}_{A}(\Omega_{A}),A^{\prime} \right)\cong\mathsf{ALG}_{\mathcal{P}}(A,A^{\prime}\ltimes A^{\prime})\). Therefore, the tangent bundles \(\mathsf{T}\) and \(\mathsf{T}^{\circ}\) are mutual adjoints, as desired. In particular, following the discussions of [11], we may interpret \(\mathsf{ALG}_{\mathcal{P}}^{op}\) as a tangent category model of algebraic geometry relative to the operad \(\mathcal{P}\). It is worth mentioning that, while the tangent bundle \(\mathsf{T}\) is mostly the same for each operad, the adjoint tangent bundle \(\mathsf{T}^{\circ}\) can vary quite drastically from operad to operad. As a consequence, operads provide a large source of examples of tangent categories, including both previously known ones and many new ones, some of which may be exotic or very unexpected. Of course, by taking \(\mathcal{P}=\mathrm{Com}\), we recapture precisely the tangent category of commutative algebras and the tangent category of affine schemes. However, we can take other operads \(\mathcal{P}\) to obtain models of tangent categories that have not been previously considered. For example, we may take the operad \(\mathrm{Ass}\), where \(\mathsf{ALG}_{\mathrm{Ass}}\) is instead the category of (associative and unital) algebras (Example 4.1.4). 
Therefore, \(\mathsf{ALG}_{\mathrm{Ass}}\) is a tangent category model of non-commutative algebras (Example 4.3.6), while \(\mathsf{ALG}_{\mathsf{Ass}}^{op}\) is a tangent category model of non-commutative algebraic geometry (Example 4.4.7). In particular, Ginzburg calls the tangent bundle \(\mathsf{T}^{\circ}(A)\) the "space of noncommutative differential forms of \(A\)" [17, Definition 10.2.3]. We can also take the operad Lie, where \(\mathsf{ALG}_{\mathsf{Lie}}\) is the category of Lie algebras (Example 4.1.5). In this case we get the surprising new examples of tangent categories of Lie algebras (Example 4.3.7 and Example 4.4.8). Since operad theory has such a large range of relations to a variety of domains, we expect that the results of this paper will not only lead to a multitude of new examples of tangent categories but also greatly expand the reaches of the theory of tangent categories and its applications to new areas. We also discuss two basic, yet important, tangent category notions in \(\mathsf{ALG}_{\mathcal{P}}\) and \(\mathsf{ALG}_{\mathcal{P}}^{op}\). The first is vector fields (Definition 2.3.1), which, as the name suggests, generalize the classical notion of vector fields from differential geometry. We will show that vector fields in both \(\mathsf{ALG}_{\mathcal{P}}\) and \(\mathsf{ALG}_{\mathcal{P}}^{op}\) correspond to derivations in the operadic sense [26, Section 12.3.7], which generalize the notion of algebraic derivations (Section 4.5). The second is differential objects (Definition 2.4.2), which provide analogues of Euclidean spaces in a tangent category. In particular, the tangent bundle of a differential object \(A\) is just the product of \(A\) with itself, \(\mathsf{T}(A)\cong A\times A\). In \(\mathsf{ALG}_{\mathcal{P}}\), this means that a differential object is a \(\mathcal{P}\)-algebra whose \(\mathcal{P}\)-algebra structure is essentially everywhere zero (Proposition 4.6.1), which for some operads means that the only differential object is the zero algebra. On the other hand, the differential objects in \(\mathsf{ALG}_{\mathcal{P}}^{op}\) are precisely the modules (in the operadic sense) over the \(\mathcal{P}\)-algebra \(\mathcal{P}(0)\) (Theorem 4.6.8). In future work, it would be interesting to study other tangent category notions in \(\mathsf{ALG}_{\mathcal{P}}\) and \(\mathsf{ALG}_{\mathcal{P}}^{op}\) such as connections, differential forms and their cohomology, differential bundles, differential equations, etc. To prove our main results, we will in fact prove a more general result, allowing us to avoid checking all the tangent category axioms for \(\mathsf{ALG}_{\mathcal{P}}\) and \(\mathsf{ALG}_{\mathcal{P}}^{op}\). Indeed, \(\mathsf{ALG}_{\mathcal{P}}\) can also be described as the Eilenberg-Moore category of the monad associated to the operad \(\mathcal{P}\). We investigate conditions under which the Eilenberg-Moore category of a monad is a tangent category (Section 3). It turns out that the solution to this problem is linked to a special class of tangent categories called Cartesian differential categories [3]. Briefly, a Cartesian differential category can be defined as a category with finite products equipped with a differential combinator \(\mathsf{D}\) which takes a map \(f\) and produces its derivative \(\mathsf{D}[f]\) (Definition 2.4.1). Every Cartesian differential category is a tangent category, where the tangent bundle is built using the differential combinator [5, Proposition 4.7].
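For readers less familiar with differential combinators, the guiding example (recorded here only for orientation) is the Cartesian differential category of Euclidean spaces and smooth maps [3], where \(\mathsf{D}\) sends a smooth map \(f:\mathbb{R}^{n}\xrightarrow{}\mathbb{R}^{m}\) to its directional derivative: \[\mathsf{D}[f]:\mathbb{R}^{n}\times\mathbb{R}^{n}\xrightarrow{}\mathbb{R}^{m}\qquad\mathsf{D}[f](x,v)=\mathrm{J}_{f}(x)\,v\] so, for instance, \(f(x)=x^{2}\) has \(\mathsf{D}[f](x,v)=2xv\).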
In [20], the first and third authors introduced the notion of a coCartesian differential monad (Definition 3.2.1), which is precisely the kind of monad for which the opposite category of its Kleisli category is a Cartesian differential category. In [20], the problem of identifying the structure of the Eilenberg-Moore category of a coCartesian differential monad was left open. In this paper, we will show that this category is a tangent category (Theorem 3.2.7), and that under a mild assumption, its opposite category is also a tangent category (Section 3.3). We will then prove that the monad associated to any operad \(\mathcal{P}\) is always a coCartesian differential monad (Theorem 4.1.1). This will immediately imply that \(\mathsf{ALG}_{\mathcal{P}}\) is a tangent category (Theorem 4.3.3). After some extra work, we will then also obtain that \(\mathsf{ALG}_{\mathcal{P}}^{op}\) is a tangent category (Theorem 4.4.4). Furthermore, since the monad associated to an operad \(\mathcal{P}\) is a coCartesian differential monad, it also follows that the opposite category of its Kleisli category, \(\mathsf{KL}_{\mathcal{P}}^{op}\), is a Cartesian differential category (Section 4.2). Intuitively, the maps of \(\mathsf{KL}_{\mathcal{P}}^{op}\) can be interpreted as special kinds of smooth functions, which we call \(\mathcal{P}\)-polynomials. In particular, the subcategory of finite dimensional \(R\)-modules of \(\mathsf{KL}_{\mathcal{P}}^{op}\) is the Lawvere theory for \(\mathcal{P}\)-polynomials, and is again a Cartesian differential category. Therefore, operads also give a source of examples of Cartesian differential categories, again both recapturing known examples, like classical polynomial differentiation (Example 4.2.4), and providing new unexpected examples, like the differentiation of Lie bracket polynomials (Example 4.2.6); a small worked example of the commutative case is given at the end of this introduction. It is our hope that this paper is but the exciting start of a new unified theory of geometry for algebraic structures, obtained by applying the theory of tangent categories and Cartesian differential categories to the notion of operads.

**Outline:** Section 2 is a background section on the basics of tangent categories, where we set up most of the terminology, notation, and constructions that we will use throughout the paper. Section 3 is a general theory section on coCartesian differential monads; the results of this section are key to proving the main results of the following section. Section 4 is the main section of this paper, where we study the tangent categories of algebras of an operad, and also discuss the Cartesian differential categories induced by an operad. We conclude this paper in Section 5, where we discuss future work that we hope to pursue, building on the ideas presented in this paper.

**Conventions:** We assume the reader is familiar with the basic notions of category theory such as categories, opposite categories, functors, natural transformations, and (co)limits like (co)products, pullbacks, pushouts, etc. In some cases, if only to introduce notation, we recall some of these concepts. In an arbitrary category \(\mathbb{X}\), we denote identity maps as \(1_{A}:A\to A\), and we use the classical notation for composition, \(g\circ f\), as opposed to the diagrammatic order which was used in other papers on tangent categories such as [5]. Finally, the homset in a category \(\mathbb{X}\) of morphisms from an object \(A\) to an object \(B\) will be denoted by \(\mathbb{X}(A,B)\).
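Before moving on to the background material, here is the promised small worked example of differentiating \(\mathcal{P}\)-polynomials in the commutative case (an illustrative computation of our own; the general construction appears in Section 4.2). For \(\mathcal{P}=\mathrm{Com}\), a \(\mathcal{P}\)-polynomial in one variable is an ordinary polynomial \(p(x)\) with coefficients in \(R\), and its derivative in the Cartesian differential category \(\mathsf{KL}_{\mathrm{Com}}^{op}\) is the classical one, linearized in the second variable:
\[\mathsf{D}[p](x,y)=p^{\prime}(x)\cdot y\qquad\text{for example}\qquad\mathsf{D}[x^{3}](x,y)=3x^{2}y\]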
## 2 Tangent Categories

In this background section we review tangent categories and their basic theory, including adjoint tangent structure (where we also prove a new useful lemma), vector fields, differential objects, and Cartesian differential categories.

### Tangent Categories

We begin by recalling the necessary structural maps for a tangent category. We do not recall the full definition here, and instead refer readers to [5, 15] for the full definition of a tangent category, including the axioms expressed as commutative diagrams and the intuitions behind them. The key difference between Rosicky's original definition of a tangent category [31, Section 2] and Cockett and Cruttwell's definition [5] is that the former assumes an Abelian group structure on the fibres of the tangent bundle, while the latter generalizes to only a commutative monoid structure. As such, Rosicky's definition includes one extra natural transformation capturing the negatives in the tangent bundle.

**Definition 2.1.1**.: _[5, Definition 2.3 and Section 3.3]_ _A **(Rosicky) tangent structure** on a category \(\mathbb{X}\) is a sextuple \(\mathbb{T}:=(\mathsf{T},p,s,z,l,c)\) (resp. a septuple \(\mathbb{T}:=(\mathsf{T},p,s,z,l,c,n)\)) consisting of:_

1. _An endofunctor \(\mathsf{T}:\mathbb{X}\xrightarrow{}\mathbb{X}\), called the **tangent bundle functor**;_
2. _A natural transformation \(p_{A}:\mathsf{T}(A)\xrightarrow{}A\), called the **projection**, such that for each \(n\in\mathbb{N}\), the \(n\)-fold pullback of \(p_{A}\) exists, denoted as \(\mathsf{T}_{n}(A)\) with projections \(q_{j}:\mathsf{T}_{n}(A)\xrightarrow{}\mathsf{T}(A)\), and such that for all \(m\in\mathbb{N}\), the \(m\)-fold composite \(\mathsf{T}^{m}:=\mathsf{T}\circ\cdots\circ\mathsf{T}\) preserves these pullbacks, that is, \(\mathsf{T}^{m}(\mathsf{T}_{n}(A))\) is the \(n\)-fold pullback of \(\mathsf{T}^{m}(p_{A})\) with projections \(\mathsf{T}^{m}(q_{j})\);_ (By convention, \(\mathsf{T}_{0}(A)=A\) and \(\mathsf{T}_{1}(A)=\mathsf{T}(A)\).)
3. _A natural transformation \(s_{A}:\mathsf{T}_{2}(A)\xrightarrow{}\mathsf{T}(A)\), called the **sum**;_ (Note that by the universal property of the pullback, it follows that we can define functors \(\mathsf{T}_{n}:\mathbb{X}\xrightarrow{}\mathbb{X}\).)
4. _A natural transformation \(z_{A}:A\xrightarrow{}\mathsf{T}(A)\), called the **zero map**;_
5. _A natural transformation \(l_{A}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}^{2}(A)\), called the **vertical lift**;_
6. _A natural transformation \(c_{A}:\mathsf{T}^{2}(A)\xrightarrow{}\mathsf{T}^{2}(A)\), called the **canonical flip**;_
7. _(and, in the Rosicky case, a natural transformation \(n_{A}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}(A)\), called the **negative map**;)_

_such that the equalities in [5, Definition 2.3] (and, in the Rosicky case, also [5, Definition 3.3]) are satisfied. A **(Rosicky) tangent category** is a pair \((\mathbb{X},\mathbb{T})\) consisting of a category \(\mathbb{X}\) equipped with a (Rosicky) tangent structure \(\mathbb{T}\) on \(\mathbb{X}\)._

We can also ask our tangent categories to have finite products in such a way that the tangent bundle of a product is naturally isomorphic to the product of the tangent bundles, and that the tangent bundle of the terminal object is the terminal object.
For a category with finite products, we denote \(n\)-ary products by \(A_{1}\times\ldots\times A_{n}\) with projections \(\pi_{j}:A_{1}\times\ldots\times A_{n}\xrightarrow{}A_{j}\) and \(\left\langle-,\ldots,-\right\rangle\) for the pairing operation, the terminal object as \(*\), and for every object \(A\), the unique map from \(A\) to \(*\) is denoted by \(t_{A}:A\xrightarrow{}*\).

**Definition 2.1.2**.: _[5, Definition 2.8]_ _A **Cartesian (Rosicky) tangent category** is a (Rosicky) tangent category \((\mathbb{X},\mathbb{T})\) such that \(\mathbb{X}\) has finite products and the canonical maps:_ \[\left\langle\mathsf{T}(\pi_{1}),\ldots,\mathsf{T}(\pi_{n})\right\rangle:\mathsf{T}(A_{1}\times\ldots\times A_{n})\xrightarrow{}\mathsf{T}(A_{1})\times\ldots\times\mathsf{T}(A_{n})\qquad\qquad t_{\mathsf{T}(*)}:\mathsf{T}(*)\xrightarrow{}*\] _are isomorphisms: \(\mathsf{T}(A_{1}\times\ldots\times A_{n})\cong\mathsf{T}(A_{1})\times\ldots\times\mathsf{T}(A_{n})\) and \(\mathsf{T}(*)\cong*\)._

See [8, Example 2.2] and [15, Example 2] for lists of examples of tangent categories. Arguably the canonical example of a tangent category is the category of smooth manifolds, where the tangent structure is induced by the classical tangent bundle. This example provides a direct link between tangent categories and differential geometry. In Lemma 3.1.1, we will review how every additive category is a tangent category, and in Section 2.4, we will review an important subclass of tangent categories: Cartesian differential categories. Furthermore, the main objective of this paper is to show that the category of algebras over an operad and its opposite category are both tangent categories. As such, more examples of tangent categories can be found in Section 4, including the (opposite) categories of algebras, commutative algebras, and Lie algebras.

### Adjoint Tangent Structure

In [5], Cockett and Cruttwell introduce an important construction for this paper, called the dual tangent structure - not to be confused with the notion of cotangent structure. This construction allows one to build a tangent structure on the opposite category of a tangent category. This is possible when the tangent bundle functor admits a left adjoint, and in this case, said left adjoint becomes a tangent bundle functor on the opposite category. To avoid confusion, we will refer to this construction as the adjoint tangent structure. In particular, in Section 4.4 we will show that the category of algebras over an operad always has adjoint tangent structure, and therefore the opposite category of algebras over an operad is a tangent category.

Recall that an adjunction between two categories \(\mathbb{X}\) and \(\mathbb{Y}\) consists of two functors \(\mathsf{L}:\mathbb{X}\xrightarrow{}\mathbb{Y}\), called the left adjoint, and \(\mathsf{R}:\mathbb{Y}\xrightarrow{}\mathbb{X}\), called the right adjoint, and two natural transformations \(\eta_{A}:A\xrightarrow{}\mathsf{RL}(A)\), called the unit, and \(\varepsilon_{B}:\mathsf{LR}(B)\xrightarrow{}B\), called the counit, such that \(\varepsilon_{\mathsf{L}(A)}\circ\mathsf{L}(\eta_{A})=1_{\mathsf{L}(A)}\) and \(\mathsf{R}(\varepsilon_{A})\circ\eta_{\mathsf{R}(A)}=1_{\mathsf{R}(A)}\). As a shorthand, we write adjunctions as \((\eta,\varepsilon):\mathsf{L}\dashv\mathsf{R}\).
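For instance, in the commutative-algebra example from the introduction (a sketch of our own, based on the standard universal property of Kähler differentials; the operadic analogue is part of the adjoint tangent structure of Section 4.4), the unit of \(\mathsf{T}^{\circ}\dashv\mathsf{T}\) should be the universal derivation packaged as a dual-numbers map:
\[\eta_{A}:A\xrightarrow{}\mathsf{T}\mathsf{T}^{\circ}(A)=\mathsf{Sym}_{A}(\Omega_{A})\ltimes\mathsf{Sym}_{A}(\Omega_{A})\qquad\qquad\eta_{A}(a)=(a,\mathrm{d}a)\]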
**Definition 2.2.1**.: _A tangent category \((\mathbb{X},\mathbb{T})\) is said to have **adjoint tangent structure** if, for every \(n\in\mathbb{N}\), the functor \(\mathsf{T}_{n}\) admits a left adjoint \(\mathsf{T}_{n}^{\circ}\) with unit \(\eta(n)_{A}:A\xrightarrow{}\mathsf{T}_{n}\mathsf{T}_{n}^{\circ}(A)\) and counit \(\varepsilon(n)_{A}:\mathsf{T}_{n}^{\circ}\mathsf{T}_{n}(A)\xrightarrow{}A\), or again, \((\eta(n),\varepsilon(n)):\mathsf{T}_{n}^{\circ}\dashv\mathsf{T}_{n}\). By convention, we denote \(\mathsf{T}_{1}=\mathsf{T}\), \(\mathsf{T}^{\circ}:=\mathsf{T}_{1}^{\circ}\), \(\eta=\eta(1)\), and \(\varepsilon=\varepsilon(1)\), so \((\eta,\varepsilon):\mathsf{T}^{\circ}\dashv\mathsf{T}\)._ Using the adjoint tangent structure, we now give a full description of the resulting tangent category on the opposite category. Giving a tangent structure on the (opposite) category \(\mathbb{X}^{op}\) corresponds to giving a "dual tangent structure" on \(\mathbb{X}\), that is, the types of all the natural transformations are reversed. **Theorem 2.2.2**.: _[_5_, Proposition 5.17]_ _Let \((\mathbb{X},\mathbb{T})\) be a tangent category with adjoint tangent structure. Consider:_ 1. _The adjoint projection_ \(p_{A}^{\circ}:A\xrightarrow{}\mathsf{T}^{\circ}(A)\)_, defined by_ \(p_{A}^{\circ}:=p_{\mathsf{T}^{\circ}(A)}\circ\eta_{A}\)_, where the_ \(n\)_-fold pushout of_ \(p_{A}^{\circ}\) _is_ \(\mathsf{T}_{n}^{\circ}(A)\) _with injections_ \(q_{j}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}_{n}^{\circ}(A)\) _defined as_ \(q_{j}^{\circ}=\varepsilon_{\mathsf{T}_{n}^{\circ}(A)}\circ\mathsf{T}^{\circ}( q_{j})\circ\mathsf{T}^{\circ}(\eta_{A})\)_;_ 2. _The adjoint sum_ \(s_{A}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}_{2}^{\circ}(A)\)_, defined by_ \(s_{A}^{\circ}:=\varepsilon_{\mathsf{T}_{n}^{\circ}(A)}\circ\mathsf{T}^{\circ }(s_{\mathsf{T}_{n}^{\circ}(A)})\circ\mathsf{T}\left(\eta(2)_{A}\right)\)_;_ 3. _The adjoint zero map_ \(z_{A}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}A\)_, defined by_ \(z_{A}^{\circ}:=\varepsilon_{A}\circ\mathsf{T}^{\circ}(z_{A})\)_;_ 4. _The adjoint vertical lift_ \(l_{A}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}^{\circ}(A)\)_, defined by_ \(l_{A}^{\circ}:=\varepsilon_{\mathsf{T}^{\circ}(A)}\circ\mathsf{T}^{\circ}( \varepsilon_{\mathsf{T}^{\circ}(A)})\circ\mathsf{T}^{\circ}(l_{\mathsf{T}^{ \circ}(A)})\circ\mathsf{T}^{\circ}{}^{2}(\eta_{A})\)_;_ 5. _The adjoint canonical flip_ \(c_{A}^{\circ}:\mathsf{T}^{\circ}{}^{2}(A)\xrightarrow{}\mathsf{T}^{\circ}{}^ {2}(A)\)_, defined by:_ \[c_{A}^{\circ}:=\varepsilon_{\mathsf{T}^{\circ}{}^{2}(A)}\circ\mathsf{T}^{\circ }(\varepsilon_{\mathsf{T}^{\circ}{}^{2}(A)})\circ\mathsf{T}^{\circ}{}^{2}( \Gamma_{\mathsf{T}^{\circ}(A)})\circ\mathsf{T}^{\circ}{}^{2}(\eta_{A})\] _Then, \(\mathsf{T}^{\circ}=(\mathsf{T}^{\circ},p^{\circ},s^{\circ},z^{\circ},l^{\circ},c^{\circ})\) is a tangent structure on \(\mathbb{X}^{op}\), and so, \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) is a tangent category. Similarly, if \((\mathbb{X},\mathbb{T})\) is a Rosicky tangent category with adjoint tangent structure, consider:_ 1. 
_The adjoint negative map_ \(n_{A}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}^{\circ}(A)\) _as_ \(n_{A}^{\circ}:=\varepsilon_{\mathsf{T}^{\circ}(A)}\circ\mathsf{T}^{\circ}(n_{ \mathsf{T}^{\circ}(A)})\circ\eta_{\mathsf{T}^{\circ}(A)}\)_._ _Then, \(\mathsf{T}^{\circ}=(\mathsf{T}^{\circ},p^{\circ},s^{\circ},z^{\circ},l^{\circ},c^{\circ},n^{\circ})\) is a Rosicky tangent structure on \(\mathbb{X}^{op}\), and so, \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) is a Rosicky tangent category. Furthermore, if \((\mathbb{X},\mathbb{T})\) is a Cartesian (Rosicky) category with adjoint tangent structure and \(\mathbb{X}\) also has finite coproducts, then \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) is a Cartesian (Rosicky) tangent category._ Note that, if a tangent category \((\mathbb{X},\mathbb{T})\) has adjoint tangent structure, then \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) also has adjoint tangent structure. Applying Theorem 2.2.2 on \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) gives back the original tangent structure \((\mathbb{X},\mathbb{T})\). To show that a tangent category has adjoint tangent structure, proving that \(\mathsf{T}_{n}\) admits a left adjoint \(\mathsf{T}_{n}^{\circ}\) for each \(n\) can sometimes be a strenuous task. However, when \(\mathsf{T}\) admits a left adjoint, and when the \(n\)-fold pushout of the adjoint projection \(p_{A}^{\circ}:A\xrightarrow{}\mathsf{T}^{\circ}(A)\) exist, then this pushout provides a left adjoint for \(\mathsf{T}_{n}\). This is particularly useful if the starting tangent category is cocomplete. We thank Martin Frankland for stating and proving the following lemma, for which we propose here our own version of the proof: **Lemma 2.2.3** (Frankland).: _Let \(\mathbb{X}\) be a category, \(\mathsf{T}:\mathbb{X}\xrightarrow{}\mathbb{X}\) a functor, and \(p_{A}:\mathsf{T}(A)\xrightarrow{}A\) a natural transformation such that for each \(n\in\mathbb{N}\), the \(n\)-fold pullback of \(p_{A}\) exists, denoted by \(\mathsf{T}_{n}(A)\) with projections \(q_{j}:\mathsf{T}_{n}(A)\xrightarrow{}\mathsf{T}(A)\). Suppose that \(\mathsf{T}\) has a left adjoint \(\mathsf{T}^{\circ}\) with unit \(\eta_{A}:A\xrightarrow{}\mathsf{T}\mathsf{T}^{\circ}(A)\) and counit \(\varepsilon_{A}:\mathsf{T}^{\circ}\mathsf{T}(A)\xrightarrow{}A\), or again, \((\eta,\varepsilon):\mathsf{T}^{\circ}\dashv\mathsf{T}\). Furthermore, define the natural transformation \(p_{A}^{\circ}:A\xrightarrow{}\mathsf{T}^{\circ}(A)\) as \(p_{A}^{\circ}:=p_{T^{\circ}(A)}\circ\eta_{A}\), and suppose that the \(n\)-fold pushout of \(p_{A}^{\circ}\) exists, denoted as \(\mathsf{T}_{n}^{\circ}(A)\) with injections \(q_{j}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}_{n}^{\circ}(A)\). 
Then \(\mathsf{T}_{n}^{\circ}\) is a left adjoint for \(\mathsf{T}_{n}\), where the unit \(\eta(n)_{A}:A\xrightarrow{}\mathsf{T}_{n}\mathsf{T}_{n}^{\circ}(A)\) and counit \(\varepsilon(n)_{A}:\mathsf{T}_{n}^{\circ}\mathsf{T}_{n}(A)\xrightarrow{}A\) are defined using the universal property of the pullback and pushout, that is, as the unique maps such that:_ \[q_{j}\circ\eta(n)_{A}=\mathsf{T}(q_{j}^{\circ})\circ\eta_{A} \varepsilon(n)_{A}\circ q_{j}^{\circ}=\varepsilon_{A}\circ\mathsf{T}^{\circ}(q_{j}) \forall Proof.: First, note that, for all \(1\leq i,j\leq n\), we have \(p_{A}\circ\mathsf{T}(q_{j}^{\circ})\circ\eta_{A}=p_{A}\circ\mathsf{T}(q_{i}^{ \circ})\circ\eta_{A}\) and \(\varepsilon_{A}\circ\mathsf{T}^{\circ}(q_{j})\circ p_{A}^{\circ}=\varepsilon_ {A}\circ\mathsf{T}^{\circ}(q_{i})\circ p_{A}^{\circ}\) (which we leave as an exercise for the reader). Therefore, it follows that \(\eta(n)_{A}\) and \(\varepsilon(n)_{A}\) are indeed well-defined. To prove that \(\mathsf{T}_{n}^{\circ}\) is a left adjoint of \(\mathsf{T}_{n}\), we need to show that the two triangle identities hold. To do so, we will take advantage of the (co)universal property of the pullback and pushout by instead proving that the desired identities hold when precomposed by the pushout injections or postcomposed by the pullback projections. First, note that for all \(n\in\mathbb{N}\) and \(1\leq j\leq n\), \(q_{j}:\mathsf{T}_{n}(A)\xrightarrow{}\mathsf{T}(A)\) and \(q_{j}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}_{n}^{\circ}(A)\) are natural transformations. Therefore, we compute: \[\varepsilon(n)_{\mathsf{T}_{n}(A)}\circ\mathsf{T}_{n}^{\circ} \left(\eta(n)_{A}\right)\circ q_{j}^{\circ}=\varepsilon(n)_{\mathsf{T}_{n}(A) }\circ q_{j}^{\circ}\circ\mathsf{T}^{\circ}\left(\eta(n)_{A}\right)= \varepsilon_{\mathsf{T}_{n}(A)}\circ\mathsf{T}^{\circ}(q_{j})\circ\mathsf{T }^{\circ}\left(\eta(n)_{A}\right)=\varepsilon_{\mathsf{T}_{n}(A)}\circ \mathsf{T}^{\circ}\left(q_{j}\circ\eta(n)_{A}\right)\\ =\varepsilon_{\mathsf{T}_{n}^{\circ}(A)}\circ\mathsf{T}^{\circ} \left(\mathsf{T}(q_{j}^{\circ})\circ\eta_{A}\right)=\varepsilon_{\mathsf{T}_{n }^{\circ}(A)}\circ\mathsf{T}^{\circ}\mathsf{T}^{\circ}(q_{j}^{\circ})\circ \mathsf{T}^{\circ}(\eta_{A})=q_{j}^{\circ}\circ\varepsilon_{\mathsf{T}^{\circ} (A)}\circ\mathsf{T}^{\circ}(\eta_{A})=q_{j}^{\circ}\circ 1_{\mathsf{T}^{\circ}(A)}=q_{j}^{\circ}\] \[q_{j}\circ\mathsf{T}_{n}(\varepsilon(n)_{A})\circ\eta(n)_{\mathsf{T}_{n}(A) }=\mathsf{T}(\varepsilon(n)_{A})\circ q_{j}\circ\eta(n)_{\mathsf{T}_{n}(A)}= \mathsf{T}(\varepsilon(n)_{A})\circ\mathsf{T}(q_{j}^{\circ})\circ\eta_{ \mathsf{T}_{n}(A)}=\mathsf{T}\left(\varepsilon(n)_{A}\circ q_{j}^{\circ} \right)\circ\eta_{\mathsf{T}_{n}(A)}\\ =\mathsf{T}\left(\varepsilon_{A}\circ\mathsf{T}^{\circ}(q_{j}) \right)\circ\eta_{\mathsf{T}_{n}(A)}=\mathsf{T}(\varepsilon_{A})\circ\mathsf{ T}\mathsf{T}^{\circ}(q_{j})\circ\eta_{\mathsf{T}_{n}(A)}=\mathsf{T}(\varepsilon_{A}) \circ\eta_{\mathsf{T}(A)}\circ q_{j}=1_{\mathsf{T}(A)}\circ q_{j}=q_{j}\] So, for all \(1\leq j\leq n\), \(\varepsilon(n)_{\mathsf{T}_{n}^{\circ}(A)}\circ\mathsf{T}_{n}^{\circ}\left( \eta(n)_{A}\right)\circ q_{j}^{\circ}=q_{j}^{\circ}\) and \(q_{j}\circ\mathsf{T}_{n}(\varepsilon(n)_{A})\circ\eta(n)_{\mathsf{T}_{n}(A)}=q _{j}\). 
Therefore, by the couniversal property of the pushout and the universal property of the pullback respectively, it follows that: \[\varepsilon(n)_{\mathsf{T}_{n}^{\circ}(A)}\circ\mathsf{T}_{n}^{ \circ}\left(\eta(n)_{A}\right)=1_{\mathsf{T}_{n}^{\circ}(A)} \mathsf{T}_{n}(\varepsilon(n)_{A})\circ\eta(n)_{\mathsf{T}_{n}(A)}=1_{ \mathsf{T}_{n}(A)}\] So we conclude that \((\eta(n),\varepsilon(n)):\mathsf{T}_{n}^{\circ}\dashv\mathsf{T}_{n}\). **Corollary 2.2.4**.: _Let \((\mathbb{X},\mathbb{T})\) be a (Rosicky) tangent category. Suppose that the tangent bundle functor \(\mathsf{T}\) has a left adjoint \(\mathsf{T}^{\circ}\), and that, for all \(n\), the \(n\)-fold pushout of the map \(p_{A}^{\circ}:A\xrightarrow{}\mathsf{T}^{\circ}(A)\) from Lemma 2.2.3 exists. Then, \((\mathbb{X},\mathbb{T})\) has adjoint tangent structure, and therefore, \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) inherits the structure of a (Rosicky) tangent category defined in Theorem 2.2.2._ Per the above corollary, if \((\mathbb{X},\mathbb{T})\) is a Cartesian (Rosicky) tangent category whose tangent bundle functor has a left adjoint, and if \(\mathbb{X}\) is (finitely) cocomplete, then \((\mathbb{X},\mathbb{T})\) has adjoint tangent structure, and therefore \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) is a Cartesian (Rosicky) tangent category. In Section 4.4 we will show that the opposite category of algebras of an operad is a tangent category using this fact. In particular, we will discuss the adjoint tangent structure of algebras, commutative algebras, and Lie algebras. There are also important examples of tangent categories with adjoint tangent structures related to synthetic differential geometry [15], differential linear logic [10], and algebraic geometry [11]. ### Vector Fields and their Lie Bracket Vector fields are a fundamental concept in differential geometry which, heuristically, correspond to assigning smoothly to each point of a smooth manifold a tangent vector in the tangent space over that point. The notion of a vector field can easily be generalized to tangent categories, and is simply defined as a section of the projection. **Definition 2.3.1**.: _[_5_, Definition 3.1]_ _In a tangent category \((\mathbb{X},\mathbb{T})\), a **vector field** on an object \(A\) of \(\mathbb{X}\) is a map \(v:A\xrightarrow{}\mathsf{T}(A)\) which is a section of the projection \(p_{A}\), that is, \(p_{A}\circ v=1_{A}\). The set of all vector fields on \(A\) in \((\mathbb{X},\mathbb{T})\) is denoted \(\mathsf{V}_{\mathbb{T}}(A)\)._ In any tangent category, the zero map \(z_{A}:A\xrightarrow{}\mathsf{T}(A)\) is a vector field, and the map \(\nu_{A}:\mathsf{T}_{2}(A)\xrightarrow{}\mathsf{T}^{2}(A)\) from [T.6] induces a vector field \(\mathcal{L}_{A}:\mathsf{T}(A)\xrightarrow{}\mathsf{T}^{2}(A)\)[5, Section 3.1], which generalizes the Liouville vector field, the canonical vector field on the tangent bundle of a smooth manifold. One can also define a category of vector fields [9, Definition 2.8], which turns out to also be a tangent category [9, Proposition 2.10]. Vector fields can also be used to generalize dynamical systems and solve differential equations in a tangent category [9]. 
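To preview the algebraic meaning of this definition (an illustration of our own in the commutative case, anticipating Section 4.5): for a commutative \(R\)-algebra \(A\), whose tangent bundle is the dual numbers algebra \(A[\epsilon]/(\epsilon^{2})\), a vector field is an algebra morphism \(v:A\xrightarrow{}A[\epsilon]/(\epsilon^{2})\) with \(p_{A}\circ v=1_{A}\), so it is necessarily of the form
\[v(a)=a+\delta(a)\epsilon\]
and multiplicativity of \(v\) forces \(\delta(ab)=a\delta(b)+b\delta(a)\). In other words, vector fields on \(A\) are exactly the \(R\)-linear derivations \(A\xrightarrow{}A\).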
In a Rosicky tangent category, the set of vector fields of any object is in fact a Lie algebra:

**Proposition 2.3.2**.: _[6, Theorem 4.2]_ _In a Rosicky tangent category \((\mathbb{X},\mathbb{T})\), for any object \(A\), \(\mathsf{V}_{\mathbb{T}}(A)\) is a Lie algebra, where in particular the Lie bracket of vector fields is defined as in [5, Definition 3.14]._

In Section 4.5, we will show that vector fields in the tangent category of algebras over an operad correspond precisely to derivations in the operadic sense. This can be seen as a generalization of the famous result that vector fields of a smooth manifold are in bijective correspondence with derivations of the associated \(\mathcal{C}^{\infty}\)-ring of said manifold.

We turn our attention to vector fields in the setting of adjoint tangent structure. If \((\mathbb{X},\mathbb{T})\) is a tangent category with adjoint tangent structure, a vector field on \(A\) in \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) corresponds to a map \(v:\mathsf{T}^{\circ}(A)\xrightarrow{}A\) which is a retraction of the adjoint projection in \(\mathbb{X}\), that is, \(v\circ p^{\circ}_{A}=1_{A}\). It turns out that vector fields over \(A\) in \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) correspond precisely to vector fields over \(A\) in \((\mathbb{X},\mathbb{T})\). This comes as no surprise since \(\mathsf{T}^{\circ}\) is a left adjoint of \(\mathsf{T}\), and therefore, there is a natural bijective correspondence between maps of type \(A\xrightarrow{}\mathsf{T}(A)\) and of type \(\mathsf{T}^{\circ}(A)\xrightarrow{}A\). Furthermore, this equivalence also preserves the Lie algebra structure.

**Lemma 2.3.3**.: _Let \((\mathbb{X},\mathbb{T})\) be a tangent category with adjoint tangent structure and let \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) be the induced tangent category as defined in Theorem 2.2.2. For any object \(A\) of \(\mathbb{X}\):_

1. _If \(v\in\mathsf{V}_{\mathbb{T}}(A)\), define \(v^{\sharp}:\mathsf{T}^{\circ}(A)\xrightarrow{}A\) by \(v^{\sharp}:=\varepsilon_{A}\circ\mathsf{T}^{\circ}(v)\). Then \(v^{\sharp}\in\mathsf{V}_{\mathbb{T}^{\circ}}(A)\)._
2. _If \(w\in\mathsf{V}_{\mathbb{T}^{\circ}}(A)\), define \(w^{\flat}:A\xrightarrow{}\mathsf{T}(A)\) by \(w^{\flat}:=\mathsf{T}(w)\circ\eta_{A}\). Then \(w^{\flat}\in\mathsf{V}_{\mathbb{T}}(A)\)._

_Furthermore, these constructions are inverses of each other, that is, \(v^{\sharp\flat}=v\) and \(w^{\flat\sharp}=w\), and therefore, we have \(\mathsf{V}_{\mathbb{T}}(A)\cong\mathsf{V}_{\mathbb{T}^{\circ}}(A)\). If \((\mathbb{X},\mathbb{T})\) is a Rosicky tangent category, then \(\mathsf{V}_{\mathbb{T}}(A)\cong\mathsf{V}_{\mathbb{T}^{\circ}}(A)\) is also a Lie algebra isomorphism._
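In the commutative case, this correspondence unwinds to a familiar chain of universal properties (a sketch of our own, writing \(\mathrm{Der}_{R}(A,A)\) for derivations and \(\mathsf{Mod}_{A}\) for \(A\)-module maps): a derivation \(\delta:A\xrightarrow{}A\) corresponds to an \(A\)-module map \(\Omega_{A}\xrightarrow{}A\), which corresponds in turn to an \(A\)-algebra map out of \(\mathsf{Sym}_{A}(\Omega_{A})\), that is, to a retraction of the adjoint projection:
\[\mathsf{V}_{\mathbb{T}}(A)\cong\mathrm{Der}_{R}(A,A)\cong\mathsf{Mod}_{A}(\Omega_{A},A)\cong\{w:\mathsf{Sym}_{A}(\Omega_{A})\xrightarrow{}A\mid w\circ p^{\circ}_{A}=1_{A}\}\cong\mathsf{V}_{\mathbb{T}^{\circ}}(A)\]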
On the other hand, to extract a Cartesian differential category from a Cartesian tangent category, one must look at a special class of objects: the differential objects. Essentially, differential objects generalize the Euclidean spaces in a tangent category, and the subcategory of differential objects is a Cartesian differential category, where the differential combinator is built using the tangent bundle functor. In fact, this results in an adjunction between the category of Cartesian differential categories and the category of Cartesian tangent categories [5, Theorem 4.12]. Let us begin with Cartesian differential categories, which were introduced by Blute, Cockett, and Seely in [3]. The underlying category of a Cartesian differential category is a **Cartesian left additive category**, which in particular is a category with finite products, such that every homset is a commutative monoid [3, Definition 1.2.1]. Cartesian differential categories are Cartesian left additive categories equipped with a differential combinator, whose axioms include analogue versions of the chain rule, linearity of the derivative, symmetry of the partial derivatives, etc. We do not provide axioms here and invite interested readers to learn about Cartesian differential categories in [3, 20]. **Definition 2.4.1**.: _[_3_, Definition 2.1.1]_ _A Cartesian differential category is a Cartesian left additive category \(\mathbb{X}\) equipped with a **differential combinator**\(\mathsf{D}\), which is a family of operators \(\mathsf{D}:\mathbb{X}(A,B)\xrightarrow{\ \ }\mathbb{X}(A\times A,B)\), such that seven axioms **[CD.1]** to **[CD.7]** from [20, Definition 2.3] hold. For a map \(f:A\xrightarrow{\ \ }B\), \(\mathsf{D}[f]:A\times A\xrightarrow{\ \ }B\) is called the **derivative** of \(f\)._ Every Cartesian differential category is a Cartesian tangent category, where in particular, the tangent bundle functor is defined on objects as \(\mathsf{T}(A)=A\times A\), and on maps as \(\mathsf{T}(f)=\langle f\circ\pi_{1},\mathsf{D}[f]\rangle\)[5, Proposition 4.7]. See [20, Section 2] for examples of Cartesian differential categories. The canonical example of a Cartesian differential category is the Lawvere theory of real smooth functions, which provides a direct link to classical multivariable calculus. In Section 4 we will explain how the opposite category of the Kleisli category of an operad, and a certain Lawvere theory of polynomials of an operad, are both Cartesian differential categories. Let us now turn our attention to differential objects. Differential objects were first introduced in [5, Definition 4.8], however, the definition was later updated in [8, Definition 3.1] to include an important compatibility with the vertical lift. **Definition 2.4.2**.: _[_8_, Definition 3.1]_ _In a Cartesian tangent category \((\mathbb{X},\mathbb{T})\), a **differential object** is a quadruple \((A,\hat{p},\sigma,\zeta)\) consisting of:_ 1. _An object_ \(A\) _of_ \(\mathbb{X}\)_;_ 2. _A map_ \(\hat{p}:\mathsf{T}(A)\xrightarrow{\ \ }A\)_, called the **differential projection**;_ 3. _A map_ \(\sigma:A\times A\xrightarrow{\ \ }A\)_, called the **sum**;_ 4. _A map_ \(\zeta:\ast\xrightarrow{\ \ }A\)_, called the_ **zero**_;_ _and such that the equalities in [8, Definition 3.1] hold. 
Let \(\mathsf{DIFF}[(\mathbb{X},\mathbb{T})]\) be the category of differential objects of \((\mathbb{X},\mathbb{T})\) and all maps of \(\mathbb{X}\) between the underlying objects._ In Lemma 4.6.6, we will provide an alternative, but equivalent, characterization of differential objects in a Cartesian Rosicky tangent category. A differential object \(A\) should be interpreted as a Euclidean space. One of the axioms of a differential object says that \((A,\sigma,\zeta)\) is a commutative monoid, generalizing the fact a Euclidean space is also a vector space. Another axiom says that \(\langle p_{A},\hat{p}_{A}\rangle:\mathsf{T}(A)\xrightarrow{}A\times A\) is an isomorphism, so \(\mathsf{T}(A)\cong A\times A\). This is an analogue of the fact that the tangent bundle of a Euclidean space is isomorphic to the product of the Euclidean space with itself. The differential projection then arises from the association of a Euclidean space with each tangent space, and this association is compatible with the vertical lift embedding. For any Cartesian tangent category \((\mathbb{X},\mathbb{T})\), \(\mathsf{DIFF}[(\mathbb{X},\mathbb{T})]\) is a Cartesian differential category where for a map \(f:A\xrightarrow{}B\), its differential is defined as \(\mathsf{D}[f]=\hat{p}\circ\mathsf{T}(f)\circ(p_{A},\bar{p}_{A})^{-1}\)[5, Theorem 4.11]. Conversely, in a Cartesian differential category, every object has a canonical and unique differential object structure [5, Proposition 4.7]. Interestingly, differential objects do not usually behave well with respect to the adjoint tangent structure. Indeed, even if a Cartesian tangent category \((\mathbb{X},\mathbb{T})\) has adjoint tangent structure, a differential object in \((\mathbb{X},\mathbb{T})\) does not necessarily result in a differential object in \((\mathbb{X}^{op},\mathbb{T}^{\circ})\), and vice-versa. In fact, \((\mathbb{X}^{op},\mathbb{T}^{\circ})\) could have many differential objects while \((\mathbb{X},\mathbb{T})\) may have no non-trivial ones. This is precisely the case for algebras over an operad. Indeed, in Section 4.6, we will see how the differential objects in the opposite category of algebras over an operad always correspond to modules (in the operadic sense) over the arity-zero part (the units) of the operad. In particular, for the opposite category of (commutative) algebras, the differential objects correspond precisely to modules over the base commutative ring. On the other hand, we will explain why differential objects in the category of algebras of an operad are in a certain sense trivial (and sometimes literally trivial). ## 3 CoCartesian Differential Monads The main objective of this section is to prove that the category of algebras of a coCartesian differential monad is a tangent category, obtained by lifting the biproduct tangent structure from the base category. This is a crucial result for the story of this paper: in Section 4.1, we will show that the monad associated to any operad is always a coCartesian differential monad. As such, from this general result, we are able to obtain a tangent structure for the category of algebras of an operad without having to check all the axioms for a tangent category. In this section, we also discuss adjoint tangent structures, vector fields, and differential objects for coCartesian differential monads. 
By dualizing the results of this section, we also obtain the answer to the question asked in the conclusion of [20] regarding the coEilenberg-Moore category of a Cartesian differential comonad: we show that this category is a tangent category.

### Tangent Monads for Biproducts

In this section, we discuss the canonical tangent structure induced by biproducts and tangent monads, which are precisely the kind of monads that lift said tangent structure to their categories of algebras. Recall that a **monad** on a category \(\mathbb{X}\) is a triple \((\mathsf{S},\mu,\eta)\) consisting of a functor \(\mathsf{S}:\mathbb{X}\xrightarrow{}\mathbb{X}\), a natural transformation \(\mu_{A}:\mathsf{S}\mathsf{S}(A)\xrightarrow{}\mathsf{S}(A)\), called the **monad multiplication**, and a natural transformation \(\eta_{A}:A\xrightarrow{}\mathsf{S}(A)\), called the **monad unit**, such that the following equalities hold:

\[\mu_{A}\circ\mathsf{S}(\eta_{A})=1_{\mathsf{S}(A)}=\mu_{A}\circ\eta_{\mathsf{S}(A)}\qquad\qquad\mu_{A}\circ\mathsf{S}(\mu_{A})=\mu_{A}\circ\mu_{\mathsf{S}(A)} \tag{1}\]

For a monad \((\mathsf{S},\mu,\eta)\), an \(\mathsf{S}\)-**algebra** is a pair \((A,\alpha)\) consisting of an object \(A\) and a map \(\alpha:\mathsf{S}(A)\xrightarrow{}A\) of \(\mathbb{X}\), called the \(\mathsf{S}\)-**algebra structure map**, such that the following equalities hold:

\[\alpha\circ\eta_{A}=1_{A}\qquad\qquad\alpha\circ\mu_{A}=\alpha\circ\mathsf{S}(\alpha) \tag{2}\]

An \(\mathsf{S}\)-**algebra morphism** \(f:(A,\alpha)\xrightarrow{}(B,\beta)\) is a map \(f:A\xrightarrow{}B\) in \(\mathbb{X}\) such that the following equality holds:

\[f\circ\alpha=\beta\circ\mathsf{S}(f) \tag{3}\]

We denote by \(\mathsf{ALG}_{\mathsf{S}}\) the category whose objects are \(\mathsf{S}\)-algebras and whose maps are \(\mathsf{S}\)-algebra morphisms. \(\mathsf{ALG}_{\mathsf{S}}\) is also often called the **Eilenberg-Moore category** of the monad \((\mathsf{S},\mu,\eta)\). Lastly, recall that the **free \(\mathsf{S}\)-algebra** over an object \(A\) is the \(\mathsf{S}\)-algebra \((\mathsf{S}(A),\mu_{A})\).

A **(Rosicky) tangent monad** [10, Definition 19] on a (Rosicky) tangent category \((\mathbb{X},\mathbb{T})\) is a monad \((\mathsf{S},\mu,\eta)\) on \(\mathbb{X}\) which also comes equipped with a distributive law [21, Lemma 1], which is a natural transformation of type \(\lambda_{A}:\mathsf{ST}(A)\xrightarrow{}\mathsf{TS}(A)\) which is compatible with both the monad structure and the (Rosicky) tangent structure in the sense of [5, Definition 2.7]. By [10, Proposition 20], the category of algebras of a (Rosicky) tangent monad is a (Rosicky) tangent category, where the tangent bundle on an \(\mathsf{S}\)-algebra is:

\[\mathsf{T}(A,\alpha)=(\mathsf{T}(A),\mathsf{T}(\alpha)\circ\lambda_{A})\]

Furthermore, the forgetful functor from \(\mathsf{S}\)-algebras down to the base (Rosicky) tangent category preserves the (Rosicky) tangent structure strictly. Therefore, we say that tangent monads "lift" the (Rosicky) tangent structure of the base (Rosicky) tangent category to the category of algebras. The finite products in the category of \(\mathsf{S}\)-algebras are also "lifted" from the base category. Hence, for a (Rosicky) tangent monad \(\mathsf{S}\) on a Cartesian (Rosicky) tangent category, the category of \(\mathsf{S}\)-algebras will also be a Cartesian (Rosicky) tangent category, such that the forgetful functor preserves the Cartesian (Rosicky) tangent structure strictly.
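As a guiding example (anticipating Section 4 and included here only as an illustration, with \(\mathsf{Sym}\) denoting the free commutative algebra construction): the free commutative algebra monad on \(R\)-modules,
\[\mathsf{Sym}(M)=\bigoplus_{n\geq 0}\left(M^{\otimes n}\right)_{S_{n}}\]
(the coinvariants of the \(n\)-fold tensor powers under the symmetric groups \(S_{n}\)), has Eilenberg-Moore category the category of commutative \(R\)-algebras, and the lifted tangent bundle described above recovers, on a commutative algebra \(A\), the dual numbers \(A\ltimes A\) from the introduction.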
In this paper, we are interested in the specific case of tangent monads on categories with finite biproducts, and will therefore give the definition of a tangent monad in this setting. To do so, we must first review the tangent structure induced by biproducts. By a **semi-additive category**, we mean a category with finite biproducts. Alternatively, recall that a semi-additive category can be described as a category with finite products which is enriched over commutative monoids. In this setting, each homset is a commutative monoid, so we can sum parallel maps together \(f+g\) and we have zero maps \(0\), and composition preserves this additive structure. Keeping this in mind, we will use product notation \(\times\) for biproducts rather than direct sum notation \(\oplus\), as the tangent structure is more intuitive from the product perspective. By an **additive category**, we mean a semi-additive category that is also enriched over Abelian groups, that is, each homset is furthermore an Abelian group, and so, each map \(f\) admits a negative \(-f\). Let us now describe in full detail the canonical tangent structure for (semi-)additive categories. **Lemma 3.1.1**.: _[_10_, Section 5]_ _Let \(\mathbb{X}\) be a semi-additive category. Then define:_ 1. _The tangent bundle functor_ \(\mathsf{B}:\mathbb{X}\xrightarrow{}\mathbb{X}\) _to be the diagonal functor, that is, the functor defined on objects as_ \(\mathsf{B}(A)=A\times A\) _and on maps as_ \(\mathsf{B}(f)=f\times f\)_;_ 2. _The projection_ \(p_{A}^{\times}:A\times A\xrightarrow{}A\) _as the first projection of the product,_ \(p_{A}^{\times}=\pi_{1}\)_, and where the_ \(n\)_-fold pullback of_ \(p_{A}^{\times}\) _is_ \(\mathsf{B}_{n}(A):=\prod\limits_{i=1}^{n+1}A\) _and where the_ \(j\)_-th pullback projection_ \(q_{j}^{\times}:\prod\limits_{i=1}^{n+1}A\xrightarrow{}A\times A\) _projects out the first and_ \(j+1\)_-th term,_ \(q_{j}^{\times}:=\langle\pi_{1},\pi_{j+1}\rangle\)_;_ 3. _The sum_ \(s_{A}^{\times}:A\times A\times A\xrightarrow{}A\times A\) _as the sum of the last two components,_ \(s_{A}^{\times}:=\langle\pi_{1},\pi_{2}+\pi_{3}\rangle\)_;_ 4. _The zero map_ \(z_{A}^{\times}:A\xrightarrow{}A\times A\) _as the injection into the first component,_ \(z_{A}^{\times}:=\langle 1_{A},0\rangle\)_;_ 5. _The vertical lift_ \(l_{A}^{\times}:A\times A\xrightarrow{}A\times A\times A\times A\) _as the injection of the first component in the first component and the second component in the fourth component,_ \(l_{A}^{\times}=\langle\pi_{1},0,0,\pi_{2}\rangle\)_;_ 6. _The canonical flip_ \(c_{A}^{\times}:A\times A\times A\times A\xrightarrow{}A\times A\times A \times A\) _as the transposition of the second and third components,_ \(c_{A}^{\times}:=\langle\pi_{1},\pi_{3},\pi_{2},\pi_{4}\rangle\)_._ _Then, \(\mathbb{B}=(\mathsf{B},p^{\times},s^{\times},z^{\times},l^{\times},c^{\times})\) is a tangent structure on \(\mathbb{X}\), and so, \((\mathbb{X},\mathbb{B})\) is a Cartesian tangent category. Similarly, if \(\mathbb{X}\) is an additive category, then define:_ 1. 
_The negative map_ \(n_{A}^{\times}:A\times A\xrightarrow{}A\times A\) _as the negative of the second component_ \(n_{A}^{\times}:=\langle\pi_{1},-\pi_{2}\rangle\)_._ _Then, \(\mathbb{B}=(\mathsf{B},p^{\times},s^{\times},z^{\times},l^{\times},c^{\times}, n^{\times})\) is a Rosicky tangent structure on \(\mathbb{X}\), and so, \((\mathbb{X},\mathbb{B})\) is a Cartesian Rosicky tangent category._ The following definition of a tangent monad is indeed the same definition as in [10, Definition 19], but in the specific case of the canonical tangent structure on a (semi-)additive category. **Definition 3.1.2**.: _[_10_, Definition 19]_ _Let \(\mathbb{X}\) be a semi-additive category (resp. additive category), and let \((\mathsf{S},\mu,\eta)\) be a monad on \(\mathbb{X}\). A \(\mathbb{B}\)**-distributive law** over \((\mathsf{S},\mu,\eta)\) is a natural transformation \(\lambda_{A}:\mathsf{S}(A\times A)\xrightarrow{}\mathsf{S}(A)\times\mathsf{S}(A)\) such that:_ 1. \(\lambda\) _is a_ _distributive law_ _of the functor_ \(\mathsf{B}\) _over the monad_ \((\mathsf{S},\mu,\eta)\)_, that is, the following equalities hold:_ \[\lambda_{A}\circ\mu_{A\times A}=(\mu_{A}\times\mu_{A})\circ\lambda_{\mathsf{S} (A)}\circ\mathsf{S}(\lambda_{A}) \lambda_{A}\circ\eta_{A\times A}=\eta_{A}\times\eta_{A}\] (4) _._ 2. \(\lambda\) _is compatible with the biproduct (Rosicky) tangent structure_ \(\mathbb{B}\) _in the sense that the following equalities hold:_ \[p_{A}^{\times}\circ\lambda_{A}=\mathsf{S}(p_{A}^{\times})\qquad\qquad\lambda_{ A}\circ\mathsf{S}\left(z_{A}^{\times}\right)=z_{\mathsf{S}(A)}^{\times}\] (5) \[\lambda_{A}\circ\mathsf{S}\left(s_{A}^{\times}\right)=s_{\mathsf{S}(A)}^{ \times}\circ\left\langle\mathsf{S}(\pi_{1}),\pi_{2}\circ\lambda_{A}\circ \mathsf{S}(q_{1}^{\times}),\pi_{2}\circ\lambda_{A}\circ\mathsf{S}(q_{2}^{ \times})\right\rangle\] \[l_{\mathsf{S}(A)}^{\times}\circ\lambda_{A}=(\lambda_{A}\times \lambda_{A})\circ\lambda_{A\times A}\circ\mathsf{S}\left(l_{A}^{\times}\right) \qquad\qquad c_{\mathsf{S}(A)}^{\times}\circ(\lambda_{A}\times\lambda_{A}) \circ\lambda_{A\times A}=(\lambda_{A}\times\lambda_{A})\circ\lambda_{A\times A }\circ\mathsf{S}\left(c_{A}^{\times}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ Proof.: To prove that \(\mathsf{S}(\langle\pi_{1},-\pi_{2}\rangle)\circ\partial_{A}=-\partial_{A}\), it suffices to show that \(\partial_{A}+\mathsf{S}(\langle\pi_{1},-\pi_{2}\rangle)\circ\partial_{A}=0\). 
First observe that, by **[DC.1]** and **[DC.2]**, it follows that: \[\mathsf{S}(\langle\pi_{1},0\rangle)\circ\partial_{A}=0 \tag{6}\] Also, note that we have the following equalities: \[\langle\pi_{1},\pi_{2}-\pi_{3}\rangle\circ\langle\pi_{1},\pi_{2},0\rangle=1_{ A\times A}\quad\langle\pi_{1},\pi_{2}-\pi_{3}\rangle\circ\langle\pi_{1},0,\pi_{2} \rangle=\langle\pi_{1},-\pi_{2}\rangle\quad\langle\pi_{1},\pi_{2}-\pi_{3} \rangle\circ\langle\pi_{1},\pi_{2},\pi_{2}\rangle=\langle\pi_{1},0\rangle \tag{7}\] Then, using these identities, and both **[DC.1]** and **[DC.2]**, we compute: \[\partial_{A}+\mathsf{S}(\langle\pi_{1},-\pi_{2}\rangle)\circ \partial_{A}=\mathsf{S}\left(\langle\pi_{1},\pi_{2}-\pi_{3}\rangle\right) \circ\big{(}\mathsf{S}(\langle\pi_{1},\pi_{2},0\rangle)\circ\partial_{A}+ \mathsf{S}(\langle\pi_{1},0,\pi_{2}\rangle)\circ\partial_{A}\big{)}\] \[\qquad\qquad=\mathsf{S}\left(\langle\pi_{1},\pi_{2}-\pi_{3} \rangle\right)\circ\mathsf{S}(\langle\pi_{1},\pi_{2},\pi_{2}\rangle)\circ \partial_{A}=\mathsf{S}(\langle\pi_{1},0\rangle)\circ\partial_{A}=0\] We conclude that \(\mathsf{S}(\langle\pi_{1},-\pi_{2}\rangle)\circ\partial_{A}=-\partial_{A}\). As mentioned above, the opposite category of the Kleisli category of a coCartesian differential monad is a Cartesian differential category. Recall that for a monad \((\mathsf{S},\mu,\eta)\) on a category \(\mathbb{X}\), its **Kleisli category** is the category \(\mathsf{Kl}_{\mathsf{S}}\) whose objects are the same as \(\mathbb{X}\) and where a map from \(A\) to \(B\) in \(\mathsf{Kl}_{\mathsf{S}}\) is a map of type \(A\xrightarrow{}\mathsf{S}(B)\) in \(\mathbb{X}\). In a Cartesian differential category, for a map \(f:A\xrightarrow{}B\), its derivative will be of type \(\mathsf{D}[f]:A\times A\xrightarrow{}B\). For a coCartesian differential monad \((\mathsf{S},\mu,\eta,\vartheta)\), since \(\mathsf{Kl}_{\mathsf{S}}^{\mathrm{op}}\) is a Cartesian differential category, then for a map \(f:A\xrightarrow{}\mathsf{S}(B)\) in \(\mathbb{X}\), its derivative will be of type \(\mathsf{D}[f]:A\xrightarrow{}\mathsf{S}(B\times B)\) in \(\mathbb{X}\), which is given by post-composing with the differential combinator transformation. We invite interested readers to see [20] for full details on the subject. **Proposition 3.2.3**.: _[_20_, Theorem 3.5]_ _Let \(\mathbb{X}\) be a semi-additive category and let \((\mathsf{S},\mu,\eta,\vartheta)\) be a coCartesian differential monad on \(\mathbb{X}\). Then, \(\mathsf{Kl}_{\mathsf{S}}^{\mathrm{op}}\) is a Cartesian differential category, where the differential combinator \(\mathsf{D}\), viewed as operators \(\mathsf{D}:\mathbb{X}(A,\mathsf{S}(B))\xrightarrow{}\mathbb{X}(A,\mathsf{S}(A \times A))\) is defined as the following composition in \(\mathbb{X}\): \(\mathsf{D}[f]:=\partial_{A}\circ f\)._ In Section 4.2, we will also review the notion of a \(\mathsf{D}\)-linear counit for coCartesian differential monads [20, Definition 3.8]. We elected not to review it in this section since the \(\mathsf{D}\)-linear counit does not appear to play a role for the tangent structure (but is important for the Cartesian differential structure). We will now prove that every coCartesian differential monad is a tangent monad, where the \(\mathbb{B}\)-distributive law is constructed using the differential combinator transformation. **Proposition 3.2.4**.: _Let \((\mathsf{S},\mu,\eta,\vartheta)\) be a coCartesian differential monad on a semi-additive category (resp. additive category) \(\mathbb{X}\). 
Define the natural transformation \(\lambda_{A}:\mathsf{S}(A\times A)\xrightarrow{}\mathsf{S}(A)\times\mathsf{S}(A)\) as follows:_ \[\lambda_{A}:=\big{\langle}\mathsf{S}(\pi_{1}),\mathsf{S}\left(\pi_{1}+\pi_{4} \right)\circ\partial_{A\times A}\big{\rangle} \tag{8}\] _Then, \((\mathsf{S},\mu,\eta,\lambda)\) is a (Rosicky) tangent monad on \((\mathbb{X},\mathbb{B})\)._ Proof.: Recall the following useful identities regarding the product's pairing operator: \[(f\times g)\circ\langle h,k\rangle\circ j=\langle f\circ h\circ j,g\circ k \circ j\rangle \langle f\circ\pi_{1},g\circ\pi_{2}\rangle=f\times g \tag{9}\] We will show that \(\lambda_{A}\) satisfies the equalities from Definition 3.1.2. For simplicity and readability, we will omit the subscripts for the natural transformations. 1. \((\mu\times\mu)\circ\lambda\circ\mathsf{S}(\lambda)=\lambda\circ\mu\) First, observe that, by definition of \(\lambda\): \[(\pi_{1}+\pi_{4})\circ(\lambda\times\lambda)=\mathsf{S}(\pi_{1})\circ\pi_{1}+ \mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial\circ\pi_{2}\] (10) and, also, for four copies of \(A\): \[(\pi_{1}+\pi_{4})\circ\langle 1_{A\times A},0\rangle=\pi_{1}\] (11) Then, using **[DC.4]**, we compute that: \[(\mu\times\mu)\circ\lambda\circ\mathsf{S}(\lambda)=\big{\langle} \mu\circ\mathsf{S}(\pi_{1})\circ\mathsf{S}(\lambda),\mu\circ\mathsf{S}\left( \pi_{1}+\pi_{4}\right)\circ\partial\circ\mathsf{S}(\lambda)\big{\rangle}\] \[\qquad=\Big{\langle}\mu\circ\mathsf{S}\mathsf{S}(\pi_{1}),\mu \circ\mathsf{S}\left(\mathsf{S}(\pi_{1})\circ\pi_{1}+\mathsf{S}\left(\pi_{1}+ \pi_{4}\right)\circ\partial\circ\pi_{2}\right)\circ\partial\Big{\rangle}\] \[=\left\langle\mathsf{S}(\pi_{1})\circ\mu,\mu\circ\mathsf{S}\left( \mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\mathsf{S}(\left\langle 1,0\right\rangle)\circ\pi_{1}+ \mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial\circ\pi_{2}\right)\circ \partial\right\rangle\] \[=\left\langle\mathsf{S}(\pi_{1})\circ\mu,\mathsf{S}\left(\pi_{1} +\pi_{4}\right)\circ\mu\circ\mathsf{S}\left(\mathsf{S}(\left\langle 1,0\right\rangle)\circ\pi_{1}+ \mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial\circ\pi_{2}\right)\circ \partial\right\rangle\] \[=\left\langle\mathsf{S}(\pi_{1})\circ\mu,\mathsf{S}(\pi_{1}+\pi_{ 4})\circ\partial\circ\mu\right\rangle=\left\langle\mathsf{S}(\pi_{1}),\mathsf{ S}\left(\pi_{1}+\pi_{4}\right)\circ\partial\right\rangle\circ\mu=\lambda\circ\mu\] 2. \(\lambda\circ\eta=\eta\times\eta\) Note that, in the case of four copies of \(A\): \[(\pi_{1}+\pi_{4})\circ\left\langle 0,1_{A\times A}\right\rangle=\pi_{2}\] (12) Then, using **[DC.3]**, we compute: \[\lambda\circ\eta=\left\langle\mathsf{S}\left(\pi_{1}\right)\circ \eta,\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial\circ\eta\right\rangle =\left\langle\mathsf{S}\left(\pi_{1}\right)\circ\eta,\mathsf{S}\left(\pi_{1}+ \pi_{4}\right)\circ\eta\circ\left\langle 0,1\right\rangle\right\rangle\] \[=\left\langle\eta\circ\pi_{1},\eta\circ(\pi_{1}+\pi_{4})\circ(0,1 )\right\rangle=\left\langle\eta\circ\pi_{1},\eta\circ\pi_{2}\right\rangle= \eta\times\eta\] 3. \(p^{\times}\circ\lambda=\mathsf{S}(p^{\times})\) is immediate by the definition of \(\lambda\) and \(p^{\times}\). 4. 
\(\lambda\circ\mathsf{S}(z^{\times})=z^{\times}\) First, observe that we have: \[\pi_{1}\circ z^{\times}=1\] (13) Then, using **[DC.1]**, we compute: \[\lambda\circ\mathsf{S}(z^{\times})=\left\langle\mathsf{S}\left(\pi_{1} \right)\circ\mathsf{S}(z^{\times}),\mathsf{S}\left(\pi_{1}+\pi_{4}\right) \circ\partial\circ\mathsf{S}(z^{\times})\right\rangle=\left\langle 1,\mathsf{S}(\pi_{1}) \circ\partial\right\rangle=\left\langle 1,\mathsf{S}\left(\pi_{1}\right)\circ \partial\right\rangle=\left\langle 1,0\right\rangle=z^{\times}\] 5. \(\lambda\circ\mathsf{S}\left(s^{\times}\right)=s^{\times}\circ\left\langle \mathsf{S}(\pi_{1}),\pi_{2}\circ\lambda\circ\mathsf{S}(q_{1}^{\times}),\pi_{2} \circ\lambda\circ\mathsf{S}(q_{2}^{\times})\right\rangle\) First, observe that we have: \[s^{\times}\circ\left\langle f,g,h\right\rangle=\left\langle f,g+h\right\rangle\] (14) We leave it as an exercise for the reader to check that the following equality also holds for \(\pi_{2}\circ\lambda_{A}\circ\mathsf{S}\left(q_{1}^{\times}\right):\mathsf{S}(A \times A\times A)\to\mathsf{S}(A)\) and \(\pi_{2}\circ\lambda_{A}\circ\mathsf{S}\left(q_{2}^{\times}\right):\mathsf{S}( A\times A\times A)\to\mathsf{S}(A)\): \[\pi_{2}\circ\lambda_{A}\circ\mathsf{S}\left(q_{1}^{\times}\right)=\mathsf{S} \left(\left\langle\pi_{1},\pi_{5}\right\rangle\right)\circ\partial\hskip 56.905512pt\pi_{2} \circ\lambda_{A}\circ\mathsf{S}\left(q_{2}^{\times}\right)=\mathsf{S}\left( \left\langle\pi_{1},\pi_{6}\right\rangle\right)\circ\partial\] (15) and also that: \[\pi_{1}\circ s^{\times}=\pi_{1}\hskip 56.905512pt(\pi_{1}+\pi_{4})\circ(s^{ \times}\times s^{\times})=\left\langle\pi_{1},\pi_{5}+\pi_{6}\right\rangle\] (16) and lastly that for six copies of \(A\) and nine copies of \(A\): \[\left\langle\pi_{1},\pi_{5}\right\rangle=\left\langle\pi_{1},\pi_{ 8}+\pi_{9}\right\rangle\circ\left\langle\pi_{1},\pi_{2},0\right\rangle\hskip 28.452756pt \left\langle\pi_{1},\pi_{6}\right\rangle=\left\langle\pi_{1},\pi_{8}+\pi_{9} \right\rangle\circ\left\langle\pi_{1},0,\pi_{2}\right\rangle\] (17) \[\left\langle\pi_{1},\pi_{5}+\pi_{6}\right\rangle=\left\langle\pi_{1}, \pi_{8}+\pi_{9}\right\rangle\circ\left\langle\pi_{1},\pi_{2},\pi_{2}\right\rangle\] Then, using **[DC.2]**, we compute: \[\lambda\circ\mathsf{S}\left(s^{\times}\right)=\left\langle\mathsf{S}(\pi_{1} )\circ\mathsf{S}\left(s^{\times}\right),\mathsf{S}\left(\pi_{1}+\pi_{4}\right) \circ\partial\circ\mathsf{S}\left(s^{\times}\right)\right\rangle\] \[=\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}\left(\left\langle\pi_{1},\pi_{5}+\pi_{6}\right\rangle\right)\circ\partial\right\rangle=\left\langle \mathsf{S}(\pi_{1}),\mathsf{S}\left(\left\langle\pi_{1},\pi_{8}+\pi_{9}\right\rangle \right)\circ\mathsf{S}\left(\left\langle\pi_{1},\pi_{2},\pi_{2}\right\rangle \right)\circ\partial\right\rangle\] \[=\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}\left(\left\langle\pi_{1},\pi_{8}+\pi_{9}\right\rangle\right)\circ\left\langle\mathsf{S}(\left\langle\pi_{1},\pi_{2},0\right\rangle)\circ\partial+\mathsf{S}(\left\left\langle\pi_{1},0,\pi_{ 2}\right\rangle\right)\circ\partial\right\rangle\right\rangle\] \[=s^{\times}\circ\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}\left( \left\langle\pi_{1},\pi_{5}\right\rangle\right)\circ\partial,\mathsf{S}\left( \left\langle\pi_{1},\pi_{6}\right\rangle\right)\circ\partial\right\rangle=s^{ \times}\circ\left\langle\mathsf{S}(\pi_{1}),\pi_{2}\circ\lambda_{A}\circ\mathsf{S }(q_{1}^{\times}),\pi_{2}\circ\lambda_{A}\circ\mathsf{S}(q_{2}^{\times})\right\rangle\] For the next two identities it will be useful 
to expand out \((\lambda\times\lambda)\circ\lambda:\mathsf{S}(A\times A\times A\times A) \rightarrow\mathsf{S}(A)\times\mathsf{S}(A)\times\mathsf{S}(A)\times\mathsf{S}(A)\). We leave it as an exercise for the reader to check that: \[(\lambda\times\lambda)\circ\lambda=\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}( \pi_{1}+\pi_{6})\circ\partial,\mathsf{S}(\pi_{1}+\pi_{7})\circ\partial, \mathsf{S}(\pi_{1}+\pi_{7}+\pi_{10}+\pi_{16})\circ\partial\circ\partial\right\rangle\] (18) * \(l^{\times}\circ\lambda=(\lambda\times\lambda)\circ\lambda\circ\mathsf{S}\left( l^{\times}\right)\) First, observe that we have: \[l^{\times}\circ\left\langle f,g\right\rangle=\left\langle f,0,0,g\right\rangle\] (19) and also, using additive enrichment and the definition of \(\left\langle-,-\right\rangle\), we have: \[\pi_{1}\circ l^{\times}=\pi_{1}\qquad(\pi_{1}+\pi_{6})\circ\left( l^{\times}\times l^{\times}\right)=\pi_{1}=(\pi_{1}+\pi_{7})\circ\left(l^{ \times}\times l^{\times}\right)\] (20) \[(\pi_{1}+\pi_{7}+\pi_{10}+\pi_{16})\circ\left(l^{\times}\times l^{ \times}\times l^{\times}\times l^{\times}\right)=(\pi_{1}+\pi_{4})\circ\left\langle \pi_{1},\pi_{4}\right\rangle\] Then, using **[DC.1]** and **[DC.5]**, we compute: \[(\lambda\times\lambda)\circ\lambda\circ\mathsf{S}\left(l^{\times}\right)\] \[=\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}(\pi_{1}+\pi_{6})\circ \partial\circ\mathsf{S}\left(l^{\times}\right),\mathsf{S}(\pi_{1}+\pi_{7}) \circ\partial\circ\mathsf{S}\left(l^{\times}\right),\mathsf{S}(\pi_{1}+\pi_{7} +\pi_{10}+\pi_{16})\circ\partial\circ\partial\circ\mathsf{S}\left(l^{\times} \right)\right\rangle\] \[=\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}(\pi_{1})\circ \partial,\mathsf{S}(\pi_{1})\circ\partial,\mathsf{S}(\pi_{1}+\pi_{4})\circ \mathsf{S}(\left\langle\pi_{1},\pi_{4}\right\rangle)\circ\partial\circ \partial\right\rangle=\left\langle\mathsf{S}(\pi_{1}),0,0,\mathsf{S}(\pi_{1}+ \pi_{4})\circ\partial\right\rangle\] \[=l^{\times}\circ\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}(\pi_{ 1}+\pi_{4})\circ\partial\right\rangle=l^{\times}\circ\lambda\] * \(\left\langle\pi_{1},\pi_{3},\pi_{2},\pi_{4}\right\rangle\circ(\lambda\times \lambda)\circ\lambda=(\lambda\times\lambda)\circ\lambda\circ\mathsf{S}\left( (\pi_{1},\pi_{3},\pi_{2},\pi_{4})\right)\) First, observe that we have: \[c^{\times}\circ\left\langle f,g,h,k\right\rangle=\left\langle f,h,g,k\right\rangle\] (21) and also, using additive enrichment and the definition of \(\left\langle-,-\right\rangle\), we have: \[\pi_{1}\circ c^{\times}=\pi_{1}\qquad\left(\pi_{1}+\pi_{6})\circ c^{\times}= \pi_{1}+\pi_{7}\qquad\left(\pi_{1}+\pi_{7}\right)\circ\left(c^{\times}\times c ^{\times}\right)=\pi_{1}+\pi_{6}\] (22) \[(\pi_{1}+\pi_{7}+\pi_{10}+\pi_{16})\circ\left(c^{\times}\times c^{ \times}\times c^{\times}\times c^{\times}\right)=(\pi_{1}+\pi_{7}+\pi_{10}+\pi_ {16})\circ\left\langle\pi_{1},\pi_{3},\pi_{2},\pi_{4}\right\rangle\] Then, using **[DC.6]**, we compute: \[(\lambda\times\lambda)\circ\lambda\circ\mathsf{S}\left(c^{\times}\right)=\] \[=\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}\left(c^{\times}\right),\mathsf{S}(\pi_{1}+\pi_{6})\circ\partial\circ\mathsf{S}\left(c^{\times}\right),\mathsf{S}(\pi_{1}+\pi_{7})\circ\partial\circ\mathsf{S}\left(c^{\times} \right),\mathsf{S}(\pi_{1}+\pi_{7}+\pi_{10}+\pi_{16})\circ\partial\circ\partial \circ\mathsf{S}\left(c^{\times}\right)\right\rangle\] \[=\] \[= c^{\times}\circ\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}(\pi_{1}+ \pi_{7})\circ\partial,\mathsf{S}(\pi_{1}+\pi_{6})\circ\partial,\mathsf{S}(\pi_{ 
1}+\pi_{7}+\pi_{10}+\pi_{16})\circ\partial\circ\partial\right\rangle\] \[= c^{\times}\circ\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}(\pi_{1}+ \pi_{6})\circ\partial,\mathsf{S}(\pi_{1}+\pi_{7})\circ\partial,\mathsf{S}(\pi_{ 1}+\pi_{7}+\pi_{10}+\pi_{16})\circ\partial\circ\partial\right\rangle\] \[= c^{\times}\circ\left(\lambda\times\lambda\right)\circ\lambda\] We conclude that \((\mathsf{S},\mu,\eta,\lambda)\) is a tangent monad. If \(\mathbb{X}\) is also an additive category, it remains to show that \(\lambda\) also satisfies the last identity from Definition 3.1.2. \(n^{\times}\circ\lambda=\lambda\circ\mathsf{S}(n^{\times})\) First, observe that: \[n^{\times}\circ(\left\langle f,g\right\rangle)=\left\langle f,-g\right\rangle\] (23) and also that: \[\pi_{1}\circ n^{\times}=\pi_{1}\qquad\qquad\qquad\qquad(\pi_{1}+\pi_{4})\circ(n^{ \times}\times n^{\times})=(\pi_{1}+\pi_{4})\circ n^{\times}\] (24) Then, using **[DC.N]**, we compute: \[n^{\times}\circ\lambda=n^{\times}\circ\left\langle\mathsf{S}\left( \pi_{1}\right),\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial\right\rangle= \left\langle\mathsf{S}\left(\pi_{1}\right),\mathsf{S}\left(\pi_{1}+\pi_{4} \right)\circ(-\partial)\right\rangle=\left\langle\mathsf{S}\left(\pi_{1} \right),\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\mathsf{S}(n^{\times}) \circ\partial\right\rangle\] \[=\left\langle\mathsf{S}\left(\pi_{1}\right)\circ\mathsf{S}(n^{ \times}),\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial\circ\mathsf{S}(n^ {\times})\right\rangle=\lambda\circ\mathsf{S}(n^{\times})\] We conclude that \((\mathsf{S},\mu,\eta,\lambda)\) is a Rosicky tangent monad. The converse of Proposition 3.2.4 is also true, that is, every tangent monad on a semi-additive category induces a coCartesian differential monad. Furthermore, these constructions are inverses of each other, and therefore, for a semi-additive category, the data of a tangent monad is equivalent to that of a coCartesian differential monad. **Lemma 3.2.5**.: _Let \(\mathbb{X}\) be a semi-additive category, and let \((\mathsf{S},\mu,\eta,\lambda)\) be a tangent monad on \((\mathbb{X},\mathbb{B})\). Define the natural transformation \(\partial_{A}:\mathsf{S}(A)\xrightarrow{}\mathsf{S}(A\times A)\) as follows:_ \[\partial_{A}:=\pi_{2}\circ\lambda_{A\times A}\circ\mathsf{S}((1_{A},0,0,1_{A})) \tag{25}\] _Then, \((\mathsf{S},\mu,\eta,\partial)\) is a coCartesian differential monad on \(\mathbb{X}\)._ Proof.: Since the proof is essentially again by brute force calculations, and not necessarily more enlightening for this paper, we omit them here and instead simply give a sketch. To prove **[DC.1]**, we use the axiom between \(\lambda\) and \(z^{\times}\). To prove **[DC.2]**, we use the axiom between \(\lambda\) and \(s^{\times}\). To prove **[DC.3]**, we use the axiom between \(\lambda\) and \(\eta\). To prove **[DC.4]**, we use the axiom between \(\lambda\) and \(\mu\). To prove **[DC.5]**, we use the axiom between \(\lambda\) and \(l^{\times}\). And lastly, to prove **[DC.6]**, we use the axiom between \(\lambda\) and \(c^{\times}\). **Corollary 3.2.6**.: _For a monad \((\mathsf{S},\mu,\eta)\) on a semi-additive category \(\mathbb{X}\), there is a bijective correspondence between \(\mathbb{B}\)-distributive laws and differential combinator transformations. 
Therefore, for a semi-additive category \(\mathbb{X}\), there is a bijective correspondence between tangent monads on \((\mathbb{X},\mathbb{B})\) and coCartesian differential monads on \(\mathbb{X}\)._ Proof.: We must show that the constructions of Proposition 3.2.4 and Lemma 3.2.5 are mutually inverse. Starting with a \(\mathbb{B}\)-distributive law \(\lambda\), observe first that, for two copies of \(A\), we have: \[\left(\left(\pi_{1}+\pi_{4}\right)\times\left(\pi_{1}+\pi_{4}\right)\right)\circ\left\langle 1_{A\times A},0,0,1_{A\times A}\right\rangle=1_{A\times A} \tag{26}\] Then, we compute: \[\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\right\rangle=\left\langle\mathsf{S}(\pi_{1}),\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\pi_{2}\circ\lambda_{A\times A\times A\times A}\circ\mathsf{S}(\left\langle 1_{A\times A},0,0,1_{A\times A}\right\rangle)\right\rangle\] \[=\left\langle\pi_{1}\circ\lambda_{A},\pi_{2}\circ\lambda_{A}\right\rangle=\left\langle\pi_{1},\pi_{2}\right\rangle\circ\lambda_{A}=\lambda_{A}\] Starting instead from a differential combinator transformation \(\partial\), observe first that, for eight copies of \(A\): \[\left(\pi_{1}+\pi_{4}\right)\circ\left(\left\langle 1_{A},0,0,1_{A}\right\rangle\times\left\langle 1_{A},0,0,1_{A}\right\rangle\right)=1_{A\times A} \tag{27}\] Then, we compute: \[\pi_{2}\circ\lambda_{A\times A}\circ\mathsf{S}(\left\langle 1_{A},0,0,1_{A}\right\rangle)=\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\circ\mathsf{S}(\left\langle 1_{A},0,0,1_{A}\right\rangle)=\partial_{A}\] We conclude that \(\mathbb{B}\)-distributive laws and differential combinator transformations are indeed in bijective correspondence, and therefore, so are coCartesian differential monads and tangent monads. Since every coCartesian differential monad is a tangent monad, by applying [10, Proposition 20], we obtain a tangent structure on the category of algebras of a coCartesian differential monad. We expand out this construction in detail. Recall that, for a monad \((\mathsf{S},\mu,\eta)\) on a category \(\mathbb{X}\) with finite products, \(\mathsf{ALG}_{\mathsf{S}}\) also has finite products: \[\left(A,\alpha\right)\times\left(A^{\prime},\alpha^{\prime}\right):=\left(A\times A^{\prime},\left\langle\alpha\circ\mathsf{S}(\pi_{1}),\alpha^{\prime}\circ\mathsf{S}(\pi_{2})\right\rangle\right)\] The terminal object is \((*,t_{\mathsf{S}(*)})\), and the projections and pairings are the same as in \(\mathbb{X}\). **Theorem 3.2.7**.: _Let \((\mathsf{S},\mu,\eta,\partial)\) be a coCartesian differential monad on a semi-additive category \(\mathbb{X}\). Define:_ 1. _The tangent bundle functor as the functor_ \(\mathsf{T}:\mathsf{ALG}_{\mathsf{S}}\xrightarrow{}\mathsf{ALG}_{\mathsf{S}}\) _defined on objects as:_ \[\mathsf{T}(A,\alpha)=\left(A\times A,\left\langle\alpha\circ\mathsf{S}\left(\pi_{1}\right),\alpha\circ\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\right\rangle\right)\] (28) _and on maps as_ \(\mathsf{T}(f)=f\times f\)_;_ 2.
_The projection as the natural transformation_ \(p_{(A,\alpha)}:\mathsf{T}(A,\alpha)\xrightarrow{}(A,\alpha)\) _defined as_ \(p_{(A,\alpha)}:=\pi_{1}\)_, where the_ \(n\)_-fold pullback of_ \(p_{(A,\alpha)}\) _is:_ \[\mathsf{T}_{n}(A,\alpha)\!=\!\left(\prod_{i=1}^{n+1}A,\left\langle\alpha\circ\mathsf{S}(\pi_{1}),\alpha\circ\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\circ\mathsf{S}(\left\langle\pi_{1},\pi_{2}\right\rangle),\ldots,\alpha\circ\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\circ\mathsf{S}(\left\langle\pi_{1},\pi_{n+1}\right\rangle)\right\rangle\right)\] _with pullback projections_ \(q_{j}:\mathsf{T}_{n}(A,\alpha)\xrightarrow{}\mathsf{T}(A,\alpha)\) _defined as_ \(q_{j}=\left\langle\pi_{1},\pi_{j+1}\right\rangle\)_;_ 3. _The sum as the natural transformation_ \(s_{(A,\alpha)}:\mathsf{T}_{2}(A,\alpha)\xrightarrow{}\mathsf{T}(A,\alpha)\) _defined as_ \(s_{(A,\alpha)}:=\left\langle\pi_{1},\pi_{2}+\pi_{3}\right\rangle\)_;_ 4. _The zero map as the natural transformation_ \(z_{(A,\alpha)}:(A,\alpha)\xrightarrow{}\mathsf{T}(A,\alpha)\) _defined as_ \(z_{(A,\alpha)}:=\left\langle 1_{A},0\right\rangle\)_;_ 5. _The vertical lift as the natural transformation_ \(l_{(A,\alpha)}:\mathsf{T}(A,\alpha)\xrightarrow{}\mathsf{T}^{2}(A,\alpha)\) _defined as_ \(l_{(A,\alpha)}:=\left\langle\pi_{1},0,0,\pi_{2}\right\rangle\)_;_ 6. _The canonical flip as the natural transformation_ \(c_{(A,\alpha)}:\mathsf{T}^{2}(A,\alpha)\xrightarrow{}\mathsf{T}^{2}(A,\alpha)\) _defined as_ \(c_{(A,\alpha)}:=\left\langle\pi_{1},\pi_{3},\pi_{2},\pi_{4}\right\rangle\)_._ _Then, \(\mathbb{T}=(\mathsf{T},p,s,z,l,c)\) is a tangent structure on \(\mathsf{ALG}_{\mathsf{S}}\), and so \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T})\) is a Cartesian tangent category. If \(\mathbb{X}\) is also an additive category, then define:_ 1. _The negative map as the natural transformation_ \(n_{(A,\alpha)}:\mathsf{T}(A,\alpha)\xrightarrow{}\mathsf{T}(A,\alpha)\) _defined as_ \(n_{(A,\alpha)}:=\left\langle\pi_{1},-\pi_{2}\right\rangle\)_._ _Then,_ \(\mathbb{T}=(\mathsf{T},p,s,z,l,c,n)\) _is a Rosicky tangent structure on_ \(\mathsf{ALG}_{\mathsf{S}}\)_, and so,_ \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T})\) _is a Cartesian Rosicky tangent category._ We again stress that, even if for an \(\mathsf{S}\)-algebra \((A,\alpha)\), its tangent bundle \(\mathsf{T}(A,\alpha)\) and \((A,\alpha)\times(A,\alpha)\) have the same underlying object \(A\times A\), they are in general not equal (or even isomorphic) as \(\mathsf{S}\)-algebras. They are only equal when \((A,\alpha)\) is a differential object, as we will discuss below in Section 3.5. ### Adjoint Tangent Structure for coCartesian Differential Monads In this section, we discuss adjoint tangent structure for the category of algebras of a coCartesian differential monad. The first thing to observe is that, for any semi-additive category \(\mathbb{X}\), \((\mathbb{X},\mathbb{B})\) has adjoint tangent structure, where, since \(\mathbb{X}^{op}\) is also a (semi-)additive category, the tangent bundle functor is the same as the one in \(\mathbb{X}\). Concretely, to explain why \((\mathbb{X},\mathbb{B})\) has adjoint tangent structure, by Lemma 2.2.3 it suffices to explain why the tangent bundle functor has a left adjoint. **Lemma 3.3.1**.: _[_10_, Section 6]_ _Let \(\mathbb{X}\) be a semi-additive category.
The tangent bundle functor \(\mathsf{B}\) is its own left adjoint, where the unit \(\eta^{\times}_{A}:A\xrightarrow{}A\times A\times A\times A\) of the adjunction injects \(A\) into the first and last component, \(\eta^{\times}_{A}:=\left\langle 1_{A},0,0,1_{A}\right\rangle\), and where the counit \(\varepsilon^{\times}_{A}:A\times A\times A\times A\xrightarrow{}A\) sums the first and last components, \(\varepsilon^{\times}_{A}:=\pi_{1}+\pi_{4}\). So \((\eta^{\times},\varepsilon^{\times}):\mathsf{B}\dashv\mathsf{B}\). Therefore, \((\mathbb{X},\mathbb{B})\) has adjoint tangent structure, where:_ 1. _The adjoint tangent bundle functor is the tangent bundle functor_ \(\mathsf{B}:\mathbb{X}^{op}\xrightarrow{}\mathbb{X}^{op}\)_;_ 2. _The adjoint projection_ \(p^{\times\circ}:A\times A\xrightarrow{}A\) _is precisely the zero of the tangent structure, so_ \(p^{\times\circ}_{A}=\left\langle 1_{A},0\right\rangle\)_, and_ \(\mathsf{B}_{n}(A)\) _is the_ \(n\)_-fold pushout of_ \(p^{\times\circ}_{A}\)_, where the_ \(j\)_-th pushout injection_ \(q^{\times\circ}_{j}:A\times A\xrightarrow{}\mathsf{B}_{n}(A)\) _is given by injecting the first component into the first component and the second component into the_ \(j\)_-th component:_ \(q^{\times\circ}_{j}:=\left\langle\pi_{1},0,\ldots,0,\pi_{2},0,\ldots,0\right\rangle\)_;_ 3. _The adjoint sum_ \(s^{\times\circ}_{A}:A\times A\xrightarrow{}A\times A\times A\) _is given by copying the second component,_ \(s^{\times\circ}_{A}:=\left\langle\pi_{1},\pi_{2},\pi_{2}\right\rangle\)_;_ 4. _The adjoint vertical lift_ \(l^{\times\circ}_{A}:A\times A\times A\times A\xrightarrow{}A\times A\) _projects the first component onto the first component and the fourth component onto the second component,_ \(l^{\times\circ}_{A}:=\left\langle\pi_{1},\pi_{4}\right\rangle\)_;_ 5. _The adjoint canonical flip_ \(c^{\times\circ}_{A}:A\times A\times A\times A\xrightarrow{}A\times A\times A\times A\) _is the same as the canonical flip,_ \(c^{\times\circ}_{A}:=\left\langle\pi_{1},\pi_{3},\pi_{2},\pi_{4}\right\rangle\)_._ _Then \(\mathbb{B}^{\circ}=(\mathsf{B},p^{\times\circ},s^{\times\circ},z^{\times\circ},l^{\times\circ},c^{\times\circ})\) is a tangent structure on \(\mathbb{X}^{op}\), and so \((\mathbb{X}^{op},\mathbb{B}^{\circ})\) is a Cartesian tangent category. Similarly, if \(\mathbb{X}\) is an additive category, then:_ 1. _The adjoint negative_ \(n^{\times\circ}:A\times A\xrightarrow{}A\times A\) _is the same as the negative of the tangent structure,_ \(n^{\times\circ}_{A}=\left\langle\pi_{1},-\pi_{2}\right\rangle\)_._ _Then \(\mathbb{B}^{\circ}=(\mathsf{B},p^{\times\circ},s^{\times\circ},z^{\times\circ},l^{\times\circ},c^{\times\circ},n^{\times\circ})\) is a Rosicky tangent structure on \(\mathbb{X}^{op}\), and so \((\mathbb{X}^{op},\mathbb{B}^{\circ})\) is a Cartesian Rosicky tangent category._ If \((\mathsf{S},\mu,\eta,\partial)\) is a coCartesian differential monad on a (semi-)additive category \(\mathbb{X}\), then the tangent bundle functor \(\mathsf{T}\) on \(\mathsf{ALG}_{\mathsf{S}}\) is a lifting of \(\mathsf{B}\) via the distributive law \(\lambda\), in the sense of [21, Lemma 1]. Even if \(\mathsf{B}\) is its own left adjoint, in general, \(\mathsf{T}\) will not be its own left adjoint, or even necessarily have a left adjoint.
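As a quick sanity check of this self-adjointness (not spelled out above, and written here informally in terms of generalized elements), one of the triangle identities, \(\varepsilon^{\times}_{\mathsf{B}(A)}\circ\mathsf{B}(\eta^{\times}_{A})=1_{\mathsf{B}(A)}\), unfolds as follows: \[\mathsf{B}(\eta^{\times}_{A})(a,b)=(a,0,0,a,b,0,0,b)\qquad\text{and}\qquad\varepsilon^{\times}_{\mathsf{B}(A)}(a,0,0,a,b,0,0,b)=(a,0)+(0,b)=(a,b)\] since \(\varepsilon^{\times}_{\mathsf{B}(A)}\) sums the first and fourth \(A\times A\)-blocks of \(\mathsf{B}^{2}(\mathsf{B}(A))\); the other triangle identity, \(\mathsf{B}(\varepsilon^{\times}_{A})\circ\eta^{\times}_{\mathsf{B}(A)}=1_{\mathsf{B}(A)}\), is checked in the same way.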
As discussed in [21], for \(\mathsf{T}\) to have a left adjoint requires the existence of reflexive coequalizers in \(\mathsf{ALG}_{\mathsf{S}}\). By [21, Theorem 2], if \(\mathsf{ALG}_{\mathsf{S}}\) has reflexive coequalizers, then the tangent bundle functor \(\mathsf{T}\) has a left adjoint \(\mathsf{T}^{\circ}:\mathsf{ALG}_{\mathsf{S}}\to\mathsf{ALG}_{\mathsf{S}}\). So, we have a unit \(\eta^{\mathsf{S}}_{(A,\alpha)}:(A,\alpha)\xrightarrow{}\mathsf{TT}^{\circ}(A,\alpha)\) and a counit \(\varepsilon^{\mathsf{S}}_{(A,\alpha)}:\mathsf{T}^{\circ}\mathsf{T}(A,\alpha) \xrightarrow{}(A,\alpha)\), and thus, \((\eta^{\mathsf{S}},\varepsilon^{\mathsf{S}}):\mathsf{T}^{\circ}\dashv \mathsf{T}\). In particular, \(\mathsf{T}^{\circ}(A,\alpha)\) is the coequalizer in \(\mathsf{ALG}_{\mathsf{S}}\) of \(\mu_{A}\circ\mathsf{S}(\lambda_{A})\) and \(\mathsf{S}(\alpha\times\alpha)\), which are both \(\mathsf{S}\)-algebra morphisms with common section \(\mathsf{S}(\eta_{A}\times\eta_{A})\), which is also a \(\mathsf{S}\)-algebra morphism. Note that, in general, coequalizers in \(\mathsf{ALG}_{\mathsf{S}}\) can be very different from coequalizers in \(\mathbb{X}\). Therefore, the underlying object of \(\mathsf{T}^{\circ}(A,\alpha)\) will in general not be \(A\times A\). In fact, in most cases, it is a much more complex object. However, for every object \(A\) in \(\mathbb{X}\), we do have a natural isomorphism \(\tau_{A}:\mathsf{T}^{\circ}\left(\mathsf{S}(A),\mu_{A}\right)\xrightarrow{}( \mathsf{S}(A\times A),\mu_{A\times A})\), defined as: \[\tau_{A}:=\varepsilon^{\mathsf{S}}_{\left(\mathsf{S}\left(\mathsf{S}(\mathsf{ T}^{\circ}(A,\alpha)),\mu_{\emptyset\mathsf{T}^{\circ}(A,\alpha)}\right) \right)}\circ\mathsf{T}^{\circ}(\lambda_{\emptyset\mathsf{T}^{\circ}(A, \alpha)})\circ\mathsf{T}^{\circ}\mathsf{S}(\eta^{\times}_{A})\] so, \(\mathsf{T}^{\circ}\left(\mathsf{S}(A),\mu_{A}\right)\cong(\mathsf{S}(A\times A ),\mu_{A\times A})\). Similarly, for \(\mathsf{T}_{n}\), we also obtain left adjoints \(\mathsf{T}^{\circ}_{n}:\mathsf{ALG}_{\mathsf{S}}\to\mathsf{ALG}_{\mathsf{S}}\). Therefore, \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T})\) has adjoint tangent structure, and therefore \((\mathsf{ALG}^{op}_{\mathsf{S}},\mathbb{T}^{\circ})\) is a (Rosicky) tangent category. One could also express the (Rosicky) tangent structure \(\mathbb{T}^{\circ}\) using the fact that \(\mathsf{T}^{\circ}(A,\alpha)\) is a reflexive coequalizer. 
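To get a feel for \(\tau\), it may help to anticipate the operad examples of Section 4 (this illustration is ours and is not part of the formal development): for the symmetric algebra monad \(\mathsf{Sym}\) of Example 4.1.3, the isomorphism reads \[\mathsf{T}^{\circ}\left(\mathsf{Sym}(V),\mu_{V}\right)\cong\left(\mathsf{Sym}(V\times V),\mu_{V\times V}\right)\] and when \(V\) is free on a set \(X\), so that \(\mathsf{Sym}(V)\cong R[X]\) and \(\mathsf{Sym}(V\times V)\cong R[X,dX]\), this says that the adjoint tangent bundle of the free algebra \(R[X]\) is again a free algebra, namely the polynomial algebra \(R[X,dX]\), which is the picture behind the tangent bundle of affine schemes mentioned in the introduction to Section 4.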
There is a relation between \((\mathsf{ALG}^{op}_{\mathsf{S}},\mathbb{T}^{\circ})\) and \((\mathbb{X}^{op},\mathbb{B}^{\circ})\) via the natural isomorphism \(\tau\), that is, the following equalities hold in \(\mathbb{X}\): \[\tau_{A}\circ p^{\circ}_{(\mathsf{S}(A),\mu_{A})}=\mathsf{S}(p^{\times\circ}_{A})\qquad\qquad\mathsf{S}(z^{\times\circ}_{A})\circ\tau_{A}=z^{\circ}_{(\mathsf{S}(A),\mu_{A})}\] (29) \[\mathsf{S}\left(s^{\times\circ}_{A}\right)\circ\tau_{A}=\left[\mathsf{S}(q^{\times\circ}_{1})\circ\tau_{A},\mathsf{S}(q^{\times\circ}_{2})\circ\tau_{A}\right]\circ s^{\circ}_{(\mathsf{S}(A),\mu_{A})}\] \[\tau_{A}\circ l^{\circ}_{(\mathsf{S}(A),\mu_{A})}=\mathsf{S}\left(l^{\times\circ}_{A}\right)\circ\tau_{A\times A}\circ\mathsf{T}^{\circ}(\tau_{A})\qquad\qquad\tau_{A\times A}\circ\mathsf{T}^{\circ}(\tau_{A})\circ c^{\circ}_{(\mathsf{S}(A),\mu_{A})}=\mathsf{S}\left(c^{\times\circ}_{A}\right)\circ\tau_{A\times A}\circ\mathsf{T}^{\circ}(\tau_{A})\] **Proposition 3.3.3**.: _For a Cartesian differential comonad on a semi-additive (resp. additive) category, the opposite of the coEilenberg-Moore category is a Cartesian (Rosicky) tangent category. Furthermore, if the coEilenberg-Moore category has reflexive coequalizers (and finite coproducts), then the coEilenberg-Moore category is a (Cartesian) (Rosicky) tangent category with adjoint tangent structure._ ### Vector Fields for coCartesian Differential Monads Let us now discuss vector fields in the category of algebras of a coCartesian differential monad. First, note that, since the forgetful functor preserves the tangent structure strictly, vector fields in the category of algebras will also be vector fields in the base category. Now, if \(\mathbb{X}\) is a semi-additive category, then vector fields in \((\mathbb{X},\mathbb{B})\) correspond to endomorphisms \(A\xrightarrow{}A\). Indeed, a vector field on an object \(A\) in \((\mathbb{X},\mathbb{B})\) is a map \(v:A\xrightarrow{}A\times A\) such that \(\pi_{1}\circ v=1_{A}\). Therefore, \(v\) is of the form \(v=\langle 1_{A},f_{v}\rangle\) for a unique map \(f_{v}:A\xrightarrow{}A\). Conversely, for any endomorphism \(f:A\xrightarrow{}A\), \(\langle 1_{A},f\rangle:A\xrightarrow{}A\times A\) is a vector field. So, \(\mathsf{V}_{\mathbb{T}}(A)\) is isomorphic to the set of endomorphisms of \(A\).
If \(\mathbb{X}\) is an additive category, then the tangent category Lie bracket is given by the commutator: \([v,w]=\langle 1_{A},(f_{v}\circ f_{w})-(f_{w}\circ f_{v})\rangle\). Therefore, for a coCartesian differential monad \((\mathsf{S},\mu,\eta,\partial)\), vector fields in \((\mathsf{ALG}_{\mathsf{S}},\mathsf{T})\) will correspond to certain endomorphisms in \(\mathbb{X}\) which induce \(\mathsf{S}\)-algebra morphisms. We will call these special endomorphisms \(\mathsf{S}\)-derivations. This terminology comes from the fact that, as we will see in Section 4.5, vector fields in the tangent category of algebras of an operad correspond precisely to derivations in the operadic sense. **Definition 3.4.1**.: _Let \((\mathsf{S},\mu,\eta,\partial)\) be a coCartesian differential monad on a semi-additive category \(\mathbb{X}\). An \(\mathsf{S}\)**-derivation** on an \(\mathsf{S}\)-algebra \((A,\alpha)\) is a map \(D:A\xrightarrow{}A\) such that the following equality holds:_ \[D\circ\alpha=\alpha\circ\mathsf{S}(\pi_{1}+D\circ\pi_{2})\circ\partial_{A} \tag{30}\] _Let \(\mathsf{DER}_{\mathsf{S}}(A,\alpha)\) be the set of \(\mathsf{S}\)-derivations on \((A,\alpha)\)._ **Lemma 3.4.2**.: _Let \((\mathsf{S},\mu,\eta,\partial)\) be a coCartesian differential monad on a semi-additive category \(\mathbb{X}\). Then, for an \(\mathsf{S}\)-algebra \((A,\alpha)\), there is a bijective correspondence between vector fields on \((A,\alpha)\) in \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T}^{\mathsf{S}})\) and \(\mathsf{S}\)-derivations on \((A,\alpha)\). Explicitly,_ 1. _If_ \(v\in\mathsf{V}_{\mathbb{T}}(A,\alpha)\)_, define_ \(D_{v}:A\xrightarrow{}A\) _as_ \(D_{v}:=\pi_{2}\circ v\)_. Then,_ \(D_{v}\in\mathsf{DER}_{\mathsf{S}}(A,\alpha)\)_;_ 2. _If_ \(D\in\mathsf{DER}_{\mathsf{S}}(A,\alpha)\)_, define_ \(v_{D}:A\xrightarrow{}A\times A\) _as_ \(v_{D}=\langle 1_{A},D\rangle\)_. Then_ \(v_{D}\in\mathsf{V}_{\mathbb{T}}(A,\alpha)\)_;_ _and these constructions are inverses of each other. So, \(\mathsf{V}_{\mathbb{T}}(A,\alpha)\cong\mathsf{DER}_{\mathsf{S}}(A,\alpha)\). If \(\mathbb{X}\) is also an additive category, then the induced Lie bracket on \(\mathsf{V}_{\mathbb{T}}(A,\alpha)\), as defined in Proposition 2.3.2, is \([v,w]=\langle 1_{A},(D_{v}\circ D_{w})-(D_{w}\circ D_{v})\rangle\)._ Proof.: Starting from vector fields, let \(v\in\mathsf{V}_{\mathbb{T}}(A,\alpha)\). In \(\mathbb{X}\), this means that the vector field is of type \(v:A\xrightarrow{}A\times A\) and \(\pi_{1}\circ v=1_{A}\). We also have that \(v:(A,\alpha)\xrightarrow{}\mathsf{T}^{\mathsf{S}}(A,\alpha)\) is an \(\mathsf{S}\)-algebra morphism, which means that: \[v\circ\alpha=\left\langle\alpha\circ\mathsf{S}(\pi_{1}),\alpha\circ\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\right\rangle\circ\mathsf{S}(v) \tag{31}\] Now, observe that: \[(\pi_{1}+\pi_{4})\circ(v\times v)=\pi_{1}+D_{v}\circ\pi_{2} \tag{32}\] Therefore, we compute that: \[\alpha\circ\mathsf{S}(\pi_{1}+D_{v}\circ\pi_{2})\circ\partial_{A}=\alpha\circ\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\circ\mathsf{S}\left(v\right)\] \[=\pi_{2}\circ\left\langle\alpha\circ\mathsf{S}(\pi_{1}),\alpha\circ\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\right\rangle\circ\mathsf{S}(v)=\pi_{2}\circ v\circ\alpha=D_{v}\circ\alpha\] So, \(D_{v}\) satisfies (30), and thus, \(D_{v}\) is an \(\mathsf{S}\)-derivation. Conversely, let \(D\in\mathsf{DER}_{\mathsf{S}}(A,\alpha)\).
We must first show that \(v_{D}\) is an \(\mathsf{S}\)-algebra morphism from \((A,\alpha)\) to \(\mathsf{T}(A,\alpha)\). First, note that, as before, by definition, we have: \[(\pi_{1}+\pi_{4})\circ(v_{D}\times v_{D})=\pi_{1}+D\circ\pi_{2} \tag{33}\] So, we compute: \[\left\langle\alpha\circ\mathsf{S}(\pi_{1}),\alpha\circ\mathsf{S} \left(\pi_{1}+\pi_{4}\right)\circ\partial_{A\times A}\right\rangle\circ \mathsf{S}(v_{D})=\left\langle\alpha\circ\mathsf{S}(\pi_{1})\circ\mathsf{S} \left(v_{D}\right),\alpha\circ\mathsf{S}\left(\pi_{1}+\pi_{4}\right)\circ\partial _{A\times A}\circ\mathsf{S}\left(v_{D}\right)\right\rangle\] \[=\left\langle\alpha,\alpha\circ\mathsf{S}\left(\pi_{1}+D\circ\pi_{2} \right)\circ\partial_{A}\right\rangle=\left\langle\alpha,D\circ\alpha\right\rangle =\left\langle 1_{A},D\right\rangle\circ\alpha=v_{D}\circ\alpha\] So, \(v_{D}:(A,\alpha)\to\mathsf{T}^{\mathsf{S}}(A,\alpha)\) is an \(\mathsf{S}\)-algebra morphism. By definition, \(\pi_{1}\circ v_{D}=1_{A}\), so \(p_{(A,\alpha)}\circ v_{D}=1_{(A,\alpha)}\). Thus, we conclude that \(v_{D}\) is a vector field on \((A,\alpha)\) in \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T})\). Furthermore, these constructions are clearly inverses of each other, that is, \(v_{D_{v}}=v\) and \(D_{v_{D}}=D\). So, we conclude that \(\mathsf{V}_{\mathbb{T}}(A,\alpha)\cong\mathsf{DER}_{\mathsf{S}}(A,\alpha)\). Lastly, since for tangent monads, the forgetful functor preserves the tangent structure strictly, and the Lie bracket is completely defined using the tangent structure, it follows that the forgetful functor also preserves the Lie bracket. This implies that the Lie bracket in the category of algebras must be the same as the Lie bracket in the base category, or in other words, the tangent monad "lifts" the Lie bracket. Therefore, if \(\mathbb{X}\) is an additive category, then for \(v,w\in\mathsf{V}_{\mathbb{T}}(A,\alpha)\), we have \([v,w]=\langle 1_{A},(D_{v}\circ D_{w})-(D_{w}\circ D_{v})\rangle\) as desired. By Lemma 2.3.3, if \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T})\) has adjoint tangent structure, then the vector fields in \((\mathsf{ALG}_{\mathsf{S}}^{op},\mathbb{T}^{\circ})\) also correspond to \(\mathsf{S}\)-derivations. We note that \(\mathsf{S}\)-derivations also generalize the notion of derivations for codifferential categories, as defined by the third author in [25]. Indeed, every codifferential category comes equipped with a canonical coCartesian differential monad \(\mathsf{S}\)[20, Example 3.13], and \(\mathsf{S}\)-derivations in the sense above are precisely the \(\mathsf{S}\)-derivations defined in [25, Definition 5.1], where the latter generalize the notion of differential algebras in codifferential categories. ### Differential Objects for coCartesian Differential Monads In this section we discuss differential objects in both the category of algebras of a coCartesian differential monad and its opposite category. In particular, we will explain how the free algebras of the monad are always differential objects in the opposite category. This was somewhat to be expected since the subcategory of free algebras is equivalent to the Kleisli category of the monad, whose opposite category is known to be a Cartesian differential category. First, observe that, if \(\mathbb{X}\) is a semi-additive category, every object \(A\) in \(\mathbb{X}\) has a unique differential object structure in \((\mathbb{X},\mathbb{B})\), since in particular, by definition, \(\mathsf{B}(A)=A\times A\). 
The differential projection \(\hat{p}:A\times A\xrightarrow{}A\) is the second projection, \(\hat{p}=\pi_{2}\). The sum \(\sigma:A\times A\xrightarrow{}A\) sums the two components together, \(\sigma:=\pi_{1}+\pi_{2}\), and the zero \(\zeta:\ast\xrightarrow{}A\) is the zero map \(\zeta=0\). So, \((A,\hat{p},\sigma,\zeta)\) is a differential object in \((\mathbb{X},\mathbb{B})\). So \(\mathsf{DIFF}[(\mathbb{X},\mathbb{B})]=\mathbb{X}\). Now, let \((\mathsf{S},\mu,\eta,\partial)\) be a coCartesian differential monad on a semi-additive category \(\mathbb{X}\). As explained in [5, Section 4.3], strong Cartesian tangent morphisms send differential objects to differential objects. Therefore, since the forgetful functor preserves the Cartesian tangent structure strictly, it preserves differential objects. So, if \(\big((A,\alpha),\hat{p},\sigma,\zeta\big)\) is a differential object in \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T})\), then \((A,\hat{p},\sigma,\zeta)\) must also be a differential object in \((\mathbb{X},\mathbb{B})\). As explained in the previous paragraph, this means that the differential object structure of \((A,\alpha)\) must be of the form \(\hat{p}=\pi_{2}\), \(\sigma:=\pi_{1}+\pi_{2}\), and \(\zeta=0\). So an \(\mathsf{S}\)-algebra \((A,\alpha)\) has at most one differential object structure, and it has one if and only if \(\pi_{2}:\mathsf{T}(A,\alpha)\xrightarrow{}(A,\alpha)\), \(\pi_{1}+\pi_{2}:(A,\alpha)\times(A,\alpha)\xrightarrow{}(A,\alpha)\), and \(0:(\top,1_{\top})\xrightarrow{}(A,\alpha)\) are \(\mathsf{S}\)-algebra morphisms. We may equivalently express this as follows: **Lemma 3.5.1**.: _Let \((\mathsf{S},\mu,\eta,\partial)\) be a coCartesian differential monad on a semi-additive category \(\mathbb{X}\). Then, an \(\mathsf{S}\)-algebra \((A,\alpha)\) has a (necessarily unique) differential object structure if and only if the following equalities hold:_ \[\alpha=\alpha\circ\mathsf{S}(\pi_{2})\circ\partial_{A}\qquad\qquad\alpha\circ\mathsf{S}(\pi_{1}+\pi_{2})=\alpha\circ\mathsf{S}(\pi_{1})+\alpha\circ\mathsf{S}(\pi_{2})\qquad\qquad\alpha\circ\mathsf{S}(0)=0 \tag{34}\] Proof.: For the \(\Rightarrow\) direction, suppose that \(\big((A,\alpha),\pi_{2},\pi_{1}+\pi_{2},0\big)\) is a differential object. That \(\pi_{2}:\mathsf{T}(A,\alpha)\xrightarrow{}(A,\alpha)\) is an \(\mathsf{S}\)-algebra morphism implies that \(\alpha\circ\mathsf{S}(\pi_{2})=\alpha\circ\mathsf{S}(\pi_{1}+\pi_{4})\circ\partial_{A\times A}\). Pre-composing both sides by \(\mathsf{S}(\langle 0,1_{A}\rangle)\), we get \(\alpha=\alpha\circ\mathsf{S}(\pi_{2})\circ\partial_{A}\). On the other hand, that \(\pi_{1}+\pi_{2}:(A,\alpha)\times(A,\alpha)\xrightarrow{}(A,\alpha)\) and \(0:(\top,1_{\top})\xrightarrow{}(A,\alpha)\) are \(\mathsf{S}\)-algebra morphisms immediately implies the other two equalities of (34). Conversely, for the \(\Leftarrow\) direction, assume that the equations of (34) hold. Per the above discussion, we need to show that \(\pi_{2}:\mathsf{T}(A,\alpha)\xrightarrow{}(A,\alpha)\), \(\pi_{1}+\pi_{2}:(A,\alpha)\times(A,\alpha)\xrightarrow{}(A,\alpha)\), and \(0:(\top,1_{\top})\xrightarrow{}(A,\alpha)\) are all \(\mathsf{S}\)-algebra morphisms. However, the second and third equality of (34) immediately imply that \(\pi_{1}+\pi_{2}\) and \(0\) are \(\mathsf{S}\)-algebra morphisms.
To show that \(\pi_{2}\) is also an \(\mathsf{S}\)-algebra morphism, first note that the second equality of (34) implies that: \[\alpha\circ\mathsf{S}(\pi_{1}+\pi_{4})=\alpha\circ\mathsf{S}(\pi_{1})+\alpha\circ\mathsf{S}(\pi_{4}) \tag{35}\] And note that since \(\pi_{4}=\pi_{2}\circ(\pi_{2}\times\pi_{2})\), using the naturality of \(\partial\), we get that: \[\mathsf{S}(\pi_{4})\circ\partial_{A\times A}=\mathsf{S}(\pi_{2})\circ\partial_{A}\circ\mathsf{S}(\pi_{2}) \tag{36}\] Then, using these identities, the first equality of (34), and **[DC.1]**, we compute: \[\pi_{2}\circ(\alpha\times\alpha)\circ\lambda_{A}=\alpha\circ\pi_{2}\circ\lambda_{A}=\alpha\circ\mathsf{S}(\pi_{1}+\pi_{4})\circ\partial_{A\times A}=\alpha\circ\mathsf{S}(\pi_{1})\circ\partial_{A\times A}+\alpha\circ\mathsf{S}(\pi_{4})\circ\partial_{A\times A}\] \[=0+\alpha\circ\mathsf{S}(\pi_{2})\circ\partial_{A}\circ\mathsf{S}(\pi_{2})=\alpha\circ\mathsf{S}(\pi_{2})\] So \(\pi_{2}\) is an \(\mathsf{S}\)-algebra morphism. So we conclude that \(\big((A,\alpha),\pi_{2},\pi_{1}+\pi_{2},0\big)\) is a differential object. Furthermore, the fact that \(\pi_{2}:\mathsf{T}(A,\alpha)\xrightarrow{}(A,\alpha)\) is an \(\mathsf{S}\)-algebra morphism actually implies that we have an equality \(\mathsf{T}(A,\alpha)=(A,\alpha)\times(A,\alpha)\) on the nose. This is quite a strong requirement. So, one should not expect many differential objects in \((\mathsf{ALG}_{\mathsf{S}},\mathbb{T})\). However, the terminal object \((*,t_{\mathsf{S}(*)})\) is always a differential object. Let us turn our attention to differential objects in the opposite category. Again, observe first that every object \(A\) in a semi-additive category \(\mathbb{X}\) also has a unique differential object structure in \((\mathbb{X}^{op},\mathbb{B}^{\circ})\). Viewed in \(\mathbb{X}\), the differential projection \(\hat{p}^{\circ}:A\xrightarrow{}A\times A\) is the injection into the second component, \(\hat{p}^{\circ}=\langle 0,1_{A}\rangle\), the sum \(\sigma^{\circ}:A\times A\xrightarrow{}A\) is the copy map, \(\sigma^{\circ}=\langle 1_{A},1_{A}\rangle\), and the zero \(\zeta^{\circ}:A\xrightarrow{}*\) is the zero map in the other direction, \(\zeta^{\circ}=0\). So, \((A,\hat{p}^{\circ},\sigma^{\circ},\zeta^{\circ})\) is a differential object in \((\mathbb{X}^{op},\mathbb{B}^{\circ})\). So, \(\mathsf{DIFF}[(\mathbb{X}^{op},\mathbb{B}^{\circ})]=\mathbb{X}^{op}\). Suppose now that \(\mathsf{ALG}_{\mathsf{S}}\) has all reflexive coequalizers and finite coproducts. Since \((\mathsf{F}^{\mathsf{S}},\tau)\) is a strong Cartesian tangent morphism, \((\mathsf{F}^{\mathsf{S}},\tau)\) will map \((A,\hat{p}^{\circ},\sigma^{\circ},\zeta^{\circ})\) to a differential object in \((\mathsf{ALG}_{\mathsf{S}}^{op},\mathbb{T}^{\circ})\) whose underlying \(\mathsf{S}\)-algebra is the free \(\mathsf{S}\)-algebra over \(A\). Therefore, we see that \(\mathsf{KL}_{\mathsf{S}}^{op}\) is a sub-Cartesian differential category of \(\mathsf{DIFF}[(\mathsf{ALG}_{\mathsf{S}}^{op},\mathbb{T}^{\circ})]\). **Proposition 3.5.2**.: _Let \((\mathsf{S},\mu,\eta,\partial)\) be a coCartesian differential monad on a semi-additive category \(\mathbb{X}\), and suppose that \(\mathsf{ALG}_{\mathsf{S}}\) has all reflexive coequalizers and finite coproducts. Then, for every object \(A\) in \(\mathbb{X}\), the free \(\mathsf{S}\)-algebra over \(A\) has a differential object structure in \((\mathsf{ALG}_{\mathsf{S}}^{op},\mathbb{T}^{\circ})\).
Explicitly, \(\Big(\big(\mathsf{S}(A),\mu_{A}\big),\tau_{A}^{-1}\circ\mathsf{S}(\langle 0,1_{A}\rangle),\theta_{A,A}^{-1}\circ\mathsf{S}(\langle 1_{A},1_{A}\rangle),\mathsf{S}(0)\Big)\) is a differential object in \((\mathsf{ALG}_{\mathsf{S}}^{op},\mathbb{T}^{\circ})\), where the composites in the quadruple are taken in \(\mathbb{X}\)._ For an arbitrary coCartesian differential monad, there could be other differential objects that are not free \(\mathsf{S}\)-algebras. In future work, it would be interesting to give a precise characterization of the coCartesian differential monads \(\mathsf{S}\) whose differential objects coincide with free \(\mathsf{S}\)-algebras. We conclude by restating this in terms of Cartesian differential comonads. **Proposition 3.5.3**.: _For a Cartesian differential comonad on a semi-additive category such that the coEilenberg-Moore category has reflexive coequalizers and finite coproducts, every cofree coalgebra is a differential object in the coEilenberg-Moore category._ ## 4 The Tangent Categories of Algebras of an Operad This section achieves the main objective of this paper: showing that the category of algebras of an operad and its opposite category are both tangent categories. To do so, we will first show that the associated monad of an operad is a coCartesian differential monad. By using the results of Section 3, we then obtain a tangent structure on the category of algebras of an operad. We will explain how the tangent bundle is given by the semi-direct product, generalizing the tangent bundle given by dual numbers for commutative algebras. We will then show that we also have adjoint tangent structure, which gives us tangent structure for the opposite category of algebras of an operad. This time, the tangent bundle is given by the free algebra over the module of Kähler differentials, generalizing the tangent bundle of affine schemes. We will also discuss vector fields and differential objects in these tangent categories, and explain how they correspond respectively to derivations and certain modules. Lastly, since every operad gives a coCartesian differential monad, we also take a closer look at the induced Cartesian differential category. We will explain how this can intuitively be thought of as a Lawvere theory of polynomials over the operad. Throughout this section, we will also review the necessary concepts from the theory of operads as needed, such as operads, their algebras, modules, derivations, etc. For a detailed introduction to the theory of operads, we refer to [24, 26]. ### The coCartesian Differential Monad of an Operad For any operad, there is a canonical way to construct a monad from it. The algebras over the operad are, by definition, the algebras over this monad. The objective of this section is to prove that for any operad, said monad is in fact always a coCartesian differential monad. By the results of Section 3, it then follows that the category of algebras over an operad forms a tangent category and that the opposite category of the Kleisli category of an operad is a Cartesian differential category. To give a coCartesian differential monad, we must first fix our (semi-)additive category. For the remainder of this section, we fix \(R\) to be a commutative ring and we write \(\mathsf{MOD}_{R}\) for the category of \(R\)-modules and \(R\)-linear maps between them. It is well known that \(\mathsf{MOD}_{R}\) is an additive category.
So we wish to show that every operad induces a canonical coCartesian differential monad on \(\mathsf{MOD}_{R}\). Throughout this paper, by an operad we mean a symmetric algebraic operad, which means an operad in \(R\)-modules that allows for permutations of arguments. This latter part is captured by actions of the symmetric group, where for each \(n\in\mathbb{N}\), we denote the symmetric group on \(n\) letters by \(\Sigma(n)\). So, slightly more concretely, an **operad** is a sequence \(\mathcal{P}=(\mathcal{P}(n))_{n\in\mathbb{N}}\) of \(R\)-modules such that: 1. There is a distinguished element \(1_{\mathcal{P}}\in\mathcal{P}(1)\); 2. For every \(n\), there is a right action of \(\Sigma(n)\) on \(\mathcal{P}(n)\), which we denote as \(\mu\cdot\sigma\) for all \(\mu\in\mathcal{P}(n)\) and \(\sigma\in\Sigma(n)\); 3. For every \(n\) and \(m\), there is a family of \(R\)-linear maps \(\circ_{i}:\mathcal{P}(m)\otimes\mathcal{P}(n)\xrightarrow{}\mathcal{P}(m+n-1)\) for all \(i\) with \(1\leq i\leq m\), called the **partial compositions**, which we denote as \(\mu\circ_{i}\nu:=\circ_{i}(\mu\otimes\nu)\). The partial compositions are required to satisfy natural equivariance and associativity conditions, and \(1_{\mathcal{P}}\) is required to play the role of a unit with respect to partial compositions, see [26, Section 5.3.4] for full details. Using the partial compositions, we can also define the \(R\)-linear maps \(\circ:\mathcal{P}(k)\otimes\ \mathcal{P}(n_{1})\otimes\cdots\otimes \mathcal{P}(n_{k})\xrightarrow{}\mathcal{P}(n_{1}+\cdots+n_{k})\) for all \(k\) and \(n_{i}\), called the **complete composition**, which is defined on pure tensors as: \[\circ(\mu\otimes\nu_{1}\otimes\cdots\otimes\nu_{k})=(\cdots((\mu\circ_{k} \nu_{k})\circ_{k-1}\nu_{k-1})\cdots)\circ_{1}\nu_{1}\] and then extended by \(R\)-linearity. As a shorthand, we denote: \[\mu(\nu_{1},\ldots,\nu_{k}):=\circ(\mu\otimes\nu_{1}\otimes\cdots\otimes\nu_{ k})\] After Theorem 4.1.1 below, we give some well-known examples of operads. We now describe the monad associated to an operad \(\mathcal{P}\)[26, Section 5.1.2]. First, for any \(R\)-module \(V\), define the \(R\)-module \(\mathsf{S}(\mathcal{P},V)\) as the coproduct of all \(\mathcal{P}(n)\otimes V^{\otimes n}\) quotiented by \(\Sigma(n)\) (where the right action by \(\Sigma(n)\) on \(V^{\otimes n}\) permutes the factors of the tensor power): \[\mathsf{S}(\mathcal{P},V)=\bigoplus_{n\in\mathbb{N}}\left(\mathcal{P}(n) \otimes V^{\otimes n}\right)_{\Sigma(n)}. \tag{37}\] As a shorthand, we denote the equivalence class of a pure tensor as follows: \[(\mu;v_{1},\ldots,v_{n}):=[\mu\otimes v_{1}\otimes\cdots\otimes v_{n}]\in \left(\mathcal{P}(n)\otimes V^{\otimes n}\right)_{\Sigma(n)}\qquad\qquad\qquad \mu\in\mathcal{P}(n),v_{i}\in V \tag{38}\] Observe that for any \(R\)-linear morphism with domain \(\mathsf{S}(\mathcal{P},V)\), it suffices to define said morphism on elements of the form \((\mu;v_{1},\ldots,v_{n})\), making sure the definition respects the action of the symmetric group, and then extend by \(R\)-linearity. 
With this in mind, define the functor \(\mathsf{S}(\mathcal{P},-):\mathsf{MOD}_{R}\xrightarrow{}\mathsf{MOD}_{R}\) which maps an \(R\)-module \(V\) to \(\mathsf{S}(\mathcal{P},V)\), and sends an \(R\)-linear morphism \(f:V\xrightarrow{}W\) to the \(R\)-linear morphism \(\mathsf{S}(\mathcal{P},f):\mathsf{S}(\mathcal{P},V)\xrightarrow{}\mathsf{S}(\mathcal{P},W)\) which is defined as follows: \[\mathsf{S}(\mathcal{P},f)(\mu;v_{1},\ldots,v_{n})=(\mu;f(v_{1}),\ldots,f(v_{n})) \tag{39}\] The monad unit \(\eta_{V}:V\xrightarrow{}\mathsf{S}(\mathcal{P},V)\) and the monad multiplication \(\gamma_{V}:\mathsf{S}\left(\mathcal{P},\mathsf{S}(\mathcal{P},V)\right)\xrightarrow{}\mathsf{S}(\mathcal{P},V)\) are respectively defined as follows: \[\eta_{V}(v)=(1_{\mathcal{P}};v)\in\mathcal{P}(1)\otimes V\qquad\qquad\gamma_{V}\left(\mu;\left(\nu_{1};v_{1,1},\ldots,v_{1,n_{1}}\right),\ldots,\left(\nu_{k};v_{k,1},\ldots,v_{k,n_{k}}\right)\right)=\left(\mu\left(\nu_{1},\ldots,\nu_{k}\right);v_{1,1},\ldots,v_{k,n_{k}}\right) \tag{40}\] Then, \((\mathsf{S}(\mathcal{P},-),\gamma,\eta)\) is a monad on \(\mathsf{MOD}_{R}\)[24, Section 3]. We will now prove that this monad is in fact also a coCartesian differential monad. **Theorem 4.1.1**.: _Let \(\mathcal{P}\) be an operad. Define the \(R\)-linear morphism \(\partial_{V}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}\mathsf{S}(\mathcal{P},V\times V)\) as follows:_ \[\partial_{V}\left(\mu;v_{1},\ldots,v_{n}\right)=\sum_{i=1}^{n}\left(\mu;(v_{1},0),\ldots,(0,v_{i}),\ldots,(v_{n},0)\right) \tag{41}\] _where, in the \(i\)-th summand, we apply the first injection \(V\xrightarrow{}V\times V\) to every input except the \(i\)-th one, to which we apply the second injection._ _Then \(\partial\) is a differential combinator transformation for \((\mathsf{S}(\mathcal{P},-),\gamma,\eta)\), thus \((\mathsf{S}(\mathcal{P},-),\gamma,\eta,\partial)\) is a coCartesian differential monad._ Proof.: It is clear that \(\partial\) is a natural transformation. Therefore, we must prove that **[DC.1] to [DC.6]** in Definition 3.2.1 hold. First observe that for any \(R\)-linear morphism \(f:V\times V\xrightarrow{}W\), we have that: \[\mathsf{S}(\mathcal{P},f)\left(\partial_{V}\left(\mu;v_{1},\ldots,v_{n}\right)\right)=\sum_{i=1}^{n}\left(\mu;f(v_{1},0),\ldots,f(0,v_{i}),\ldots,f(v_{n},0)\right) \tag{42}\] This will help simplify our calculations.
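Before checking the axioms, it may also help to record what (41) does in the lowest arities (this unfolding is just a direct reading of the formula, added for convenience): for \(\mu\in\mathcal{P}(1)\) and \(\nu\in\mathcal{P}(2)\), \[\partial_{V}(\mu;v)=(\mu;(0,v))\qquad\qquad\partial_{V}(\nu;v_{1},v_{2})=(\nu;(0,v_{1}),(v_{2},0))+(\nu;(v_{1},0),(0,v_{2}))\] so each summand replaces exactly one input by its copy in the second factor of \(V\times V\); for the operad \(\operatorname{Com}\) of Example 4.1.3 below, the arity two case is precisely the Leibniz rule.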
**[DC.1]**: Here we use the fact that \(\left(\mu;v_{1},\ldots,0,\ldots,v_{n}\right)=0\): \[\mathsf{S}(\mathcal{P},\pi_{1})\left(\partial_{V}\left(\mu;v_{1},\ldots,v_{n} \right)\right)=\sum_{i=1}^{n}\left(\mu;v_{1},\ldots,0,\ldots,v_{n}\right)=0\] So \(\mathsf{S}(\mathcal{P},\pi_{1})\circ\partial_{V}=0\) **[DC.2]**: Here we use that \(\left(\mu;v_{1},\ldots,v_{i}+v_{i}^{\prime},\ldots,v_{n}\right)=\left(\mu;v _{1},\ldots,v_{i},\ldots,v_{n}\right)+\left(\mu;v_{1},\ldots,v_{i}^{\prime}, \ldots,v_{n}\right)\): \[\mathsf{S}(\mathcal{P},\left\langle\pi_{1},\pi_{2},\pi_{2}\right\rangle)\left( \partial_{V}\left(\mu;v_{1},\ldots,v_{n}\right)\right)=\sum_{i=1}^{n}\left( \mu;(v_{1},0,0),\ldots,(0,v_{i},v_{i}),\ldots,(v_{n},0,0)\right)\] \[=\sum_{i=1}^{n}\left(\mu;(v_{1},0,0),\ldots,(0,v_{i},0),\ldots,(v_{n},0,0) \right)+\sum_{i=1}^{n}\left(\mu;(v_{1},0,0),\ldots,(0,0,v_{i}),\ldots,(v_{n}, 0,0)\right)\] \[=\mathsf{S}(\mathcal{P},\left\langle\pi_{1},\pi_{2},0\right\rangle)\left( \partial_{V}\left(\mu;v_{1},\ldots,v_{n}\right)\right)+\mathsf{S}(\mathcal{P},\left\langle\pi_{1},0,\pi_{2}\right\rangle)\left(\partial_{V}\left(\mu;v_{1}, \ldots,v_{n}\right)\right)\] So \(\mathsf{S}(\mathcal{P},\left\langle\pi_{1},\pi_{2},\pi_{2}\right\rangle)\circ \partial_{V}=\mathsf{S}(\mathcal{P},\left\langle\pi_{1},\pi_{2},0\right\rangle )\circ\partial_{V}+\mathsf{S}(\mathcal{P},\left\langle\pi_{1},0,\pi_{2} \right\rangle)\circ\partial_{V}\) **[DC.3]**: This is just the case for \(n=1\) which says that \(\partial_{V}(\mu;v)=\left(\mu;(0,v)\right)\). So \(\partial_{V}\circ\eta_{V}=\eta_{V\times V}\circ\langle 0,1_{V}\rangle\). **[DC.4]**: We can assume that our starting point is of the form \(\left(\mu;\left(\nu_{1};v_{1,1},\ldots,v_{1,n_{1}}\right),\ldots,\left(\nu_{k };v_{k,1},\ldots,v_{k,n_{k}}\right)\right)\). 
On the one hand we have that: \[\partial_{V}\left(\gamma_{V}\left(\mu;\left(\nu_{1};v_{1,1}, \ldots,v_{1,n_{1}}\right),\ldots,\left(\nu_{k};v_{k,1},\ldots,v_{k,n_{k}} \right)\right)\right)\] \[=\partial_{V}\left(\left(\mu\left(\nu_{1},\ldots,\nu_{k}\right);v _{1,1},\ldots,v_{k,n_{k}}\right)\right)\] \[=\sum_{i=1}^{k}\sum_{j_{i}=1}^{n_{i}}\left(\mu\left(\nu_{1}, \ldots,\nu_{k}\right);\left(v_{1,1},0\right),\ldots,(0,v_{i,j_{i}}),\ldots,(v_{ k,n_{k}},0)\right)\] On the other hand we have that: \[\partial_{\mathsf{S}(\mathcal{P},V)}\left(\mu;\left(\nu_{1};v_{1,1}, \ldots,v_{1,n_{1}}\right),\ldots,\left(\nu_{k};v_{k,1},\ldots,v_{k,n_{k}} \right)\right)\] \[=\sum_{i=1}^{k}\left(\mu;\left(\left(\nu_{1};v_{1,1},\ldots,v_{1,n_{1}}\right), 0\right),\ldots,\left(0,\left(\nu_{i};v_{i,1},\ldots,v_{i,n_{i}}\right)\right),\ldots\right.\] \[\ldots\left(\left(\nu_{k};v_{k,1},\ldots,v_{k,n_{k}}\right),0\right)\right)\] Then applying \(\mathsf{S}\left(\mathcal{P},\mathsf{S}(\mathcal{P},\left\langle 1_{V},0\right\rangle) \circ\pi_{1}+\partial_{V}\circ\pi_{2}\right)\), using the multilinearity in the module arguments, we get: \[\sum_{i=1}^{k}\sum_{j_{i}=1}^{n_{i}}\left(\mu;\left(\nu_{1};\left(\nu_{1,1},0 \right),\ldots,\left(v_{1,n_{1}},0\right)\right),\ldots,\left(\nu_{i};(v_{i,1},0),\ldots,(0,v_{i,j_{i}}),\ldots,(v_{i,n_{i}},0)\right),\ldots\right.\] \[\ldots\left(\nu_{k};(v_{k,1},0),\ldots,(v_{k,n_{k}},0)\right)\right)\] Then finally applying \(\gamma_{V\times V}\) gets us back exactly to: \[\sum_{i=1}^{k}\sum_{j_{i}=1}^{n_{i}}\left(\mu\left(\nu_{1},\ldots,\nu_{k} \right);\left(v_{1,1},0\right),\ldots,(0,v_{i,j_{i}}),\ldots,(v_{k,n_{k}},0)\right)\] So \(\partial_{V}\circ\gamma_{V}=\gamma_{V\times V}\circ S\left(\mathcal{P}, \mathsf{S}(\mathcal{P},\left\langle 1_{V},0\right\rangle)\circ\pi_{1}+\partial_{V}\circ\pi_{2} \right)\circ\partial_{\mathsf{S}(\mathcal{P},V)}\). 
For the remaining two relations, let us first expand out \(\partial_{V\times V}\circ\partial_{V}\): \[\partial_{V\times V}\left(\partial_{V}\left(\mu;v_{1},\dots,v_{n} \right)\right) =\sum_{i=1}^{n}\sum_{1\leq j<i}\left(\mu;(v_{1},0,0,0),\dots,(0,0,v_ {j},0)\dots,(0,v_{i},0,0),\dots,(v_{n},0,0,0)\right)\] \[+\sum_{i=1}^{n}\left(\mu;(v_{1},0,0,0),\dots,(0,0,0,v_{i})\dots,( v_{n},0,0,0)\right)\] \[+\sum_{i=1}^{n}\sum_{i<j\leq n}\left(\mu;(v_{1},0,0,0),\dots,(0,v_ {i},0,0),\dots,(0,0,v_{j},0),\dots,(v_{n},0,0,0)\right)\] **[DC.5]**: Here we again use that \(\left(\mu;v_{1},\dots,0,\dots,v_{n}\right)=0\): \[\mathsf{S}(\left\langle\pi_{1},\pi_{4}\right\rangle)\left( \partial_{V\times V}\left(\partial_{V}\left(\mu;v_{1},\dots,v_{n}\right) \right)\right)=\sum_{i=1}^{n}\sum_{1\leq j<i}\left(\mu;(v_{1},0),\dots,(0,0) \dots,(0,0),\dots,(v_{n},0)\right)\] \[+\sum_{i=1}^{n}\left(\mu;(v_{1},0),\dots,(0,v_{i})\dots,,(v_{n},0 )\right)+\sum_{i=1}^{n}\sum_{i<j\leq n}\left(\mu;(v_{1},0),\dots,(0,0),\dots, (0,0),\dots,(v_{n},0,0,0)\right)\] \[=0+\partial_{V}\left(\mu;v_{1},\dots,v_{n}\right)+0=\partial_{V} \left(\mu;v_{1},\dots,v_{n}\right)\] So \(\mathsf{S}(\left\langle\pi_{1},\pi_{4}\right\rangle)\circ\partial_{V\times V} \circ\partial_{V}=\partial_{V}\) **[DC.6]**: This amounts to a simple reindexing by swapping \(i\) and \(j\): \[S\left(\mathcal{P},\left\langle\pi_{1},\pi_{3},\pi_{2},\pi_{4} \right\rangle\right)\left(\partial_{V\times V}\left(\partial_{V}\left(\mu;v_ {1},\dots,v_{n}\right)\right)\right)=\] \[=\sum_{i=1}^{n}\sum_{1\leq j<i}\left(\mu;(v_{1},0,0,0),\dots,(0,v_ {j},0,0)\dots,(0,0,v_{i},0),\dots,(v_{n},0,0,0)\right)\] \[+\sum_{i=1}^{n}\left(\mu;(v_{1},0,0,0),\dots,(0,0,0,v_{i})\dots,,(v_{n},0,0,0)\right)\] \[+\sum_{i=1}^{n}\sum_{i<j\leq n}\left(\mu;(v_{1},0,0,0),\dots,(0,v _{i},0,0),\dots,(0,0,v_{j},0),\dots,(v_{n},0,0,0)\right)\] \[=\sum_{j=1}^{n}\sum_{1\leq i<j}\left(\mu;(v_{1},0,0,0),\dots,(0,0,v_{i},0)\dots,(0,v_{j},0,0),\dots,(v_{n},0,0,0)\right)\] \[+\sum_{j=1}^{n}\left(\mu;(v_{1},0,0,0),\dots,(0,0,0,v_{j})\dots,,(v_{n},0,0,0)\right)\] \[+\sum_{j=1}^{n}\sum_{j<i\leq n}\left(\mu;(v_{1},0,0,0),\dots,(0,v _{j},0,0),\dots,(0,0,v_{i},0),\dots,(v_{n},0,0,0)\right)\] \[=\partial_{V\times V}\left(\partial_{V}\left(\mu;v_{1},\dots,v_ {n}\right)\right)\] So \(S\left(\mathcal{P},\left\langle\pi_{1},\pi_{3},\pi_{2},\pi_{4}\right\rangle \right)\partial_{V\times V}\circ\partial_{V}=\partial_{V\times V}\circ\partial _{V}\) So we conclude that \(\partial\) is a differential combinator transformation and that \((\mathsf{S}(\mathcal{P},-),\gamma,\eta,\partial)\) is a coCartesian differential monad. Here are now some well-known examples of operads, their assocaited monad and resulting differential combinator transformation: **Example 4.1.2**.: Any (unital and associative) \(R\)-algebra \(A\) induces an operad \(A^{\bullet}\) where \(A^{\bullet}(1)=A\) and \(A^{\bullet}(n)=0\) for \(n\neq 1\). The associated monad is given by the free \(A\)-module monad, that is, \(\mathsf{S}(A^{\bullet},V)=A\otimes V\). See [26, Example 0, 5.2.10] for full details. The differential combinator transformation \(\partial_{V}:A\otimes V\to A\otimes(V\times V)\) is simply given by injecting \(V\) into the second component, so \(\partial_{V}(a\otimes v)=a\otimes(0,v)\). **Example 4.1.3**.: The operad \(\operatorname{Com}\) is defined by \(\operatorname{Com}(n)=R\) for all \(n\), with trivial action of the symmetric group. Unit and compositions are defined by identities of \(R\). 
The associated monad is the symmetric algebra monad: \[\mathsf{S}(\operatorname{Com},V)=\mathsf{Sym}(V)=\bigoplus_{n\in\mathbb{N}} \left(V^{\otimes^{n}}\right)_{\Sigma(n)}\] See [26, Example 2, 5.2.10] for full details. The differential combinator transformation \(\partial_{V}:\mathsf{Sym}(V)\xrightarrow{}\mathsf{Sym}(V\times V)\) is defined on pure symmetrized tensors as follows: \[\partial_{V}([v_{1}\otimes\ldots\otimes v_{n}])=\sum_{i=1}^{n}[(v_{1},0) \otimes\ldots\otimes(0,v_{i})\otimes\ldots\otimes(v_{n},0)]\] Now, if \(V\) is a free \(R\)-module with basis \(X\), then \(\mathsf{Sym}(V)\) is isomorphic as an \(R\)-algebra to the polynomial \(R\)-algebra over \(X\), \(\mathsf{Sym}(V)\cong R[X]\). Also, \(\mathsf{Sym}(V\times V)\) is isomorphic to the polynomial \(R\)-algebra over the disjoint union of \(X\) with itself. So, writing \(dX=\{dx|\ \forall x\in X\}\) to distinguish between the first copy and the second copy of \(X\), we have that \(\mathsf{Sym}(V\times V)\cong R[X,dX]\). So in terms of polynomials, \(\partial_{V}:R[X]\xrightarrow{}R[X,dX]\) maps a polynomial to the sum of its partial derivatives: \[\partial_{V}(p(\vec{x}))=\sum_{i=1}^{n}\frac{dp(\vec{x})}{dx_{i}}dx_{i}\] Therefore, \(\partial\) recaptures polynomial differentiation. **Example 4.1.4**.: There is an operad \(\operatorname{Ass}\) defined by \(\operatorname{Ass}(n)=R[\Sigma(n)]\), the regular representation of the group \(\Sigma(n)\). See [26, Example 1, 5.2.10] for full details. The associated monad is the tensor algebra monad: \[\mathsf{S}(\operatorname{Ass},V)=\mathsf{Ten}(V)=\bigoplus_{n\in\mathbb{N}}V ^{\otimes n}\] The differential combinator transformation \(\partial_{V}:\mathsf{Ten}(V)\xrightarrow{}\mathsf{Ten}(V\times V)\) is defined on pure tensors as follows: \[\partial_{V}(v_{1}\otimes\ldots\otimes v_{n})=\sum_{i=1}^{n}(v_{1},0)\otimes \ldots\otimes(0,v_{i})\otimes\ldots\otimes(v_{n},0)\] Now, if \(V\) is a free \(R\)-module with basis \(X\), then \(\mathsf{Ten}(V)\) is isomorphic to the \(R\)-algebra of non-commutative polynomials over \(X\). As such \(\partial\) corresponds to differentiating non-commutative polynomials. **Example 4.1.5**.: There is an operad Lie whose associated monad is given by the free Lie algebra monad, \(\mathsf{S}(\operatorname{Lie},V)=\mathsf{Lie}(V)\). See [26, Example 3, 5.2.10] for full details. In particular, \(\mathsf{Lie}(V)\) is spanned by elements of the form: \([v_{1},[v_{2},\ldots[v_{n-1},v_{n}]\ldots]]\), so the Lie bracket of the Lie bracket of etc. of elements \(v_{i}\in V\). The differential combinator transformation \(\partial_{V}:\mathsf{Lie}(V)\xrightarrow{}\mathsf{Lie}(V\times V)\) is defined on pure Lie brackets as follows: \[\partial_{V}\left(\left[v_{1},[v_{2},\ldots[v_{n-1},v_{n}]\ldots]\right]\right) =\sum_{i=1}^{n}\left[(v_{1},0),\left[(v_{2},0),\ldots\left[(0,v_{i}),\ldots[(v_{ n-1},0),(v_{n},0)]\ldots\right]\right]\right]\] Therefore, \(\partial\) corresponds to differentiating Lie bracket polynomials. ### The Cartesian Differential Categories of an Operad The first consequence of Theorem 4.1.1 is that the opposite category of the Kleisli category of an operad's associated monad is a Cartesian differential category. As a shorthand, for an operad \(\mathcal{P}\) we denote \(\mathsf{KL}_{\mathcal{P}}:=\mathsf{KL}_{\mathsf{S}(\mathcal{P},-)}\) for the Kleisli category of \((\mathsf{S}(\mathcal{P},-),\gamma,\eta)\). So we may state that: **Proposition 4.2.1**.: _Let \(\mathcal{P}\) be an operad. 
Then \(\mathsf{KL}_{\mathcal{P}}^{\mathit{op}}\), is a Cartesian differential category._ Let us unpack this Cartesian differential category. Recall that the objects of \(\mathsf{KL}_{\mathcal{P}}^{\mathit{op}}\) are \(R\)-modules, while a map \(f:V\xrightarrow{}W\) in \(\mathsf{KL}_{\mathcal{P}}^{\mathit{op}}\) is an \(R\)-linear morphism of type \(f:W\xrightarrow{}\mathsf{S}(\mathcal{P},V)\). The derivative \(\mathsf{D}[f]:V\times V\xrightarrow{}W\) in \(\mathsf{KL}_{\mathcal{P}}^{\mathit{op}}\) is the \(R\)-linear morphism of type \(\mathsf{D}[f]:W\xrightarrow{}\mathsf{S}(\mathcal{P},V\times V)\) defined as: \[\mathsf{D}[f]=\partial_{V}\circ f\] Let us give a bit of intuition about this. Let \(X\) be a basis for a free \(R\)-module \(V\). Then \((\mu;x_{1},\ldots,x_{n})\) can be interpreted as a sort of monomial of degree \(n\), which we call \(\mathcal{P}\)-monomials. With this in mind, an arbitrary element \(P\in\mathsf{S}(\mathcal{P},V)\) is therefore a finite sum of \(\mathcal{P}\)-monomials, and therefore we may interpret \(P\) as a \(\mathcal{P}\)-polynomial. Now \((\mu;(x_{1},0),\ldots,(0,x_{i}),\ldots,(x_{n},0))\) should be understood as the partial derivative of \((\mu;x_{1},\ldots,x_{n})\) in the variable \(x_{i}\). Therefore, the differential combinator transformation \(\partial_{V}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}\mathsf{S}(\mathcal{P},V \times V)\) maps \(\mathcal{P}\)-polynomials to the sum of their partial derivatives, which we suggestively write as: \[\partial_{V}(P)=\sum_{x\in X}\frac{dP}{dx}dx\] where the sum is well-defined since \(P\) only depends on a finite number of elements of \(X\). In other words, \(\partial_{V}\) maps \(\mathcal{P}\)-polynomials to their total derivative. Now let us extend this intuition to the Kleisli category. If \(W\) is another free \(R\)-module with basis \(Y\), then a map \(f:V\xrightarrow{}W\) in \(\mathsf{KL}_{P}^{\,op}\) is precisely associated to a tuple of \(\mathcal{P}\)-polynomials in variables \(X\): \[f\equiv\langle P_{y}\rangle_{y\in Y}\] So its derivative \(\mathsf{D}[f]:V\times V\xrightarrow{}W\) in \(\mathsf{KL}_{\mathcal{P}}^{\,op}\) is associated to the tuple of the total derivative of each \(\mathcal{P}\)-polynomial: \[\mathsf{D}[f]\equiv\left\langle\sum_{x\in X}\frac{dP_{y}}{dx}dx\right\rangle_{ y\in Y}\] Therefore \(\mathsf{KL}_{P}^{\,op}\) can naively be understood as a generalized Lawvere theory of \(\mathcal{P}\)-polynomial. We can obtain a legitimate Lawvere theory of \(\mathcal{P}\)-polynomials by defining \(\mathcal{P}\)-POLY to be the category whose objects are the natural numbers \(n\in\mathbb{N}\) and where a map \(n\xrightarrow{}m\) is an \(m\)-tuple \(\langle P_{1},\ldots,P_{m}\rangle\) where \(P_{i}\in\mathsf{S}(\mathcal{P},R^{n})\). Then \(\mathcal{P}\)-POLY is equivalent to the full subcategory of \(\mathsf{KL}_{\mathcal{P}}^{\,op}\) of finite dimensional \(R\)-modules, where in particular \(\mathcal{P}\)-POLY\((n,m)\cong\mathsf{MOD}_{R}\left(R^{m},\mathsf{S}(\mathcal{P},R^{n})\right)\). Also, note that \(\mathcal{P}\)-POLY\((n,1)=\mathsf{S}(\mathcal{P},R^{n})\). Therefore we have that: **Proposition 4.2.2**.: _Let \(\mathcal{P}\) be an operad. 
Then \(\mathcal{P}\)-POLY is a Cartesian differential category where in particular for a map \(P:n\xrightarrow{}m\), \(P=\langle P_{1},\ldots,P_{m}\rangle\), its derivative \(\mathsf{D}[P]:n\times n\xrightarrow{}m\) is defined as follows:_ \[\mathsf{D}\left[\langle P_{1},\ldots,P_{m}\rangle\right]=\langle\partial_{R^{n}}(P_{1}),\ldots,\partial_{R^{n}}(P_{m})\rangle\] Here are the resulting Cartesian differential categories for our main examples of operads. **Example 4.2.3**.: For an \(R\)-algebra \(A\), \(A^{\bullet}\)-POLY is equivalent to the Cartesian differential category of \(A\)-linear maps. So let \(A\)-LIN be the category whose objects are \(n\in\mathbb{N}\) and where a map \(f:n\xrightarrow{}m\) is an \(A\)-linear morphism \(f:A^{n}\xrightarrow{}A^{m}\). Then \(A\)-LIN is a Cartesian differential category where the differential combinator is defined as \(\mathsf{D}[f](\vec{x},\vec{y})=f(\vec{y})\)[20, Example 2.6], and furthermore we have that \(A^{\bullet}\)-POLY\(\simeq A\)-LIN as Cartesian differential categories. **Example 4.2.4**.: For the operad Com, Com-POLY recaptures precisely polynomial differentiation since it is equivalent to the Lawvere theory of polynomials over \(R\), \(R\)-POLY, which is one of the main examples of Cartesian differential categories [20, Example 2.6]. Concretely, \(R\)-POLY is the category whose objects are \(n\in\mathbb{N}\) and where a map \(P:n\xrightarrow{}m\) is an \(m\)-tuple of polynomials in \(n\) variables, that is, \(P=\langle p_{1}(\vec{x}),\ldots,p_{m}(\vec{x})\rangle\) with \(p_{i}(\vec{x})\in R[x_{1},\ldots,x_{n}]\). \(R\)-POLY is a Cartesian differential category where the differential combinator is given by the standard differentiation of polynomials, that is, for a map \(P:n\xrightarrow{}m\), with \(P=\langle p_{1}(\vec{x}),\ldots,p_{m}(\vec{x})\rangle\), its derivative \(\mathsf{D}[P]:n\times n\xrightarrow{}m\) is defined as the tuple of the sum of the partial derivatives of the polynomials \(p_{i}(\vec{x})\): \[\mathsf{D}[P](\vec{x},\vec{y}):=\left(\sum_{i=1}^{n}\frac{\partial p_{1}(\vec{x})}{\partial x_{i}}y_{i},\ldots,\sum_{i=1}^{n}\frac{\partial p_{m}(\vec{x})}{\partial x_{i}}y_{i}\right)\qquad\qquad\sum_{i=1}^{n}\frac{\partial p_{j}(\vec{x})}{\partial x_{i}}y_{i}\in R[x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}]\] So we have that \(\text{Com-POLY}\simeq R\)-POLY as Cartesian differential categories. **Example 4.2.5**.: For the operad Ass, Ass-POLY captures instead differentiating non-commutative polynomials since it is equivalent to the Lawvere theory of non-commutative polynomials. **Example 4.2.6**.: For the operad Lie, Lie-POLY is the Cartesian differential category given by the Lawvere theory of Lie bracket polynomials. We conclude this section by discussing the notion of a \(\mathsf{D}\)-linear counit for a coCartesian differential monad.
**Definition 4.2.7**.: _[_20_, Definition 3.8]_ _For a coCartesian differential monad \((\mathsf{S},\mu,\eta,\partial)\) on a semi-additive category \(\mathbb{X}\), a \(\mathsf{D}\)-linear counit is a natural transformation \(\mathcal{E}_{A}:\mathsf{S}(A)\xrightarrow{}A\) in \(\mathbb{X}\) such that the following equalities hold:_ **[DU.1]**__\(\mathcal{E}_{A}\circ\eta_{A}=1_{A}\)__ **[DU.2]**__\(\eta_{A}\circ\mathcal{E}_{A}=\mathsf{S}(\pi_{2})\circ\partial_{A}\)__ In a Cartesian differential category with a differential combinator \(\mathsf{D}\), there is an important class of maps called the \(\mathsf{D}\)-linear maps [3, Definition 2.2.1] which are maps \(f\) such that \(\mathsf{D}[f]=f\circ\pi_{2}\). For a coCartesian differential monad, the \(\mathsf{D}\)-linear maps in the opposite category of its Kleisli category correspond precisely to the maps in the base category if and only if the coCartesian differential monad has a \(\mathsf{D}\)-linear counit [20, Proposition 3.11]. We will now show that for an operad \(\mathcal{P}\), its associated monad has a \(\mathsf{D}\)-linear counit \(\mathcal{E}_{V}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}V\) if and only if \(\mathcal{P}(1)\) is of dimension one (as an \(R\)-module). Essentially, note that \(\mathcal{P}(1)\otimes V\subset\mathsf{S}(\mathcal{P},V)\), and therefore, if \(\mathcal{P}(1)\cong R\), then there is a copy of \(V\) inside \(\mathsf{S}(\mathcal{P},V)\). So the \(\mathsf{D}\)-linear counit amounts to projecting out the \(V\) component of \(\mathsf{S}(\mathcal{P},V)\). **Lemma 4.2.8**.: _Let \(\mathcal{P}\) be an operad. Then \((\mathsf{S}(\mathcal{P},-),\gamma,\eta,\partial)\) has a \(\mathsf{D}\)-linear counit \(\mathcal{E}_{V}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}V\) if and only if the \(R\)-linear morphism \(e_{\mathcal{P}}:R\xrightarrow{}\mathcal{P}(1)\) which picks out the distinguished element, \(e_{\mathcal{P}}(1)=1_{\mathcal{P}}\), is an isomorphism. Explicitly, if \(e_{\mathcal{P}}\) is an isomorphism, define \(\mathcal{E}_{V}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}V\) as follows:_ \[\mathcal{E}_{V}(\mu;v)=e_{\mathcal{P}}^{-1}(\mu)\cdot v\text{ if }\mu\in \mathcal{P}(1), \mathcal{E}_{V}(\mu;v_{1},\dots,v_{n})=0\text{ if }n\neq 1, \tag{43}\] _and conversely if \(\mathcal{E}_{V}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}V\) is a \(\mathsf{D}\)-linear counit, then define \(e_{\mathcal{P}}^{-1}:\mathcal{P}(1)\xrightarrow{}R\) as \(e_{\mathcal{P}}^{-1}(\mu)=\mathcal{E}_{R}(\mu;1)\)._ Proof.: Suppose that \(e_{\mathcal{P}}\) is an isomorphism. We must check that \(\mathcal{E}\) satisfies \(\mathcal{E}_{V}\circ\eta_{V}=1_{V}\) and \(\eta_{V}\circ\mathcal{E}_{V}=\mathsf{S}(\mathcal{P},\pi_{2})\circ\partial_{V}\). The first identity is automatic since \(e_{\mathcal{P}}^{-1}(1_{P})=1\): \[\mathcal{E}_{V}(\eta_{v}(v))=\mathcal{E}_{V}(1_{\mathcal{P}};v)=e_{\mathcal{P }}^{-1}(1_{P})\cdot v=1\cdot v=v\] For the other identity, we must prove it in two cases. For the case \(\mu\in\mathcal{P}(1)\), note that \(\mu=e_{\mathcal{P}}^{-1}(\mu)\cdot 1_{\mathcal{P}}\). 
So, using that \((r\cdot\mu;v)=(\mu;r\cdot v)\), we have that: \[\mathsf{S}(\mathcal{P},\pi_{2})\left(\partial_{V}(\mu;v)\right)=(\mu;v)=(e_{\mathcal{P}}^{-1}(\mu)\cdot 1_{\mathcal{P}};v)=(1_{\mathcal{P}};e_{\mathcal{P}}^{-1}(\mu)\cdot v)=(1_{\mathcal{P}};\mathcal{E}_{V}(\mu;v))=\eta_{V}(\mathcal{E}_{V}(\mu;v))\] Lastly, when \(n\neq 1\), using that \((\mu;v_{1},\dots,0,\dots,v_{n})=0\), we compute that: \[\mathsf{S}(\mathcal{P},\pi_{2})\left(\partial_{V}(\mu;v_{1},\dots,v_{n})\right)=\sum_{i=1}^{n}(\mu;0,\dots,v_{i},\dots,0)=0=\eta_{V}(\mathcal{E}_{V}(\mu;v_{1},\dots,v_{n}))\] So we have that \(\mathcal{E}\) is a \(\mathsf{D}\)-linear counit. Conversely, suppose that \(\mathcal{E}\) is a \(\mathsf{D}\)-linear counit, so in particular, \(\mathcal{E}_{R}\circ\eta_{R}=1\) and \(\eta_{R}\circ\mathcal{E}_{R}=\mathsf{S}(\mathcal{P},\pi_{2})\circ\partial_{R}\). On the one hand, we have: \[e_{\mathcal{P}}^{-1}(e_{\mathcal{P}}(1))=\mathcal{E}_{R}(1_{\mathcal{P}};1)=\mathcal{E}_{R}(\eta_{R}(1))=1\] On the other hand, first observe that by our notation, for \(\mu,\nu\in\mathcal{P}(1)\), if \((\mu;1)=(\nu;1)\), then this means that \(\mu=\nu\). We compute: \[(e_{\mathcal{P}}(e_{\mathcal{P}}^{-1}(\mu));1)=(\mathcal{E}_{R}(\mu;1)\cdot 1_{\mathcal{P}};1)=(1_{\mathcal{P}};\mathcal{E}_{R}(\mu;1))=\eta_{R}(\mathcal{E}_{R}(\mu;1))=\mathsf{S}(\mathcal{P},\pi_{2})\left(\partial_{R}(\mu;1)\right)=(\mu;1)\] Hence, \(e_{\mathcal{P}}(e_{\mathcal{P}}^{-1}(\mu))=\mu\), and so, \(e_{\mathcal{P}}\) is an isomorphism.

For the operads Com, Ass, and Lie, their associated coCartesian differential monads all have a \(\mathsf{D}\)-linear counit which is given precisely by projecting out the copy of \(V\) in the symmetric algebra, tensor algebra, or free Lie algebra respectively. On the other hand, for an arbitrary \(R\)-algebra \(A\), the associated coCartesian differential monad of the operad \(A^{\bullet}\) will not, in general, have a \(\mathsf{D}\)-linear counit unless \(A\cong R\).

### Tangent Structure for Algebras over an Operad

In this section, we describe the tangent structure on the category of algebras over an operad. We have already shown that the monad associated to an operad is a coCartesian differential monad, so, by applying Proposition 3.2.4 and Theorem 3.2.7, we can state the following:

**Lemma 4.3.1**.: _Let \(\mathcal{P}\) be an operad. For the coCartesian differential monad \((\mathsf{S}(\mathcal{P},-),\gamma,\eta,\partial)\), the induced \(\mathbb{B}\)-distributive law \(\lambda_{V}:\mathsf{S}(\mathcal{P},V\times V)\xrightarrow{}\mathsf{S}(\mathcal{P},V)\times\mathsf{S}(\mathcal{P},V)\) as defined in Proposition 3.2.4 is given as follows:_ \[\lambda_{V}(\mu;(u_{1},v_{1}),\ldots,(u_{n},v_{n}))=\left((\mu;u_{1},\ldots,u_{n}),\sum_{i=1}^{n}\left(\mu;u_{1},\ldots,v_{i},\ldots,u_{n}\right)\right) \tag{44}\] _Furthermore, \((\mathsf{ALG}_{\mathsf{S}(\mathcal{P},-)},\mathbb{T})\) is a Cartesian Rosicky tangent category, where the Rosicky tangent structure is defined as in Theorem 3.2.7._

Proof.: Recall that, by definition, \(\lambda_{V}:=\left\langle\mathsf{S}(\mathcal{P},\pi_{1}),\mathsf{S}\left(\mathcal{P},\pi_{1}+\pi_{4}\right)\circ\partial_{V\times V}\right\rangle\). We leave it as an exercise for the reader to check that this gives precisely the formula above.

Let us give a more concrete description of the tangent structure by describing the tangent bundle in terms of semi-direct products.
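Before doing so, it may be helpful to see formula (44) in the simplest non-trivial case (an illustration of ours, not a computation taken from the source): for the operad Com and \(\mu\in\mathrm{Com}(2)\) the binary multiplication, the distributive law reads

\[\lambda_{V}\left(\mu;(u_{1},v_{1}),(u_{2},v_{2})\right)=\left((\mu;u_{1},u_{2}),\ (\mu;u_{1},v_{2})+(\mu;v_{1},u_{2})\right)\]

so the second component already has the shape of the Leibniz rule; this is exactly the dual-number multiplication that reappears in Example 4.3.5 below.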
Following the standard terminology in operad literature, by an algebra over an operad we mean an algebra over the operad's associated monad [26, Section 5.2.3]. More explicitly, for an operad \(\mathcal{P}\), a \(\mathcal{P}\)-algebra is an \(R\)-module \(A\) equipped with an \(R\)-linear morphism \(\theta:\mathsf{S}(\mathcal{P},A)\xrightarrow{}A\) such that \(\theta\circ\eta_{A}=1_{A}\) and \(\theta\circ\mathsf{S}(\mathcal{P},\theta)=\theta\circ\gamma_{A}\). As a useful shorthand, we write: \[\theta(\mu;a_{1},\ldots,a_{n}):=\mu(a_{1},\ldots,a_{n}) \tag{45}\] and so, we will only write \(A\) for a \(\mathcal{P}\)-algebra when there is no confusion about its \(\mathcal{P}\)-algebra structure \(\theta\). Therefore, the necessary \(\mathcal{P}\)-algebra identities can be expressed as: \[1_{\mathcal{P}}(a)=a\qquad\qquad\mu\left(\nu_{1}(a_{1,1},\ldots,a_{1,n_{1}}),\ldots,\nu_{k}(a_{k,1},\ldots,a_{k,n_{k}})\right)=\mu\left(\nu_{1},\ldots,\nu_{k}\right)(a_{1,1},\ldots,a_{k,n_{k}}) \tag{46}\] Similarly, by a \(\mathcal{P}\)-algebra morphism we simply mean an \(\mathsf{S}(\mathcal{P},-)\)-algebra morphism. So if \(A\) and \(B\) are \(\mathcal{P}\)-algebras, then a \(\mathcal{P}\)-algebra morphism is an \(R\)-linear morphism \(f:A\xrightarrow{}B\) which is compatible with the \(\mathcal{P}\)-algebra structure, that is, the following equality holds: \[f(\mu(a_{1},\ldots,a_{n}))=\mu(f(a_{1}),\ldots,f(a_{n})) \tag{47}\] Therefore, by the category of algebras over an operad we mean the Eilenberg-Moore category of its associated monad. So let \(\mathsf{ALG}_{\mathcal{P}}\) be the category of \(\mathcal{P}\)-algebras and \(\mathcal{P}\)-algebra morphisms between them, or in other words, \(\mathsf{ALG}_{\mathcal{P}}:=\mathsf{ALG}_{\mathsf{S}(\mathcal{P},-)}\). By Lemma 4.3.1, we already know that \(\mathsf{ALG}_{\mathcal{P}}\) is a tangent category. However, we wish to give a more explicit description of the tangent bundle of a \(\mathcal{P}\)-algebra. We do this using the semi-direct product of a \(\mathcal{P}\)-algebra with itself. We note that, while the semi-direct product [26, Section 12.3.2] can be more generally defined between a \(\mathcal{P}\)-algebra and a module (a notion that we review in the next section), for the purpose of the tangent structure, we only need to understand it for a \(\mathcal{P}\)-algebra with itself.

For a \(\mathcal{P}\)-algebra \(A\), define the \(\mathcal{P}\)-algebra \(A\ltimes A\) as the \(R\)-module \(A\times A\) equipped with \(\mathcal{P}\)-algebra structure defined as follows: \[\mu((a_{1},b_{1}),\ldots,(a_{n},b_{n}))=\left(\mu(a_{1},\ldots,a_{n}),\sum_{i=1}^{n}\mu(a_{1},\ldots,b_{i},\ldots,a_{n})\right) \tag{48}\] Note that, in \(A\ltimes A\), the first component \(A\) is viewed as a \(\mathcal{P}\)-algebra, while the second component \(A\) is viewed as a module. More generally, we also define the \(\mathcal{P}\)-algebra \(A\ltimes A^{n}\) to be the \(R\)-module \(A\times\underbrace{A\times\ldots\times A}_{n\text{ times}}\) with \(\mathcal{P}\)-algebra structure defined as: \[\mu\left((a_{1},\vec{b}_{1}),\ldots,(a_{m},\vec{b}_{m})\right)=\left(\mu(a_{1},\ldots,a_{m}),\sum_{i=1}^{m}\mu(a_{1},\ldots,b_{i,1},\ldots,a_{m}),\ldots,\sum_{i=1}^{m}\mu(a_{1},\ldots,b_{i,n},\ldots,a_{m})\right)\] We will now prove that \(A\ltimes A\) is precisely the tangent bundle over \(A\).

**Lemma 4.3.2**.: _Let \(\mathcal{P}\) be an operad and \(A\) a \(\mathcal{P}\)-algebra.
Then \(\mathsf{T}(A)=A\ltimes A\) and \(\mathsf{T}_{n}(A)=A\ltimes A^{n}\), where \(\mathsf{T}(A)\) and \(\mathsf{T}_{n}(A)\) are defined as in Theorem 3.2.7._

Proof.: Let \(\theta:\mathsf{S}(\mathcal{P},A)\xrightarrow{}A\) denote the \(\mathcal{P}\)-algebra structure map of \(A\). Then, by Theorem 3.2.7, \(\mathsf{T}(A)\) is the \(R\)-module \(A\times A\) with \(\mathcal{P}\)-algebra structure map defined as \((\theta\times\theta)\circ\lambda_{A}\), where \(\lambda\) was given in Lemma 4.3.1. So, since \(\mathsf{T}(A)\) and \(A\ltimes A\) have the same underlying \(R\)-module \(A\times A\), we must show that they have the same \(\mathcal{P}\)-algebra structure. We compute: \[(\theta\times\theta)\left(\lambda_{A}\left(\mu;(a_{1},b_{1}),\ldots,(a_{n},b_{n})\right)\right)=(\theta\times\theta)\left((\mu;a_{1},\ldots,a_{n}),\sum_{i=1}^{n}\left(\mu;a_{1},\ldots,b_{i},\ldots,a_{n}\right)\right)\] \[=\left(\mu(a_{1},\ldots,a_{n}),\sum_{i=1}^{n}\mu\left(a_{1},\ldots,b_{i},\ldots,a_{n}\right)\right)\] So we conclude that \(\mathsf{T}(A)=A\ltimes A\). Similarly, we can also show that \(\mathsf{T}_{n}(A)=A\ltimes A^{n}\).

Therefore, we may write the tangent structure of Theorem 3.2.7 for \(\mathsf{ALG}_{\mathcal{P}}\) in terms of semi-direct products.

**Theorem 4.3.3**.: _Let \(\mathcal{P}\) be an operad. Consider:_ 1. _The tangent bundle functor_ \(\mathsf{T}:\mathsf{ALG}_{\mathcal{P}}\xrightarrow{}\mathsf{ALG}_{\mathcal{P}}\) _defined on objects as_ \(\mathsf{T}(A)=A\ltimes A\) _and on maps as_ \(\mathsf{T}(f)=(f\times f)\)_, that is,_ \(\mathsf{T}(f)(a,b)=(f(a),f(b))\)_;_ 2. _The projection_ \(p_{A}:A\ltimes A\xrightarrow{}A\) _defined as_ \(p_{A}(a,b)=a\)_, with_ \(n\)_-fold pullback_ \(\mathsf{T}_{n}(A)=A\ltimes A^{n}\)_, and projections_ \(q_{j}:A\ltimes A^{n}\xrightarrow{}A\ltimes A\) _defined as_ \(q_{j}(a,b_{1},\ldots,b_{n})=(a,b_{j})\)_;_ 3. _The sum_ \(s_{A}:A\ltimes A^{2}\xrightarrow{}A\ltimes A\) _defined as_ \(s_{A}(a,b_{1},b_{2})=(a,b_{1}+b_{2})\)_;_ 4. _The zero map_ \(z_{A}:A\xrightarrow{}A\ltimes A\) _defined as_ \(z_{A}(a)=(a,0)\)_;_ 5. _The vertical lift_ \(\ell_{A}:A\ltimes A\xrightarrow{}(A\ltimes A)\ltimes(A\ltimes A)\) _defined as_ \(\ell_{A}(a,b)=(a,0,0,b)\)_;_ 6. _The canonical flip_ \(c_{A}:(A\ltimes A)\ltimes(A\ltimes A)\xrightarrow{}(A\ltimes A)\ltimes(A\ltimes A)\) _defined as_ \(c_{A}(a,b,c,d)=(a,c,b,d)\)_;_ 7. _The negative map_ \(n_{A}:A\ltimes A\xrightarrow{}A\ltimes A\) _defined as_ \(n_{A}(a,b)=(a,-b)\)_._ _Then, \(\mathbb{T}=(\mathsf{T},p,s,z,l,c,n)\) is a Rosicky tangent structure on \(\mathsf{ALG}_{\mathcal{P}}\), and so, \((\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) is a Cartesian Rosicky tangent category._

We now consider what the resulting tangent categories are for our main examples of operads.

**Example 4.3.4**.: Let \(A\) be an \(R\)-algebra. Then for the operad \(A^{\bullet}\), the \(A^{\bullet}\)-algebras are precisely \(A\)-modules. So, \(\mathsf{ALG}_{A^{\bullet}}=\mathsf{MOD}_{A}\), the category of \(A\)-modules and \(A\)-linear maps between them. The resulting tangent structure on \(\mathsf{MOD}_{A}\) is precisely the biproduct structure from Lemma 3.1.1, so in particular, for an \(A\)-module \(M\), its tangent bundle is simply \(\mathsf{T}(M)=M\times M\). So \((\mathsf{MOD}_{A},\mathbb{B})\) is a Cartesian Rosicky tangent category.

**Example 4.3.5**.: For the operad \(\mathsf{Com}\), the \(\mathsf{Com}\)-algebras are precisely (associative and unital) commutative \(R\)-algebras.
So \(\mathsf{ALG}_{\mathsf{Com}}=\mathsf{CALG}_{R}\), the category of commutative \(R\)-algebras and \(R\)-algebra morphisms between them. Up to isomorphism, the resulting tangent structure is the one described in [11, Section 2.2], where the tangent bundle is given by dual numbers. Indeed, for a commutative \(R\)-algebra \(A\), let \(A[\epsilon]\) be its \(R\)-algebra of dual numbers: \[A[\epsilon]=\{a+b\epsilon|\ \forall a,b\in A\}\qquad\qquad\epsilon^{2}=0\] It is easy to see that \(A\ltimes A\cong A[\epsilon]\) via \((a,b)\mapsto a+b\epsilon\). So we may express the tangent structure using instead dual numbers, so \(\mathsf{T}(A)=A[\epsilon]\). We may write \(\mathsf{T}^{2}(A)\) and \(\mathsf{T}_{n}(A)\) as multivariable dual numbers in the following way: \[\mathsf{T}^{2}(A)=A[\epsilon][\epsilon^{\prime}]=\{a+b\epsilon+c\epsilon^{\prime}+d\epsilon\epsilon^{\prime}|\ \forall a,b,c,d\in A\}\qquad\qquad\epsilon^{2}={\epsilon^{\prime}}^{2}=0\] \[\mathsf{T}_{n}(A)=A[\epsilon_{1},\ldots,\epsilon_{n}]=\{a+b_{1}\epsilon_{1}+\ldots+b_{n}\epsilon_{n}|\ \forall a,b_{j}\in A\}\qquad\qquad\epsilon_{i}\epsilon_{j}=0\] The rest of the tangent structure is given as follows: \[p_{A}(a+b\epsilon)=a\qquad\qquad s_{A}(a+b\epsilon_{1}+c\epsilon_{2})=a+(b+c)\epsilon\qquad\qquad z_{A}(a)=a\] \[l_{A}(a+b\epsilon)=a+b\epsilon\epsilon^{\prime}\qquad\qquad c_{A}(a+b\epsilon+c\epsilon^{\prime}+d\epsilon\epsilon^{\prime})=a+c\epsilon+b\epsilon^{\prime}+d\epsilon\epsilon^{\prime}\qquad\qquad n_{A}(a+b\epsilon)=a-b\epsilon\] So \((\mathsf{CALG}_{R},\mathbb{T})\) is a Cartesian Rosicky tangent category.

**Example 4.3.6**.: For the operad Ass, the Ass-algebras are precisely (associative and unital) \(R\)-algebras. So \(\mathsf{ALG}_{\mathrm{Ass}}=\mathsf{ALG}_{R}\), the category of \(R\)-algebras and \(R\)-algebra morphisms between them. Again, up to isomorphism, the resulting tangent structure is precisely the same as for commutative algebras in Example 4.3.5, so in particular for an \(R\)-algebra \(A\), \(\mathsf{T}(A)=A[\epsilon]\cong A\ltimes A\). Therefore, \((\mathsf{ALG}_{R},\mathbb{T})\) is a Cartesian Rosicky tangent category.

**Example 4.3.7**.: For the operad Lie, the Lie-algebras are precisely Lie algebras over \(R\). So \(\mathsf{ALG}_{\mathrm{Lie}}=\mathsf{LIE}_{R}\), the category of Lie algebras over \(R\) and Lie algebra morphisms between them. For a Lie algebra \(\mathfrak{g}\), the Lie algebra \(\mathfrak{g}\ltimes\mathfrak{g}\) is the \(R\)-module \(\mathfrak{g}\times\mathfrak{g}\) with the Lie bracket defined as \(\big{[}(x_{1},y_{1}),(x_{2},y_{2})\big{]}:=([x_{1},x_{2}],[x_{1},y_{2}]+[y_{1},x_{2}])\). So \((\mathsf{LIE}_{R},\mathbb{T})\) is a Cartesian Rosicky tangent category, which we stress is a new example of a tangent category.

We conclude this section by mentioning that the construction of Theorem 4.3.3 provides a (contravariant) functor between the category of operads and the category of Cartesian Rosicky tangent categories. Briefly, for operads \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\), an operad morphism \(f:\mathcal{P}\xrightarrow{}\mathcal{P}^{\prime}\) is a sequence of equivariant \(R\)-linear morphisms \(f(n):\mathcal{P}(n)\xrightarrow{}\mathcal{P}^{\prime}(n)\) which preserve the partial compositions and the distinguished element. Then, let \(\mathsf{OPERAD}_{R}\) be the category of operads and operad morphisms between them.
On the other hand, for \((\mathbb{X},\mathbb{T})\) and \((\mathbb{X}^{\prime},\mathbb{T}^{\prime})\), a strict Cartesian Rosicky tangent functor \(\mathsf{F}:(\mathbb{X},\mathbb{T})\xrightarrow{}(\mathbb{X}^{\prime},\mathbb{T}^{\prime})\) is a functor \(\mathsf{F}:\mathbb{X}\xrightarrow{}\mathbb{X}^{\prime}\) which preserves the product strictly and also preserves the tangent structure, in the sense that \(\mathsf{F}\circ\mathsf{T}=\mathsf{T}^{\prime}\circ\mathsf{F}\), \(\mathsf{F}(p)=p^{\prime}\), etc. Let \(\mathsf{CRTAN}_{=}\) be the category of Cartesian Rosicky tangent categories and strict Cartesian tangent functors between them. Every operad morphism \(f:\mathcal{P}\xrightarrow{}\mathcal{P}^{\prime}\) induces a functor \(\mathsf{ALG}_{f}:\mathsf{ALG}_{\mathcal{P}^{\prime}}\xrightarrow{}\mathsf{ALG}_{\mathcal{P}}\) by mapping a \(\mathcal{P}^{\prime}\)-algebra \(A\) to \(A\) with \(\mathcal{P}\)-algebra structure defined as \(\mu(a_{1},\dots,a_{n}):=f(\mu)(a_{1},\dots,a_{n})\). It is straightforward to see that \(\mathsf{ALG}_{f}:(\mathsf{ALG}_{\mathcal{P}^{\prime}},\mathbb{T})\xrightarrow{}(\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) is a strict Cartesian Rosicky tangent functor. Therefore we obtain a functor \(\mathsf{ALG}:\mathsf{OPERAD}_{R}^{op}\xrightarrow{}\mathsf{CRTAN}_{=}\) which sends an operad \(\mathcal{P}\) to \((\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) and an operad morphism \(f:\mathcal{P}\xrightarrow{}\mathcal{P}^{\prime}\) to \(\mathsf{ALG}_{f}:(\mathsf{ALG}_{\mathcal{P}^{\prime}},\mathbb{T})\xrightarrow{}(\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\).

### Adjoint Tangent Structure of Algebras over an Operad

The objective of this section is to show that the tangent category of algebras over an operad also has adjoint tangent structure. Therefore, the opposite category of algebras over an operad is a tangent category. Since the category of algebras over an operad is always cocomplete, it admits all pushouts. So, by Corollary 2.2.4, it suffices to show that the tangent bundle functor has a left adjoint. The adjoint tangent bundle is given by the free algebra over the module of Kahler differentials of an algebra. This is quite a mouthful, so let's break it down piece by piece. Let \(\mathcal{P}\) be an operad and \(A\) a \(\mathcal{P}\)-algebra. Then an \(A\)**-module** [26, Section 12.3.1] is an \(R\)-module \(M\) equipped with a family of \(R\)-linear morphisms \(\psi_{n+1}:\mathcal{P}(n+1)\otimes A^{\otimes n}\otimes M\xrightarrow{}M\), called **evaluation maps**, satisfying natural equivariance and associativity conditions, together with a unit map \(\eta_{M}:M\xrightarrow{}\bigoplus_{n\in\mathbb{N}}\mathcal{P}(n+1)\otimes A^{\otimes n}\otimes M\) playing the role of a unit for the evaluation. As a shorthand, we denote: \[\mu(a_{1},\dots,a_{n},x)=\psi_{n+1}(\mu\otimes a_{1}\otimes\dots\otimes a_{n}\otimes x) \tag{49}\] If \(M\) and \(M^{\prime}\) are \(A\)-modules, then an \(A\)-linear morphism is an \(R\)-linear morphism \(f:M\xrightarrow{}M^{\prime}\) which preserves the evaluation and unit maps in the sense that: \[\eta_{M^{\prime}}(f(x))=\left(\bigoplus_{n\in\mathbb{N}}\mathcal{P}(n+1)\otimes A^{\otimes n}\otimes f\right)\circ\eta_{M}(x),\qquad\qquad f(\mu(a_{1},\dots,a_{n},x))=\mu(a_{1},\dots,a_{n},f(x)) \tag{50}\] Let \(\mathsf{MOD}_{A}\) be the category of \(A\)-modules and \(A\)-linear morphisms between them. Among the \(A\)-modules, there is an important one called the module of Kahler differentials of \(A\), which generalizes the classical notion of Kahler differentials.
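Before turning to Kahler differentials, it may help to spell out the simplest instance of this definition (an illustrative remark of ours, anticipating Example 4.4.6 below): for the operad Com and a commutative \(R\)-algebra \(A\), the evaluation maps of an \(A\)-module in the operadic sense are determined by the action of \(A\) on \(M\) and take the form

\[\mu(a_{1},\ldots,a_{n},x)=a_{1}\cdots a_{n}\cdot x\]

so an \(A\)-module in the operadic sense is precisely an ordinary \(A\)-module.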
We must first describe derivations for algebras over an operad. For an \(A\)-module \(M\), we adopt the following useful notation: \[\mu(a_{1},\dots,a_{i},x,a_{i+1},\dots,a_{n})=\left(\mu\cdot(i\ i+1\dots n)\right)(a_{1},\dots,a_{n},x)\] where \((i\ i+1\dots n)\) is the \((n+1-i)\)-cycle permutation. An \(A\)**-derivation** [26, Section 12.3.7] evaluated in an \(A\)-module \(M\) is an \(R\)-linear morphism \(D:A\xrightarrow{}M\) satisfying: \[D(\mu(a_{1},\dots,a_{n}))=\sum_{i=1}^{n}\mu(a_{1},\dots,D(a_{i}),\dots,a_{n}) \tag{51}\] This equality is called the Leibniz rule. Now let \(\operatorname{Der}(A,M)\) be the \(R\)-module of \(A\)-derivations evaluated in \(M\). This way, we define a functor \(\operatorname{Der}(A,-)\) which is representable [26, Proposition 12.3.11]. The **module of Kahler differentials of \(A\)** [26, Section 12.3.8] is an \(A\)-module which represents \(\operatorname{Der}(A,-)\), that is, an \(A\)-module \(\Omega_{A}\) such that, for all \(A\)-modules \(M\), \(\operatorname{Der}(A,M)\cong\operatorname{MOD}_{A}(\Omega_{A},M)\). This means that there is an \(A\)-derivation \(\mathsf{d}:A\xrightarrow{}\Omega_{A}\) which is universal in the sense that for every \(A\)-derivation \(D:A\xrightarrow{}M\), there exists a unique \(A\)-linear morphism \(\overline{D}:\Omega_{A}\xrightarrow{}M\) such that \(\overline{D}\circ\mathsf{d}=D\). We do not need a concrete description of \(\Omega_{A}\) for this paper; see [26, Lemma 12.3.12] for full details. However, it is interesting to point out that, for the \(\mathcal{P}\)-algebra \(\mathsf{S}(\mathcal{P},V)\), \(\Omega_{\mathsf{S}(\mathcal{P},V)}\) is isomorphic to the sub-\(\mathsf{S}(\mathcal{P},V)\)-module of \(\mathsf{S}(\mathcal{P},V\times V)\) generated by elements of the form \(\bigl{(}\mu;(v_{1},0),\ldots,(0,v_{i}),\ldots,(v_{n},0)\bigr{)}\). Therefore, the differential combinator transformation \(\partial_{V}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}\mathsf{S}(\mathcal{P},V\times V)\) factors through \(\Omega_{\mathsf{S}(\mathcal{P},V)}\) by composing the derivation \(\mathsf{d}:\mathsf{S}(\mathcal{P},V)\xrightarrow{}\Omega_{\mathsf{S}(\mathcal{P},V)}\) with the inclusion \(\Omega_{\mathsf{S}(\mathcal{P},V)}\xrightarrow{}\mathsf{S}(\mathcal{P},V\times V)\).

For an arbitrary \(\mathcal{P}\)-algebra \(A\), \(\Omega_{A}\) is not a \(\mathcal{P}\)-algebra. One might be tempted to consider \(\mathsf{S}(\mathcal{P},\Omega_{A})\) as a candidate for the tangent bundle over \(A\). However, this is not the adjoint functor we are looking for. We will instead take the free \(A\)-algebra over \(\Omega_{A}\). An \(A\)**-algebra**, also called a \(\mathcal{P}\)-algebra under \(A\) [2], is a \(\mathcal{P}\)-algebra \(B\) equipped with a \(\mathcal{P}\)-algebra morphism \(u:A\xrightarrow{}B\). Now, if \(B\) and \(B^{\prime}\) are \(A\)-algebras with \(u:A\xrightarrow{}B\) and \(u^{\prime}:A\xrightarrow{}B^{\prime}\) respectively, then an \(A\)-algebra morphism is a \(\mathcal{P}\)-algebra morphism \(f:B\xrightarrow{}B^{\prime}\) which also preserves the \(A\)-algebra structure, so \(f\circ u=u^{\prime}\). Let \(\mathsf{ALG}_{A}\) be the category of \(A\)-algebras and \(A\)-algebra morphisms between them. Every \(A\)-algebra \(B\) is also an \(A\)-module where the evaluation is given by: \[\mu(a_{1},\ldots,a_{n},b):=\mu(u(a_{1}),\ldots,u(a_{n}),b) \tag{52}\] and similarly, every \(A\)-algebra morphism is also an \(A\)-module morphism. We obtain a functor \(\mathsf{U}_{A}:\mathsf{ALG}_{A}\xrightarrow{}\mathsf{MOD}_{A}\).
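As a quick sanity check (ours, again specializing to the operad Com rather than a statement from the source), if \(u:A\xrightarrow{}B\) is a morphism of commutative \(R\)-algebras, then the evaluation (52) for the binary operation is simply

\[a\cdot b:=u(a)\,b\qquad\text{for all }a\in A,\ b\in B,\]

so in this case \(\mathsf{U}_{A}\) is the familiar restriction-of-scalars functor sending an \(A\)-algebra to its underlying \(A\)-module.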
The functor \(\mathsf{U}_{A}\) has a left adjoint \(\mathsf{Free}_{A}:\mathsf{MOD}_{A}\xrightarrow{}\mathsf{ALG}_{A}\). In the next Proposition we provide a concrete description for \(\mathsf{Free}_{A}\). This is an extension of a result due to Ginzburg, who proved the existence of \(\mathsf{Free}_{A}\) for quadratic operads [16, Lemma 5.2]. This means that, for every \(A\)-module \(M\), there exists an \(A\)-algebra \(\mathsf{Free}_{A}(M)\) with \(u_{M}:A\xrightarrow{}\mathsf{Free}_{A}(M)\) called the **free \(A\)-algebra over \(M\)**.

**Proposition 4.4.1**.: _Let \(A\) be a \(\mathcal{P}\)-algebra and let \(M\) be a module over \(A\) (in the operadic sense). Consider the \(\mathcal{P}\)-algebra \(\mathsf{Free}_{A}M\) obtained by quotienting the free algebra \(\mathsf{S}(\mathcal{P},A\times M)\) by the ideal generated by the following relations:_ \[(\mu;(a_{1},0),\ldots,(a_{k-1},0),(a_{k},x),(a_{k+1},0),\ldots,(a_{n},0))=(\mu(a_{1},\ldots,a_{n}),\mu(a_{1},\ldots,a_{k-1},x,a_{k+1},\ldots,a_{n}))\] _for every \(\mu\in\mathcal{P}(n)\), \(a_{1},\ldots,a_{n}\in A\), \(x\in M\) and positive integer \(n\). Then \(\mathsf{Free}_{A}:\mathsf{MOD}_{A}\xrightarrow{}\mathsf{ALG}_{A}\) extends to a left adjoint to the functor \(\mathsf{U}_{A}:\mathsf{ALG}_{A}\xrightarrow{}\mathsf{MOD}_{A}\), where the \(A\)-algebra structure \(u_{M}:A\xrightarrow{}\mathsf{Free}_{A}M\) is defined as the injection \(u_{M}(a)=(a,0)\)._

Proof.: Note that we have an \(A\)-module morphism \(\iota_{M}:M\xrightarrow{}\mathsf{Free}_{A}M\) defined as the inclusion \(\iota_{M}(x)=(0,x)\). Now given an \(A\)-algebra \(u:A\xrightarrow{}B\) and an \(A\)-algebra morphism \(f:\mathsf{Free}_{A}M\xrightarrow{}B\), we can define an \(A\)-module morphism \(f^{\flat}:M\xrightarrow{}\mathsf{U}_{A}B\) as the composite \(f^{\flat}=f\circ\iota_{M}\). Conversely, given an \(A\)-module morphism \(g:M\xrightarrow{}\mathsf{U}_{A}B\), it is not difficult to check that the \(\mathcal{P}\)-algebra morphism \(\mathsf{S}(\mathcal{P},A\times M)\xrightarrow{}B\) mapping \((a,x)\mapsto u(a)+g(x)\) lifts to the quotient. As such, it provides a well-defined \(A\)-algebra morphism \(g^{\sharp}:\mathsf{Free}_{A}M\xrightarrow{}B\). The final step is to note that \(f\mapsto f^{\flat}\) and \(g\mapsto g^{\sharp}\) are inverses of each other. Thus, we obtain a natural bijection \(\mathsf{ALG}_{A}(\mathsf{Free}_{A}M,B)\cong\mathsf{MOD}_{A}(M,\mathsf{U}_{A}B)\), and hence an adjunction as desired.

With all this setup, we can finally define the adjoint tangent bundle of a \(\mathcal{P}\)-algebra to be \(\mathsf{T}^{\circ}(A):=\mathsf{Free}_{A}(\Omega_{A})\). Using the combined universal properties of both \(\Omega_{A}\) and \(\mathsf{Free}_{A}(-)\), we can conclude that: \[\mathsf{ALG}_{\mathcal{P}}\left(\mathsf{Free}_{A}(\Omega_{A}),A^{\prime}\right)\cong\mathsf{ALG}_{\mathcal{P}}(A,A^{\prime}\ltimes A^{\prime}) \tag{53}\] Therefore, \(\mathsf{T}^{\circ}:\mathsf{ALG}_{\mathcal{P}}\xrightarrow{}\mathsf{ALG}_{\mathcal{P}}\) is indeed left adjoint to \(\mathsf{T}:\mathsf{ALG}_{\mathcal{P}}\xrightarrow{}\mathsf{ALG}_{\mathcal{P}}\). However, let us give a more concrete description of the adjoint tangent bundle and the adjunction.
For a \(\mathcal{P}\)-algebra \(A\), its adjoint tangent bundle \(\mathsf{T}^{\circ}(A)\) is explicitly given by the \(\mathcal{P}\)-algebra obtained by quotienting \(\mathsf{S}(\mathcal{P},A\times A)\) by the following relations: \[\mu((a_{1},0),\ldots,(a_{n},0))=(\mu(a_{1},\ldots,a_{n}),0)\] \[(0,\mu(a_{1},\ldots,a_{n}))=\sum_{i=1}^{n}\mu((a_{1},0),\ldots,(0,a_{i}),\ldots,(a_{n},0))\] As useful shorthand, we write \(a:=(a,0)\in\mathsf{T}^{\circ}(A)\) and \(\mathsf{d}(a):=(0,a)\in\mathsf{T}^{\circ}(A)\) for all \(a\in A\). The above relations then state that \(\mu(a_{1},\ldots,a_{n})\) in \(\mathsf{T}^{\circ}(A)\) corresponds to \(\mu(a_{1},\ldots,a_{n})\) in \(A\), and that \[\mathsf{d}\left(\mu(a_{1},\ldots,a_{n})\right)=\sum_{i=1}^{n}\mu(a_{1},\ldots,\mathsf{d}(a_{i}),\ldots,a_{n})\] Note that \(\mathsf{d}\) can be thought of as an \(R\)-linear function: \(\mathsf{d}(r\cdot a+s\cdot b)=r\cdot\mathsf{d}(a)+s\cdot\mathsf{d}(b)\). As a \(\mathcal{P}\)-algebra, \(\mathsf{T}^{\circ}(A)\) is generated by \(a\) and \(\mathsf{d}(a)\) for all \(a\in A\). As such, to define \(\mathcal{P}\)-algebra morphisms with domain \(\mathsf{T}^{\circ}(A)\), it suffices to define them on the generators \(a\) and \(\mathsf{d}(a)\), making sure that the definition is compatible with the above relations. For every \(\mathcal{P}\)-algebra morphism \(f:A\xrightarrow{}A^{\prime}\), define the \(\mathcal{P}\)-algebra morphism \(\mathsf{T}^{\circ}(f):\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}^{\circ}(A^{\prime})\) on generators as follows: \[\mathsf{T}^{\circ}(f)(a)=f(a)\qquad\qquad\mathsf{T}^{\circ}(f)(\mathsf{d}(a))=\mathsf{d}(f(a)) \tag{54}\] This gives us the desired functor \(\mathsf{T}^{\circ}:\mathsf{ALG}_{\mathcal{P}}\xrightarrow{}\mathsf{ALG}_{\mathcal{P}}\). Observe that we did not need the \(A\)-algebra structure of \(\mathsf{T}^{\circ}(A)\) to build this functor. Nevertheless, readers familiar with modules of Kahler differentials may easily check that the presentation of \(\mathsf{T}^{\circ}(A)\) given here recaptures precisely \(\mathsf{Free}_{A}(\Omega_{A})\) (especially using the \(\mathsf{d}\) notation). That said, the \(A\)-algebra structure of \(\mathsf{T}^{\circ}(A)\) will be precisely the adjoint projection \(p_{A}^{\circ}:A\xrightarrow{}\mathsf{T}^{\circ}(A)\). Turning our attention back to the adjunction, we define the unit \(\eta_{A}:A\xrightarrow{}\mathsf{T}^{\circ}(A)\ltimes\mathsf{T}^{\circ}(A)\) as follows: \[\eta_{A}(a)=(a,\mathsf{d}(a)) \tag{55}\] which is clearly a \(\mathcal{P}\)-algebra morphism. The counit \(\epsilon_{A}:\mathsf{T}^{\circ}(A\ltimes A)\xrightarrow{}A\) is the \(\mathcal{P}\)-algebra morphism defined on generators \((a,b)\) and \(\mathsf{d}(a,b)\) for all \((a,b)\in A\ltimes A\) as follows: \[\epsilon_{A}(a,b)=a\qquad\qquad\epsilon_{A}(\mathsf{d}(a,b))=b \tag{56}\]

**Lemma 4.4.2**.: \((\eta,\epsilon):\mathsf{T}^{\circ}\dashv\mathsf{T}\) _is an adjunction._

Proof.: We leave it as an exercise for the reader to check that the adjunction triangle identities are satisfied. Alternatively, one could check that \(\mathsf{ALG}_{\mathcal{P}}\left(\mathsf{T}^{\circ}(A),A^{\prime}\right)\cong\mathsf{ALG}_{\mathcal{P}}(A,A^{\prime}\ltimes A^{\prime})\).
Explicitly, given a \(\mathcal{P}\)-algebra morphism \(f:\mathsf{T}^{\circ}(A)\xrightarrow{}A^{\prime}\), define the \(\mathcal{P}\)-algebra morphism \(f^{\circ}:A\xrightarrow{}A^{\prime}\ltimes A^{\prime}\) as \(f^{\circ}(a)=(f(a),f(\mathsf{d}(a)))\), and conversely, given a \(\mathcal{P}\)-algebra morphism \(g:A\xrightarrow{}A^{\prime}\ltimes A^{\prime}\), with \(g(a)=(g_{1}(a),g_{2}(a))\), define the \(\mathcal{P}\)-algebra morphism \(g^{\sharp}:\mathsf{T}^{\circ}(A)\xrightarrow{}A^{\prime}\) on generators as \(g^{\sharp}(a)=g_{1}(a)\) and \(g^{\sharp}(\mathsf{d}(a))=g_{2}(a)\). One can check that the mappings \(f\mapsto f^{\circ}\) and \(g\mapsto g^{\sharp}\) define mutually inverse bijections.

For any operad \(\mathcal{P}\), \(\mathsf{ALG}_{\mathcal{P}}\) is cocomplete [28, Proposition 6.4], and so, it admits all coproducts and pushouts. Therefore, applying Theorem 2.2.2 and Corollary 2.2.4, we obtain:

**Corollary 4.4.3**.: _Let \(\mathcal{P}\) be an operad and \(A\) a \(\mathcal{P}\)-algebra. The Cartesian Rosicky tangent category \((\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) defined in Theorem 4.3.3 has an adjoint tangent structure \((\mathsf{ALG}_{\mathcal{P}}^{op},\mathbb{T}^{\circ})\), where \(\mathbb{T}^{\circ}\) is defined as in Theorem 2.2.2._

We will now give a concrete description of the adjoint tangent structure. We can define all the necessary structure maps on generators. Let us first describe the generators of \(\mathsf{T}^{\circ}_{n}(A)\) and \(\mathsf{T}^{\circ 2}(A)\). On the one hand, \(\mathsf{T}^{\circ}_{n}(A)\) is a quotient of \(\mathsf{S}(\mathcal{P},\prod\limits_{i=1}^{n+1}A)\) modulo equations similar to those above, and therefore, can be described in terms of generators \(a\) and \(\mathsf{d}_{i}(a)\) for all \(a\in A\) and \(1\leq i\leq n\). By Lemma 2.2.3, \(\mathsf{T}^{\circ}_{n}\) is indeed a left adjoint to \(\mathsf{T}_{n}\). On the other hand, \(\mathsf{T}^{\circ 2}(A)\) is generated by \(a\), \(\mathsf{d}(a)\), \(\mathsf{d}^{\prime}(a)\), and \(\mathsf{d}^{\prime}\mathsf{d}(a)\) for all \(a\in A\), where \(\mathsf{d}\) is for \(\Omega_{A}\) and \(\mathsf{d}^{\prime}\) is for \(\Omega_{\mathsf{T}^{\circ}(A)}\).

**Theorem 4.4.4**.: _Let \(\mathcal{P}\) be an operad. Consider:_ 1. _The adjoint tangent bundle functor_ \(\mathsf{T}^{\circ}:\mathsf{ALG}_{\mathcal{P}}\xrightarrow{}\mathsf{ALG}_{\mathcal{P}}\) _defined on objects by_ \(\mathsf{T}^{\circ}(A)=\mathsf{Free}_{A}(\Omega_{A})\) _and on maps by_ \(\mathsf{T}^{\circ}(f)\) _(as defined above);_ 2. _The adjoint projection_ \(p_{A}^{\circ}:A\xrightarrow{}\mathsf{T}^{\circ}(A)\) _defined by_ \(p_{A}^{\circ}(a)=a\)_, where the_ \(n\)_-fold pushout of_ \(p_{A}^{\circ}\) _is_ \(\mathsf{T}^{\circ}_{n}(A)\)_, with injections_ \(q_{j}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}^{\circ}_{n}(A)\) _defined on generators as_ \(q_{j}^{\circ}(a)=a\) _and_ \(q_{j}^{\circ}(\mathsf{d}(a))=\mathsf{d}_{j}(a)\)_;_ 3. _The adjoint sum_ \(s_{A}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}^{\circ}_{2}(A)\)_, defined on generators as_ \(s_{A}^{\circ}(a)=a\) _and_ \(s_{A}^{\circ}(\mathsf{d}(a))=\mathsf{d}_{1}(a)+\mathsf{d}_{2}(a)\)_;_ 4.
_The adjoint zero map_ \(z_{A}^{\circ}:\mathsf{T}^{\circ}(A)\xrightarrow{}A\)_, defined on generators by_ \(z_{A}^{\circ}(a)=a\) _and_ \(z_{A}^{\circ}(\mathsf{d}(a))=0\)_;_ 5. _The adjoint vertical lift_ \(\ell^{\circ}_{A}:\mathsf{T}^{\circ 2}(A)\xrightarrow{}\mathsf{T}^{\circ}(A)\)_, defined on generators by:_ \[l^{\circ}_{A}(a)=a\qquad\qquad l^{\circ}_{A}(\mathsf{d}(a))=0\qquad\qquad l^{\circ}_{A}(\mathsf{d}^{\prime}(a))=0\qquad\qquad l^{\circ}_{A}(\mathsf{d}^{\prime}\mathsf{d}(a))=\mathsf{d}(a)\] 6. _The adjoint canonical flip_ \(c^{\circ}_{A}:\mathsf{T}^{\circ 2}(A)\xrightarrow{}\mathsf{T}^{\circ 2}(A)\)_, defined on generators by:_ \[c^{\circ}_{A}(a)=a\qquad\qquad c^{\circ}_{A}(\mathsf{d}(a))=\mathsf{d}^{\prime}(a)\qquad\qquad c^{\circ}_{A}(\mathsf{d}^{\prime}(a))=\mathsf{d}(a)\qquad\qquad c^{\circ}_{A}(\mathsf{d}^{\prime}\mathsf{d}(a))=\mathsf{d}^{\prime}\mathsf{d}(a)\] 7. _The adjoint negative map_ \(n^{\circ}_{A}:\mathsf{T}^{\circ}(A)\xrightarrow{}\mathsf{T}^{\circ}(A)\)_, defined on generators by_ \(n^{\circ}_{A}(a)=a\) _and_ \(n^{\circ}_{A}(\mathsf{d}(a))=-\mathsf{d}(a)\)_._

_Then, \(\mathbb{T}^{\circ}=(\mathsf{T}^{\circ},p^{\circ},s^{\circ},z^{\circ},l^{\circ},c^{\circ},n^{\circ})\) is a Rosicky tangent structure on \(\mathsf{ALG}^{op}_{\mathcal{P}}\), and so, \((\mathsf{ALG}^{op}_{\mathcal{P}},\mathbb{T}^{\circ})\) is a Cartesian Rosicky tangent category._

While the tangent bundle \(\mathsf{T}\) is mostly the same for each operad, the adjoint tangent bundle \(\mathsf{T}^{\circ}\) can vary quite drastically from operad to operad. So let us now consider the resulting tangent categories for our main examples of operads. We again stress that, while the first two examples recapture known examples, the last two examples are new examples of tangent categories. While the third example is not too surprising, it does provide a direct link between tangent categories and non-commutative algebraic geometry, which provides a novel application for the theory of tangent categories. On the other hand, the fourth example is a good example that demonstrates how operads provide many new (and surprising) examples of tangent categories that were not previously considered.

**Example 4.4.5**.: Let \(A\) be an \(R\)-algebra. For the operad \(A^{\bullet}\), Theorem 4.4.4 recaptures precisely the adjoint biproduct tangent structure from Lemma 3.3.1. For an \(A\)-module \(M\), every other \(A\)-module is an \(M\)-module in the operadic sense, where the evaluation maps \(\psi_{n+1}\) with \(n\geq 1\) are all zero and \(\psi_{1}\) is simply the \(A\)-action. Similarly, algebras over \(M\) in the operadic sense simply correspond to an \(A\)-module \(N\) equipped with a chosen \(A\)-linear morphism \(M\xrightarrow{}N\). In this case, \(\mathsf{Free}_{M}(N)=M\times N\), which is an algebra over \(M\) via the injection map. On the other hand, \(\Omega_{M}=M\), with universal derivation being the identity \(1_{M}:M\xrightarrow{}M\). So, we indeed have that \(\mathsf{T}^{\circ}(M)=M\times M\). Thus \((\mathsf{MOD}^{op}_{A},\mathbb{B}^{\circ})\) is a Cartesian Rosicky tangent category.

**Example 4.4.6**.: Recall that, famously, the opposite category of commutative \(R\)-algebras is equivalent to the category of affine schemes over \(R\). Therefore, the resulting tangent category for the operad \(\operatorname{Com}\) is equivalent to the tangent category of affine schemes as described in [11, Section 2.3], providing a link between tangent categories and algebraic geometry. For a commutative \(R\)-algebra \(A\), a module over \(A\) in the operadic sense corresponds precisely to a (left) \(A\)-module \(M\).
Free \(A\)-algebras are constructed by the symmetric \(A\)-algebra functor: \[\mathsf{Free}_{A}(M)=\mathsf{Sym}_{A}(M)=\bigoplus_{n\in\mathbb{N}}\left(M^{\otimes_{A}n}\right)_{\Sigma(n)}\] where \(\otimes_{A}\) is the tensor product over \(A\). On the other hand, \(\Omega_{A}\) is precisely the usual (left) module of Kahler differentials over \(A\), that is, the free \(A\)-module over the set \(\{\mathsf{d}(a)|\;\forall a\in A\}\) modulo the necessary derivation identities. Then \(\mathsf{T}^{\circ}(A)\) is the free symmetric \(A\)-algebra over \(\Omega_{A}\): \[\mathsf{T}^{\circ}(A):=\mathsf{Sym}_{A}\left(\Omega_{A}\right)=\bigoplus_{n=0}^{\infty}\left(\Omega_{A}^{\otimes_{A}n}\right)_{\Sigma(n)}=A\oplus\Omega_{A}\oplus\left(\Omega_{A}\otimes_{A}\Omega_{A}\right)_{\Sigma(2)}\oplus\ldots\] In [18, Definition 16.5.12.I], Grothendieck calls \(\mathsf{T}^{\circ}(A)\) the "fibré tangent" (French for tangent bundle) of \(A\), while in [22, Section 2.6], Jubin calls \(\mathsf{T}^{\circ}(A)\) the tangent algebra of \(A\). An arbitrary element of \(\mathsf{T}^{\circ}(A)\) is a finite sum of monomials of the form \(a\mathsf{d}(b_{1})\ldots\mathsf{d}(b_{n})\), and thus the \(R\)-algebra structure of \(\mathsf{T}^{\circ}(A)\) is essentially the same as that of a polynomial ring. In particular, this implies that as an \(R\)-algebra, \(\mathsf{T}^{\circ}(A)\) is generated by \(a\) and \(\mathsf{d}(a)\) for all \(a\in A\). On the other hand, \(\mathsf{T}^{\circ}_{n}(A)\) can be described in terms of generators \(a\) and \(\mathsf{d}_{i}(a)\) for all \(a\in A\) and \(1\leq i\leq n\), and \(\mathsf{T}^{\circ 2}(A)\) in terms of four generators \(a\), \(\mathsf{d}(a)\), \(\mathsf{d}^{\prime}(a)\), and \(\mathsf{d}^{\prime}\mathsf{d}(a)\) for all \(a\in A\), modulo all the necessary equations. The rest of the adjoint tangent structure is given as follows on generators: \[p^{\circ}_{A}(a)=a\] \[q^{\circ}_{j}(a)=a\qquad\qquad q^{\circ}_{j}(\mathsf{d}(a))=\mathsf{d}_{j}(a)\] \[s^{\circ}_{A}(a)=a\qquad\qquad s^{\circ}_{A}(\mathsf{d}(a))=\mathsf{d}_{1}(a)+\mathsf{d}_{2}(a)\] \[z^{\circ}_{A}(a)=a\qquad\qquad z^{\circ}_{A}(\mathsf{d}(a))=0\] \[l^{\circ}_{A}(a)=a\qquad\qquad l^{\circ}_{A}(\mathsf{d}(a))=0\qquad\qquad l^{\circ}_{A}(\mathsf{d}^{\prime}(a))=0\qquad\qquad l^{\circ}_{A}(\mathsf{d}^{\prime}\mathsf{d}(a))=\mathsf{d}(a)\] \[c_{A}^{\circ}(a)=a\qquad\qquad c_{A}^{\circ}(\mathsf{d}(a))=\mathsf{d}^{\prime}(a)\qquad\qquad c_{A}^{\circ}(\mathsf{d}^{\prime}(a))=\mathsf{d}(a)\qquad\qquad c_{A}^{\circ}(\mathsf{d}^{\prime}\mathsf{d}(a))=\mathsf{d}^{\prime}\mathsf{d}(a)\] So \((\mathsf{CALG}_{R}^{op},\mathbb{T}^{\circ})\) is a Cartesian Rosicky tangent category.

**Example 4.4.7**.: For the operad Ass, this results in a non-commutative version of the previous example. For an \(R\)-algebra \(A\), a module over \(A\) in the operadic sense corresponds precisely to an \(A\)-bimodule \(M\). Free \(A\)-algebras are given by the \(A\)-tensor algebra: \[\mathsf{Free}_{A}(M)=\mathsf{Ten}_{A}(M)=\bigoplus_{n\in\mathbb{N}}M^{\otimes_{A}n}\] The \(A\)-module \(\Omega_{A}\) is the non-commutative version of the module of Kahler differentials over \(A\) [17, Section 10] (which, it is important to note, is different from the commutative version).
Therefore: \[\mathsf{T}^{\circ}(A)=\bigoplus_{n\in\mathbb{N}}\Omega_{A}^{\otimes_{A}n}=A\oplus\Omega_{A}\oplus(\Omega_{A}\otimes_{A}\Omega_{A})\oplus\ldots\] A bit more concretely, \(\mathsf{T}^{\circ}(A)\) can be described as the free \(A\)-algebra over the set \(\{\mathsf{d}(a)|\ \forall a\in A\}\) modulo \(\mathsf{d}(ab)=a\mathsf{d}(b)+\mathsf{d}(a)b\) and \(\mathsf{d}(ra+sb)=r\mathsf{d}(a)+s\mathsf{d}(b)\). So \((\mathsf{ALG}_{R}^{op},\mathbb{T}^{\circ})\) is a Cartesian Rosicky tangent category. In [17, Definition 10.2.3], Ginzburg calls \(\mathsf{T}^{\circ}(A)\) the "space of noncommutative differential forms of \(A\)". To the best of our knowledge, this is the first mention of a tangent category that relates directly to non-commutative algebraic geometry.

**Example 4.4.8**.: For the operad Lie, we obtain a new example of a tangent category for Lie algebras. For a Lie algebra \(\mathfrak{g}\), modules in the operadic sense correspond to representations of \(\mathfrak{g}\), which we call \(\mathfrak{g}\)-representations and simply denote by their underlying \(R\)-module \(V\). Algebras over \(\mathfrak{g}\) in the operadic sense correspond to Lie algebras \(\mathfrak{g}^{\prime}\) equipped with a Lie algebra morphism \(\mathfrak{g}\to\mathfrak{g}^{\prime}\). So \(\mathsf{Free}_{\mathfrak{g}}(V)\) is the free Lie algebra over \(\mathfrak{g}\) on a \(\mathfrak{g}\)-representation \(V\). On the other hand, \(\Omega_{\mathfrak{g}}\) is the free representation of \(\mathfrak{g}\) over the set \(\mathsf{d}(\mathfrak{g})=\{\mathsf{d}(x)|\ \forall x\in\mathfrak{g}\}\) modulo the relations \(\mathsf{d}(rx+sy)=r\mathsf{d}(x)+s\mathsf{d}(y)\) and \(\mathsf{d}\left([x,y]\right)=[\mathsf{d}(x),y]+[x,\mathsf{d}(y)]\) for all \(r,s\in R\) and \(x,y\in\mathfrak{g}\). Hence, \(\mathsf{T}^{\circ}(\mathfrak{g})\) can be concretely defined as the free Lie algebra over the underlying set of \(\mathfrak{g}\) and the set \(\mathsf{d}(\mathfrak{g})\) modulo the same equalities as for \(\Omega_{\mathfrak{g}}\), and such that \([x,y]\in\mathsf{T}^{\circ}(\mathfrak{g})\) is identified with \([x,y]\in\mathfrak{g}\), which makes \(\mathsf{T}^{\circ}(\mathfrak{g})\) a Lie algebra over \(\mathfrak{g}\). Therefore \((\mathsf{LIE}_{R}^{op},\mathbb{T}^{\circ})\) is a Cartesian Rosicky tangent category, which we stress is an important new example of a tangent category.

We conclude this section by mentioning that the construction of Theorem 4.4.4 is functorial. Indeed, every operad morphism \(f:\mathcal{P}\to\mathcal{P}^{\prime}\) induces a functor \(\mathsf{ALG}_{f}^{op}:\mathsf{ALG}_{\mathcal{P}^{\prime}}^{op}\to\mathsf{ALG}_{\mathcal{P}}^{op}\) which is defined on objects and maps as \(\mathsf{ALG}_{f}\). It is straightforward to check that \(\mathsf{ALG}_{f}^{op}:(\mathsf{ALG}_{\mathcal{P}^{\prime}}^{op},\mathbb{T}^{\circ})\to(\mathsf{ALG}_{\mathcal{P}}^{op},\mathbb{T}^{\circ})\) is a Cartesian Rosicky tangent functor. Now let \(\mathsf{CRTAN}\) be the category of Cartesian Rosicky tangent categories and strong Cartesian tangent functors between them [5, Section 4.3]. Therefore we obtain a functor \(\mathsf{ALG}^{op}:\mathsf{OPERAD}_{R}^{op}\to\mathsf{CRTAN}\) which sends an operad \(\mathcal{P}\) to \((\mathsf{ALG}_{\mathcal{P}}^{op},\mathbb{T}^{\circ})\) and an operad morphism \(f:\mathcal{P}\to\mathcal{P}^{\prime}\) to \(\mathsf{ALG}_{f}^{op}:(\mathsf{ALG}_{\mathcal{P}^{\prime}}^{op},\mathbb{T}^{\circ})\to(\mathsf{ALG}_{\mathcal{P}}^{op},\mathbb{T}^{\circ})\).
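Before moving on to vector fields, here is a small worked instance of the adjoint tangent bundle for the operad Com (the choice \(A=R[x]\) is ours, for illustration only). The module of Kahler differentials \(\Omega_{R[x]}\) is the free \(R[x]\)-module on the single generator \(\mathsf{d}(x)\), so

\[\mathsf{T}^{\circ}(R[x])=\mathsf{Sym}_{R[x]}\left(\Omega_{R[x]}\right)\cong R[x][\mathsf{d}(x)]\cong R[x,y]\]

a polynomial ring in two variables, matching the familiar picture of the tangent bundle of the affine line; the adjoint projection \(p^{\circ}_{R[x]}:R[x]\xrightarrow{}\mathsf{T}^{\circ}(R[x])\) is the evident inclusion.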
### Vector Fields of Algebras of an Operad

In this section we will explain how vector fields in the category of algebras of an operad correspond precisely to derivations. Luckily, it turns out that it is already known that derivations are closely related to the semi-direct product, i.e., the tangent bundle. Indeed, one could apply [26, Proposition 12.3.11] to get the desired result. However, let us give an alternative explanation using the coCartesian differential monad point of view. So let \(\mathcal{P}\) be an operad and let \(A\) be a \(\mathcal{P}\)-algebra. In Section 3.4 we explained how vector fields of \(A\) in \((\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) correspond to \(\mathsf{S}(\mathcal{P},-)\)-derivations of \(A\). It turns out that \(\mathsf{S}(\mathcal{P},-)\)-derivations on \(A\) correspond precisely to \(A\)-derivations evaluated in \(A\) itself. Indeed, \(A\) is an \(A\)-module where the evaluation maps are induced from the \(\mathcal{P}\)-algebra structure \(\theta:\mathsf{S}(\mathcal{P},A)\xrightarrow{}A\). Then we may consider \(A\)-derivations \(D:A\xrightarrow{}A\).

**Lemma 4.5.1**.: _For an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\), an \(\mathsf{S}(\mathcal{P},-)\)-derivation \(D:A\xrightarrow{}A\) is precisely the same as an \(A\)-derivation \(D:A\xrightarrow{}A\)._

Proof.: Recall that an \(R\)-linear morphism \(D:A\xrightarrow{}A\) is an \(A\)-derivation if: \[D(\mu(a_{1},\ldots,a_{n}))=\sum_{i=1}^{n}\mu(a_{1},\ldots,D(a_{i}),\ldots,a_{n})\] On the other hand, an \(R\)-linear morphism \(D:A\xrightarrow{}A\) is an \(\mathsf{S}(\mathcal{P},-)\)-derivation if \(D\circ\theta=\theta\circ\mathsf{S}\left(\mathcal{P},\pi_{1}+D\circ\pi_{2}\right)\circ\partial_{A}\). Let us show that this equality is precisely the same as requiring that \(D\) be an \(A\)-derivation. For \(\mu_{A}\in\mathcal{P}(0)\), since \(\partial_{A}(\mu_{A})=0\), we have that \(D(\mu_{A})=0\). For the rest, we compute: \[D(\mu(a_{1},\ldots,a_{n}))=D\left(\theta(\mu;a_{1},\ldots,a_{n})\right)=\theta\left(\mathsf{S}\left(\mathcal{P},\pi_{1}+D\circ\pi_{2}\right)\left(\partial_{A}(\mu;a_{1},\ldots,a_{n})\right)\right)\] \[=\sum_{i=1}^{n}\theta\left(\mathsf{S}\left(\mathcal{P},\pi_{1}+D\circ\pi_{2}\right)\left(\mu;(a_{1},0),\ldots,(0,a_{i}),\ldots,(a_{n},0)\right)\right)=\sum_{i=1}^{n}\theta\left(\mu;a_{1},\ldots,D(a_{i}),\ldots,a_{n}\right)\] \[=\sum_{i=1}^{n}\mu(a_{1},\ldots,D(a_{i}),\ldots,a_{n})\] So we conclude that \(\mathsf{S}(\mathcal{P},-)\)-derivations on \(A\) and \(A\)-derivations evaluated in \(A\) are indeed the same thing.

Therefore by Lemma 3.4.2, we have that vector fields correspond to derivations as desired:

**Proposition 4.5.2**.: _For an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\), there is a bijective correspondence between vector fields of \(A\) in \((\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) and \(A\)-derivations \(D:A\xrightarrow{}A\). Therefore a vector field \(v\in\mathsf{V}_{\mathbb{T}}(A)\) is precisely a \(\mathcal{P}\)-algebra morphism \(v:A\xrightarrow{}A\ltimes A\) such that \(v(a)=(a,D_{v}(a))\) for all \(a\in A\), where \(D_{v}:A\xrightarrow{}A\) is an \(A\)-derivation.
Furthermore, the induced Lie bracket is given by \([v,w](a)=(a,D_{v}(D_{w}(a))-D_{w}(D_{v}(a)))\)._

By Lemma 2.3.3, we also have that vector fields in the opposite category of algebras also correspond precisely to derivations:

**Corollary 4.5.3**.: _For an operad \(\mathcal{P}\) and a \(\mathcal{P}\)-algebra \(A\), there is a bijective correspondence between vector fields of \(A\) in \((\mathsf{ALG}_{\mathcal{P}}^{op},\mathbb{T}^{\circ})\) and \(A\)-derivations \(D:A\xrightarrow{}A\). So a vector field \(v\in\mathsf{V}_{\mathbb{T}^{\circ}}(A)\) is precisely a \(\mathcal{P}\)-algebra morphism \(v:\mathsf{Free}_{A}(\Omega_{A})\xrightarrow{}A\) which is defined on generators as \(v(a)=a\) and \(v(\mathsf{d}(a))=D_{v}(a)\) for all \(a\in A\), where \(D_{v}:A\xrightarrow{}A\) is an \(A\)-derivation. Furthermore, the induced Lie bracket is given on generators by \([v,w](a)=a\) and \([v,w](\mathsf{d}(a))=D_{v}(D_{w}(a))-D_{w}(D_{v}(a))\)._

Let us consider what vector fields are for our main examples of operads.

**Example 4.5.4**.: For an \(R\)-algebra \(A\) and an \(A\)-module \(M\), an \(M\)-derivation evaluated in \(M\) is just an \(A\)-linear endomorphism \(f:M\xrightarrow{}M\). Therefore a vector field of \(M\) in \((\mathsf{MOD}_{A},\mathbb{B})\) is an \(A\)-linear map \(v:M\xrightarrow{}M\times M\) such that \(v(m)=(m,f_{v}(m))\) for some \(A\)-linear map \(f_{v}:M\xrightarrow{}M\). Similarly for vector fields of \(M\) in \((\mathsf{MOD}_{A}^{op},\mathbb{B}^{\circ})\).

**Example 4.5.5**.: For the operad \(\mathsf{Com}\) and a commutative \(R\)-algebra \(A\), an \(A\)-derivation evaluated in \(A\) in the operadic sense is the same thing as a derivation in the classical sense, that is, an \(R\)-linear morphism \(D:A\xrightarrow{}A\) which satisfies the product rule: \(D(ab)=aD(b)+D(a)b\). Then a vector field of \(A\) in \((\mathsf{CALG}_{R},\mathbb{T})\) is an \(R\)-algebra morphism \(v:A\xrightarrow{}A[\epsilon]\) such that \(v(a)=a+D_{v}(a)\epsilon\) for some derivation \(D_{v}:A\xrightarrow{}A\). Similarly, a vector field of \(A\) in \((\mathsf{CALG}_{R}^{op},\mathbb{T}^{\circ})\) corresponds to an \(R\)-algebra morphism \(v:\mathsf{Sym}_{A}\left(\Omega_{A}\right)\xrightarrow{}A\) which is given on generators as \(v(a)=a\) and \(v(\mathsf{d}(a))=D_{v}(a)\) for some derivation \(D_{v}:A\xrightarrow{}A\).

**Example 4.5.6**.: For the operad \(\mathsf{Ass}\), derivations in the operadic sense again correspond to derivations in the classical sense as in the previous example. So vector fields in \((\mathsf{ALG}_{R},\mathbb{T})\) or \((\mathsf{ALG}_{R}^{op},\mathbb{T}^{\circ})\) are given in essentially the same way as in the commutative case.

**Example 4.5.7**.: For the operad Lie and a Lie algebra \(\mathfrak{g}\), a \(\mathfrak{g}\)-derivation evaluated in \(\mathfrak{g}\) corresponds to an \(R\)-linear morphism \(D:\mathfrak{g}\xrightarrow{}\mathfrak{g}\) which satisfies \(D([x,y])=[x,D(y)]+[D(x),y]\) for all \(x,y\in\mathfrak{g}\). So a vector field of \(\mathfrak{g}\) in \((\mathsf{LIE}_{R},\mathbb{T})\) is a Lie algebra morphism \(v:\mathfrak{g}\xrightarrow{}\mathfrak{g}\ltimes\mathfrak{g}\) such that \(v(x)=(x,D_{v}(x))\) for some \(\mathfrak{g}\)-derivation \(D_{v}:\mathfrak{g}\xrightarrow{}\mathfrak{g}\).
Similarly, a vector field of \(\mathfrak{g}\) in \((\mathsf{LIE}_{R}^{op},\mathbb{T}^{\circ})\) corresponds to a Lie algebra morphism \(v:\mathsf{T}^{\circ}(\mathfrak{g})\xrightarrow{}\mathfrak{g}\) which is given on generators as \(v(x)=x\) and \(v(\mathsf{d}(x))=D_{v}(x)\) for some \(\mathfrak{g}\)-derivation \(D_{v}:\mathfrak{g}\xrightarrow{}\mathfrak{g}\).

### Differential Objects of an Operad

In this section, we will give precise characterizations of the differential objects in both the category of algebras of an operad and its opposite category. On the one hand, for the category of algebras, we will see that the differential objects are, in a sense, quite trivial algebras. On the other hand, we will show that the differential objects in the opposite category are quite rich and recapture a certain kind of module in the operadic sense. Let us begin by taking a look at the differential objects in the category of algebras of an operad.

**Proposition 4.6.1**.: _Let \(\mathcal{P}\) be an operad. Then a \(\mathcal{P}\)-algebra \(A\) is a differential object in \((\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) (in a necessarily unique way) if and only if \(\mu(a_{1},\dots,a_{n})=0\) for all \(n\neq 1\), \(\mu\in\mathcal{P}(n)\), and \(a_{i}\in A\)._

Proof.: Recall that the terminal object in \(\mathsf{ALG}_{\mathcal{P}}\) is given by the zero \(R\)-module \(\mathfrak{0}\), whose \(\mathcal{P}\)-algebra structure is just given by \(0\). Per the discussions in Section 3.5, a \(\mathcal{P}\)-algebra \(A\) is a differential object if and only if \(\pi_{2}:A\ltimes A\xrightarrow{}A\), \(\pi_{1}+\pi_{2}:A\times A\xrightarrow{}A\), and \(\mathfrak{0}:\mathfrak{0}\xrightarrow{}A\) are all \(\mathcal{P}\)-algebra morphisms. Suppose that \(A\) is a differential object. Then, as mentioned in Section 3.5, we have that: \[(\mu(a_{1},\dots,a_{n}),\mu(b_{1},\dots,b_{n}))=\mu((a_{1},b_{1}),\dots,(a_{n},b_{n}))=\left(\mu(a_{1},\dots,a_{n}),\sum_{i=1}^{n}\mu(a_{1},\dots,b_{i},\dots,a_{n})\right),\] which implies: \[\mu(b_{1},\dots,b_{n})=\sum_{i=1}^{n}\mu(a_{1},\dots,b_{i},\dots,a_{n}),\qquad\forall a_{i},b_{i}\in A.\] The left-hand side does not depend on \(a_{i}\), so we get: \(\mu(b_{1},\dots,b_{n})=\sum_{i=1}^{n}\mu(0,\dots,b_{i},\dots,0)\). But by multilinearity, \(\sum_{i=1}^{n}\mu(0,\dots,b_{i},\dots,0)=0\) for \(n\geq 2\). For \(n=0\), we also get an empty sum. Therefore, \(\mu(b_{1},\dots,b_{n})=0\) for all \(n\neq 1\). Conversely, if in \(A\), \(\mu(b_{1},\dots,b_{n})=0\) for all \(n\neq 1\), it is straightforward to show that the equalities in Lemma 3.5.1 hold, and so, \(A\) is a differential object.

Here are the differential objects for our main examples of operads:

**Example 4.6.2**.: For any \(R\)-algebra \(A\), the operad \(A^{\bullet}\) is concentrated in arity one (\(A^{\bullet}(1)=A\)), so every \(A\)-module \(M\) is a differential object in \((\mathsf{MOD}_{A},\mathbb{B})\), as per the discussion in Section 3.5.

**Example 4.6.3**.: For the operad \(\mathrm{Com}\), if a commutative \(R\)-algebra \(A\) is a differential object in \((\mathsf{CALG}_{R},\mathbb{T})\), then \(A[\varepsilon]\cong A\times A\) as \(R\)-algebras. However, the unit in \(A[\varepsilon]\) is \(1\) while the unit in \(A\times A\) is \((1,1)\). But then the isomorphism \(A[\varepsilon]\cong A\times A\) would imply that \(1=0\). This is only the case for the zero \(R\)-algebra \(\mathfrak{0}\). Therefore, the only differential object in \((\mathsf{CALG}_{R},\mathbb{T})\) is \(\mathfrak{0}\).
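The same conclusion can also be read off directly from Proposition 4.6.1 (a one-line check we include for convenience; it is not part of the source argument): taking \(\mu\in\mathrm{Com}(2)\) to be the multiplication and evaluating on the unit gives

\[1=\mu(1,1)=0\]

so any commutative \(R\)-algebra satisfying the condition of Proposition 4.6.1 must be the zero algebra \(\mathfrak{0}\).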
**Example 4.6.4**.: For the operad \(\mathrm{Ass}\), by the same argument as in the previous example, we have that the only differential object in \((\mathsf{ALG}_{R},\mathbb{T})\) is \(\mathfrak{0}\).

**Example 4.6.5**.: For the operad \(\mathrm{Lie}\), it turns out that the differential objects are precisely the \(R\)-modules. Indeed, every \(R\)-module \(V\) comes equipped with a trivial Lie bracket, \([v,w]=0\), which makes \(V\) a Lie algebra for which \(V\ltimes V=V\times V\). Conversely, suppose that \(\mathfrak{g}\) is a Lie algebra and a differential object, which in particular implies that \(\mathfrak{g}\ltimes\mathfrak{g}=\mathfrak{g}\times\mathfrak{g}\). However, this implies that \(\left([x_{1},x_{2}],[y_{1},y_{2}]\right)=[(x_{1},y_{1}),(x_{2},y_{2})]=\left([x_{1},x_{2}],[x_{1},y_{2}]+[y_{1},x_{2}]\right)\). Setting \(x_{i}=0\), we get that \([y_{1},y_{2}]=0\), which means that the Lie bracket of \(\mathfrak{g}\) is trivial. So we have that the differential objects in \((\mathsf{LIE}_{R},\mathbb{T})\) are precisely the \(R\)-modules with the trivial Lie bracket.

Let us now turn our attention to differential objects in the opposite category of algebras of an operad \(\mathcal{P}\). Luckily, as mentioned in Section 2.4, differential objects do not necessarily transfer through adjoint tangent structure. So even if \((\mathsf{ALG}_{\mathcal{P}},\mathbb{T})\) may not have any non-trivial differential objects, we will show that \((\mathsf{ALG}_{\mathcal{P}}^{op},\mathbb{T}^{\circ})\) actually has many interesting differential objects. Recall that we mentioned that \(\mathsf{ALG}_{\mathcal{P}}\) is cocomplete [28, Proposition 6.4], and therefore has coproducts. However, coproducts of \(\mathcal{P}\)-algebras are not straightforward or easy to work with. Luckily, there is an alternative but equivalent characterization of differential objects in a Cartesian Rosicky tangent category which does not involve the product \(\times\). As such, this alternative description will allow us to describe differential objects in \((\mathsf{ALG}_{\mathcal{P}}^{op},\mathbb{T}^{\circ})\) without having to work with the coproduct in \(\mathsf{ALG}_{\mathcal{P}}\). Firstly, it turns out that a differential object is in fact a special kind of differential bundle [8, Definition 2.3], which are analogues of smooth vector bundles in a tangent category. While differential bundles are beyond the scope of this paper (we invite interested readers to learn about them in [8, 11, 27]), it is enough to know that a differential object is the same thing as a differential bundle over the terminal object \(*\) [8, Proposition 3.4]. In [27], MacAdam provided an alternative description of a differential bundle in a Cartesian Rosicky tangent category, which in particular requires less data and fewer axioms than the original definition. Briefly, MacAdam showed that, in a Cartesian Rosicky tangent category, a differential bundle over an object \(X\) can be characterized as an object \(A\) with maps \(q:A\xrightarrow{}X\), called the projection, \(\zeta:X\xrightarrow{}A\), called the zero map, and \(\ell:A\xrightarrow{}\mathsf{T}(A)\), called the lift map. Furthermore, these need to satisfy: (1) four equalities, (2) that the pullback of \(n\)-copies of \(q\) exists and is preserved by \(\mathsf{T}^{n}\), and (3) that \(\ell\) satisfies a universal property, being part of a pullback square called Rosicky's universality diagram; see [27, Corollary 3] or [11, Proposition 3.8] for full details.
We have mentioned that differential objects are just differential bundles over the terminal object. So, in a Cartesian Rosicky tangent category, MacAdam's description is greatly simplified when \(X=*\). Firstly, there is only one possible candidate for the projection, being the unique map \(t_{A}:A\xrightarrow{}*\), so this map need not be specified. It follows that condition (2) is just saying that the product of \(n\) copies of \(A\) exists and is preserved by the tangent bundle, which is already true, since we are in a Cartesian tangent category. So, condition (2) is automatically verified. Thus, one only needs the relations on the maps \(\zeta:*\xrightarrow{}A\) and \(\ell:A\xrightarrow{}\mathsf{T}(A)\). Now, one of the equalities in condition (1) is \(t_{A}\circ\zeta=1_{*}\), which is true by the universal property of the terminal object, so it is always satisfied. Lastly, the pullback square of condition (3) usually has \(\mathsf{T}(*)\times A\) in the bottom corner. However, since \(\mathsf{T}(*)\cong*\), we may rewrite Rosicky's universality diagram with \(A\) in the bottom corner. Therefore, MacAdam's description allows us to provide a much simpler, yet equivalent, characterization of differential objects without referring to products. The following is just a combination of [8, Proposition 3.4] with the rewritten version of [27, Corollary 3], or [11, Proposition 3.8], for the specific case of the terminal object.

**Lemma 4.6.6**.: _Let \((\mathbb{X},\mathbb{T})\) be a Cartesian Rosicky tangent category. Then, there is a bijective correspondence between:_ 1. _Differential objects_ \((A,\hat{p},\sigma,\zeta)\)_;_ 2. _Triples_ \((A,\zeta,\ell)\) _consisting of an object_ \(A\)_, a map_ \(\zeta:*\xrightarrow{}A\)_, called the_ _zero map_, and a map_ \(\ell:A\xrightarrow{}\mathsf{T}(A)\)_, called the_ _differential lift_, such that the following equalities hold:_ \[p_{A}\circ\ell=\zeta\circ t_{A}\qquad\qquad\ell\circ\zeta=z_{A}\circ\zeta\qquad\qquad\mathsf{T}(\ell)\circ\ell=l_{A}\circ\ell\] (57) _and the following commutative diagram, called_ _Rosicky's universality diagram_, is a pullback square._

[Rosicky's universality diagram and the material immediately following it could not be recovered from the source conversion; see [27, Corollary 3] or [11, Proposition 3.8] for the pullback square.]
Using this characterization, we can show that free \(\mathcal{P}\)-algebras always give differential objects in the opposite category of \(\mathcal{P}\)-algebras.

**Lemma 4.6.7**.: _Let \(\mathcal{P}\) be an operad. Then for any \(R\)-module \(V\), \(\mathsf{S}(\mathcal{P},V)\) is a differential object in \((\mathsf{ALG}^{op}_{\mathcal{P}},\mathbb{T}^{\circ})\) where in particular the zero \(\zeta^{\circ}:\mathsf{S}(\mathcal{P},V)\to\mathcal{P}(0)\) is defined as follows:_
\[\zeta^{\circ}(\mu;v_{1},\ldots,v_{n})=0\]
_and the differential lift \(\ell^{\circ}:\mathsf{T}^{\circ}(\mathsf{S}(\mathcal{P},V))\to\mathsf{S}(\mathcal{P},V)\) is defined as follows on generators:_
\[\ell^{\circ}(\mu;v_{1},\ldots,v_{n})=0\qquad\qquad\ell^{\circ}\left(\mathsf{d}(\mu;v_{1},\ldots,v_{n})\right)=(\mu;v_{1},\ldots,v_{n})\]
_Furthermore, \(\mathsf{T}^{\circ}(\mathsf{S}(\mathcal{P},V))\cong\mathsf{S}(\mathcal{P},V\times V)\) as \(\mathcal{P}\)-algebras._

While free \(\mathcal{P}\)-algebras always give differential objects, it is possible that there are other differential objects. That said, we are still able to characterize the differential objects: they correspond precisely to \(\mathcal{P}(0)\)-modules (in the operadic sense).

**Theorem 4.6.8**.: _Let \(\mathcal{P}\) be an operad. Then there is a bijective correspondence between differential objects in \((\mathsf{ALG}^{op}_{\mathcal{P}},\mathbb{T}^{\circ})\) and \(\mathcal{P}(0)\)-modules._

Proof.: Let \((A,\zeta^{\circ},\ell^{\circ})\) be a differential object in \((\mathsf{ALG}^{op}_{\mathcal{P}},\mathbb{T}^{\circ})\). Let \(\mathsf{D}_{\ell^{\circ}}(a)=\ell^{\circ}(\mathsf{d}(a))\) and let \(\mathsf{D}_{\ell^{\circ}}(A)\) be the image of \(\mathsf{D}_{\ell^{\circ}}\), so \(\mathsf{D}_{\ell^{\circ}}(A)\) is an \(R\)-module. We claim that the following equips \(\mathsf{D}_{\ell^{\circ}}(A)\) with a \(\mathcal{P}(0)\)-module structure with evaluation:
\[\psi_{n+1}\left(\mu;\nu_{1},\ldots,\nu_{n},\ell^{\circ}(\mathsf{d}(a))\right):=\ell^{\circ}\left(\mathsf{d}\left(\mu(\nu_{1},\ldots,\nu_{n},a)\right)\right). \tag{60}\]
Indeed, by definition, since \(\mathsf{d}\) is a derivation, one has:
\[\ell^{\circ}\left(\mathsf{d}\left(\mu(\nu_{1},\ldots,\nu_{n},a)\right)\right)=\ell^{\circ}\left(\sum_{i=1}^{n}\mu(\nu_{1},\ldots,\mathsf{d}\nu_{i},\ldots,\nu_{n},a)\right)+\ell^{\circ}\left(\mu(\nu_{1},\ldots,\nu_{n},\mathsf{d}a)\right) \tag{61}\]
\[=\mu\left(\nu_{1},\ldots,\nu_{n},\ell^{\circ}(\mathsf{d}a)\right) \tag{62}\]
which gives a \(\mathcal{P}(0)\)-module structure on \(\mathsf{D}_{\ell^{\circ}}(A)\).

Conversely, let \(M\) be a \(\mathcal{P}(0)\)-module. Consider the free \(\mathcal{P}(0)\)-algebra \(\mathsf{Free}_{\mathcal{P}(0)}(M)\). As shown in Proposition 4.4.1, \(\mathsf{Free}_{\mathcal{P}(0)}(M)\) is generated by \(\mu\in\mathcal{P}(0)\) and \(x\in M\). However, thanks to the relations on \(\mathsf{Free}_{\mathcal{P}(0)}(M)\), the generators \(\mu\in\mathcal{P}(0)\) correspond in fact to the units coming from the \(\mathcal{P}\)-algebra structure, so \(\mathsf{Free}_{\mathcal{P}(0)}(M)\), as a \(\mathcal{P}(0)\)-algebra, is generated by \(x\in M\). The \(\mathcal{P}\)-algebra \(\mathsf{T}^{\circ}(\mathsf{Free}_{\mathcal{P}(0)}(M))\) has generators \(x\) and \(\mathsf{d}(x)\) for all \(x\in M\).
Now, define the differential lift \(\ell^{\circ}:\mathsf{T}^{\circ}(\mathsf{Free}_{\mathcal{P}(0)}(M))\to\mathsf{Free}_{\mathcal{P}(0)}(M)\) as the \(\mathcal{P}\)-algebra morphism defined as follows on generators:
\[\ell^{\circ}(x)=0\qquad\qquad\ell^{\circ}(\mathsf{d}(x))=x \tag{63}\]
which is indeed well-defined, since \(\ell^{\circ}\) can be constructed using the universal properties. Next, we define the zero \(\zeta^{\circ}:\mathsf{Free}_{\mathcal{P}(0)}(M)\to\mathcal{P}(0)\) as the \(\mathcal{P}\)-algebra morphism defined as follows on generators:
\[\zeta^{\circ}(x)=0 \tag{64}\]
It is straightforward to check that \(\ell^{\circ}\) and \(\zeta^{\circ}\) satisfy the equalities in (57). Lastly, for the pushout square, suppose there is a \(\mathcal{P}\)-algebra morphism \(f:\mathsf{T}^{\circ}(\mathsf{Free}_{\mathcal{P}(0)}(M))\to A\) such that \(f(x)=0\). Then define the \(\mathcal{P}\)-algebra morphism \(f^{\natural}:\mathsf{Free}_{\mathcal{P}(0)}(M)\to A\) on generators as \(f^{\natural}(x)=f(\mathsf{d}(x))\). By construction, \(f^{\natural}\) satisfies the necessary identities and is the unique map that does so since it was defined on generators. So we conclude that \((\mathsf{Free}_{\mathcal{P}(0)}(M),\zeta^{\circ},\ell^{\circ})\) is indeed a differential object.

We must now explain why these constructions are inverses of each other. So, starting from a \(\mathcal{P}(0)\)-module \(M\), by definition of \(\ell^{\circ}\) we already have that \(\mathsf{D}_{\ell^{\circ}}(\mathsf{Free}_{\mathcal{P}(0)}(M))=M\) (and this is an equality of \(\mathcal{P}(0)\)-modules). In the other direction, start with a differential object \((A,\zeta^{\circ},\ell^{\circ})\). Define the \(\mathcal{P}\)-algebra morphism \(\phi:A\to\mathsf{Free}_{\mathcal{P}(0)}(\mathsf{D}_{\ell^{\circ}}(A))\) using the universal property of the pushout as the unique \(\mathcal{P}\)-algebra morphism such that:
\[\phi(\ell^{\circ}(\mathsf{d}(a)))=\ell^{\circ}(\mathsf{d}(a)) \tag{65}\]
Then \(\phi\) is a \(\mathcal{P}\)-algebra isomorphism with inverse \(\phi^{-1}:\mathsf{Free}_{\mathcal{P}(0)}(\mathsf{D}_{\ell^{\circ}}(A))\to A\) defined on generators as:
\[\phi^{-1}(\ell^{\circ}(\mathsf{d}(a)))=\ell^{\circ}(\mathsf{d}(a)) \tag{66}\]
So we have that \(A\cong\mathsf{Free}_{\mathcal{P}(0)}(\mathsf{D}_{\ell^{\circ}}(A))\) as \(\mathcal{P}\)-algebras, and it is easy to see that \(\phi\) preserves the differential object structure \(\zeta^{\circ}\) and \(\ell^{\circ}\). So we conclude that there is indeed a bijective correspondence between differential objects and \(\mathcal{P}(0)\)-modules, as desired.

Before considering the differential objects in our main examples, we point out that \(\mathcal{P}(0)\)-modules also have an alternative description in terms of modules in the usual sense.

**Lemma 4.6.9**.: [2, Lemma 1.4] _For an operad \(\mathcal{P}\), \(\mathcal{P}(0)\)-modules in the operadic sense correspond precisely to \(\mathcal{P}(1)\)-left modules in the usual sense._

Let us consider the differential objects in our main examples of operads. In particular, we note that the first example has differential objects which are not simply free \(\mathcal{P}\)-algebras. On the other hand, for the last three examples, the differential objects turn out to be precisely free \(\mathcal{P}\)-algebras.
**Example 4.6.10**.: For an \(R\)-algebra \(A\), \(A^{\bullet}(1)=A\); so every \(A\)-module is a differential object in \((\mathsf{MOD}^{op}_{A},\mathbb{T}^{\circ})\), as per the discussion in Section 3.5.

**Example 4.6.11**.: For the operad \(\operatorname{Com}\), \(\operatorname{Com}(0)=R\) and \(\operatorname{Com}(1)=R\). We also have that \(\mathsf{Free}_{R}=\mathsf{Sym}\). Therefore, for any \(R\)-module \(V\), \(\mathsf{Sym}(V)\) is a differential object in \((\mathsf{CALG}^{op}_{R},\mathbb{T}^{\circ})\). If \(V\) is a free \(R\)-module with basis \(X\), then recall that \(\mathsf{Sym}(V)\cong R[X]\) and so \(\mathsf{T}^{\circ}(\mathsf{Sym}(V))\cong R[X,dX]\), as in Example 4.2.4. So in terms of polynomials, the differential object structure is defined as follows:
\[\hat{p}^{\circ}(q(x_{1},\ldots,x_{n}))=q(dx_{1},\ldots,dx_{n})\qquad\qquad\zeta^{\circ}(q(x_{1},\ldots,x_{n}))=q(0,\ldots,0)\]
\[\sigma^{\circ}(q(x_{1},\ldots,x_{n}))=q(x_{1}+dx_{1},\ldots,x_{n}+dx_{n})\]
\[\ell^{\circ}(q(x_{1},\ldots,x_{n},dy_{1},\ldots,dy_{m}))=q(0,\ldots,0,y_{1},\ldots,y_{m}),\]
where the \(x_{i}\) and \(y_{j}\) are elements of \(X\), possibly with repetitions. This recaptures [11, Theorem 5.9].

**Example 4.6.12**.: For the operad \(\operatorname{Ass}\), \(\operatorname{Ass}(0)=R\) and \(\operatorname{Ass}(1)=R\). We also have that \(\mathsf{Free}_{R}=\mathsf{Ten}\). So for any \(R\)-module \(V\), \(\mathsf{Ten}(V)\) is a differential object in \((\mathsf{ALG}^{op}_{R},\mathbb{T}^{\circ})\). For the case of free \(R\)-modules, we may describe the differential object structure in terms of non-commutative polynomials as in the previous example.

**Example 4.6.13**.: For the operad \(\operatorname{Lie}\), \(\operatorname{Lie}(0)=\mathsf{0}\) and \(\operatorname{Lie}(1)=R\). We also have \(\mathsf{Free}_{\mathsf{0}}=\mathsf{Lie}\). Thus, for any \(R\)-module \(V\), the free Lie algebra \(\mathsf{Lie}(V)\) is a differential object in \((\mathsf{LIE}^{op}_{R},\mathbb{T}^{\circ})\).

## 5 Future Work

We conclude this paper by discussing some interesting future research projects that build upon the theory of tangent categories of algebras over an operad.

1. In [11], it was shown that the study of the tangent structure on the opposite category of commutative algebras provides many concepts from the algebraic geometry of affine schemes. Similarly, the tangent structure on the opposite category of associative algebras formalizes many constructions of Ginzburg [17] related to non-commutative geometry. It is natural to ask the question: what kind of geometry can be described using the opposite category of Lie algebras? Is this somehow related to the geometry of Lie algebras studied, for example, by Francis and Gaitsgory [13]? What are the geometries obtained using the operads of PreLie algebras or Poisson algebras? Can we describe the non-commutative Poisson geometry studied by Van den Bergh [33] using our techniques? Even more generally, one could use tangent categories to provide a new version of algebraic geometry relative to an operad.
2. Differential bundles in a tangent category [8] generalize the notion of smooth vector bundles. In [11], it was shown that differential bundles over a commutative algebra correspond precisely to modules over said commutative algebra. As such, we conjecture that differential bundles over an algebra of an operad will also correspond to modules over that algebra (in the operadic sense).
This would be an extension of Theorem 2.2.3, since a differential object can equivalently be described as a differential bundle over the terminal object. This will be investigated by the second author for their PhD thesis. Beyond differential bundles, one should also study other interesting tangent category notions in the (opposite) category of algebras over an operad. For example, what are the tangent category versions of connections [7], or de Rham cohomology [12], or even solving differential equations [9] in these tangent categories?
3. The story of this paper was to explain how, from an operad, one could build tangent categories and Cartesian differential categories. A natural question is whether we can go in the other direction: for what kinds of tangent categories or Cartesian differential categories is it possible to construct an operad? In a similar fashion, it would be of interest to precisely characterize which tangent categories are equivalent to the (opposite) category of algebras of an operad.
4. In [1], Bauer, Burke and Ching generalized the notion of tangent categories to the higher categorical setting. This new concept of tangent \(\infty\)-category allows one to study tangent structures "up to higher coherences". There is a well-established notion of \(\infty\)-operad [4], and this encodes operations "up to homotopy". It then seems natural to ask whether our theory can be generalized to produce a tangent \(\infty\)-structure on the (opposite) category of algebras over an \(\infty\)-operad. If so, there are plenty of potential applications: replacing the operad \(\operatorname{Com}\) by the \(\infty\)-operad \(E_{\infty}\) of commutative algebras up to coherent homotopy could give insight into the notion of derived algebraic geometry [32]. Using a replacement for Lie could recapture notions from the geometry of Lie algebras of Francis and Gaitsgory, and from works of Harpaz, Nuiten and Prasma [13, 19]. Replacing Ass by the \(\infty\)-operad \(A_{\infty}\) should recapture the theory of \(A_{\infty}\)-geometry of Kontsevich and Soibelman [23].
5. In Section 4 we briefly discussed the functoriality of the two constructions. In particular, we mentioned how every operad morphism provides strong/strict tangent morphisms relating to each construction. This fact should play an important role in better understanding the link between operad theory and tangent category theory.

There are thus many potentially interesting paths to take for future work regarding operads and tangent categories.
2302.09127
Robust Pseudo-Markets for Reusable Public Resources
We study non-monetary mechanisms for the fair and efficient allocation of reusable public resources, i.e., resources used for varying durations. We consider settings where a limited resource is repeatedly shared among a set of agents, each of whom may request to use the resource over multiple consecutive rounds, receiving utility only if they get to use the resource for the full duration of their request. Such settings are of particular significance in scientific research where large-scale instruments such as electron microscopes, particle colliders, or telescopes are shared between multiple research groups; this model also subsumes and extends existing models of repeated non-monetary allocation where resources are required for a single round only. We study a simple pseudo-market mechanism where upfront we endow each agent with a budget of artificial credits, proportional to the fair share of the resource we want the agent to receive. The endowments thus define for each agent her ideal utility as that which she derives from her favorite allocation with no competition, but subject to getting at most her fair share of the resource across rounds. Next, on each round, and for each available resource item, our mechanism runs a first-price auction with a selective reserve, wherein each agent submits a desired duration and a per-round-bid, which must be at least the reserve price if requesting for multiple rounds; the bidder with the highest per-round-bid wins, and gets to use the item for the desired duration. We consider this problem in a Bayesian setting and show that under a carefully chosen reserve price, irrespective of how others bid, each agent has a simple strategy that guarantees she receives a $1/2$ fraction of her ideal utility in expectation. We also show this result is tight, i.e., no mechanism can guarantee that all agents get more than half of their ideal utility.
Siddhartha Banerjee, Giannis Fikioris, Éva Tardos
2023-02-17T20:21:50Z
http://arxiv.org/abs/2302.09127v4
# Robust Pseudo-Markets for Reusable Public Resources ###### Abstract We study non-monetary mechanisms for the fair and efficient allocation of _reusable public resources_. We consider settings where a limited resource is shared among a set of agents, each of whom may request to use the resource over multiple consecutive rounds, receiving some utility only if they get to use the resource for the full duration of their request. Such settings are of particular significance in scientific research where large-scale instruments such as electron microscopes, particle colliders, and telescopes are shared between multiple research groups; this model also subsumes and extends existing models of repeated non-monetary allocation where resources are required for a single round only. We study a simple pseudo-market mechanism where upfront we endow each agent with some budget of artificial credits, with the budget proportion reflecting the _fair share_ of the resource we want the agent to receive. The endowments thus define for each agent her _ideal utility_ as that which she derives from her favorite allocation with no competition, but subject to getting at most her fair share of the items across rounds. Next, on each round, and for each available item, our mechanism runs a first-price auction with a selective reserve, wherein each agent submits a desired duration and per-round-bid (which must be at least the reserve price), and the highest bidder gets to use the item for the desired duration. We consider this problem in a Bayesian setting and show that under a carefully chosen reserve price, irrespective of how others bid, each agent has a strategy that guarantees her a \(1/2\) fraction of her _ideal utility_ in expectation. We also show this result is tight, i.e., there is no mechanism that can guarantee that all agents get more than half of their ideal utility. ## 1 Introduction Our goal in this paper is to design mechanisms that enable fair and efficient utilization of reusable public resources between agents who share the resource. To formalize what we mean, it helps to consider the problem faced by researchers sharing some expensive scientific equipment, such as a telescope/mass spectrometer/etc. Such settings exhibit several common features: * _Resource constraints:_ A telescope can at any time only be used by one researcher; a biology core facility may have multiple spectrometers allowing simultaneous sharing by a few, but still, not by all agents. This necessitates some form of centralized coordination. * _Stochastic and time-sensitive demands:_ Research is uncertain, and so a researcher may not always know beforehand when they may need to use the equipment. Moreover, requirements are often time-sensitive, and can not be delayed. For example, astronomers often use less powerful telescopes to find potential astronomical events of interest but need a large-scale telescope to take proper measurements. Consequently, the coordinator needs to make allocation decisions on the fly, while being uncertain about future events. * _Multi-period demands:_ Different researchers may require to use the instrument for different lengths of time to complete their experiments and may be unable to interrupt them and resume later. Hence, the coordinator may need to enable reserving the resource for long durations that block the workflow of some researchers. 
* _Strategic behavior by agents:_ At the end of the day, given any coordination mechanism, competing agents will try to 'game' the mechanism for their own good, and there is always a danger that this may lead to a tragedy of the commons, where the resource is inefficiently utilized.

The way these challenges are handled in practice is often ad-hoc. For example, modern large-scale telescopes such as the James Webb telescope [11] use complex protocols for time-sharing between researchers, based on a combination of guaranteed slots, proposal-based allocations, and on-demand slots. Of course, one common solution to any such coordination problem is to enable some form of 'free market', and indeed, astronomers have proposed using credit-based market mechanisms [12, 13]. However, our understanding of such non-monetary mechanisms is limited, especially when incorporating dynamics, uncertainty, and scheduling constraints. Our work aims to advance our understanding of such _pseudo-market_ mechanisms in these settings. At a high level, it seems clear that the principal who controls the instrument should decide when each agent gets to control the instrument in a way that aims for high overall utilization, while ensuring equitable distribution of resource usage between agents. One way to do so could be via round-robin sharing - however, while this is clearly equitable and maximizes utilization, in many cases this is undesirable as it has no alignment with the agents' utilities. Unfortunately, without charging money, it is unclear how the principal can determine the exact utility that each agent gets from using the instrument on any given round. The principal could of course ask agents to report their utilities, but without money, the principal cannot incentivize agents to report truthfully. Additionally, there is no direct way to compare these reports, since there is no common scale for their utilities. Pseudo-markets based on artificial currencies offer a way to guarantee fairness and ensure that the instrument is used in a way that generates high utilities for the agents. The basic idea is that the principal first endows each agent with some budget of artificial credits, which represents the _fair share_ of the resource she is entitled to receive. More specifically, the principal gives each agent an amount of an artificial currency and then uses some mechanism to allocate the instrument, charging them money in this artificial currency. In the case that the allocation process happens only once, running a first-price auction offers an appealing mechanism, where the equilibrium is the market equilibrium of the corresponding Fisher market. [1] studied the case when allocation happens repeatedly over a number of rounds but the item is required only for a single round at a time. They study a first-price mechanism and show approximate fairness properties of the resulting allocation. Their approach however depends on having only single-round allocations, and as we demonstrate below, performs poorly in settings that require multi-round allocations. Consequently, we need new ideas and techniques to extend their robustness notion to settings where the resources may be needed for various amounts of time.

### Overview of our results and techniques

The basic setting we study is as follows: A principal has a single item that is to be shared between a set of agents over \(T\) rounds.
In each round, each agent has a random requirement which comprises a required duration and a value for utilizing the item over that duration. The principal uses a pseudo-market to coordinate the agents, where she first endows each agent with some budget of artificial credits, and then whenever the resource is free, runs some mechanism to determine who should get to use it, and for how long. Our aim is to give _per-agent performance guarantees_ under minimal behavioral assumptions on other agents. In particular, following [1], we define the _ideal utility_ of an agent with fair share \(0<\alpha<1\) as the best long-run average utility she can achieve in a setting without competition, but where she is constrained to have the item for at most an \(\alpha\) fraction of the rounds (see Section 4 for the formal definition). The ideal utility can be used as a measure of how much utility an agent can hope to achieve that does not depend on the behavior of the other agents. Our results focus on what fraction of her ideal utility an agent can achieve in a _minimax_ sense, irrespective of how other agents bid. This is in contrast to other measures like no-regret, where the resulting utility is compared with a benchmark that depends on how the other agents bid. In this sense, our guarantees can be viewed as characterizing the robustness of the underlying mechanism with respect to an agent's ideal utility. Our main result is a new mechanism, First-Price Pseudo-Auction with Multi-Round Reserves, that guarantees that with multi-round allocations, each agent can realize at least \(\nicefrac{{1}}{{2}}\) of her ideal utility. In the setting of single-round allocations, this same guarantee is achieved by a simple first-price auction [1]; however, this auction performs poorly with multi-round reservations, even if agents are truthful. The reason for this is that if we allow agents to reserve the item for longer periods, in a round in which there is no serious demand, an agent can capture the item for a long time at a very low price, and such a reservation can block higher-valued demands that arrive in later periods. To overcome this difficulty, our mechanism uses a reserve price for multi-period reservations. In each period each agent can request the resource for a consecutive sequence of periods starting with the current one by submitting a requested duration and their per-round bid for that duration. Bids that request the resource for more than one round need to be at least the reserve price; aside from that rule, the agent with the highest per-round bid wins. We can think of our mechanism as a combination of a spot market for the current period, and a buy-ahead reservation option, but with a price floor. Our main technical contributions are as follows:

* In Section 5 we study our mechanism, where an agent with fair share \(\alpha\) is endowed with budget \(\alpha T\) and the other agents have a total budget of \((1-\alpha)T\). Now when the reserve price is \(r\), we prove that an agent with ideal utility \(v^{\star}\) can guarantee \(v^{\star}T\min\{1/r,1-1/r\}-O(\sqrt{T})\) utility in expectation over \(T\) rounds, regardless of how the other agents behave (Theorem 5.2). This quantity is maximized when \(r=2\), in which case the agent can guarantee half her ideal utility. We then show that this is the best possible bound in our mechanism: an agent cannot guarantee more than \(v^{\star}T(\min\{1/r,1-1/r\})^{+}\) expected utility (Theorem 5.4).
This shows that reserve prices are essential in our mechanism for multi-period reservations. The \(1/r\) part in the minimum of both results comes from the fact that an agent with budget \(\alpha T\) and multi-round demands cannot get the item for more than \(\alpha T/r\) rounds when the reserve price is \(r\). The argument for being able to guarantee the part with \(1-1/r\) is the following: The other agents have budget at most \(T\) which means they can win at most \(T/r\) rounds with bids at or above the reserve price, leaving \(T(1-1/r)\) for the agent if she is willing to pay the reserve price. However, with the multi-period demands, the periods are not at all independent. For example, if the adversary could reserve every other round ahead of time, this could eliminate all value for an agent with only two round demands. Using a martingale argument we prove that because the other agents' behavior is independent of the value of the agent, the agent can get \(v^{\star}\) utility from each one of those rounds in expectation. For the upper bound result, the other agents can get the item for \(T/r\) rounds, and this leaves the agent with \(T-T/r\) rounds in which the item is available. We prove that if the mechanism allows long reservation durations, then our agent may have high value in only an \(\alpha\) fraction of the available periods. * In Section 6 we prove that no mechanism can guarantee that every agent can get more than half her ideal utility (Theorem 6.1), making our previous result optimal. We prove this by examining an example where every agent has positive demand with low probability, that lasts many rounds. * In Section 7 we study the same setting we did in Section 5, but when there are \(L\) identical items that the principal can allocate to the agents (we still assume each agent wants at most one item at a time). In this setting the ideal utility of an agent with fair share \(\alpha\) allows her to have the item for a fraction of \(\alpha L\) rounds. We show that the agent can again guarantee half of her ideal utility (Theorem 7.2). ### Paper Outline Before presenting our work, we survey related literature in Section 2. In Section 3, we define the simplest case of the reusable resource model, where the principal has a single resource to allocate in each round, and outline the general structure of the pseudo-market mechanism that we study in this work. For most of the paper, we focus on this single resource setting: we first define our per-agent benchmarks (Section 4) and then state our main robustness guarantees (Section 5) and associated hardness results (Section 6). We extend our basic setting to incorporate multi-unit settings in Section 7. ## 2 Related Literature Our work sits at the intersection of two topics in mechanism design: (i) non-monetary mechanisms for resource allocation, and (ii) dynamic mechanisms with state. We now briefly summarize the literature on these topics. Classical non-monetary mechanism design utilizes a wide variety of models and objectives to realize good welfare outcomes despite having strategic agents. 
Some of these include targeting alternate solution concepts such as pairwise stability [10], disregarding incentives to focus on fairness properties of realized outcomes [1, 2, 11], partial public information (e.g., utilities are public but feasibility is private [13]), designing lotteries to approximate efficient outcomes [12, 11, 13], explicitly hurting efficiency to align incentives (i.e.,'money burning' [1, 10]), and using ex-post verification [14]. Most of this literature, however, deals with one-shot (i.e., static) allocation settings. More recently, there has been an increasing focus on _pseudo-markets_ - simulating monetary mechanisms using an _artificial currency_ - driven largely by their success in real-world deployment for university course allocation [13, 14], food banks [15, 16] and cloud computing platforms [17, 18]. Theoretical foundations of such mechanisms have been studied in the context of one-shot combinatorial assignment problems [1, 13], but also in dynamic settings, including redistribution mechanisms [19], and approximate mechanisms for infinite-horizon Bayesian settings with knowledge of agents' value distributions [1, 2, 18, 17]. The latter works all build on the core idea of [13] of 'linking' multiple allocation problems to mitigate any gains from strategic behavior in any one problem. More recently, [2] showed how pseudo-markets can be used for repeated single-round allocations to get individual performance guarantees _without_ knowing demand distributions; our work adapts and extends their ideas to the more complex reusable resource setting. The challenge of dealing with both fixed budgets and reusable resources also places our work in the active area of dynamic mechanisms with state; once the resource is allocated, it is unavailable for the next few periods, and the user with the allocation has decreased credit. The difficulty in these problems arises due to factors that couple allocations across time; for example, incomplete information and learning [14, 15, 16], cross-period constraints including budget limits [1, 1, 17, 18, 19], leftover packets in a queuing system [10, 11], stochastic fluctuations in the underlying setting [15, 1, 16, 17], adversarial environments [1], etc. Analyzing equilibria in repeated settings however can be difficult, and so authors have explored approximation techniques such as mean-field approaches [1, 18] and bi-criteria approximations [19]. Another relevant and recent work is that of [13], where value maximizing agents have budget constraints that correspond to real money. They show a regret type guarantee against the strategy that spends at most the agent's average budget in expectation each round, but their result degrades a lot when the behavior of the other agents is adversarial. [1, 2, 18] study a similar setting where the agents' behavior is assumed to reach equilibrium; the first and third focusing on revenue maximization and the second on welfare maximization. ## 3 Reusable Public Resources and Pseudo-Markets - Basic Setting We now formally define the simplest case of the reusable resource model, where the principal has a single resource it can allocate in each round - for example, time-sharing a single telescope. We then outline the general structure of pseudo-market mechanisms that we study in this work. In Sections 4 to 6, we focus on this setting; subsequently, we extend our model to incorporate multi-unit settings in Section 7. ### Allocating a Single Reusable Public Resource There are \(n\) agents and \(T\) rounds. 
In each round, the principal has a single item to allocate. Agents have single-minded multi-round valuations; formally, in every round \(t\in[T]\), each agent \(i\in[n]\) samples a random _type_ \(\theta_{i}[t]=(V_{i}[t],K_{i}[t])\), where \(K_{i}[t]\) is the number of rounds that agent \(i\) needs the item for starting from round \(t\), and \(V_{i}[t]\) is the _per-round value_ she gets if she is allocated the item for the next \(K_{i}[t]\) rounds. In other words, if agent \(i\) is allocated the item on rounds \(t,t+1,\ldots,t+K_{i}[t]-1\) (henceforth denoted \([t,t+K_{i}[t]-1]\)), then she receives a total utility of \(K_{i}[t]V_{i}[t]\); on the other hand, if the agent is not allocated the item for all of these rounds, then she does not get any utility arising from her round \(t\) demand. We henceforth use \(\Theta=\mathbb{R}_{+}\times\mathbb{N}\) to denote the type space for each agent and round. We assume that each agent \(i\) in each round \(t\) draws demand type \(\theta_{i}[t]\) from some underlying distribution \(\mathcal{F}_{i}\), that is _independent across agents_, and _across rounds_. In particular, agent \(i\)'s demand \(\theta_{i}[t]\) is drawn independently in round \(t\) irrespective of her demand in previous rounds. Note that this means if agent \(i\) has \(K_{i}[t]>1\) but is not allocated the item in round \(t\), then in round \(t+1\), the earlier demand is lost, and she draws a new demand \(\theta_{i}[t+1]\). Having demand that is lost if not immediately allocated is meaningful in many settings where it is not possible for agents to hold back on executing a demand till a later round. For example, a biologist may not be able to preserve a sample that she needs a microscope to study; the same is true in the observatory example mentioned in the introduction, where new opportunities may arise even when the observer was not able to take advantage of the current one. Moreover, once an agent \(i\) is allocated the item for rounds \([t,t+K_{i}[t]-1]\), we assume the item is unavailable for reallocation to _any_ agent (including agent \(i\)). In other words, in every round \(t\), either the item is unavailable for allocation, or the principal commits it to some agent \(i\) for rounds \([t,t+K_{i}[t]-1]\). The commitment to not interrupt allocations models situations with large setup costs. For example, there is often some setup cost in preparing an instrument for an experiment, or in directing and focusing a telescope, and so it may be desirable for agents to complete any task they start.
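To fix notation in code form, the following is a minimal Python sketch of the demand model above; the names (`Demand`, `sample_demand`, `realized_utility`) are ours and not from the paper.

```
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Demand:
    value: float    # per-round value V_i[t]
    duration: int   # number of consecutive rounds K_i[t]

def sample_demand(types, probs, rng=random):
    """Draw theta_i[t] = (V_i[t], K_i[t]) i.i.d. from a finite type distribution."""
    return rng.choices(types, weights=probs, k=1)[0]

def realized_utility(demand: Demand, rounds_allocated: int) -> float:
    """Single-minded utility: the agent gets V*K only if she is allocated the item
    for the full requested duration, and zero otherwise."""
    return demand.value * demand.duration if rounds_allocated >= demand.duration else 0.0

# Example: a demand worth 3 per round for 2 rounds w.p. 0.3, else a worthless demand.
types, probs = [Demand(3.0, 2), Demand(0.0, 1)], [0.3, 0.7]
print(sample_demand(types, probs), realized_utility(Demand(3.0, 2), 2))  # ..., 6.0
```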
```
Input: Rounds \(T\), agents \(n\), reserve \(r\geq 0\), and agents' fair shares \(\{\alpha_{i}\}_{i\in[n]}\) (i.e., \(B_{i}[1]=\alpha_{i}T\,\forall\,i\in[n]\))
Initialize \(t=1\)
repeat
    Collect bids \(b_{1}[t],\ldots,b_{n}[t]\) and desired durations \(d_{1}[t],\ldots,d_{n}[t]\);
    Let \(\mathcal{V}=\left\{i\in[n]:b_{i}[t]d_{i}[t]\leq B_{i}[t]\text{ and }(d_{i}[t]=1\text{ or }b_{i}[t]\geq r)\right\}\);   // Determine valid bids
    if \(\mathcal{V}=\emptyset\) then
        Leave item unallocated and set \(t=t+1\);
    else
        Define \(I_{t}=\arg\max_{i\in\mathcal{V}}b_{i}[t]\) (ties broken arbitrarily);   // Choose winning agent
        Update \(B_{i}[t+1]=B_{i}[t]-b_{i}[t]d_{i}[t]\,\mathbb{I}\left[i=I_{t}\right]\);   // Update agents' budgets
        Allocate item to agent \(I_{t}\) for rounds \([t,t+d_{I_{t}}[t]-1]\);
        Set \(t=t+d_{I_{t}}[t]\);   // Block item for requested duration
    end if
until \(t>T\);
```
**ALGORITHM 1** First-Price Pseudo-Auction with Multi-Round Reserves

### Artificial Currency Mechanism for Reusable Resources

Given the above setting, the mechanism we study for allocating the resource is a _pseudo-market_ (or artificial credit) mechanism. The basic idea behind such mechanisms is to endow every agent competing for the shared resource with some budget of artificial credits, which they then use over time to compete in some form of repeated auction. Such mechanisms have been widely used in practice [1, 2], and also studied theoretically in one-shot allocation settings under known value distributions [1, 1, 2]. Our presentation here is closest to that of [1], who consider the robustness properties of such mechanisms for repeated allocation with single-round demands. Our mechanism, First-Price Pseudo-Auction with Multi-Round Reserves, starts by endowing every agent with some artificial credits. Since the currency has no intrinsic value (and hence no fixed scale), we henceforth normalize the total budget of all agents to be \(T\), of which every agent \(i\) has a share of \(\alpha_{i}\), with \(\sum_{i\in[n]}\alpha_{i}=1\); in other words, each agent \(i\) has an initial budget \(B_{i}[1]=\alpha_{i}T\). Following [1], we assume that the fractions \(\alpha_{i}\) are exogenously specified so as to determine the _fair share_ of the resource that the principal wants agent \(i\) to have (which then determines her associated _ideal utility_). We define these formally in Section 4, but at a high level, the budget fraction \(\alpha_{i}\) corresponds to the idea that agent \(i\) could get to use the item on an \(\alpha_{i}\) fraction of rounds. Following the initial endowment, the basic idea behind pseudo-market mechanisms is to then run some particular mechanism in each round, and allow agents to bid (and pay) in these mechanisms using their credits. Agents have no intrinsic value for these credits (i.e., their utility is not quasi-linear, it is only from the allocations gained), but they are unable to bid more than their remaining budget. While different works consider different mechanisms, the most commonly studied is a first-price auction [1, 1, 1, 2]. Our mechanism handles multi-round allocations as follows: first, the principal declares a _reserve price_ \(r\) for multi-round allocations; next, at the start of any round in which the resource is available, each agent declares a duration of rounds she wants to reserve the resource for, as well as a _per-round_ bid (which must exceed the reserve if the requested duration lasts multiple rounds).
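To make the per-round rule of Algorithm 1 concrete (its prose description is completed in the next paragraph), here is a minimal Python sketch of a single round; the names `Bid` and `run_round` are ours and not from the paper, and blocking the item for the winner's duration is left to the caller.

```
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Bid:
    per_round: float   # per-round bid b_i[t]
    duration: int      # requested duration d_i[t]

def run_round(bids: Dict[int, Bid], budgets: Dict[int, float], reserve: float) -> Optional[int]:
    """One round of the first-price pseudo-auction with a multi-round reserve.

    A bid is valid if the agent can afford it (b_i * d_i <= B_i) and, when it asks
    for more than one round, its per-round bid is at least the reserve price.
    Among valid bids, the highest per-round bid wins (ties broken arbitrarily);
    the winner is charged b_i * d_i credits. Returns the winner, or None if no
    bid is valid (in which case the item stays unallocated this round).
    """
    valid = {
        i: b for i, b in bids.items()
        if b.per_round * b.duration <= budgets[i]
        and (b.duration == 1 or b.per_round >= reserve)
    }
    if not valid:
        return None
    winner = max(valid, key=lambda i: valid[i].per_round)
    budgets[winner] -= valid[winner].per_round * valid[winner].duration
    return winner
```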
The agent with the _highest per-round bid_ is then awarded the item for her requested duration and is 'charged' her bid times the duration from her credit budget. We present the mechanism in detail in Algorithm 1. In Section 7, we discuss how to extend it to settings where the principal has more than one resource to allocate.

## 4 Individual Agent Benchmarks: Fair Shares and the Ideal Utility

In this section, we define the utility benchmarks we consider for the agents. When mechanisms can use real money and agents have quasi-linear utilities, payments provide an easy way to compare different agents' values and utilities. In contrast, when there are no payments that affect the agents' utilities, there is no way to make inter-personal comparisons between agents. For this reason, we need a welfare benchmark for each agent that is independent of other agents' values. To this end, we adapt an idea from [1] (which in turn borrows ideas from the bargaining literature and the Fisher market model), wherein agents' benchmarks are defined by their (exogenous) budget shares as well as their own relative valuations for items in different rounds. The main idea behind our benchmark is that for each agent \(i\), her budget fraction \(\alpha_{i}\) (with \(\sum_{j}\alpha_{j}=1\)) determines the _fair share_ of the overall resource, i.e., the fraction of total rounds she is entitled to utilize, while respecting the rights of other agents to access the resource. To see how this translates into our welfare benchmark, consider the following simple example: suppose we have \(n\) agents, where each agent has budget share \(\alpha_{i}=1/n\). Moreover, suppose every agent \(i\) has \((V_{i}[t],K_{i}[t])=(1,1)\) with probability \(1\) in every round. An agent's maximum total utility _without competition_ is \(T\), but it would be unreasonable to expect this to be attainable for any agent. In contrast, by symmetry, each agent could expect to get \(T/n\) total rounds resulting in \(T/n\) utility, which is indeed easily achieved (for example, via a round-robin allocation scheme). [1] extend the above idea to define the _ideal utility_ for agents in settings where every item lasts for just one round. Their basic definition asserts that an agent's ideal utility is _the highest per-round utility she can get while ensuring that other agents can get at least their fair share of the resource_. Formally, for each agent \(i\), they consider a simplified setting with no other agents, but where agent \(i\) is constrained to request the item on at most an \(\alpha_{i}\) fraction of the rounds, and define agent \(i\)'s ideal utility to be the maximum expected per-round utility she can achieve in this setting. For example, if agent \(i\) has \((V_{i}[t],K_{i}[t])=(1,1)\) with probability \(1\), her ideal utility is thus \(\alpha_{i}\) for any budget split \(\alpha_{i}\); more generally, if the agent has value \(V_{i}[t]\sim\mathcal{F}_{i}\) (and \(K_{i}[t]=1\)) and budget share \(\alpha_{i}\), her ideal utility essentially corresponds to that achieved by requesting the item only on rounds in which the agent's value \(V_{i}[t]\) is in the top \(\alpha_{i}\) quantile of her value distribution, so that the agent requests the item with probability \(\alpha_{i}\), depending on her realized demand.
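As a small numerical illustration of this single-round benchmark (our own sketch, not from the paper), with a finite value distribution the per-round ideal utility can be computed by greedily filling the top-\(\alpha\) quantile of values:

```
def single_round_ideal_utility(value_dist, alpha):
    """Per-round ideal utility with single-round demands: request exactly on the
    top-alpha quantile of values (randomizing at the threshold if needed).
    `value_dist` is a list of (value, probability) pairs."""
    remaining, utility = alpha, 0.0
    for value, prob in sorted(value_dist, key=lambda vp: -vp[0]):
        take = min(prob, remaining)   # fraction of rounds requested at this value
        utility += value * take
        remaining -= take
        if remaining <= 0:
            break
    return utility

# Values 4, 2, 1 with probabilities 0.25, 0.25, 0.5:
dist = [(4.0, 0.25), (2.0, 0.25), (1.0, 0.5)]
print(single_round_ideal_utility(dist, 0.25))  # 1.0  (request only when V = 4)
print(single_round_ideal_utility(dist, 0.5))   # 1.5  (request when V is 4 or 2)
```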
A first challenge in extending the ideal utility to our setting with reusable resources is that now it is tricky to define what it means for an agent to request each round in the no-competition setting while ensuring that she is only using her fair share, since each bid may need to reserve the resource for multiple days. To this end, given a budget share \(\alpha_{i}\), we now define agent \(i\)'s ideal utility to be her long-run average utility in an infinite horizon setting with no competition, subject to her long-run average resource utilization being at most \(\alpha_{i}\). Formally, let \(A[t]\in\{0,1\}\) denote the state of the resource at time \(t\) (where \(A[t]=1\) indicates the resource is available), and \(\pi:\Theta\rightarrow[0,1]\) denote a (stationary) policy that specifies for each type \(\theta=(V,K)\) the probability with which the agent requests to reserve the resource for \(K\) rounds conditioned on it being available. Now we have the following definition: **Definition 4.1** (Ideal Utility).: Consider the single reusable resource setting, with a single agent \(i\) with budget share \(\alpha_{i}\) and i.i.d type \(\theta[t]=(V[t],K[t])\sim\mathcal{F}_{i}\). For any policy \(\pi:\Theta\rightarrow[0,1]\), let \(Z[t]\sim\text{Bernoulli}(A[t]\pi(\theta[t]))\) denote a sequence of indicator variables each of which is \(1\) in round \(t\) if the resource is available and is requested by the agent, else \(0\); moreover if \(Z[t]=1\), then \(A[t^{\prime}]=0\) for all \(t^{\prime}\in[t,t+K[t]-1]\). Now, the ideal utility \(v_{i}^{\star}\) for agent \(i\) is defined as the solution to the following constrained infinite-horizon control problem: \[\max\,_{\pi} \quad\lim_{H\to\infty}\frac{1}{H}\sum_{t=1}^{H}V[t]K[t]Z[t]\] (1) such that \[\quad\lim_{H\to\infty}\frac{1}{H}\sum_{t=1}^{H}K[t]Z[t]\leq\alpha_{i}\] Note that the above problem does not depend on our true (finite) horizon \(T\). Moreover, assuming \(V[t],K[t]\) are bounded, via the Markov chain ergodic theorem we have that for any policy \(\pi\), the above time average costs exist and equal their expected value under the stationary distribution of the resulting Markov chain. Note however, that unlike a standard average cost MDP, due to the additional constraint, the optimal policy here may not be deterministic. Intuitively, the above definition extends the notion of the ideal utility to the single reusable resource setting by again considering a world with only a single agent \(i\) with fair share \(\alpha_{i}\), and allowing the agent to choose any stationary request policy (i.e., the probability, as a function of \((V[t],K[t])\), with which the agent can reserve the resource whenever it is free) subject to it using the resource at most an \(\alpha_{i}\) fraction of rounds on average. Defining the control problem over the infinite horizon allows us to ignore boundary issues (e.g., if there are \(5\) rounds remaining but agent \(i\)'s demand lasts \(6\) rounds); moreover, note that in the case of single-round demands, our definition recovers that of [1]. ### Computing the ideal utility One problem with defining the ideal utility via the infinite horizon control problem in Eq. (1) is that it is unclear if it can be solved efficiently, and moreover, how to interpret the solution for any given distribution \(\mathcal{F}_{i}\) and fair share \(\alpha_{i}\). 
We now show how the above definition of the ideal utility for agent \(i\) can be re-formulated as a simpler optimization problem, which moreover we show can be efficiently solved by converting into a linear program. For ease of notation, we drop the subscript \(i\) for the remainder of this section. To reformulate the above program, we first define \(\texttt{Req}(\theta)\) to be an indicator random variable that is \(1\) if the agent has type \(\theta\), and wants to request the item if available (in our earlier notation, for given randomized policy \(\pi\), we have \(\texttt{Req}(\theta[t])\sim\text{Bernoulli}(\pi(\theta[t]))\)). Now, given some function \(\texttt{Req}\), we divide the entire horizon into a collection of discrete renewal cycles or _epochs_, where each epoch comprises all rounds between successive times in which the resource is released by the agent: formally, if in round \(t\) the agent requests the item, then the epoch associated with that request comprises all the rounds before \(t\) since the last time the resource was unavailable (which can be \(0\)) and all the rounds after \(t\) till the agent releases her hold of the item (i.e., rounds \([t,t+K[t]-1]\)). We show an example in Fig. 1.

Figure 1: _Example of an agent's ideal utility: The numbers on top are the agent's type \(\theta[t]=(V[t],K[t])\) each round \(t\); a type in red denotes that \(\texttt{Req}(\theta[t])=1\). Each epoch is associated with a request that the agent made while the item was available: it includes the rounds when the agent held the item because of that request (green blocks) and the rounds before that while the item was free. For each epoch \(j\) we also include its length, \(\ell_{j}\), the number of rounds the agent holds the item for, \(k_{j}\), and the total value the agent gets, \(v_{j}\)._

Under any stationary policy \(\texttt{Req}\), any two epochs are independent and identically distributed. Now let \(q=\mathbb{P}\left[\texttt{Req}(V,K)=1\right]\) denote the probability that the agent requests the item if it is available (where the probability is over both \((V,K)\sim\mathcal{F}\) and any randomization in the agent's request policy). Then the following are true for each epoch:

* The number of rounds in an epoch after its start and until (and including) the round in which the agent requests the item is distributed as \(\text{Geometric}(q)\).
* If \(\ell_{j}\) is the length of an epoch \(j\), then \(\mathbb{E}\left[\ell_{j}\right]=\nicefrac{{1}}{{q}}-1+\mathbb{E}\left[K\middle|\texttt{Req}(V,K)=1\right]\).
* If \(v_{j}\) is the total utility the agent gets in epoch \(j\), then \(\mathbb{E}\left[v_{j}\right]=\mathbb{E}\left[VK\middle|\texttt{Req}(V,K)=1\right]\). Similarly, if \(k_{j}\) is the number of rounds the agent holds the item in epoch \(j\), then it holds that \(\mathbb{E}\left[k_{j}\right]=\mathbb{E}\left[K\middle|\texttt{Req}(V,K)=1\right]\).
* The agent's per-round utility is \(\nicefrac{{\sum_{j}v_{j}}}{{\sum_{j}\ell_{j}}}\) and the total fraction of rounds she holds the item for is \(\nicefrac{{\sum_{j}k_{j}}}{{\sum_{j}\ell_{j}}}\). Since \((\ell_{j},k_{j},v_{j})\) is independent across different epochs, as the number of epochs approaches infinity, we get that her expected per-round utility is \(\nicefrac{{\mathbb{E}\left[v_{1}\right]}}{{\mathbb{E}\left[\ell_{1}\right]}}\) and the expected fraction of rounds she holds the item for is \(\nicefrac{{\mathbb{E}\left[k_{1}\right]}}{{\mathbb{E}\left[\ell_{1}\right]}}\).
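As a quick sanity check on the renewal argument above, the following is a small simulation sketch of our own (not from the paper): it runs the single-agent process under a fixed request rule and compares the empirical per-round utility and usage fraction against \(\mathbb{E}[v_{1}]/\mathbb{E}[\ell_{1}]\) and \(\mathbb{E}[k_{1}]/\mathbb{E}[\ell_{1}]\).

```
import random

def simulate(T, p=0.3, value=3.0, duration=2, seed=0):
    """Single agent, no competition: each round the item is free, the agent has
    demand (V, K) = (value, duration) with probability p and requests it;
    otherwise she has no valuable demand and does not request."""
    rng = random.Random(seed)
    total_utility, busy_rounds, busy_until = 0.0, 0, 0
    for t in range(T):
        if t < busy_until:          # item still blocked by an earlier request
            busy_rounds += 1
            continue
        if rng.random() < p:        # Req(V, K) = 1: reserve for `duration` rounds
            total_utility += value * duration
            busy_rounds += 1
            busy_until = t + duration
    return total_utility / T, busy_rounds / T

per_round_utility, usage = simulate(T=10**6)
# Closed form from the bullets above, with E[l_1] = 1/q - 1 + E[K | Req = 1]:
q, EK, EVK = 0.3, 2, 6.0
print(per_round_utility, EVK / (1 / q - 1 + EK))  # both approx. 1.385
print(usage, EK / (1 / q - 1 + EK))               # both approx. 0.462
```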
Using these facts, we can re-parameterize and re-write the optimization problem (1) as follows: \[\max_{\mathtt{Req}} \frac{\mathbb{E}\left[VK\middle|\mathtt{Req}(V,K)=1\right]}{ \frac{1}{q}-1+\mathbb{E}\left[K\middle|\mathtt{Req}(V,K)=1\right]}\] (2) such that \[\mathbb{P}\left[\mathtt{Req}(V,K)=1\right]=q\] \[\frac{\mathbb{E}\left[K\middle|\mathtt{Req}(V,K)=1\right]}{\frac{ 1}{q}-1+\mathbb{E}\left[K\middle|\mathtt{Req}(V,K)=1\right]}\leq\alpha\] Before we show how an agent can efficiently solve the optimization problem (2), we make some observations about its optimal solution. * One natural question is whether the optimal request policy of the agent is independent of \(K\) (and in particular, if \(\mathtt{Req}(V,K)=\mathbb{1}\left[V\geq\bar{v}\right]\) for some \(\bar{v}\)); note that this is the case in the settings with single-round demands. However, the following example shows this is not the case for reusable resources: Consider an agent with fair share \(\alpha\) and the following distribution on \((V,K)\) \[(V,K)=\begin{cases}(1,1),&\text{with probability }\alpha/2\\ (\epsilon,2),&\text{with probability }\alpha/2\\ (\epsilon^{2},1),&\text{otherwise}\end{cases}\] for some \(\epsilon\) much smaller than \(\alpha\). It is easy to see that the optimal request policy should have \(\mathtt{Req}(1,1)=1\) with probability \(1\). However, if \(\mathtt{Req}(V,K)=0\) in the other cases, then the agent gets the item for only \(\alpha/2\) fraction of the rounds. In order to increase her ideal utility the agent can set \(\mathtt{Req}(\epsilon^{2},1)=1\) with some probability, but in the optimal solution, it should always be \(\mathtt{Req}(\epsilon,2)=0\). Intuitively, getting the item for \(2\) rounds and gaining only \(2\epsilon\) utility on those rounds, hinders the agent from getting expected utility \(\alpha/2\) on the next round (recall that \(\alpha\) is much larger than \(\epsilon\)). Formally, the denominator in the objective function of Eq. (2) becomes much larger, while the numerator increases only slightly, overall decreasing the ideal utility. * The above example also shows that given fair share \(\alpha\), it is possible that the optimal request policy by the agent results in resource-usage fraction less than \(\alpha\) (we will need to distinguish this in our proofs later). In particular, if the agent's value is only \((1,1)\) or \((\epsilon,2)\), then as we argue above, the optimal request policy sets \(\mathtt{Req}(\epsilon,2)=0\) with probability \(1\), resulting in the agent holding the item with probability \(\alpha/2\). Finally, in the case where \(\Theta\) is a _finite_ type-space, we can convert the optimization problem (2) into a linear program as shown in the lemma that follows. The lemma shows that in the case where \(\Theta\) is a finite type set, then the optimal request policy \(\texttt{Req}(\theta)\) underlying the ideal utility can be solved efficiently via a linear program. Subsequently, we will use this policy as a blackbox for defining the agent's robust bidding strategy in the pseudo-market. In the linear program below, for each type \(\theta\), we use variables \(f_{\theta}\) to denote the expected fraction of rounds in which the resource is available, the agent has type \(\theta\), and she requests to reserve the resource (note that this is not the fraction of rounds the agent uses the resource while _having_ demand type \(\theta\) - this is \(k_{\theta}f_{\theta}\)). 
Using these variables, we can restrict the agent to use the resource for at most an \(\alpha\) fraction of rounds using a linear inequality. In addition, we need to bound each \(f_{\theta}\) by the fraction of rounds in which the resource is available and the agent has type \(\theta\).

**Lemma 4.1**.: _The solution to the optimization problem (2) can be constructed as follows: Suppose that the agent has each type \(\theta=(v_{\theta},k_{\theta})\in\Theta\) with probability \(p_{\theta}\), and let \(x_{\theta}=p_{\theta}\,\mathbb{P}\left[\texttt{Req}(\theta)=1|\theta\right]\) denote the probability that the agent has type \(\theta\) and requests the item given its availability. Then we have \(x_{\theta}=\frac{f_{\theta}}{1-\sum_{\theta^{\prime}}(k_{\theta^{\prime}}-1)f_{\theta^{\prime}}}\), where \(\{f_{\theta}\}\) is the solution to the following linear program._

\[\max_{\{f_{\theta}\}_{\theta\in\Theta}}\quad\sum_{\theta}v_{\theta}k_{\theta}f_{\theta}\] (3)
_such that_
\[\sum_{\theta}k_{\theta}f_{\theta}\leq\alpha\]
\[0\leq f_{\theta}\leq p_{\theta}\left(1-\sum_{\theta^{\prime}}(k_{\theta^{\prime}}-1)f_{\theta^{\prime}}\right)\qquad\forall\,\theta\in\Theta\]

To understand the conversion between \(x_{\theta}\) and \(f_{\theta}\) and the upper bound used for \(f_{\theta}\), note that when the agent gets the item for \(k\) rounds at some time \(t\), then on the following \(k-1\) rounds the item is not available. This means that the fraction of rounds the item is available is \(1-\sum_{\theta^{\prime}}(k_{\theta^{\prime}}-1)f_{\theta^{\prime}}\). On any of those rounds, the probability of having type \(\theta\) is \(p_{\theta}\), which results in the upper bound on \(f_{\theta}\) given above.

Proof of Lemma 4.1.: Using the variables \(\{x_{\theta}\}_{\theta\in\Theta}\) we rewrite Eq. (2):
\[\max_{\{x_{\theta}\}_{\theta\in\Theta}}\quad\frac{\sum_{\theta}v_{\theta}k_{\theta}x_{\theta}}{1+\sum_{\theta}(k_{\theta}-1)x_{\theta}}\]
\[\text{such that}\quad\frac{\sum_{\theta}k_{\theta}x_{\theta}}{1+\sum_{\theta}(k_{\theta}-1)x_{\theta}}\leq\alpha\]
\[0\leq x_{\theta}\leq p_{\theta}\qquad\forall\,\theta\in\Theta\]
We turn the above into an LP by setting \(f_{\theta}=\frac{x_{\theta}}{1+\sum_{\theta^{\prime}}(k_{\theta^{\prime}}-1)x_{\theta^{\prime}}}\), which is equivalent to the linear system \(\left(I-\vec{f}\left(\vec{k}-1\right)^{\top}\right)\vec{x}=\vec{f}\). Now, as long as \(1-\vec{f}^{\top}\left(\vec{k}-1\right)=1-\sum_{\theta^{\prime}}(k_{\theta^{\prime}}-1)f_{\theta^{\prime}}\neq 0\) (which holds due to the constraints imposed on \(\vec{f}\) as shown below), we can use the Sherman-Morrison matrix inversion formula (see Footnote 1) to get \(\left(I-\vec{f}\left(\vec{k}-1\right)^{\top}\right)^{-1}=I+\frac{\vec{f}(\vec{k}-1)^{\top}}{1-\vec{f}^{\top}\left(\vec{k}-1\right)}\). Thus, we get the unique solution \(x_{\theta}=\frac{f_{\theta}}{1-\sum_{\theta^{\prime}}(k_{\theta^{\prime}}-1)f_{\theta^{\prime}}}\), and substituting this in the above program, we get the promised LP in Eq. (3).

Footnote 1: [https://en.wikipedia.org/wiki/Sherman-Morrison_formula](https://en.wikipedia.org/wiki/Sherman-Morrison_formula)

## 5 Allocating a Single Reusable Resource: Robustness Guarantees

Given the above setup, we are now ready to characterize the performance of the First-Price Pseudo-Auction with Multi-Round Reserves (Algorithm 1).
In particular, our main result is the following _per-agent robustness guarantee_ that the mechanism enjoys: we show that under a reserve price \(r\), _every agent can get a constant fraction (depending on \(r\)) of their ideal utility, irrespective of how other agents behave_. With respect to this robustness guarantee, we show that the minimax optimal reserve \(r\) is \(2\), in which case the agent can ensure they get at least half their ideal utility. Before proceeding, we need to introduce some notation. Since we are studying guarantees from the perspective of a single agent, we henceforth drop the \(i\) subscript. Throughout this section, we define \(v^{\star}\) to be the ideal utility of the agent and \(\beta\) to be the fraction of rounds in which the agent claims the item under the optimal request policy \(\mathtt{Req}\), when there is no competition. More specifically, from Eq. (2), we have \[v^{\star} =\frac{\mathbb{E}\left[VK\middle|\mathtt{Req}(V,K)=1\right]}{ \frac{1}{q}-1+\mathbb{E}\left[K\middle|\mathtt{Req}(V,K)=1\right]} \tag{4}\] \[\beta =\frac{\mathbb{E}\left[K\middle|\mathtt{Req}(V,K)=1\right]}{ \frac{1}{q}-1+\mathbb{E}\left[K\middle|\mathtt{Req}(V,K)=1\right]}\leq\alpha\] Finally we assume there is some upper bound \(k_{\max}\) on the duration of demands that agents sample (i.e., \(K_{i}[t]\leq k_{\max}\) for all \(i,t\)). To prove our robustness bound, we consider the following simple bidding strategy for an agent: Robust Bidding Policy: - Agent solves the LP in Lemma 4.1 to compute the request probability \(\mathbb{P}\left[\mathtt{Req}(\theta)=1|\theta\right]\) for each type \(\theta\) that realizes the ideal utility for her budget share \(\alpha\). - For round \(t\) the agent re-samples \(\mathtt{Req}(\theta[t])\). - If in round \(t\) the item is available, the agent has enough remaining budget (\(B[t]\geq rK[t]\)), and \(\mathtt{Req}(\theta[t])=1\), then she competes in the auction with (per-round) bid \(b[t]=r\) and duration \(d[t]=K[t]\). In other words, the agent computes her optimal stationary request policy in the no-competition setting, and then, while her budget is sufficient, bids at the reserve price according to this request policy. Note that this policy can be computed efficiently using Lemma 4.1. Now, to understand the performance of this strategy, we first establish a simple lemma that shows that the agent's utility in each round under the Robust Bidding Policy is directly proportional to her payment in that round. For simplicity, we henceforth assume that if an agent who follows Robust Bidding Policy wins the auction in round \(t\) for \(K[t]\) periods, then she pays the total amount \(P[t]=rK[t]\) and realizes her total utility \(U[t]=V[t]K[t]\) instantaneously. **Lemma 5.1**.: _Consider an agent with budget share \(\alpha\), and compute her ideal utility \(v^{\star}\) and ideal utilization fraction \(\beta\leq\alpha\). Let \(\theta[t]=(V[t],K[t])\) denote her demand type in round \(t\), and let \((U[t],P[t])\) be her total realized utility and total payment in round \(t\) under Robust Bidding Policy (and any fixed policy of other agents). Then, for all \(t\in[T]\), we have_ \[\mathbb{E}\left[U[t]\right]=\frac{v^{\star}}{\beta r}\,\mathbb{E}\left[P[t]\right]\] This lemma utilizes the simplicity of the Robust Bidding Policy, and in particular, the fact that the agent's utility is positive in round \(t\) only if \(P[t]=rK[t]\) and that the player winning on round \(t\) and \(\theta[t]\) are independent conditioned on \(\mathtt{Req}(\theta[t])=1\). 
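Before turning to the proof, here is a schematic per-round rendering of Robust Bidding Policy. The function and argument names are ours; req_prob is assumed to come from the LP of Lemma 4.1 (e.g., a routine like the ideal_request_policy sketch above).

```python
import random

def robust_bid(theta, item_available, budget, r, req_prob):
    """One round of Robust Bidding Policy (a sketch, not the paper's code).

    theta          : (V[t], K[t]) sampled this round
    item_available : True if no reservation currently covers round t
    budget         : the agent's remaining budget B[t]
    r              : the mechanism's reserve price
    req_prob       : dict mapping a type theta to P[Req(theta) = 1]

    Returns (per_round_bid, duration), or None if the agent abstains.
    """
    value, duration = theta   # the bid depends only on Req, not on the value
    if not item_available:
        return None                       # nothing to compete for this round
    if budget < r * duration:
        return None                       # cannot afford reserving K[t] rounds
    if random.random() < req_prob.get(theta, 0.0):
        return (r, duration)              # bid exactly the reserve price
    return None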
Proof.: Let \(W[t]\) be an indicator random variable that is \(1\) if the agent wins in round \(t\), else \(W[t]=0\). We are going to use the fact that the agent's type \(\theta[t]=(V[t],K[t])\) does not directly depend on \(W[t]\), but rather on whether the agent chooses to bid, which is determined by the request policy \(\texttt{Req}(V[t],K[t])\) (formally, \(\theta[t]\) is conditionally independent of \(W[t]\) given \(\texttt{Req}(\theta[t])\)). Now we write \[\mathbb{E}\left[U[t]\right] = \mathbb{E}\left[K[t]V[t]W[t]\right]\] \[= \mathbb{E}\left[K[t]V[t]\big{|}W[t]=1\right]\mathbb{P}\left[W[t]=1\right]\] \[= \mathbb{E}\left[K[t]V[t]\big{|}\texttt{Req}(\theta[t])=1\right] \mathbb{E}\left[W[t]\right]\] \[= \frac{v^{\star}}{\frac{1}{q}-1+\mathbb{E}\left[K[t]\big{|} \texttt{Req}(\theta[t])=1\right]}\mathbb{E}\left[W[t]\right]\] where in the last equality we used Eq. (4). Similarly, \[\mathbb{E}\left[P[t]\right] = \mathbb{E}\left[rK[t]W[t]\right]\] \[= r\,\mathbb{E}\left[K[t]\big{|}W[t]=1\right]\mathbb{P}\left[W[t]=1\right]\] \[= r\,\mathbb{E}\left[K[t]\big{|}\texttt{Req}(\theta[t])=1\right] \mathbb{E}\left[W[t]\right]\] \[= \frac{r\beta}{\frac{1}{q}-1+\mathbb{E}\left[K[t]\big{|}\texttt{ Req}(\theta[t])=1\right]}\,\mathbb{E}\left[W[t]\right]\] where in the last equality we used Eq. (4). Combining the two above equalities we get the desired bound. We now proceed to prove the main result of this section, that using Robust Bidding Policy, an agent can guarantee a fraction of their ideal utility. **Theorem 5.2**.: _Consider First-Price Pseudo-Auction with Multi-Round Reserves with reserve price \(r\geq 1\), and any agent with budget share \(\alpha\), corresponding ideal utility \(v^{\star}\), and ideal utilization fraction \(\beta\). Then, using Robust Bidding Policy, irrespective of how other agents bid, the agent can guarantee a total expected utility_ \[\mathbb{E}\left[\sum_{t\in[T]}U[t]\right]\geq v^{\star}T\left(\min\left\{ \frac{1}{r},1-\frac{1}{r}\right\}-\frac{1}{\beta}O\left(\sqrt{\frac{k_{\max}}{ T}}\right)\right)\] **Remark 5.3**.: _The first term in the competitive ratio in Theorem 5.2 is maximized when \(r=2\), in which case it becomes \(\nicefrac{{1}}{{2}}\). We note also that via more careful accounting, the first term can be improved to \(\min\left\{\frac{1}{r},1-\frac{1-\alpha}{r}\right\}\), which gives the optimal result for \(\alpha=1\); for ease of presentation, we defer this improvement to Section 7 (see Theorem 7.2 and Appendix A)._ At a high level (and ignoring the sub-linear in \(T\) terms), the proof of the theorem rests on a critical property of the reserve price that irrespective of the bidding policy, _it limits the number of rounds that other agents can claim at prices greater than or equal to the reserve_. The rest of the agents have a budget of \((1-\alpha)T\), and hence at a reserve price of \(r\), they can collectively win at most \((1-\alpha)T/r\) rounds at a bid which is higher than the reserve. More precisely, owing to the structure of the First-Price Pseudo-Auction with Multi-Round Reserves mechanism, the other agents can collectively _block_ the agent on at most \((1-\alpha)T/r\) rounds. The remaining rounds are available to the agent, and roughly a \(\beta\) fraction of these have high value (i.e., are requested for under \(\texttt{Req}\)). 
In particular, if we choose \(r=2\), then the other agents can at most block \(T/2\) rounds, and among the remaining rounds, the agent wants roughly \(\beta T/2\) rounds, and also has sufficient budget to claim these rounds at the reserve price. The problem with formalizing the above intuition is that with multi-round allocations, it is not enough to show that the agent has access to a \(\beta/2\) fraction of the rounds on average, as in order to receive utility for a demand \((V[t],K[t])\), the agent must get the item for the entire interval \([t,t+K[t]-1]\). As an extreme case, suppose \(\alpha\) is small, and the agent had type either \((1,1)\) or \((0,0)\), and moreover, on a given sample path, had \((V[t],K[t])=(0,0)\) on almost all odd rounds. Now the remaining agents could 'adversarially' bid \((2r,1)\) on all even rounds, and by blocking these, ensure the agent rarely wins the item. Of course, such a sample path is unlikely with i.i.d. demands, but the challenge still is to rule out all such bidding behavior by the other agents. The main idea in our proof is to track all the rounds in which the agent is not _blocked_ - i.e., where the item is available and all other agents bid lower than \(r\) - and argue that by playing the Robust Bidding Policy, the agent wins close to a \(q\) fraction of these rounds, the probability that \(\texttt{Req}(V[t],K[t])=1\) in the solution of Eq. (2). Such a property is true for any set of rounds _fixed upfront_; however, the set of non-blocked rounds may not be independent, both because they may be blocked adversarially, and also due to the lengths of the reservations. Additionally, because the agent's final payment is the minimum between her budget and a function of the unblocked rounds she has a high value for, in order to lower bound her payment we need a high probability bound on the second quantity. We do that by showing that at any time \(t\), the difference between the utilization of the agent up to \(t\), and the number of unblocked rounds up to \(t\) scaled by \(\beta\), forms a sub-martingale, which lets us use the Azuma-Hoeffding inequality2 to get the high probability bound. We then combine this with the fact that the number of blocked rounds overall is at most \(T/r\) to get the desired result. Footnote 2: [https://en.wikipedia.org/wiki/Azuma](https://en.wikipedia.org/wiki/Azuma)’s_inequality Proof of Theorem 5.2.: Fix our agent, Alice, with budget share \(\alpha\), and corresponding ideal utility \(v^{\star}\) and utilization \(\beta\leq\alpha\). We assume that Alice plays Robust Bidding Policy: in any round \(t\), if the item is available and \(\texttt{Req}(V[t],K[t])=1\), then she participates in First-Price Pseudo-Auction with Multi-Round Reserves with per-round bid \(b[t]=r\) and duration \(d[t]=K[t]\). Let \(\texttt{Blk}[t]\in\{0,1\}\) be an indicator variable that Alice is _blocked_ from competing for the item in round \(t\). In particular, we have \(\texttt{Blk}[t]=1\) if * The item was reserved for round \(t\) by Alice in a previous round (at a rate of \(r\) credits per day for the item). * The item was previously reserved for round \(t\) by an agent other than Alice, who is paying at least \(r\) per round. * The item is available, but some agent other than Alice bids at least \(r\) (for one or multiple rounds), and wins the round \(t\) auction. In all other cases, we have \(\texttt{Blk}[t]=0\). 
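To make this bookkeeping concrete, a round-by-round simulator could compute Blk[t] as below. The RoundState fields are hypothetical names for quantities such a simulator would track; they are not part of the mechanism's definition.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoundState:
    r: float                       # reserve price of the mechanism
    reserved_by: Optional[str]     # holder of a reservation covering round t, if any
    reservation_rate: float = 0.0  # per-round price paid for that reservation
    winner: Optional[str] = None   # winner of round t's auction, if one was held
    win_bid: float = 0.0           # winning per-round bid in that auction

def blocked(s: RoundState, agent: str = "alice") -> bool:
    """Blk[t] for the given agent, following the three cases listed above."""
    if s.reserved_by == agent:
        return True   # case 1: the agent reserved round t herself earlier
    if s.reserved_by is not None and s.reservation_rate >= s.r:
        return True   # case 2: another agent holds round t, paying at least r
    if (s.reserved_by is None and s.winner is not None
            and s.winner != agent and s.win_bid >= s.r):
        return True   # case 3: another agent bids at least r and wins round t
    return False      # all other cases: Blk[t] = 0
```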
Given \(\texttt{Blk}[t]\), and assuming Alice has a remaining budget of at least \(rK[t]\leq rk_{\max}\), we have that her payment \(P[t]\) in round \(t\) is \[P[t]=rK[t]\mathbbm{1}\left[\text{Alice wins auction in round }t\right]=rK[t] \texttt{Req}(\theta[t])(1-\texttt{Blk}[t])\] Since Alice might become budget limited at some point, her overall payment is at least \[\sum_{t}P[t]\geq\min\left\{\alpha T,r\sum_{t=1}^{T}K[t]\texttt{Req}(\theta[t] )(1-\texttt{Blk}[t])\right\}-rk_{\max} \tag{5}\] We now study the second term in the minimum, which represents Alice's _unconstrained_ spending (i.e., if she was never budget limited), and prove a high probability lower bound on this. Subsequently, using Lemma 5.1, we can translate this into a utility lower bound. To this end, we define \[Z_{\tau}=\sum_{t=1}^{\tau}K[t]\texttt{Req}(\theta[t])(1-\texttt{Blk}[t])-\beta \sum_{t=1}^{\tau}(1-\texttt{Blk}[t])\] To understand the rationale behind this, briefly assume that Alice makes requests that last only one round, i.e., \(\texttt{Req}(\theta[t])=1\) implies \(K[t]=1\). Given any set \(\mathcal{T}\) of rounds chosen _independently_ of \(\texttt{Req}\), if Alice wins _all_ the rounds in \(\mathcal{T}\) she requests for under policy \(\texttt{Req}\), then she would expect to utilize the item for at least \(\beta|\mathcal{T}|\) rounds in \(\mathcal{T}\). Now observe that, over the first \(\tau\) rounds, the set of rounds \(\{t\leq\tau:\texttt{Blk}[t]=0\}\) are precisely those on which Alice is not blocked from the item, and \(Z_{\tau}\) counts the difference between the actual number rounds (budget-unconstrained) Alice wins in this set, and \(\beta\) times the number of rounds in the set. If the set was chosen independently of Alice's policy, then \(Z_{\tau}\) would be a martingale, which we can then use to estimate the total expected payment \(\mathbb{E}\left[\sum_{t}P[t]\right]\), and hence the total utility. Unfortunately, however, with no additional assumptions on the bidding behavior of other agents, we can not assert that the set of unblocked rounds is independent of the \(\texttt{Req}\) policy (for example, the adversary's policy may depend on her budget). Nevertheless, we show below that \(Z_{\tau}\) is a sub-martingale with respect to the history of the previous rounds \(\mathcal{H}_{\tau-1}\). First, recall we define \(q=\mathbb{P}\left[\texttt{Req}(\theta[t])=1\right]\) under Alice's optimal request policy (Eq. (2)). Moreover, we also have \(q=\mathbb{P}\left[\texttt{Req}(\theta[t])=1|H_{t-1}\right]\), since \(\texttt{Req}\) is a _stationary_ policy that only depends on \(\theta[t]\), the agent's type in round \(t\), which is independent of \(\mathcal{H}_{t-1}\). Thus, we have: \[\mathbb{E}\left[Z_{\tau}-Z_{\tau-1}\big{|}\mathcal{H}_{\tau-1}\right] =\ \mathbb{E}\left[K[\tau]\texttt{Req}(\theta[\tau])(1- \texttt{Blk}[\tau])\big{|}\mathcal{H}_{\tau-1}\right]-\beta\,\mathbb{E}\left[ (1-\texttt{Blk}[\tau])\big{|}\mathcal{H}_{\tau-1}\right]\] \[=\ q\,\mathbb{E}\left[K[\tau](1-\texttt{Blk}[\tau])\big{|} \texttt{Req}(\theta[\tau])=1,\mathcal{H}_{\tau-1}\right]-\beta\,\mathbb{E} \left[(1-\texttt{Blk}[\tau])\big{|}\mathcal{H}_{\tau-1}\right]\] Next, from Eq. (4), we get that \(\mathbb{E}\left[K[t]\big{|}\texttt{Req}(\theta[t])=1\right]=\frac{1-q}{q}\frac {\beta}{1-\beta}\). 
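For completeness, the identity in the last step follows by rearranging the definition of \(\beta\) in Eq. (4). Writing \(m\equiv\mathbb{E}\left[K[t]\big{|}\texttt{Req}(\theta[t])=1\right]\) as a shorthand, \[\beta=\frac{m}{\frac{1}{q}-1+m}\quad\Longrightarrow\quad\beta\left(\frac{1}{q}-1\right)=m(1-\beta)\quad\Longrightarrow\quad m=\frac{1-q}{q}\,\frac{\beta}{1-\beta}.\]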
Moreover, note that \(K[\tau]\) and \(\texttt{Blk}[\tau]\) are independent given that Alice wants to request the item in round \(\tau\) (i.e., \(K[\tau]\) and \(\texttt{Blk}[\tau]\) are conditionally independent given \(\texttt{Req}(\theta[\tau])=1\)). Substituting in the above equation, we get \[\mathbb{E}\left[Z_{\tau}-Z_{\tau-1}|\mathcal{H}_{\tau-1}\right] =\ \frac{\beta(1-q)}{(1-\beta)}\,\mathbb{E}\left[(1-\texttt{Blk}[\tau])\big{|}\texttt{Req}(\theta[\tau])=1,\mathcal{H}_{\tau-1}\right]-\beta\,\mathbb{E}\left[(1-\texttt{Blk}[\tau])\big{|}\mathcal{H}_{\tau-1}\right]\] \[\geq\ \beta\,\mathbb{E}\left[(1-\texttt{Blk}[\tau])\big{|}\mathcal{H}_{\tau-1}\right]-\beta\,\mathbb{E}\left[(1-\texttt{Blk}[\tau])\big{|}\mathcal{H}_{\tau-1}\right]=0\] where the second inequality follows from the facts that \(\texttt{Blk}[\tau]\) can only increase if we remove the condition that \(\texttt{Req}(\theta[\tau])=1\) and that \(q\leq\beta\) (by Eq. (4) and the fact that \(K\geq 1\)). Now, since \(\mathbb{E}\left[Z_{\tau}-Z_{\tau-1}|\mathcal{H}_{\tau-1}\right]\geq 0\), we have that \(Z_{\tau}\) is a sub-martingale. Moreover, since \(|Z_{\tau}-Z_{\tau-1}|\leq k_{\max}\) with probability \(1\), we can use Azuma's inequality to get for any \(\epsilon>0\), \[\mathbb{P}\left[Z_{T}-Z_{0}\leq-\epsilon\right]\leq e^{-\frac{\epsilon^{2}}{2Tk_{\max}}}.\] In other words, for any \(\epsilon>0\), we have that with probability at least \(1-e^{-\frac{\epsilon^{2}}{2Tk_{\max}}}\) \[r\sum_{t=1}^{T}K[t]\texttt{Req}(V[t],K[t])(1-\texttt{Blk}[t])\geq r\beta\sum_{t=1}^{T}(1-\texttt{Blk}[t])-r\epsilon\] On the other hand, note that we have \(\sum_{t}\texttt{Blk}[t]\leq T\frac{1}{r}\) with probability \(1\) - this follows from the fact that for every round in which Alice is blocked, some agent (including possibly Alice) pays at least \(r\) credits. Thus we have with probability at least \(1-e^{-\frac{\epsilon^{2}}{2Tk_{\max}}}\) \[r\sum_{t=1}^{T}K[t]\texttt{Req}(V[t],K[t])(1-\texttt{Blk}[t])\geq r\beta T\left(1-\frac{1}{r}\right)-r\epsilon \tag{6}\] Combining Eqs. (5) and (6), we get that with probability at least \(1-e^{-\frac{\epsilon^{2}}{2Tk_{\max}}}\), Alice's payment satisfies \[\sum_{t}P[t]\geq\min\left\{\alpha T,r\beta T\left(1-\frac{1}{r}\right)-r\epsilon\right\}-rk_{\max}\geq\beta Tr\min\left\{\frac{1}{r},1-\frac{1}{r}\right\}-r\epsilon-rk_{\max}\] Setting \(\epsilon=\Theta(\sqrt{Tk_{\max}})\) and taking expectations, we get that her expected payment is at least \[\sum_{t}\mathbb{E}\left[P[t]\right]\geq\beta Tr\min\left\{\frac{1}{r},1-\frac{1}{r}\right\}-rk_{\max}-rO\left(\sqrt{Tk_{\max}}\right)\] Finally, using Lemma 5.1, we get that Alice's expected utility satisfies \[\sum_{t}\mathbb{E}\left[U[t]\right]\geq v^{\star}T\min\left\{\frac{1}{r},1-\frac{1}{r}\right\}-\frac{v^{\star}}{\beta}k_{\max}-\frac{v^{\star}}{\beta}O\left(\sqrt{Tk_{\max}}\right)\] Finally, since \(k_{\max}\leq T\), we have \(k_{\max}=O(\sqrt{Tk_{\max}})\), which gives our promised guarantee. In Theorem 5.2 we proved that an agent can guarantee approximately a \(\min\{1/r,1-1/r\}\) fraction of her ideal utility, by using a very simple strategy that involves only bidding \(r\) or \(0\). It is reasonable to wonder whether the agent can guarantee a larger fraction by using a more complicated strategy, or whether the choice of \(r=2\) is the optimal one. In the next theorem, we prove that the answer is negative: the guarantee of Theorem 5.2 is asymptotically tight under our mechanism.
More specifically, we prove that our result in Theorem 5.2 is asymptotically tight under our mechanism as we jointly scale \(T,n\) and \(k_{\max}\) (in particular, for large \(T\), and assuming \(k_{\max}=\omega(1)\) and \(\alpha_{i}=o(1)\,\forall\,i\in[n]\) with respect to \(T\)). **Theorem 5.4**.: _Consider the First-Price Pseudo-Auction with Multi-Round Reserves mechanism, with maximum reservation duration \(k_{\max}\geq 2\), and reserve price \(r\geq 0\). Then there is a strategy for the other agents such that an agent with budget \(\alpha T\) and ideal utility \(v^{\star}\) is limited to_ \[\mathbb{E}\left[\sum_{t\in[T]}U[t]\right]\leq v^{\star}T\left(1-\frac{(1- \alpha)}{\max\{1,r\}}+\frac{1}{k_{\max}}\right)+v^{\star}(k_{\max}-1)\] In order to get the bound we consider an agent with a (small) budget share \(\alpha\), and moreover, who in each round has demand \(\theta[t]=(1,1)\) with probability \(\alpha\), and \((0,0)\) otherwise. In contrast, suppose every time the item is available, at least one other agent demands \(k_{\max}\) rounds at price \(\max\{1,r\}\). This means that the agent has three options to get utility. First, she can bid only when she has positive value and get the item if it is available (which only happens with probability \(1/k_{\max}\)) until all other agents run out of money. Second, she can bid slightly more than \(\max\{r,1\}\) when she has zero value to block the other agents from getting the item. Third, she can wait for the other agents' budget to deplete, which happens in the last \(T(1-\frac{(1-\alpha)}{\max\{1,r\}})\) rounds. Proof.: Fix our agent, her budget \(\alpha T\), and her type to be \((V[t],K[t])=(1,1)\) with probability \(\alpha\) and \((V[t],K[t])=(0,1)\) otherwise. We notice that \(v^{\star}=\alpha\) and \(\beta=\alpha\). We will show the impossibility result by assuming that all the other agents follow the same deterministic strategy: each tries to reserve \(k_{\max}\) periods, by bidding \(\max\{1,r\}\) per period. For the rest of the proof, we are going to think of all the other agents into one adversary, with a total budget of \((1-\alpha)T\). Let \(A_{t}\in\{0,1\}\) denote the availability of the item: if \(A[t]=1\) then in round \(t\) the agent has the ability to bid and reserve the item; if \(A[t]=0\) then the adversary has reserved the item in a previous period and still has it. As long as \(A[t]=1\), the agent can control if she wins the item in round \(t\) or not by bidding slightly above \(\max\{1,r\}\). This allows us to assume w.l.o.g. that she only requests the item for \(1\) round at a time, since requesting the item for multiple rounds can be simulated by requesting the item for \(1\) round multiple times. We notice that the agent's utility is \[\sum_{t=1}^{T}U[t]=\sum_{t=1}^{T}V[t]\mathds{1}\left[\text{agent wins in }t\right]=\sum_{t=1}^{T}A[t]V[t]\mathds{1}\left[\text{agent bids in }t\right]\leq\sum_{t=1}^{T}A[t]V[t].\] We now notice that because the random variables \(A[t]\) and \(V[t]\) are independent, \(\mathbb{E}\left[A[t]V[t]\right]=\alpha\,\mathbb{E}\left[A[t]\right]\). This proves that the agent's expected utility \[\mathbb{E}\left[\sum_{t=1}^{T}U[t]\right]\leq\alpha\,\mathbb{E}\left[\sum_{t= 1}^{T}A[t]\right]. \tag{7}\] We now upper bound the sum of the above quantity. Let \(U\) be the number of rounds the adversary won, which makes the item unavailable for \((k_{\max}-1)U\) rounds in total. Note that \(\sum_{t=1}^{T}A_{t}=T-(k_{\max}-1)U\). 
We now lower bound \(k_{\max}U\), the number of periods the adversary holds the item, by observing that the adversary must eventually run out of budget (since as long as the adversary has budget either she or the agent pays at least \(1\) for the item each round). This means that \(k_{\max}U\geq\frac{(1-\alpha)T}{\max\{1,r\}}-k_{\max}\), i.e., the adversary gets the item until their budget is depleted, up to an additive error of \(k_{\max}\). Combining this with our previous bound for the agent's utility in (7), we have that \[\mathbb{E}\left[\sum_{t=1}^{T}U[t]\right]\leq\alpha\left(T-\frac{k_{\max}-1}{k_{\max}}\frac{(1-\alpha)T}{\max\{1,r\}}+k_{\max}-1\right)\] By rearranging the above, we complete the proof. ## 6 Impossibility Result for Single Resource Allocation In this section, we study impossibility results on the fraction of ideal utility an agent can get under _any_ mechanism. More specifically we show that our result in Theorem 5.2 is tight: as the number of agents \(n\) and the maximum number of days an agent can have demand for, \(k_{\max}\), become large, no mechanism can guarantee every agent more than half their ideal utility. **Theorem 6.1**.: _There is an example where \(n\) agents have the same ideal utility \(v^{\star}\) and no mechanism can guarantee every agent total expected utility more than_ \[v^{\star}T\left(\frac{1}{2}+O\left(\frac{1}{k_{\max}}\right)\right)\] _as \(n\to\infty\)._ We study an instance where the \(n\) agents only have demands that last \(k_{\max}\) with small probability. In our example, if every agent gets allocated the item every time she has positive value, then her total expected utility would be exactly \(v^{\star}T\). However, because the demands of the agents overlap, no mechanism can guarantee such an allocation for every agent. Proof.: Consider an example with \(n\) agents, each with budget \(T/n\) and the same type distribution \[(V,K)=\begin{cases}(1,k_{\max}),&\text{ w.p. }\frac{1}{k_{\max}(n-1)+1}:=p\\ (0,1),&\text{ otherwise}\end{cases}.\] Note that given the above probability \(p\), if an agent sets \(\texttt{Req}(1,k_{\max})=1\) with probability \(1\), then her ideal utilization is exactly \(\beta=\alpha=\nicefrac{{1}}{{n}}\). This means that the ideal utility of every agent is \(v^{\star}=\nicefrac{{1}}{{n}}\). We now calculate the expected maximum social welfare in this setting. Let \(p^{\prime}=1-(1-p)^{n}\) be the probability that any agent has positive value in a certain round when the item is free. The expected number of rounds before an item is allocated in this mechanism is \(\nicefrac{{1}}{{p^{\prime}}}-1\) and then the item is allocated for \(k_{\max}\) rounds. This means that the fraction of rounds the item is allocated for is \[\frac{k_{\max}}{k_{\max}-1+\frac{1}{p^{\prime}}}\] We show that the above fraction equals \(\nicefrac{{1}}{{2}}\) as \(n\) and \(k_{\max}\) approach infinity. We notice that if \(n\) is large enough, for any value of \(k_{\max}\) \[p^{\prime}=1-(1-p)^{n}=1-\left(1-\frac{1}{k_{\max}(n-1)+1}\right)^{n}\approx 1-e^{-1/k_{\max}}\leq\frac{1}{k_{\max}}\] which makes \[\frac{k_{\max}}{k_{\max}-1+\frac{1}{p^{\prime}}}\lessapprox\frac{k_{\max}}{k_{\max}-1+k_{\max}}=\frac{1}{2}+O\left(\frac{1}{k_{\max}}\right)\] This entails that the expected maximum social welfare is \(T(\nicefrac{{1}}{{2}}+O(\nicefrac{{1}}{{k_{\max}}}))\). This means that under any mechanism, at least one agent is going to have expected utility at most \(\nicefrac{{1}}{{n}}\) times that, which proves the theorem.
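As a quick numerical sanity check of the limiting argument in this proof, the allocated fraction can be evaluated for finite \(n\) and \(k_{\max}\) (a small script of our own, not part of the proof):

```python
def allocated_fraction(n, k_max):
    """Fraction of rounds the item is in use when it is allocated whenever
    some agent has positive value, in the example of Theorem 6.1."""
    p = 1.0 / (k_max * (n - 1) + 1)      # P[(V, K) = (1, k_max)] for one agent
    p_prime = 1.0 - (1.0 - p) ** n       # P[some agent has positive value]
    return k_max / (k_max - 1.0 + 1.0 / p_prime)

for k_max in (10, 100, 1000):
    frac = allocated_fraction(n=10**6, k_max=k_max)
    print(k_max, round(frac, 4), "<=", round(0.5 + 1.0 / k_max, 4))
```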
## 7 Generalized Reusable Public Resource Allocation In this section, we extend our results to the case where there are \(L\geq 1\) identical items shared among the agents. We still assume that each agent has nothing to gain by having more than one item in a single round. Simply adapting our single-resource result to multiple resources would result (ignoring lower order terms) in guaranteeing a \[\min\left\{\frac{1}{r},1-\frac{1+\alpha(L-1)}{r}\right\}\] fraction of the agent's ideal utility (formally defined below). In this section, we show an improved guarantee of a \[\min\left\{\frac{1}{r},1-\frac{1-\alpha}{r}\right\}\] fraction, eliminating the deterioration of the guarantee as \(L\) gets larger. In fact, the improved bound would also slightly improve our result in Section 5. There we focused on the simpler proof, as the improvement in the case of \(L=1\) is not so significant. ### Mechanism and Ideal Utility Our mechanism, First-Price Pseudo-Auction with Multi-Round Reserves, is similar to the single resource case, except that now if in round \(t\) there are \(m\) available items, the agents with the top \(m\) valid bids get allocated an item each. Additionally, in this case, we normalize the budgets such that the total budget is \(LT\) and each agent \(i\) has budget \(\alpha_{i}LT\). Our definition of ideal utility for agent \(i\), in this case, is almost identical to the one in Definition 4.1, except we allow each agent to request the item for at most an \(\alpha_{i}L\) fraction of the rounds, assuming \(\alpha_{i}L\leq 1\) (otherwise the ideal utility of the agent allows her to request the item every round). Specifically, the analogue of the optimization problem (2) that defines agent \(i\)'s ideal utility when she has fair share \(\alpha_{i}\) and \((V,K)\sim\mathcal{F}_{i}\) is \[v^{\star}=\max_{\texttt{Req}} \frac{\mathbb{E}\left[VK\big{|}\texttt{Req}(V,K)=1\right]}{\frac{1}{q}-1+\mathbb{E}\left[K\big{|}\texttt{Req}(V,K)=1\right]}\] (8) such that \[\mathbb{P}\left[\texttt{Req}(V,K)=1\right]=q\] \[\frac{\mathbb{E}\left[K\big{|}\texttt{Req}(V,K)=1\right]}{\frac{1}{q}-1+\mathbb{E}\left[K\big{|}\texttt{Req}(V,K)=1\right]}=\beta\leq\alpha_{i}L\] ### Guarantees of First-Price Pseudo-Auction with Multi-Round Reserves for multiple resources For the rest of the section, we focus on guarantees that an agent can get when using our mechanism. Henceforth, we drop the \(i\) subscript. We focus on the case when \(\alpha L\) is small, since this is the more interesting setting; otherwise, very simple mechanisms like round-robin already provide strong guarantees. We proceed to prove that even in this more complicated setting, an agent who follows Robust Bidding Policy, exactly as described in Section 5, can guarantee an almost \(1/2\) fraction of their ideal utility in expectation. We start with a lemma that is identical to Lemma 5.1. We defer the proofs of this section to Appendix A. **Lemma 7.1**.: _Consider an agent with budget \(\alpha LT\), and compute her ideal utility \(v^{\star}\) and optimal utilization fraction \(\beta\leq\alpha L\). Let \(\theta[t]=(V[t],K[t])\) denote her demand type in round \(t\), and let \((U[t],P[t])\) be her total realized utility and total payment in round \(t\) under Robust Bidding Policy (and any fixed policy of other agents).
Then, for all \(t\in[T]\), we have_ \[\mathbb{E}\left[U[t]\right]=\frac{v^{\star}}{\beta r}\,\mathbb{E} \left[P[t]\right]\] We now proceed to show the main result of the section, that any agent can guarantee a fraction of her ideal utility in expectation. **Theorem 7.2**.: _Consider First-Price Pseudo-Auction with Multi-Round Reserves with reserve price \(r\geq 1\), and any agent with budget \(\alpha LT\), corresponding ideal utility \(v^{\star}\), and optimal utilization fraction \(\beta\). Then, using Robust Bidding Policy, irrespective of how other agents bid, the agent can guarantee a total expected utility_ \[\mathbb{E}\left[\sum_{t\in[T]}U[t]\right]\geq v^{\star}T\left( \min\left\{\frac{1}{r},1-\frac{1-\alpha}{r}\right\}-\frac{1}{\beta}O\left( \sqrt{\frac{k_{\max}}{T}}\right)\right)\] **Remark 7.3**.: _The term in Theorem 7.2 is maximized when \(r=2-\alpha\), in which case it becomes \(\frac{1}{2-\alpha}\)._ We defer the proof of this theorem to the Appendix. We note that even though the result is quite similar to the one in Theorem 5.2 the proof for it requires more careful analysis. More specifically, as mentioned above, if we follow the steps of the proof in Theorem 5.2 we would get a \(\min\{\frac{1}{r},1-\frac{1+\alpha(L-1)}{r}\}\) bound instead. In order to improve the bound we need to analyze carefully the rounds when the item is not available because the player still holds it from a previous round. Previously we upper bounded the number of those rounds by \(T\frac{\alpha}{r}\) which in this case would become \(T\frac{\alpha L}{r}\) which in turn leads to the worse bound.
2301.08769
The imprint of convection on Type I X-ray bursts: Pauses in photospheric radius expansion lightcurves
Motivated by the recent observation by NICER of a type I X-ray burst from SAX J1808.4-3658 with a distinct "pause" feature during its rise (Bult et al. 2019), we show that bursts which ignite in a helium layer underneath a hydrogen-rich shell naturally give rise to such pauses, as long as enough energy is produced to eject the outer layers of the envelope by super-Eddington winds. The length of the pause is determined by the extent of the convection generated after ignition, while the rate of change of luminosity following the pause is set by the hydrogen gradient left behind by convection. Using the MESA stellar evolution code, we simulate the accumulation, nuclear burning and convective mixing prior to and throughout the ignition of the burst, followed by the hydrodynamic wind. We show that the results are sensitive to the treatment of convection adopted within the code. In particular, the efficiency of mixing at the H/He interface plays a key role in determining the shape of the lightcurve. The data from SAX J1808.4-3658 favors strong mixing scenarios. Multidimensional simulations will be needed to properly model the interaction between convection and nuclear burning during these bursts, which will then enable a new way to use X-ray burst lightcurves to study neutron star surfaces.
Simon Guichandut, Andrew Cumming
2023-01-20T19:09:38Z
http://arxiv.org/abs/2301.08769v3
# The imprint of convection on Type I X-ray bursts: ###### Abstract Motivated by the recent observation by NICER of a type I X-ray burst from SAX J1808.4-3658 with a distinct "pause" feature during its rise (Bult et al., 2019), we show that bursts which ignite in a helium layer underneath a hydrogen-rich shell naturally give rise to such pauses, as long as enough energy is produced to eject the outer layers of the envelope by super-Eddington winds. The length of the pause is determined by the extent of the convection generated after ignition, while the rate of change of luminosity following the pause is set by the hydrogen gradient left behind by convection. Using the MESA stellar evolution code, we simulate the accumulation, nuclear burning and convective mixing prior to and throughout the ignition of the burst, followed by the hydrodynamic wind. We show that the results are sensitive to the treatment of convection adopted within the code. In particular, the efficiency of mixing at the H/He interface plays a key role in determining the shape of the lightcurve. The data from SAX J1808.4-3658 favors strong mixing scenarios. Multidimensional simulations will be needed to properly model the interaction between convection and nuclear burning during these bursts, which will then enable a new way to use X-ray burst lightcurves to study neutron star surfaces. X-ray bursts -- Neutron stars -- Convection + Footnote †: journal: ## 1 Introduction As per the recent MINBAR catalogue (Galloway et al., 2020), about one fifth of type I X-ray bursts from accreting neutron stars (Lewin et al., 1993; Galloway and Keek, 2021) reach high enough luminosities to provoke a radiatively-driven expansion of the neutron star envelope. In these "photospheric radius expansion" (PRE) bursts, the star's photosphere moves outward and appears 10-100 times larger, for a few to tens of seconds. PRE bursts offer a unique opportunity to study not only the surface but also the interior of neutron stars. Indeed, they have been used to place joint constraints on both neutron star mass and radius and the dense matter equation of state (Ozel et al., 2016, and references therein). Another way to constrain the mass is to measure the gravitational redshift of spectral features from heavy elements being ejected during the burst (Li et al., 2018; Strohmayer et al., 2019). These techniques rely on theoretical models which describe the expansion of the star's envelope and the winds that drive it. The recent deployment of the Neutron Star Interior Composition Explorer (NICER) telescope has drastically improved observations of PRE bursts, since the instrument's soft X-ray response allows spectral evolution of the PRE to be followed as the blackbody temperature drops to \(\lesssim\)1 keV (Keek et al., 2018). A most interesting PRE burst was recently observed by NICER from the millisecond pulsar SAX J1808.4-3658 (Bult et al., 2019). During the burst rise, the luminosity briefly "paused" for \(\approx\)0.7 s before reaching its peak. The ratio between the bolometric luminosity at the peak and pause was \(\approx\)1.68, which is very similar to the ratio between the pure helium and solar composition (\(X\approx 0.7\)) Eddington luminosities, given by \[L_{\rm Edd}=\frac{4\pi GMm_{p}c}{\sigma_{T}(1+X)}\,, \tag{1}\] where \(M\) is the neutron star mass, \(\sigma_{T}\) is the Thomson scattering cross-section, \(m_{p}\) is the proton mass, and \(X\) is the mass fraction of ionized hydrogen (free protons). Bult et al. 
(2019) interpreted this as an observation of the rapid ejection of a solar or hydrogen-rich layer, followed by the usual helium PRE phase. This is consistent with the observed burst recurrence times and energetics which indicate that SAX J1808.4-3658 is in the burst regime where hydrogen is depleted by hot CNO burning well before unstable ignition of helium (Galloway and Cumming, 2006; Goodwin et al., 2019). A similar idea for a two-staged mixed H/He PRE burst was put forward by Sugimoto et al. (1984) to explain the bursting behaviour of 4U/MXB 1636-53, which showed a bimodal distribution of peak luminosity (see also Galloway et al., 2006). Following this suggestion, Kato (1986) computed steady-state solutions of outflows from a small H layer on top of a He-layer, finding the timescale for the ejection of the H layer (and jump in luminosity) to be on the order of 0.1 to 1 s, inversely proportional to the luminosity of the model. However, this strongly depends on not only the mass of the H layer, assumed to be \(10^{-16}M_{\odot}\), but also on the steady-state assumption, which cannot reproduce the actual ejection of the H layer. It is clear that in order to understand this type of burst, we need time-dependent hydrodynamic simulations combined with realistic neutron star envelopes. X-ray bursts are challenging to model because of the many different types of physics involved. Previous studies have followed the time-dependent nuclear burning and convection during the thermonuclear runaway, using stellar evolution codes such as KEPLER(Woosley et al., 2004; Cyburt et al., 2010), or Modules for Experiments in Stellar Astrophysics, MESA, (Paxton et al., 2011; Meisel, 2018), but did not resolve the formation of the wind in PRE bursts. Yu and Weinberg (2018) first demonstrated the ability of MESA to resolve both the nuclear burning and convective mixing at the onset of bursts from pure He accretion, followed by the hydrodynamic ejection of a super-Eddington wind and the PRE phase. This opened up the possibility of simulating time-dependent PRE bursts with an emphasis on the role of composition in the resulting wind. In this paper, we use MESA (version 15140; Paxton et al., 2011, 2013, 2015, 2018, 2019) to simulate the accumulation phase, ignition, super-Eddington wind, and decay of a mixed H/He PRE burst. This represents the first full simulation of PRE bursts resulting from accretion of H and He. Our main result is that the rise in luminosity at the start of the burst pauses temporarily once the luminosity reaches the Eddington luminosity. As a wind develops and mass is ejected, deeper layers are eventually exposed that have been depleted in hydrogen by a combination of convective mixing and nuclear burning. This ends the pause and the luminosity begins to rise again as the outflowing material becomes less hydrogen rich and therefore has a larger Eddington luminosity. The resulting lightcurve has a distinct shape in contrast to pure helium bursts, and it depends on the gradient of hydrogen left behind by convection. However, as we will show, these results are very sensitive to the treatment of convection within the code. Indeed, the formation of layers of different compositions in the envelope leads to a convective boundary mixing problem which cannot be adequately simulated in one dimension, making the detailed shape of light curve uncertain. 
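As a small numerical aside on the peak-to-pause ratio discussed above: the prefactor in Equation (1) cancels in the ratio, so only the \(1/(1+X)\) scaling matters. A minimal check (our own script, not part of the analysis of Bult et al. 2019):

```python
X_accreted = 0.7   # hydrogen mass fraction of the solar-like accreted gas
X_helium = 0.0     # pure helium

# Ratio of Eddington luminosities from Equation (1): L_Edd scales as 1/(1+X)
ratio = (1 + X_accreted) / (1 + X_helium)
print(f"L_Edd(He)/L_Edd(X=0.7) = {ratio:.2f}")      # 1.70

# Hydrogen fraction that would reproduce the observed ratio of 1.68 exactly
X_peak = (1 + X_accreted) / 1.68 - 1
print(f"implied X at the peak  = {X_peak:.3f}")     # ~0.01
```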
We begin in Section 2 with a simple model of mixed H/He PRE bursts, and explain the connection between the hydrogen profile in the envelope post-convection and the shape of the lightcurve. The main stages and important parameters of this model are also summarized in Figure 1. In Section 3, we describe our MESA simulations and show detailed results using the simplest prescription for convection. In Section 4, we vary the prescription for convection and assess how the results are affected. In Section 5, we summarize our findings, elaborate on issues related to the treatment of convection in one-dimensional simulations, and give an interpretation for the SAX J1808.4-3658 data. ## 2 Evolution of the Composition Profile and Lightcurve Shape The structure of the neutron star envelope at ignition is determined mainly by the accretion rate onto the neutron star and composition of the infalling gas. For \(\dot{M}_{\rm acc}\gtrsim 2\times 10^{-10}M_{\odot}\) yr\({}^{-1}\), hydrogen burns via the hot CNO cycle at a constant rate (Bildsten, 1998). Then, it can be shown that the column depth, \(y(r)\equiv\int_{\infty}^{r}\rho(r^{\prime})dr^{\prime}\), at which hydrogen is depleted is \[y_{\rm d}=2.7\times 10^{7}\,{\rm g\,cm^{-2}}\left(\frac{\dot{\rm M}_{\rm acc}}{0. 01\dot{\rm M}_{\rm Edd}}\right)\left(\frac{0.02}{\dot{\rm Z}_{\rm CNO}}\right) \left(\frac{\rm X_{0}}{0.7}\right) \tag{2}\] where \(X_{0}\) and \(Z_{\rm CNO}\) are the initial hydrogen and CNO nuclei mass fractions (Cumming and Bildsten, 2000)1. We scale the accretion rate to the Eddington accretion rate corresponding to a neutron star with \(R=12\) km and accreted hydrogen mass fraction \(X_{0}=0.7\), giving \(\dot{M}_{\rm Edd}=8\pi Rm_{p}c/(1+X_{0})\sigma_{T}=2.1\times 10^{-8}M_{\odot}\) yr\({}^{-1}\). Therefore at ignition, the envelope consists of two layers: an outer H-rich layer of depth \(y_{\rm d}\) in which the hydrogen abundance drops from the accreted value to zero, and an inner layer of pure helium where ignition of the burst occurs. This initial state is illustrated in column A of Figure 1. From an energetics standpoint, we know that only a column \(y_{\rm w}\sim 10^{6}\)-\(10^{7}\) g cm\({}^{-2}\) can be ejected by winds (Weinberg et al., 2006), which is smaller than \(y_{\rm d}\). However, Weinberg et al. (2006) also showed that prior to the wind, a convection zone will grow and extend to a column smaller than \(y_{\rm w}\) (Figure 1 column B), which will mix the H and He and result in a change in the composition of the ejecta as a function of time. We define the minimum column depth reached by convection as \(y_{\rm c,min}\), below which hydrogen is not mixed and \(X=X_{0}\) is roughly constant2. Footnote 2: \(X\) in fact decreases linearly with \(y\), but since \(y_{\rm c,min}\) ends up being \(\sim\)1% of \(y_{\rm d}\) or less, the variation in \(X\) over this column is negligible. Wind models for PRE bursts (Ebisuzaki et al., 1983; Paczynski and Proszynski, 1986; Joss and Melia, 1987; Guichandut et al., 2021) show that the luminosity at infinity is always very close to the Eddington luminosity \(L_{\rm Edd}\) (Equation 1), as any "extra" energy in the form of a super-Eddington flux gets used up to drive mass-loss. Therefore, during the initial ejection of \(y_{\rm c,min}\), \(L\approx L_{\rm Edd}\) will be constant. 
This is the observed pause, and its duration is \[\Delta t_{\rm p} \approx\frac{4\pi R^{2}y_{\rm c,min}}{\dot{M}}\] \[\approx 0.18\,{\rm s}\,\left(\frac{y_{\rm c,min}}{10^{4}\,{\rm g\,cm^{-2 }}}\right)\dot{\rm M}_{18}^{-1}\,, \tag{3}\] where \(\dot{M}_{18}=\dot{M}/(10^{18}\) g s\({}^{-1}\)) is the mass-loss rate, which we assume to be constant during the pause. After the pause, the ejection of the mixed layers will begin, and the luminosity will rise as the hydrogen fraction \(X\) in the ejecta decreases. The rate \(dL/dt\) at which the luminosity will increase, assuming it stays near Eddington, will be proportional to \(dX/dt\propto\dot{M}(dX/dy)\), thus linking the shape of the lightcurve to the hydrogen gradient in the envelope. Column C of Figure 1 illustrates the compositional nature of these two stages, the pause and the rise. Note that if \(X=0\) at columns \(y<y_{\rm w}\), a third stage will appear where the luminosity peaks at the helium \(L_{\rm Edd}\) and remains there for the rest of the PRE (until all of \(y_{\rm w}\) has been ejected). Figure 1: Diagram illustrating the three main stages of the mixed H/He burst. _A)_ As hydrogen burns stably throughout accretion, distinct hydrogen and helium-rich layers build up in the envelope, their boundary being at a known column depth of \(y_{\rm d}\) (Equation 2). We are interested in bursts that ignite at \(y>y_{\rm d}\) (red color indicates nuclear burning). _B)_ Heat from nuclear burning creates a growing convection zone (blue semicircles) which penetrates into the H-rich layer, resulting in a convective boundary mixing (CBM) event. _C)_ As the convection zone retreats, it leaves behind a layer of constant hydrogen fraction at a column \(y_{\rm c,min}<y_{\rm d}\), a mixed H-He layer and nuclear ashes at depth. Winds progressively eject the layers, up to a column \(y_{\rm w}>y_{\rm c,min}\), during which the observed luminosity at infinity is the Eddington luminosity, which depends on the hydrogen fraction \(X\) of the material. Since \(X\) is initially constant, we first observe a pause after the initial burst rise. After the H layer is ejected, the luminosity once again rises in a manner that depends on the hydrogen gradient \(dX/dy\). See text for further details. The burst lightcurve is therefore determined by _1)_ the nuclear burning and convection that occur during the rising phase, setting the hydrogen profile, and _2)_ the mass-loss rate during the wind phase. In the next section, we describe simulations with MESA to investigate both of these factors. ## 3 Mesa Simulations We model a single burst with several separate MESA runs3, in an approach similar to Yu & Weinberg (2018). First, we follow the ignition and convective rise of the burst under the assumption of hydrostatic equilibrium, then use MESA's hydrodynamic solver to follow the super-Eddington wind phase. However, unlike Yu & Weinberg (2018), we leave nuclear burning on during the wind, as much of the energy in bursts with hydrogen comes from slower reactions that continue into the wind phase. Footnote 3: Our MESA milists and simulation results will be made publicly available upon publication of this paper. ### Accumulation and formation of the layers The basic physical setup is the same for all simulations: we assume a non-rotating neutron star of mass \(M=1.4M_{\odot}\) and radius \(R=12\) km, and ignore general relativistic corrections. The envelope initially consists of an \({}^{56}\)Fe substrate4 with a column depth \(y=10^{11}\) g cm\({}^{-2}\). 
The outer boundary is set at an optical depth \(\tau=100\) to avoid numerical issues caused by radiation-dominated regions becoming convective (Paxton et al., 2013). We begin accreting a solar-like composition (\({}^{1}\)H, \({}^{4}\)He and \({}^{12}\)C with mass fractions \(X=0.7\), \(Y=0.28\) and \(Z=0.02\) respectively) at a constant rate of \(\dot{M}_{\rm acc}=3\times 10^{-10}\)\(M_{\odot}\) yr\({}^{-1}=0.014\dot{M}_{\rm Edd}\). We assume carbon to be the only metal being accreted for simplicity. What matters for the ignition of this type of burst is to achieve hydrogen depletion, and any isotope part of the CNO cycle would work, because the CNO abundances quickly adjust to the equilibrium ratio of \({}^{14}\)O and \({}^{15}\)O in the hot CNO cycle (Bildsten, 1998). Throughout accretion, the luminosity at the base of the substrate is fixed to \(1.8\times 10^{34}\) erg s\({}^{-1}\), equivalent to \(\dot{M}_{\rm acc}\) times 1 MeV per nucleon which is roughly the expected crust heating at low accretion rates (Brown, 2000). As discussed in Yu & Weinberg (2018), \(\dot{M}_{\rm acc}\) and \(L_{\rm base}\) determine the ignition depth, which we have chosen to be at \(y\approx 3\times 10^{8}\) g cm\({}^{-2}\), greater than \(y_{d}\) so that the burst ignites in a pure He layer. Footnote 4: To build the starting model, we took the the ns_env model file from the ns_he problem provided as part of MESA’s test suite, then relaxed the neutron star radius from 10 to 12 km, and finally accreted additional \({}^{56}\)Fe to the target column depth. The point of increasing the mass of the iron substrate is to build a large enough buffer between the flashing zone and the inner boundary, allowing heat to diffuse inward without reflecting back. To reduce the complexity of the computations, especially during the wind phase, we use MESA's cno_extras.net nuclear network to model the nuclear burning, which includes 17 isotopes up to \({}^{24}\)Mg (we also add \({}^{56}\)Fe as an additional inert element). This is fewer and lighter isotopes than the approx21.net network used by Yu & Weinberg (2018) for He bursts (which has \(\alpha\)-capture reactions up to iron-group elements), but contains a similar number of reactions due to the addition of hot CNO. It is also a much smaller network than that used by Woosley et al. (2004), who studied the energy generation carefully but did not model the hydrodynamic wind. Since we are focused primarily on the wind, this limited network is adequate because most of the energy generated during the burst comes from hydrogen and helium burning in CNO and triple-\(\alpha\) reactions. However, this assumption means that our calculations do not accurately predict the nuclear burning ashes, both ejected in the wind and leftover afterwards. The left panel of Figure 2 shows the composition profile of the envelope after 6 days of accretion. The hydrogen depletion column is \(y_{\rm d}=3.7\times 10^{7}\) g cm\({}^{-2}\), consistent with Equation 2 for the chosen \(\dot{M}_{\rm acc}\). At larger column depths, the CNO cycle is starved of protons, and the abundances are determined by \(\beta\)-decay rates only. A few seconds prior to ignition, some helium has already started stably burning and converting to \({}^{12}\)C. ### Ignition and convective rise After having built up the He layer, unstable triple-\(\alpha\) burning triggers the thermonuclear runaway. 
We define "ignition" as the moment \(t=t_{0}\) when the He layer is convective and has a larger maximum nuclear energy generation rate than the H layer. The convection zone begins growing in the He layer as it would in a pure He burst, then hits the H layer about 1.1s later. The mixing of H into the convection zone leads to a sudden increase in the nuclear energy generation rate at the top of the zone. The outcome of this mixing event depends strongly on the prescription used in the code for convection and for convective boundary mixing (we mark this event as CBM in Figure 1). Here, we use the prescription of Henyey et al. (1965) for mixing length theory, with the dimensionless parameter \(\alpha_{\rm mlt}=1.5\) dictating the ratio between the mixing length and local pressure scale height. In this section, we present results using the Schwarzschild criterion to determine convective boundaries (as assumed also by Yu & Weinberg, 2018). This ignores the effects of composition gradients on the convective stability, but simplifies the interpretation of our results. We explore the effect of changing the prescription for convection in Section 4. In Figure 3, we show Kippenhahn diagrams for the history of convection and nuclear burning as a function of depth, throughout the flash phase and beginning of the wind. The bottom panel is zoomed in to show short timescales following the collision between the He convection zone and the H layer. In this collision, fresh protons are brought in below the depletion depth \(y_{\rm d}\) where they can capture onto seed nuclei, causing a rapid injection of energy and a local steepening of the temperature gradient, which in turns drives further expansion of the convection zone. The first proton captures to happen are \({}^{18}\)O(p,\(\alpha\))\({}^{15}\)N and \({}^{15}\)N(p,\(\alpha\))\({}^{12}\)C. The remaining protons (and fresh ones coming from the top) then quickly capture onto the carbon and build up \({}^{13}\)N. The nuclear reactions in this first stage do not produce enough heat to generate large scale convection. Instead, the convection splits into many zones, as radiative gaps as small as 0.1% of the scale height appear. These zones and gaps are clearly seen in the bottom panel of Figure 3. The maximum number of individual convective zones is 66, and it occurs 0.1 ms after the collision. The maximal extent of the convection during this initial stage is to a column \(\approx\)10\({}^{6}\) g cm\({}^{-2}\). About 0.4 ms after the initial collision, enough nitrogen has built up to trigger a "second ignition" via the \({}^{13}\)N(p,\(\gamma\))\({}^{14}\)O reaction. This time, so much heat is released that the convection zone grows massively in less in 0.1 ms, its column depth extent decreasing by a factor of \(\approx\)30. The convection is still split but the radiative gaps now maintain over time, resulting in a period of layered convection, which extends down to a minimum column \(y_{\rm c,min}=1.1\times 10^{4}\) g cm\({}^{-2}\). The remnants of this period of explosive burning appear in the final composition profiles (see e.g. the magenta and orange lines for \({}^{14}\)O and \({}^{13}\)N respectively in the right panel of Figure 2), and will later be partly ejected by the wind. Further oxygen, neon and finally sodium-burning reaches the end-point of our network at \({}^{24}\)Mg. The final composition profiles that go into the hydrodynamic calculation are shown in the right panel of Figure 2. 
If a more complete network were to be used, we would expect the ashes to proton and \(\alpha\) capture to heavier elements, namely Ca and Si, over the following \(\sim\)tens of seconds (Woosley et al., 2004). As we have explained in Section 2, the main predictor for the shape of the lightcurve is the hydrogen gradient left over by convection. We now investigate what creates this gradient. Figure 4 is another Kippenhahn diagram of the same simulation that shows the evolution of the hydrogen abundance after the collision. A few ms after the collision, once the convection retreats, the hydrogen gradient is already set. The dashed line traces how much hydrogen has burned away since ignition. At the collision, a significant amount of protons capture onto seed nuclei, and this continues throughout the burst even as the convection zone retreats. At ignition, the total mass of the envelope (minus the iron substrate) is \(\sim\)10\({}^{22}\) g, and \(\sim\)2% of it is H. By the time the gradient is set, about 25% of the hydrogen has burned away (another 10% burns in the rest of the flash, mostly at depths \(y>y_{\rm w}\), before ejection by winds).
Figure 2: Composition profiles before (left) and after (right) the thermonuclear flash, which ignites at \(t=t_{0}\). The lines for different elements have the same color in left and right panels. In the \(\sim\)1.4 s to reach the Eddington luminosity, convection has significantly mixed the He and H layers. Dotted lines show the depletion column \(y_{\rm d}\) (left) and the minimum extent of convection \(y_{\rm c,min}\) (right). All runs shown in this paper begin at the ignited model \(t=t_{0}\), which the left panel leads up to. The panel on the right shows the result of mixing using the Schwarzschild criterion prescription for convective boundaries (see Section 3.2).
What is important in determining the gradient is not the details of the nuclear burning; once convection subsides, hydrogen burning at shallow depths (near \(y_{\rm c,min}\)) is slow and does not substantially affect the gradient and therefore the lightcurve. Instead, it is the efficiency of the convective mixing which determines how fast and how far protons can be brought downwards to regions of high temperatures where they can quickly burn away. We further investigate the point about the efficiency of mixing in Section 4.
### Wind and collapse As the outgoing luminosity rises and approaches the Eddington limit a short time after ignition, the outer layers become radiation pressure dominated and the envelope begins to expand. In Newtonian gravity, we know from previous work that appreciable expansion of \(\sim\)100 m above the stellar surface occurs at \(\approx\)90% of \(L_{\rm Edd}\) (see Figure 12 of Guichandut et al., 2021).
Figure 3: Kippenhahn diagrams for the Schwarzschild run, centered on the moment of collision between the convection zone and the H layer. The bottom panel is zoomed into a 1.5 ms window following the collision. The color scale traces the energy generation or loss (nuclear burning minus all neutrino losses), while the green hatches mark the convection zones. The solid black lines show the location of the depletion depth \(y_{\rm d}\). In the top panel, the dashed line shows the luminosity coming out of the atmosphere, normalized by the Eddington luminosity of the accreted gas (scale on the right-hand side). In the bottom panel, it shows the integrated nuclear power (\(\int dm\,\epsilon_{\rm nuc}\)). Moments where certain reactions dominate are labeled (see Section 3.2).
At this point, we turn off convection5 and accretion, turn on MESA's hydrodynamics calculation, and relax the outer boundary to an optical depth \(\tau=2/3\) in order to resolve the photosphere, as in Yu and Weinberg (2018). Mass-loss is then done by removing any grid points with a density \(\rho<\rho_{\rm thresh}=10^{-7}\) g cm\({}^{-3}\). To avoid issues caused by going off the opacity tables at low density, we switch to an interpolation formula for electron scattering opacity as a function of temperature from Paczynski (1983). This is a good approximation at the high temperatures of the wind where electron scattering dominates. Footnote 5: This is to avoid complications associated with radiation-dominated regions artificially becoming convective.
The presence of jumps in the \({}^{1}\)H mass fractions as a result of convection (see right panel of Figure 2) added some numerical difficulties in the simulation of the winds. Since the acceleration of a fluid parcel due to radiation is proportional to its opacity, a jump in the hydrogen fraction X will result in a density inversion, as the uppermost fluid element is ejected faster than the bottom one can follow. We found that these density inversions tended to expand as they moved outwards, and caused the MESA integration to diverge as they approached \(\rho_{\rm thresh}\). A solution that worked in all cases was to manually soften those composition jumps using a smoothing spline on the \({}^{1}\)H mass fractions prior to running the wind simulation, as shown in the top panel of Figure 6. In order to preserve the overall gradient, this smoothing was done using monotonic functions. The \({}^{4}\)He profiles were then adjusted such that the sum of mass fractions of all species remained 1 everywhere.
During the wind, the rate of change of composition in the atmosphere is determined by the mass-loss rate \[\dot{M}(r,t)=4\pi r^{2}\rho(r,t)v(r,t)\,, \tag{4}\] where \(\rho\) and \(v\) are the gas density and velocity at a radial distance \(r\) from the center of the star. We evaluated \(\dot{M}\) at three different locations: a) the "sonic point", i.e. where the velocity \(v=\sqrt{kT/\mu m_{p}}\) where \(\mu\) is the mean molecular weight of the gas, b) the photosphere, i.e. the location \(r=r_{\rm ph}\) where the luminosity \(L=4\pi r_{\rm ph}^{2}\sigma T^{4}\), and c) the surface of the model or outer boundary of the grid. As shown in Figure 5, despite some small variations, the mass-loss rate is overall constant across all locations. This is no surprise, as we expect these winds to reach a steady-state, characterized by a constant \(\dot{M}(r)\) at fixed \(t\), in a time much shorter than the evolution timescale of the burst (Joss and Melia, 1987; Guichandut et al., 2021). We can then write the total column ejected as a function of time, \[y_{\rm ej}(t)=\frac{1}{4\pi R^{2}}\int_{0}^{t}\dot{M}(t^{\prime})dt^{\prime}\,, \tag{5}\] independently of location. The total ejected column \(y_{\rm w}\) is the final value of \(y_{\rm ej}\), equal to \(5.83\times 10^{5}\) g cm\({}^{-2}\) in this simulation (see dashed line in Figure 5). This is roughly consistent with results obtained by Yu & Weinberg (2018) for the total ejected mass of pure He bursts igniting at a similar column depth as we have here. As shown by these authors, the total ejected column, and therefore the total duration of the PRE, would increase (decrease) for bursts which ignite at larger (smaller) column depths.
Figure 4: Kippenhahn diagram for the Schwarzschild run as in Figure 3, now with the color scale representing the hydrogen abundance. The solid black and gray lines show \(y_{\rm d}\) and \(y_{\rm c,min}\) (same values as in Figure 2). The dashed line shows the relative change in the total mass of hydrogen at a given time compared to the initial amount at ignition (scale on the right-hand side).
Approaching the end of the super-Eddington phase of the burst, the nuclear luminosity which is driving the wind begins to die down. The outflow then separates into two regions, an outer unbound wind of high velocity which is being ejected, and an inner atmosphere which is collapsing back into, eventually, a hydrostatic configuration. This can be seen from Figure 5 at \(t-t_{0}>9\) s, where the mass-loss rate first drops sequentially from the inside (lower radii first). The timescale for the collapse to reach the surface from the sonic point is \(\lesssim\)1 s, which is roughly the sound crossing time between those two locations (Guichandut et al., 2021). In some other simulations (Section 4), the collapse results in numerical issues as the infalling gas becomes supersonic, which causes the time-step to drop and the evolution of the model to come to a stop. The moment where these numerical issues arise coincides with the sonic point crossing the photosphere and the wind effectively becoming optically thin. Then, the implicit optically thick assumption made by MESA to treat the radiative transfer becomes incorrect. Future work is needed to investigate the infall phase in more detail.
### Lightcurve In order to plot the observed lightcurve, we evaluate the radiative luminosity of our models as a function of time at the photosphere. We show the lightcurve for the main Schwarzschild run in the bottom panel of Figure 6. Its shape is consistent with the hydrogen profile in the envelope post-convection, shown in the top panel. On the luminosity axis, the ratio between the peak and pause luminosities is \(\approx\)1.6, consistent with the ratio of Eddington luminosities \((1+X_{0})/(1+X(y_{\rm w}))\) with \(X(y_{\rm w})=0.07\) (Figure 6 top panel). On the time axis, we also have to take into account how the mass-loss rate varies throughout the burst. The pause duration is 0.6 s which, for \(y_{\rm c,min}=1.1\times 10^{4}\) g cm\({}^{-2}\), corresponds to a mass-loss rate \(\dot{M}_{18}=0.33\) according to Equation 3. This is in good agreement with the time-averaged value of \(\dot{M}\) during the pause, \(3.5\times 10^{17}\) g s\({}^{-1}\) (note however that \(\dot{M}\) changes significantly during the pause, from \(\approx\)\(6.5\times 10^{16}\) g s\({}^{-1}\) to \(\approx\)\(7.6\times 10^{17}\) g s\({}^{-1}\), see Figure 5). Figure 5: The mass-loss rate of the wind as a function of time for the Schwarzschild run, in solid lines. The different colors show \(\dot{M}\) evaluated with Eq. (4) at different locations (see text). The dashed lines show the ejected column \(y_{\rm ej}\) (scale on the right-hand side). The red shaded region marks the pause in the lightcurve (bottom panel of Figure 6). Figure 6: Results of the Schwarzschild run. _Top:_ Hydrogen profile after convective mixing. To avoid numerical difficulties during the hydrodynamic wind (see Section 3.3), the profile was smoothed using a monotonic cubic spline.
The dashed lines show the minimum column reached by convection \(y_{\rm c,min}\), and the total column ejected by winds \(y_{\rm w}\). _Bottom:_ Lightcurve of the burst. The pause occurs after surpassing \(L_{\rm Edd}\) of the accreted material (\(X=0.7\), bottom dotted line). The following rise takes place over a much longer timescale since \(y_{\rm w}\gg y_{\rm c,min}\). In general, the outgoing luminosity follows the Eddington luminosity of the material which is currently being ejected (blue dotted line), which we can track using \(y_{\rm ej}(t)\) (see Section 3.4). If the mass-loss rate remained at this value throughout the remainder of the wind, the total duration of the PRE phase would be \((y_{\rm w}/y_{\rm c,min})\Delta t_{\rm pause}\approx 32\) s. But since \(\dot{M}\) increases by a factor of \(\sim\)3 after the pause (Figure 5), the ejection is much faster and the PRE only lasts \(\approx\)9 s. As expected, the lightcurve can be obtained by tracking \(L_{\rm Edd}\) as the hydrogen mass fraction in the ejecta \(X(y_{\rm ej}(t))\) evolves in time. This is shown by the blue dotted line in Figure 6 which closely follows the luminosity from the simulation (black line). Comparing the two, we see that two features of the observed lightcurve are unexplained by composition changes. First, the luminosity during the pause is not exactly flat, but instead slowly decreases throughout its duration. This effect can be understood by considering the energetics of the expansion. At the beginning of the pause, the wind is not yet established. To do so, it needs to both lift material out of the gravitational potential and expand it (effectively raising its enthalpy). These two contributions account for the observed decrease in luminosity6. Second, near the end of the super-Eddington phase and before the decay, a bump in luminosity appears. This is related to the wind dying down, which "returns" the gravitational energy and enthalpy that was required to sustain it back to the radiation. However, as discussed in the previous section, this part of the lightcurve is uncertain because of the wind becoming optically thin. Footnote 6: Note that this effect is not related to the pause itself but rather to the onset of the wind, when \(L\) first exceeds \(L_{\rm Edd}\), and should therefore be a common feature across all PRE bursts. In fact, all lightcurves of pure He bursts in Yu and Weinberg (2018) (see their Figure 12) also show a slowly descending flux throughout the wind, which can likely be attributed to the same effect. From the observational perspective, it could in principle be possible to infer the shape of the hydrogen profile from the lightcurve only. Assuming that the energy used to eject mass (\(GMM_{\rm w}/R\) where \(M_{\rm w}=4\pi R^{2}y_{\rm w}\)) is equal to a fraction \(\eta\) of the observed burst energy (integrated luminosity which can be determined from the fluence if the distance to the source is known), one could infer \(y_{\rm w}\) from the lightcurve only. Then, given the total duration of the PRE, one could find the average \(\dot{M}\), and finally use Equation 3 to obtain \(y_{\rm e,min}\). In our simulations, we find \(\eta\approx 0.31-0.37\), but this is likely to change for different ignition depths. We plan to study the energy budget of PRE bursts in more detail in future work. 
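This inference can be written out explicitly. The short Python sketch below is only an illustration of the procedure just described, not part of the analysis in this paper: the neutron-star mass and radius, the value of \(\eta\), the example burst energy, and the use of \(\Delta t_{\rm pause}\approx 4\pi R^{2}y_{\rm c,min}/\langle\dot{M}\rangle\) for the last step (the relation implied by the discussion of Equation 3 above, with redshift factors ignored) are all assumptions made for the sake of the example.

```python
import numpy as np

G, MSUN = 6.674e-8, 1.989e33        # cgs constants

def infer_columns(E_burst, dt_pre, dt_pause, eta=0.34, M=1.4 * MSUN, R=12e5):
    """Invert a PRE-burst lightcurve for y_w and y_c,min (order of magnitude only).

    E_burst  : radiated burst energy [erg]
    dt_pre   : duration of the super-Eddington (PRE) phase [s]
    dt_pause : duration of the initial pause [s]
    eta      : assumed fraction of the burst energy spent unbinding the wind
    M, R     : assumed neutron-star mass [g] and radius [cm]
    """
    area = 4.0 * np.pi * R**2
    M_w = eta * E_burst * R / (G * M)      # from G*M*M_w/R = eta*E_burst
    y_w = M_w / area                       # total ejected column [g cm^-2]
    mdot_avg = M_w / dt_pre                # mean mass-loss rate [g s^-1]
    y_cmin = mdot_avg * dt_pause / area    # column ejected during the pause
    return y_w, mdot_avg, y_cmin

# Illustrative numbers only (no attempt to reproduce the simulation above):
y_w, mdot, y_cmin = infer_columns(E_burst=2e39, dt_pre=9.0, dt_pause=0.6)
print(f"y_w ~ {y_w:.1e} g/cm^2, <Mdot> ~ {mdot:.1e} g/s, y_c,min ~ {y_cmin:.1e} g/cm^2")
```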
## 4 Impact of changing the treatment of convection

It is unlikely that the implementation of convection in Section 3 with the Schwarzschild criterion is an accurate representation of the true hydrodynamic phenomena. For one, the rapid nuclear burning induces local changes in composition towards heavier species, and locally increases the mean molecular weight \(\mu\). The creation of \(\mu\)-gradients can either have a stabilizing or de-stabilizing effect on the thermal profile. This is especially relevant starting at the collision, where the growth of the He-rich convection zone will be inhibited by the H-rich material on top, as pointed out by Weinberg et al. (2006). They showed that a jump in temperature between the convection zone and the overlying radiative layer would develop in order to overcome the stabilization of the boundary due to composition. However, the effectiveness of the composition jump may be decreased by entrainment of fluid at the convective-radiative boundary which would erode the stabilizing composition gradient (e.g. Anders et al., 2022). Second, we may expect that the tiny radiative gaps that appear in between convection zones (see Section 3.2) will be destroyed by some form of overshoot mixing. This is because, at the Schwarzschild convective boundary, the fluid parcel has zero acceleration by definition, but it may have enough inertia to continue rising and mix the fluid all the way to the next convective zone.

To better understand the importance of these effects, we ran additional simulations, starting from the same model at ignition (Figure 2, left panel), but changing the prescription for convection during the thermonuclear flash. We first ran a simulation using the Ledoux criterion instead of Schwarzschild for locating convective boundaries, which takes into account compositional gradients. When this is used, semiconvective and thermohaline mixing become available. For these, we set the dimensionless parameters \(\alpha_{\rm sc}=0.1\), and \(\alpha_{\rm th}=2\) (MESA uses these to determine diffusion coefficients, see Paxton et al., 2013), following the ns_he test suite problem in MESA. Then, we ran simulations using both the Schwarzschild and Ledoux criteria where we also forced all radiative gaps of radial extent less than 10% of the scale height to close and become convective instead7. This is meant to, in a simplified way, mimic the overshoot at the top of each convective zone. Finally, we test the convergence of our simulations by running a high resolution version of every prescription. For these, the number of grid points during the flash phase was increased from \(\sim\)5000 to \(\sim\)15000 (the exact number varies in time and is adjusted by MESA using the mesh_delta_coeff control).

We show in Figure 7 the hydrogen composition profiles and lightcurves for all the simulations mentioned above. In each case, the main two features of our simplified model hold: _1)_ a larger \(y_{\rm c,min}\) results in a longer pause, and _2)_ a steeper hydrogen gradient leads to a faster rise in luminosity. The peak luminosity of the burst is then inversely proportional to \(X(y_{\rm w})\), where \(y_{\rm w}\lesssim 10^{6}\) g cm\({}^{-2}\) is roughly constant across different bursts, and reaches the helium \(L_{\rm Edd}\) in half of the simulations. Moreover, lightcurves with higher peak luminosities have shorter PRE times - this is because the fluence is conserved for a given ignition depth, no matter the exact shape of the lightcurve.
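As an aside on the gap-closing prescription described above, the rule "force any radiative gap thinner than 10% of the local pressure scale height to become convective" amounts to a simple post-processing of the convective-zone mask at each step. The Python sketch below illustrates that rule only and is not MESA's internal implementation; the array names and the choice of averaging the scale height over the gap are assumptions.

```python
import numpy as np

def close_small_gaps(r, scale_height, convective, max_frac=0.1):
    """Merge convection zones separated by thin radiative gaps.

    r            : radius of each grid cell (monotonically increasing) [cm]
    scale_height : local pressure scale height at each cell [cm]
    convective   : boolean mask, True where the cell is convective
    max_frac     : gaps narrower than max_frac * (local scale height) are closed
    """
    conv = np.asarray(convective).copy()
    r = np.asarray(r)
    n = len(conv)
    i = 0
    while i < n:
        if conv[i]:
            i += 1
            continue
        j = i                          # radiative run spans cells [i, j)
        while j < n and not conv[j]:
            j += 1
        if i > 0 and j < n:            # only gaps bounded by convection on both sides
            gap_width = abs(r[j] - r[i - 1])
            h_local = np.mean(scale_height[i:j])
            if gap_width < max_frac * h_local:
                conv[i:j] = True       # close the gap
        i = j
    return conv

# Toy grid: the single-cell gap closes, the wide gap survives.
r = np.linspace(0.0, 100.0, 11)
h = np.full(11, 300.0)
mask = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1], dtype=bool)
print(close_small_gaps(r, h, mask).astype(int))
```

In the actual simulations this operation would presumably have to be applied at every time step, after the convective boundaries have been located.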
The slow decrease of the luminosity during the pause is also observed in every case. For runs which managed to integrate through the decay phase (ones which do not end in an "x" in Figure 7), the bump at the end of the super-Eddington phase is also present. Once \(y_{\rm w}\) has been completely ejected, all models join on the same exponential cooling track. While Figure 7 demonstrates agreement with the basic model described in Section 2, it also clearly shows that the results, and in particular the observable lightcurve, depend on the prescription for convection. We cannot confidently claim that one prescription is more realistic than another, given the complex interactions between nuclear burning and mixing in this inherently multi-dimensional process, and so cannot predict lightcurves using these simulations. Nevertheless, we can assess the impact that different convective prescriptions have on the overall simulation. First, using the Ledoux criterion instead of the Schwarzschild criterion means that formerly convective regions become semi-convective instead, as the composition gradients stabilize the thermal profile. This effect inhibits the mixing, and, therefore (Section 3.2), reduces the amount of hydrogen burned away. This can be seen in the left panel of Figure 7 by comparing the orange and blue lines. Closing the radiative gaps between convective zones ("CG" in the figure legend) naturally has the opposite effect, with the mixing becoming stronger. The impact of spatial resolution is also interesting (compare solid and dashed lines in Figure 7). In the Schwarzschild and Ledoux runs, we obtain an increase in the total number of convective zones when increasing the resolution. Indeed, the addition of grid points in the convection zone allows it to split even more. Therefore, these runs are clearly not converged. However, in the CG runs, this splitting is effectively cancelled, as tiny zones are merged together at the end of every step, and we find good agreement between the models with different resolutions. Figure 7: Results for all convective prescriptions and spatial resolutions tested in this work. _Left_: Mass fraction of \({}^{1}\)H in the atmosphere after the convective rise and before the ejection by winds. The circles label the locations of \(y_{\rm w}\) resulting from each simulations. The black line shows the hydrogen gradient at ignition, from which all runs start. _Right_: Lightcurve of the bursts. The dotted lines mark the locations of \(L_{\rm Edd}\) for \(X=0.7\) and \(X=0\), as in Figure 6. Some simulations could not integrate through the decay phase and were stopped short at the “x” symbols. The inset zooms in on the pauses. ## 5 Summary and Discussion We have shown that variations in chemical composition in the envelope of neutron stars accreting mixed H/He fuel are reflected in the lightcurves of their PRE bursts. After the ignition of the thermonuclear runaway in the He-rich layer, a convective zone expands outwards and mixes the fuel with the overlying H-rich shell (Figure 2). The resulting H abundance profile determines the shape of the lightcurve, namely the duration of initial pause and the subsequent slope in luminosity (Figure 6). Due to convection, the mass of the layer with solar composition, \(y_{\rm c,min}\sim 10^{4}-10^{5}\) g cm\({}^{-2}\), is much reduced compared to the initial \(y_{\rm d}\sim 10^{7}\) g cm\({}^{-2}\) set by stable hydrogen burning during accretion (Equation 2). 
This results in a rapid ejection of a hydrogen-rich shell and a short observed pause on the order of 1 s or less (Equation 3). Subsequently, the luminosity rises toward the helium Eddington luminosity as hydrogen depleted layers are exposed by the wind. We find that the hydrogen profile in the envelope is sensitive to the details of convection and mixing following the collision with the H layer (Figures 3 and 4). As a result, the exact shape of the lightcurve of a given event is uncertain as it depends on the choice of convective prescription and spatial resolution (Figure 7). The critical factor in setting the hydrogen profile is the efficiency of the mixing within the convective regions. However, this mixing is inhibited when the convection splits into many zones interspersed with radiative gaps. This splitting occurs even when ignoring compositional gradients (Schwarzschild criterion), suggesting that the culprit is the local energy deposition from rapid nuclear burning. Moreover, we found that increasing the spatial resolution of the simulations led to an increase in the number of zones and gaps, significantly reducing the efficiency of mixing, such that our simulations are not converged. This non-convergence however is mitigated by overshoot mixing at the top of convective zones, which we modeled in a simplified way by closing radiative gaps less than 10% of the scale height. Even disregarding problems related to splitting of the convection, a more fundamental issue stems from the approximate treatment of convection with mixing-length theory. In the collision event, explosive nuclear burning releases energy on microsecond timescales, which is close to or even shorter than local convective turnover times. This is in violation of the standard assumptions of mixing length theory. It also amplifies the differences between the Schwarzschild and Ledoux criterion, in contrast to situations with long dynamical timescales such as the main-sequence, where both criteria should lead to similar outcomes (Anders et al., 2022), as long as convective boundaries are implemented correctly (Gabriel et al., 2014)8. This timescale problem has been noted before in the context of late-stage evolution of massive population III stars. There, a helium burning convective region encroaches upon a hydrogen shell, mixing in protons which burn on timescales of hours to days, which is short compared to the month-long convective turnover times (Marigo et al., 2001). The proper modeling of these situations, which are also known as level-3 mixing or convective-reactive phases (Herwig et al., 2011), continues to be an active area of research (e.g. Davis et al., 2019; Clarkson and Herwig, 2021), with a particular focus on multidimensional hydrodynamics simulations (Woodward et al., 2014; Stephens et al., 2021). Footnote 8: Recent versions of MESA(Paxton et al., 2018, 2019) introduced predictive mixing schemes to correctly locate convective boundaries in the presence of composition discontinuities. These iteratively instantaneously mix and change abundances in the cells surrounding the boundary. This would not be appropriate in our case where mixing also leads to rapid and explosive burning. Although the general shape of the lightcurves in our simulations agrees with the burst from SAX J1808.4-3658 reported by Bult et al. (2019), there are some differences. In this burst, the pause is \(\sim\)0.7 s long, similar to our principal Schwarzschild run, suggesting a similar extent of the convection. 
However, the subsequent rise is rapid, reaching the helium Eddington luminosity in just \(\sim\)1.3 s. This would imply a mixing event which is strong enough to produce a very steep hydrogen gradient. Since the total pause plus rise duration is \(\sim\)3 times that of the pause, the hydrogen profile would have to go from9\(X\approx 0.7\) to \(X=0\) in the span of \(y_{\rm c,min}\approx 10^{4}\) g cm\({}^{-2}\) to \(\approx\)3 \(y_{\rm c,min}\). Or, if for example the mass-loss rate increases by a factor of 3 from the pause to the rise, as it does in our simulations (Section 3.3), then in the span of \(y_{\rm c,min}\) to \(\approx\)9 \(y_{\rm c,min}\); in any case, the whole hydrogen gradient spans a decade in column depth at most. None of our simulations achieve this - our fastest rise time is \(\gtrsim\)6 s for the Schwarzschild+CG model (in fact, factoring redshift, these times should be \(\sim\)20\(-\)30% longer). Moreover, our Figure 7 shows a general trend that steep hydrogen gradients are also associated with smaller \(y_{\rm c,min}\) values; to reproduce the rise seen in SAX J1808.4-3658, we may need such strong mixing that it would push \(y_{\rm c,min}\) to very small columns and dissolve the pause entirely. Footnote 9: Goodwin et al. (2019) inferred a hydrogen mass-fraction \(X_{0}\approx 0.57^{+0.13}_{-0.14}\) for the companion, based on an analysis of Type I X-ray burst recurrence times and energetics. The observed ratio of peak to pause luminosities in Bult et al. (2019) favors the upper end of this range. One way to match the rapid rise but keep the pause duration the same as observed in SAX J1808.4-3658 could be to burn hydrogen more effectively with the same mixing efficiency and convective extent. In fact, our simulations do not model hydrogen burning completely, because we were limited to a small nuclear network which reached its end at \({}^{24}\)Mg prior to the wind launch (see right panel of Figure 2). We investigated the effect of a larger network by running a simulation equivalent to our Schwarzschild run, but using MESA's rp_153.net nuclear network, which includes isotopes up to \({}^{56}\)Ni, until the wind launch. We found that the outer hydrogen profile was unchanged, with \(y_{\rm c,min}\) and initial hydrogen gradient staying the same. The effects of the additional hydrogen burning were limited to large columns \(\gtrsim 0.3y_{\rm w}\). At these depths, hydrogen completely burned away, whereas with the smaller network a small amount (\(\lesssim 0.1\) for the well-mixed models) remains. The lightcurve for such a burst would initially look the same as in our original Schwarzschild run (Figure 6 bottom panel), but would continue rising all the way to the helium Eddington luminosity, instead of levelling of to \(\sim\)90% of it. This suggests that additional burning is not the explanation for the rapid rise after the pause. More observations of PRE bursts in the pure helium ignition regime would help to further understand and constrain the hydrogen ejection model. Note that previous PRE bursts from SAX J1808.4-3658, observed with the Rossi X-ray Timing Explorer, have shown an increase in luminosity during the Eddington phase, but only by \(\sim\)20% and on \(\sim\)7 s timescales (see Figure 3 in Galloway et al.2017). In these bursts, pauses are not clearly seen, which would indicate small values of \(y_{\rm c,min}\), although this could also be due to the choice of time bins used for the analysis. 
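Returning to the peak-to-pause ratio invoked in this argument (and in footnote 9), it follows directly from the composition dependence of the Eddington luminosity. A minimal sketch, assuming the standard electron-scattering opacity \(\kappa\simeq 0.2(1+X)\) cm\({}^{2}\) g\({}^{-1}\), a fiducial \(1.4\,M_{\odot}\) star, and no general relativistic corrections:

```python
import numpy as np

G, C, MSUN = 6.674e-8, 2.998e10, 1.989e33    # cgs constants

def L_edd(X, M=1.4 * MSUN):
    """Eddington luminosity for electron-scattering opacity kappa = 0.2(1+X) cm^2/g."""
    kappa = 0.2 * (1.0 + X)
    return 4.0 * np.pi * G * M * C / kappa    # erg/s

# Pause: ejecta still has the accreted composition X0; peak: H-depleted ejecta.
X0, X_peak = 0.7, 0.07
ratio = L_edd(X_peak) / L_edd(X0)             # equals (1 + X0) / (1 + X_peak)
print(f"peak-to-pause luminosity ratio ~ {ratio:.2f}")   # ~1.6

# Inverting the relation: an observed ratio plus an estimate of X in the
# peak ejecta constrain the accreted hydrogen fraction X0.
observed_ratio = 1.6                           # example value only
print(f"implied X0 ~ {observed_ratio * (1.0 + X_peak) - 1.0:.2f}")
```

This inversion is the kind of argument behind the statement in footnote 9 that the observed peak-to-pause ratio constrains \(X_{0}\).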
Such variations in the shape of the lightcurve (slow or fast luminosity increases) across different bursts from a single source may also imply that the dynamics of convection are very sensitive to initial conditions at ignition. Furthermore, a puzzling aspect of the burst reported in Bult et al. (2019) is the secondary peak following the PRE phase. This is unexplained by our hydrogen ejection model, and could instead require multidimensional effects.

On the theoretical side, the obvious next step in order to refine predictions for these bursts will be to improve the treatment of convection during the thermonuclear flash, in particular for the collision between the He and H layers. Due to the timescales involved and the limitations of mixing length theory, we know that only multidimensional hydrodynamical simulations can yield accurate results. This may pose a significant numerical challenge, although recent works by Malone et al. (2014) and Zingale et al. (2015) have shown promising results in this direction, demonstrating the use of low-Mach-number hydrodynamics to model two- and three-dimensional convection in thermonuclear explosions.

Finally, other improvements also need to be made in the hydrodynamical part of the simulation in order to correctly model the super-Eddington wind. First, we faced numerical problems with "staircases" in mass fractions leading to density inversions in the wind, which we simply smoothed out in this work. It would be interesting to investigate such density inversions as they propagate outward in future work. We also had issues at the end of the super-Eddington phase and collapse of the atmosphere. To properly model this part of the PRE, we will likely need hydrodynamical simulations which can handle optically thin radiative transfer as well as shocks (if our findings that infall velocities can be supersonic are correct). Hydrodynamical simulations would also be useful to model the super-Eddington winds in multiple dimensions, where the effects of rotation and magnetic fields could be taken into account. Lastly, for accurate observational predictions, it would be pertinent to include general relativistic corrections to the hydrodynamic equations, as they are known to result in larger photospheric radii (Paczynski and Proszynski, 1986; Guichandut et al., 2021).

We thank Alexander Heger for helpful discussions on convective boundary mixing. We also thank Hang Yu for sharing results and analysis of his 2018 paper, and Rob Farmer for guidance on the MESA source code. S.G. is supported by an NSERC scholarship. This work was supported by NSERC Discovery Grant RGPIN-2017-04780. SG and AC are members of the Centre de Recherche en Astrophysique du Québec (CRAQ). Simulations were run on the Graham cluster, operated by the Digital Research Alliance of Canada. This work made use of the Python libraries _NumPy_ (Harris et al., 2020), _SciPy_ (Virtanen et al., 2020), and _Matplotlib_ (Hunter, 2007). The _py_mesa_reader_ package (Wolf and Schwab, 2017) was used for MESA output files.
2304.04955
On Beckner's Inequality for Axially Symmetric Functions on $\mathbb{S}^6$
We prove that axially symmetric solutions to the $Q$-curvature type problem $$ \alpha P_6 u + 120(1-\frac{e^{6u}}{\int_{\mathbb{S}^6} e^{6u}})=0 \ \ \ \ \ \mbox{on} \ \mathbb{S}^6 $$ must be constants, provided that $ \frac{1}{2}\leq \alpha <1$. In view of the existence of non-constant solutions obtained by Gui-Hu-Xie \cite{GHW2022} for $\frac{1}{7}<\alpha<\frac{1}{2}$, this result is sharp. This result closes the gap of the related results in \cite{GHW2022}, which proved a similar uniqueness result for $\alpha \geq 0.6168$. The improvement is based on two types of new estimates: one is a better estimate of the semi-norm $\lfloor G\rfloor^2$, the other one is a family of refined estimates on Gegenbauer coefficients, such as pointwise decaying and cancellations properties.
Changfeng Gui, Tuoxin Li, Juncheng Wei, Zikai Ye
2023-04-11T03:56:17Z
http://arxiv.org/abs/2304.04955v1
# On Beckner's inequality for axially symmetric functions on \(\mathbb{S}^{6}\) ###### Abstract. We prove that axially symmetric solutions to the \(Q\)-curvature type problem \[\alpha P_{6}u+120(1-\frac{e^{6u}}{\int_{\mathbb{S}^{6}}e^{6u}})=0\quad\text{ on }\mathbb{S}^{6}\] must be constants, provided that \(\frac{1}{2}\leq\alpha<1\). In view of the existence of non-constant solutions obtained by Gui-Hu-Xie [17] for \(\frac{1}{7}<\alpha<\frac{1}{2}\), this result is sharp. This result closes the gap of the related results in [17], which proved a similar uniqueness result for \(\alpha\geq 0.6168\). The improvement is based on two types of new estimates: one is a better estimate of the semi-norm \(\lfloor G\rfloor^{2}\), the other one is a family of refined estimates on Gegenbauer coefficients, such as pointwise decaying and cancellations properties. ## 1. Introduction and Main Results Beckner's inequality on \(\mathbb{S}^{6}\), a higher order Moser-Trudinger inequality, asserts that the functional \[J_{\alpha}(u):=\frac{\alpha}{2}\int_{\mathbb{S}^{6}}u(P_{6}u)\mathrm{d}w+120 \int_{\mathbb{S}^{6}}u\mathrm{d}w-20\ln\int_{\mathbb{S}^{6}}e^{6u}\mathrm{d}w\] is non-negative for \(\alpha=1\) and all \(u\in H^{2}(\mathbb{S}^{6})\), where \(\mathrm{d}w\) denotes the normalized Lebesgue measure on \(\mathbb{S}^{6}\) with \(\int_{\mathbb{S}^{6}}\mathrm{d}w=1\) and \(P_{6}=-\Delta(-\Delta+4)(-\Delta+6)\) represents the Paneitz operator on \(\mathbb{S}^{6}\). Additionally, with the extra assumption that the mass center of \(u\) is at the origin and \(u\) belongs to the set \[\mathcal{L}=\left\{u\in H^{2}(\mathbb{S}^{6})\ :\ \int_{\mathbb{S}^{6}}e^{6u}x_{j} \mathrm{d}w=0,\ j=1,...,7\right\},\] an improved higher-order Moser-Trudinger-Onofri inequality demonstrates that for any \(\alpha\geq\frac{1}{2}\), a constant \(C(\alpha)\geq 0\) exists such that \(J_{\alpha}(u)\geq-C(\alpha)\). As in the second-order case [7], it is conjectured that \(C(\alpha)\) can be chosen to be \(0\) for any \(\alpha\geq\frac{1}{2}\). The functional \(J_{\alpha}\)'s Euler-Lagrange equation is the following \(Q\)-curvature-type equation on \(\mathbb{S}^{6}\) \[\alpha P_{6}u+120(1-\frac{e^{6u}}{\int_{\mathbb{S}^{6}}e^{6u}\mathrm{d}w})=0 \ \text{on}\ \mathbb{S}^{6}, \tag{1.1}\] If (1.1) admits only constant solutions, then the conjecture is valid. If \(\alpha<1\) is near \(1\), the third author and Xu [26] proved that all solutions to (1.1) are constants. However, for general \(\alpha\in[\frac{1}{2},1)\), it remains unresolved. For results and backgrounds on \(Q\)-curvature problems, we refer to [9, 10, 11, 12, 16, 19, 21, 23, 26] and the references therein. The corresponding problem on \(\mathbb{S}^{2}\) is known as the Nirenberg problem: \[-\alpha\Delta u+1-\frac{e^{2u}}{\int_{\mathbb{S}^{2}}e^{2u}}=0\ \ \text{on}\ \mathbb{S}^{2}.\] This problem has been extensively studied over the past four decades. For more information, refer to [7, 8, 20] and the references therein. A. Chang and P. Yang conjectured in [7, 8] that the following functional \[\alpha\int_{\mathbb{S}^{2}}|\nabla u|^{2}\mathrm{d}w+2\int_{\mathbb{S}^{2}}u \mathrm{d}w-\ln\int_{\mathbb{S}^{2}}e^{2u}\mathrm{d}w\] is non-negative for any \(\alpha\geq\frac{1}{2}\) and \(u\) with zero center of mass \(\int_{\mathbb{S}^{2}}e^{2u}\vec{x}\mathrm{d}w=0\). 
Feldman, Froese, Ghoussoub and the first author [13] demonstrated that the conjecture is true for axially symmetric functions when \(\alpha>\frac{16}{25}-\epsilon\), and the first and the third author in [18] confirmed that the conjecture is indeed true for axially symmetric functions. Later Ghoussoub and Lin [14] showed that the conjecture holds true for \(\alpha>\frac{2}{3}-\epsilon\). Finally, the first author and Moradifam [15] proved the full conjecture. For more general results on the improved Moser-Trudinger-Onofri inequality on \(\mathbb{S}^{2}\) and its connections with the Szegő limit theorem, see [5, 6]. For the related problem on \(\mathbb{S}^{4}\),
\[\alpha P_{4}u+6(1-\frac{e^{4u}}{\int_{\mathbb{S}^{4}}e^{4u}\mathrm{d}w})=0\ \text{on}\ \mathbb{S}^{4}, \tag{1.2}\]
various results have been achieved for axially symmetric solutions. Gui-Hu-Xie [16] proved the existence of non-constant solutions for \(\frac{1}{5}<\alpha<\frac{1}{2}\) using bifurcation methods. They also demonstrated that for \(\alpha\geq 0.517\), the above equation admits only constant solutions under the axially symmetric assumption. The precise bound \(\alpha\geq\frac{1}{2}\) was obtained by Li-Wei-Ye [22] using refined estimates on Gegenbauer polynomials. These settings can be extended to the \(\mathbb{S}^{n}\) case for any \(n\geq 3\). Gui-Hu-Xie [17] established the existence of non-constant solutions using bifurcation methods for \(\frac{1}{n+1}<\alpha<\frac{1}{2}\), while for \(\alpha\geq 0.6168\) (\(n=6\)) and \(\alpha\geq 0.8261\) (\(n=8\)), all critical points are constants. In this paper, we focus on axially symmetric solutions in the \(\mathbb{S}^{6}\) case for \(\alpha\in[\frac{1}{2},1)\). As we will see later, the problem is considerably more difficult. As in [17], (1.1) becomes:
\[-\alpha[(1-x^{2})^{3}u^{\prime}]^{(5)}+120-128\frac{e^{6u}}{\gamma}=0,\ x\in(-1,1), \tag{1.3}\]
which is the critical point of the functional
\[I_{\alpha}(u)=-\frac{\alpha}{2}\int_{-1}^{1}(1-x^{2})^{2}[(1-x^{2})^{3}u^{\prime}]^{(5)}u+120\int_{-1}^{1}(1-x^{2})^{2}u-\frac{64}{3}\ln\left(\frac{15}{16}\int_{-1}^{1}(1-x^{2})^{2}e^{6u}\right) \tag{1.4}\]
restricted to the set
\[\mathcal{L}_{r}=\{u\in H^{2}(\mathbb{S}^{6}):\ u=u(x)\ \text{and}\ \int_{-1}^{1}x(1-x^{2})^{2}e^{6u}dx=0\}. \tag{1.5}\]
The main result of this paper is:

**Theorem 1.1**.: _If \(\alpha\geq\frac{1}{2}\), then the only critical points of the functional \(I_{\alpha}\) restricted to \(\mathcal{L}_{r}\) are constant functions. As a consequence, we have the following improved Beckner's inequality for axially symmetric functions on \(\mathbb{S}^{6}\)_
\[\inf_{u\in\mathcal{L}_{r}}I_{\alpha}(u)=0,\ \alpha\geq\frac{1}{2}.\]
In the work of Gui-Hu-Xie [17], the assumption \(\alpha\geq\frac{1}{2}\) is shown to be sharp, and they proved Theorem 1.1 for \(\alpha\geq 0.6168\) using a strategy similar to that in [16, 18, 22]. Specifically, they expand \(G=(1-x^{2})u^{\prime}\) in terms of Gegenbauer polynomials and introduce a quantity \(D\) related to the Gegenbauer coefficients and the estimate of \(\lfloor G\rfloor^{2}\) (see (3.2)). However, unlike the \(\mathbb{S}^{4}\) case discussed in [16], they are unable to obtain a bound on \(\beta\) and, consequently, on \(a=\frac{6}{7}(1-\alpha\beta)\). As a result, they cannot use \(D\) to generate a series of inequalities as in [16] and proceed through the induction procedure. In this paper, we provide a better estimate on \(\lfloor G\rfloor^{2}\) and work with a revised quantity \(D\).
To render the induction procedure \(a\leq\frac{d_{0}}{\lambda_{n}}\) feasible, we employ refined point-wise estimates of Gegenbauer polynomials similar to those in \(\mathbb{S}^{4}\)[22] to improve the estimates of \(G\)'s Gegenbauer coefficients. More precisely, we refine the decaying behavior of Gegenbauer polynomials near \(x=\pm 1\). Additionally, we utilize the cancellation properties of consecutive Gegenbauer polynomials to modify the methods in the \(\mathbb{S}^{4}\) case. This paper is organized as follows. In Section 2, we gather some properties of Gegenbauer polynomials, expand \(G\) in terms of Gegenbauer polynomials, and cite some basic facts from [17]. In Section 3, we present improved estimates of \(\lfloor G\rfloor^{2}\) and Gegenbauer coefficients of \(G\). In Section 4, we prove Theorem 1.1 using the estimates above. Several Lemmas in Section 3 and Proposition 4.1 are proven in the appendices. ## 2. Preliminaries and some basic estimates In this section, we first introduce some properties of Gegenbauer polynomials and some known facts about the equation. The Gegenbauer polynomials of order \(\nu\) and degree \(k\) ([24]) is given by \[C_{k}^{\nu}(x)=\frac{(-1)^{k}}{2^{k}k!}\frac{\Gamma(\nu+\frac{1}{2})\Gamma(k +2\nu)}{\Gamma(2\nu)\Gamma(\nu+k+\frac{1}{2})}(1-x^{2})^{-\nu+\frac{1}{2}} \frac{d^{k}}{dx^{k}}(1-x^{2})^{k+\nu-\frac{1}{2}}.\] \(C_{k}^{\nu}\) is an even function if \(k\) is even and it is odd if \(k\) is odd. The derivative of \(C_{k}^{\nu}\) satisfies \[\frac{d}{dx}C_{k}^{\nu}(x)=2\nu C_{k-1}^{\nu+1}(x). \tag{2.1}\] Let \(F_{k}^{\nu}\) be the normalization of \(C_{k}^{\nu}\) such that \(F_{k}^{\nu}(1)=1\), i.e. \[F_{k}^{\nu}=\frac{k!\Gamma(2\nu)}{\Gamma(k+2\nu)}C_{k}^{\nu}, \tag{2.2}\] then \(F_{k}^{\nu}\) satisfies \[(1-x^{2})(F_{k}^{\nu})^{\prime\prime}-(2\nu+1)x(F_{k}^{\nu})^{\prime}+k(k+2\nu )F_{k}^{\nu}=0, \tag{2.3}\] and (2.1) becomes \[(F_{k}^{\nu})^{\prime}=\frac{k(k+2\nu)}{2\nu+1}F_{k-1}^{\nu+1}. \tag{2.4}\] It is also useful to introduce the following expressions using hypergeometric functions \[F_{2m+1}^{\nu}(\cos\theta)=\cos\theta_{2}F_{1}(-m,m+\nu+1;\nu+\frac{1}{2};\sin^{ 2}\theta), \tag{2.5}\] \[F_{2m}^{\nu+1}(\cos\theta)={}_{2}F_{1}(-m,m+\nu+1;\nu+\frac{3}{2};\sin^{2} \theta), \tag{2.6}\] where we recall the hypergeometric function is defined for \(|x|<1\) by power series \[{}_{2}F_{1}(a,b;c;x)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\frac{x^ {k}}{k!}.\] Here \((a)_{k}=\frac{\Gamma(a+k)}{\Gamma(a)}\) is the Pochhammer symbol. On \(\mathbb{S}^{6}\), the corresponding Gegenbauer polynomial is \(C_{k}^{\frac{5}{2}}\). For notational simplicity, in what follows we will write \(F_{k}\) for \(F_{k}^{\frac{5}{2}}\), and there should be no danger of confusion. From (2.3) it turns out that \(F_{k}\) satisfies \[(1-x^{2})F_{k}^{\prime\prime}-6xF_{k}^{\prime}+\lambda_{k}F_{k}=0 \tag{2.7}\] and \[\int_{-1}^{1}(1-x^{2})F_{k}F_{l}=\frac{128}{(2k+5)(\lambda_{k}+4)(\lambda_{k} +6)}\delta_{kl}, \tag{2.8}\] where \(\lambda_{k}=k(k+5)\). As in [16, 18], we define the following key quantity \[G(x)=(1-x^{2})u^{\prime}, \tag{2.9}\] where \(u\) is a solution to (1.3). Then \(G\) satisfies the equation \[\alpha[(1-x^{2})^{2}G]^{(5)}+120-128\frac{e^{6u}}{\gamma}=0, \tag{2.10}\] where \[\gamma=\int_{-1}^{1}(1-x^{2})^{2}e^{6u}. \tag{2.11}\] \(G\) can be expanded in terms of Gegenbauer polynomials \[G=a_{0}F_{0}+\beta x+a_{2}F_{2}(x)+\sum_{k=3}^{\infty}a_{k}F_{k}(x). \tag{2.12}\] Denote \[g=(1-x^{2})^{2}\frac{e^{6u}}{\gamma},\ a:=\int_{-1}^{1}(1-x^{2})g. 
\tag{2.13}\] We recall some results from [17]. **Lemma 2.1**.: _For \(g=(1-x^{2})^{2}\frac{e^{6u}}{\gamma}\) and \(G=(1-x^{2})u^{\prime}\) as above, we have \(a_{0}=0\) and_ \[\int_{-1}^{1}(1-x^{2})F_{1}G=\frac{16}{105}\beta, \tag{2.14}\] \[a=\int_{-1}^{1}(1-x^{2})g=\frac{6}{7}(1-\alpha\beta), \tag{2.15}\] \[\int_{-1}^{1}(1-x^{2})F_{k}G=-\frac{128}{\alpha(\lambda_{k}+4)(\lambda_{k}+6) }\int_{-1}^{1}(1-x^{2})gF_{k}^{\prime},\ k\geq 2, \tag{2.16}\] \[\int_{-1}^{1}|[(1-x^{2})^{2}G]^{\prime\prime}|^{2}=\frac{256}{35}(7-\frac{1}{ \alpha})\beta. \tag{2.17}\] **Lemma 2.2**.: _For all \(x\in(-1,1)\), we have_ \[G_{j}:=(-1)^{j}[(1-x^{2})^{j}G]^{(2j+1)}\leq\frac{(2j+1)!}{\alpha},\ 0\leq j\leq 2. \tag{2.18}\] ## 3. Refined Estimates In this section, we deduce two refined estimates on the semi-norm \(\lfloor G\rfloor^{2}\) and \(b_{k}\) defined later. To get a rough estimate of \(\beta\) and \(a=\frac{6}{7}(1-\alpha\beta)\), we need an estimate of the following semi-norm \(\lfloor G\rfloor^{2}\). Let \[\lfloor G\rfloor^{2}=-\int_{-1}^{1}(1-x^{2})^{2}[(1-x^{2})^{3}G^{\prime}]^{(5) }G. \tag{3.1}\] By integrating by parts (see Gui-Hu-Xie [17]), we have \[\lfloor G\rfloor^{2}= -15\int_{-1}^{1}|[(1-x^{2})^{2}G]^{\prime\prime}|^{2}+\frac{720 }{\alpha}\int_{-1}^{1}(1-x^{2})^{2}G^{2}+30\int_{-1}^{1}(1-x^{2})^{4}G^{\prime }(G^{\prime\prime})^{2}\] \[+160\int_{-1}^{1}(1-x^{2})^{3}(G^{\prime})^{3}. \tag{3.2}\] With the help of Lemma 2.2, they applied \(G^{\prime}\leq\frac{1}{\alpha}\) directly to the last two integrals and obtained an estimate of \(\lfloor G\rfloor^{2}\) \[\lfloor G\rfloor^{2}\leq(\frac{30}{\alpha}-15)\int_{-1}^{1}|[(1-x^{2})^{2}G]^{ \prime\prime}|^{2}-\frac{320}{\alpha}\int_{-1}^{1}(1-x^{2})^{3}(G^{\prime})^{ 2}.\] However, with this estimate, it is not enough to get a rough lower bound of \(\beta\), hence an upper bound of \(a\). The main issue here is that the coefficient of \(\int_{-1}^{1}|[(1-x^{2})^{2}G]^{\prime\prime}|^{2}\) is too large. To solve this problem, we introduce the following Proposition to drop the third integral in (3.2). **Proposition 3.1**.: \[\lfloor G\rfloor^{2}\leq-15\int_{-1}^{1}|[(1-x^{2})^{2}G]^{\prime\prime}|^{2 }+\frac{720}{\alpha}\int_{-1}^{1}(1-x^{2})^{2}G^{2}+\frac{160}{\alpha}\int_{ -1}^{1}(1-x^{2})^{3}(G^{\prime})^{2},\] (3.3) Proof.: Integrating (3.2) by parts, we get \[\lfloor G\rfloor^{2}= -15\int_{-1}^{1}|[(1-x^{2})^{2}G]^{\prime\prime}|^{2}+\frac{720} {\alpha}\int_{-1}^{1}(1-x^{2})^{2}G^{2}+\int_{-1}^{1}(1-x^{2})^{3}\tilde{G}(G ^{\prime})^{2},\] where \[\tilde{G}=-15(1-x^{2})G^{\prime\prime\prime}+120xG^{\prime\prime}+160G^{ \prime}. \tag{3.4}\] Let \[\hat{G}=-15(1-x^{2})G^{\prime\prime\prime}+120xG^{\prime\prime}+150G^{\prime}. \tag{3.5}\] Direct calculation yields that \(\hat{G}\) satisfies \[(1-x^{2})\hat{G}^{\prime\prime}-8x\hat{G}^{\prime}-12\hat{G}=-15[(1-x^{2})^{2} G]^{(5)}\geq-\frac{1800}{\alpha}.\] The last inequality follows from Lemma 2.2. Then we claim that \[\hat{G}\leq\frac{150}{\alpha}.\] To prove the claim, denote \(M=\max\limits_{-1\leq x\leq 1}\hat{G}(x)\). If \(M\) is attained at some point \(x_{0}\in(-1,1)\), then \[\hat{G}^{\prime}(x_{0})=0,\ \hat{G}^{\prime\prime}(x_{0})\leq 0\] and the desired esitmate follows. 
If \(M\) is attained at \(1\) or \(-1\), without loss of generality, suppose there exists a sequence \(x_{k}\to 1\) such that \[M=\lim\limits_{k\to\infty}\hat{G}(x_{k}).\] Let \(r=\sqrt{1-x^{2}}\) and write \[G(x)=\bar{G}(r)\ \text{and}\ u(x)=\bar{u}(r)\ \text{for}\ r\in[0,1),\ x\in(0,1].\] Then we can extend \(\bar{u}(r)\) to be a smooth even function on \((-\frac{1}{2},\frac{1}{2})\). Hence, \[G(x)=\bar{G}(r)=-r\sqrt{1-r^{2}}u_{r}\] is a smooth function. Direct calculation yields that \[\hat{G}(r)=-15(1-r^{2})^{2}u_{rrrr}+30(1-r^{2})(7r^{2}-4)\frac{u_{rrr}}{r}-15(4 8r^{4}-50r^{2}+5)\frac{u_{rr}}{r^{2}}\] is an even function with respect to \(r\). Moreover, since \[\lim\limits_{r\to 0}\frac{u_{rrr}(r)}{r}=u_{rrrr}(0),\ \lim\limits_{r\to 0}\frac{u_{rr}(r)}{r^{2}}= \frac{1}{2}u_{rrrr}(0),\] \(\hat{G}(r)\) is smooth on \((-\frac{1}{2},\frac{1}{2})\). Now we can write \[\hat{G}(r) =c_{1}+c_{2}r^{2}+c_{3}r^{4}+O(r^{6}),\] \[x\hat{G}^{\prime}(x) =-2c_{2}+O(r^{2}),\] \[(1-x^{2})\hat{G}^{\prime\prime}(x) =(-2c_{2}+8c_{3})r^{2}+O(r^{4})\] near \(r=0\). Since \(\hat{G}(r)\) attains its local maximum at \(r=0\), we have \(c_{2}\leq 0\) and hence \[\lim\limits_{x\to 1}x\hat{G}^{\prime}(x)\leq 0,\ \lim\limits_{x\to 1}(1-x^{2})\hat{G}^{ \prime\prime}(x)=0.\] Then we obtain \(M\leq\frac{150}{\alpha}\). Applying Lemma 2.2 again, we get \[\bar{G}\leq\frac{160}{\alpha}.\] and the Proposition follows. In the following part, we begin to estimate \(b_{k}:=a_{k}\sqrt{\int_{-1}^{1}(1-x^{2})F_{k}^{2}}\), where \(a_{k}\) is the \(k\)-th coefficient in the expansion of \(G\) (see (2.12)). The estimates of \(b_{k}\) play a key role in the proofs of [16, 18]. In [16], they used (2.16) and the fact that \[|F_{k}^{\prime}(x)|\leq|F_{k}^{\prime}(1)|=\frac{\lambda_{k}}{6} \tag{3.6}\] to estimate \(b_{k}\) as follows \[b_{k}^{2} =a_{k}^{2}\int_{-1}^{1}(1-x^{2})F_{k}^{2}=\frac{1}{\int_{-1}^{1}(1-x ^{2})F_{k}^{2}}\left[\frac{128}{\alpha\lambda_{k}}\int_{-1}^{1}(1-x^{2})gF_{k}^{ \prime}\right]^{2}\] \[\leq\frac{(2k+5)(\lambda_{k}+4)(\lambda_{k}+6)}{128}\left[\frac{1 28}{\alpha\lambda_{k}(\lambda_{k}+2)}\frac{\lambda_{k}}{6}a\right]^{2}\] \[=\frac{32(2k+5)}{9\alpha^{2}(\lambda_{k}+4)(\lambda_{k}+6)}a^{2}.\] However, as in the \(\mathbb{S}^{4}\) case, this estimate is not strong enough to deduce the induction \[a=\frac{6}{7}(1-\alpha\beta)\leq\frac{d_{0}}{\lambda_{n}}. \tag{3.7}\] Likewise, we need a refined estimate on \(b_{k}\), which follows from the following refined estimate on Gegenbauer polynomials. For simplicity, in the rest of the paper, we denote \[\tilde{F}_{k}^{\prime}=\frac{6}{\lambda_{k}}F_{k}^{\prime}=\frac{720}{\lambda _{k}(\lambda_{k}+4)(\lambda_{k}+6)}C_{k-1}^{\frac{7}{2}} \tag{3.8}\] so that \(\tilde{F}_{k}^{\prime}(1)=1\). As in \(\mathbb{S}^{4}\), we split the integral in the right hand side of \(b_{k}\) into two parts. To this end, we define \[a_{+}:=\int_{0}^{1}(1-x^{2})g,\,a_{-}:=\int_{-1}^{0}(1-x^{2})g,\,A_{k}^{+}= \int_{0}^{1}(1-x^{2})\tilde{F}_{k}^{\prime}g,\,A_{k}^{-}=\int_{-1}^{0}(1-x^{2} )\tilde{F}_{k}^{\prime}g, \tag{3.9}\] Without loss of generality, we may assume \(a_{+}=\lambda a\) with \(\frac{1}{2}\leq\lambda\leq 1\). Now we derive some estimates about \(g\). Recalling the definition of \(g\), we have \[\int_{-1}^{1}g=1,\,\,\int_{-1}^{1}xg=0\text{ and }\int_{-1}^{1}(1-x^{2})g=a.\] From the second integral above, we have \[\int_{0}^{1}g-\int_{0}^{1}(1-x)g=\int_{0}^{1}xg=-\int_{-1}^{0}xg=\int_{-1}^{0 }g-\int_{-1}^{0}(1+x)g. 
\tag{3.10}\] Since \[\left|\int_{0}^{1}(1-x)g\right|\leq\int_{0}^{1}(1-x^{2})g=a_{+},\,\,\left| \int_{-1}^{0}(1+x)g\right|\leq\int_{0}^{1}(1-x^{2})g=a_{-},\] combining with (3.10), we have \[\left|\int_{0}^{1}g-\int_{-1}^{0}g\right|\leq a.\] Hence \[\frac{1-a}{2}\leq\int_{0}^{1}g,\int_{-1}^{0}g\leq\frac{1+a}{2}. \tag{3.11}\] Moreover, it follows directly from the definition of \(g\) that \[\int_{0}^{1}xg\leq\min\{\int_{0}^{1}g,\int_{-1}^{0}g\}\leq\frac{1}{2}, \tag{3.12}\] and \[\int_{0}^{1}(1+x)g=1-\int_{-1}^{0}(1+x)g<1. \tag{3.13}\] With the estimates on \(g\) above, the following Theorem gives a refined estimate on \(A_{k}^{\pm}\), hence on \(b_{k}\). **Theorem 3.2**.: _Let \(d=8\), \(b=0.33\). Suppose \(a\leq\frac{16}{\lambda_{n}}\) for some \(n\geq 3\). Then for all \(k\), we have_ \[|A_{k}^{+}|\leq\mathcal{A}_{k}^{+}:=\begin{cases}a_{+}-\frac{1-b}{d}\lambda_{k }a_{+}^{2},\text{ if }\lambda_{k}\leq\frac{\lambda_{n}}{4},\\ ba_{+}+(1-b)\frac{d}{4\lambda_{k}},\text{ if }\frac{\lambda_{n}}{4}<\lambda_{k} \leq\lambda_{n},\end{cases} \tag{3.14}\] \[|A_{k}^{-}|\leq\mathcal{A}_{k}^{-}:=\begin{cases}a_{-}-\frac{1-b}{d}\lambda_{ k}a_{-}^{2},\text{ if }a_{-}\leq\frac{4}{\lambda_{n}},\\ ba_{-}+(1-b)\frac{d}{4\lambda_{k}}\chi_{\{\lambda\neq 1\}},\text{ if }\frac{4}{ \lambda_{n}}<a_{-}\leq\frac{8}{\lambda_{n}}.\end{cases} \tag{3.15}\] In fact, for the toy cases in which \(k\)'s are small, better estimates can be obtained. The proof is left to Appendix A. **Lemma 3.3**.: _For \(A_{k}\), \(2\leq k\leq 5\),_ \[|A_{2}| \leq a_{+}-a_{+}^{2}, \tag{3.16}\] \[|A_{3}| \leq a-\frac{9}{4}\frac{a^{2}}{a+1}(2\lambda^{2}-2\lambda+1),\] (3.17) \[|A_{4}| \leq (a_{+}-a_{+}^{2})-\frac{11}{4}(a_{+}-a_{+}^{2})^{2}+\frac{1}{4 \sqrt{11}}a_{-},\] (3.18) \[|A_{5}| \leq a-\frac{11(a_{+}^{2}+a_{-}^{2})}{2(a+1)}+\frac{143(a_{+}^{3}+a_{ -}^{3})}{10(a+1)^{2}}. \tag{3.19}\] Before we prove Theorem 3.2 for general \(k\)'s, we first introduce some point-wise estimates of Gegenbauer polynomials. **Lemma 3.4** (Corollary 5.3 of Nemes and Olde Daalhuis [25] ).: _Let \(0<\zeta<\pi\) and \(N\geq 3\) be an integer. Then_ \[C_{k-1}^{\frac{7}{2}}(\cos\zeta)=\frac{2}{\Gamma(\frac{7}{2})(2\sin\zeta)^{ \frac{7}{2}}}\left(\sum_{n=0}^{N-1}t_{n}(3)\frac{\Gamma(k+6)}{\Gamma(k+n+\frac {7}{2})}\frac{\cos\left(\delta_{k-1,n}\right)}{\sin^{n}\zeta}+R_{N}(\zeta,k-1 )\right), \tag{3.20}\] _where \(\delta_{k,n}=(k+n+\frac{7}{2})\zeta-(\frac{7}{2}-n)\frac{\pi}{2}\), \(t_{n}(\mu)=\frac{(\frac{1}{2}-\mu)_{n}(\frac{1}{2}+\mu)_{n}}{(-2)^{n}n!}\), and \((x)_{n}=\frac{\Gamma(x+n)}{\Gamma(x)}\) is the Pochhammer symbol. The remainder term \(R\) satisfies the estimate_ \[|R_{N}(\zeta,k)|\leq|t_{N}(3)|\frac{\Gamma(k+6)}{\Gamma(k+N+\frac{7}{2})} \frac{1}{\sin^{N}\zeta}\cdot\begin{cases}|\sec\zeta|&\text{ if }0<\zeta\leq\frac{\pi}{4}\text{ or }\frac{3\pi}{4}\leq \zeta<\pi,\\ 2\sin\zeta&\text{ if }\frac{\pi}{4}<\zeta<\frac{3\pi}{4}.\end{cases} \tag{3.21}\] Using the pointwise estimate (3.20), we can prove the following lower and upper bounds for \(\tilde{F}_{k}^{\prime}\). Recall that \(\tilde{F}_{k}^{\prime}\) is odd for \(k\) even and even for \(k\) odd. It suffices to estimate \(\tilde{F}_{k}^{\prime}\) on \([0,1]\). The proofs are left to Appendix B. **Lemma 3.5**.: _Let \(m_{0}=0.04\), then for all \(k\geq 8\), we have_ \[\widetilde{F}_{k}^{\prime}\geq-m_{0},\quad 0\leq x\leq 1.\] **Lemma 3.6**.: _Let \(d=8\) and \(b=0.33\). 
Then for all \(k\geq 6\),_ \[\widetilde{F}^{{}^{\prime}}_{k}\leq\begin{cases}b,&0\leq x\leq 1-\dfrac{d}{ \lambda_{k}},\\ 1-\dfrac{\lambda_{k}}{d}(1-b)(1-x),&1-\dfrac{d}{\lambda_{k}}\leq x \leq 1.\end{cases}\] With the help of the above two lemmas, we are able to derive Theoreo 3.2. Proof of Theorem 3.2.: By (4.4) below, we have \(\beta\geq\frac{113}{88}\), \(\alpha<0.578\) and hence \(a\leq 0.221\). It is straightforward to check the cases when \(2\leq k\leq 5\) hold for better estimate in the form of Lemma 3.6. In the following argument, we may assume \(k\geq 6\). Define \(I=(0,1-\frac{d}{\lambda_{k}})\), \(II=(1-\frac{d}{\lambda_{k}},1)\), and \(a_{I}=\int_{I}(1-x^{2})g\), \(a_{II}=\int_{II}(1-x^{2})g\). Then by Lemma 3.6 and (3.13), we have \[\int_{0}^{1}(1-x^{2})\widetilde{F}^{\prime}_{k}g =\int_{I}(1-x^{2})\widetilde{F}^{\prime}_{k}g+\int_{II}(1-x^{2}) \widetilde{F}^{\prime}_{k}g\] \[\leq\int_{I}(1-x^{2})bg+\int_{II}(1-x^{2})(1-\frac{\lambda_{k}}{ d}(1-b)(1-x))g\] \[=ba_{I}+a_{II}-\frac{\lambda_{k}}{d}(1-b)\int_{II}(1-x^{2})(1-x)g\] \[\leq ba_{I}+a_{II}-\frac{\lambda_{k}}{d}(1-b)\frac{(\int_{II}(1-x ^{2})g)^{2}}{\int_{II}(1+x)g}\] \[\leq ba_{I}+a_{II}-\frac{\lambda_{k}}{d}(1-b)a_{II}^{2}\] \[=ba_{+}+(1-b)(a_{II}-\frac{\lambda_{k}}{d}a_{II}^{2}). \tag{3.22}\] If \(\lambda_{k}\leq\frac{\lambda_{n}}{4}\), we have \(a_{II}\leq a_{+}\leq a\leq\frac{16}{\lambda_{n}}\leq\frac{d}{2\lambda_{k}}\). Hence, \[\int_{0}^{1}(1-x^{2})\widetilde{F}^{\prime}_{k}g\leq a_{+}+(1-b)(a_{+}-\frac{ \lambda_{k}}{d}a_{+}^{2})=a_{+}-\frac{\lambda_{k}}{d}(1-b)a_{+}^{2}.\] For the case when \(\lambda_{k}>\frac{\lambda_{n}}{4}\), we get directly \[\int_{0}^{1}(1-x^{2})\widetilde{F}^{\prime}_{k}g\leq ba_{+}+(1-b)\frac{d}{4 \lambda_{k}}.\] On the other hand, Lemma 3.5 yields \[\int_{0}^{1}(1-x^{2})\widetilde{F}^{\prime}_{k}g\geq-0.04\int_{0}^{1}(1-x^{2} )g=-0.04a_{+}.\] Combining the above three estimates, we obtain the desired estimate on \(A_{k}^{+}\). Similarly, on estimating \(A_{k}^{-}\), just note that \(a_{-}\leq\frac{a}{2}\leq\frac{8}{\lambda_{n}}\). We can get an estimate analogous to (3.22) and then (3.15) follows directly. We omit the details. Next we derive a uniform estimate of cancellation of consecutive Gegenbauer polynomials. The estimate is based on the recursion formula and a useful inequality of Gegenbauer polynomials. It is well known that for \(0<\nu<1\), \(-1\leq x\leq 1\), one has \[(1-x^{2})^{\frac{\nu}{2}}|C^{\nu}_{n}(x)|<\frac{2^{1-\nu}}{\Gamma(\nu)}n^{\nu- 1}, \tag{3.23}\] where the constant \(\frac{2^{1-\nu}}{\Gamma(\nu)}\) is optimal. (See Theorem 7.33.2 in [3]). We believe that an analogous result of (3.23) exists for \(\nu>1\), but now the following lemma, whose proof is left to Appendix C, is enough for our use. We will use \(F_{n}^{\nu}\) instead of \(C_{n}^{\nu}\) for the sake of notational consistency. **Lemma 3.7**.: _For \(\nu\geq 2\) and \(-1\leq x\leq 1\), if \(n\geq\max\{2\nu+2,12\}\), then we have_ \[|(1-x^{2})F_{n}^{\nu}(x)|\leq\frac{\widetilde{C}_{\nu}}{n(n+2\nu)}, \tag{3.24}\] _where \(\widetilde{C}_{\nu}\) is given in(C.6)._ With the help of the above lemma, we can prove the following proposition. **Proposition 3.8**.: _Let \(c_{n}^{\nu}=\max\limits_{0\leq x\leq 1}|F_{n+1}^{\nu}(x)-F_{n}^{\nu}(x)|\). 
For \(\nu\geq 2\), we have_ \[c_{n}^{\nu}\leq\frac{1}{n}\left(\frac{\widetilde{C}_{\nu}}{n+2\nu+1}+ \widetilde{C}_{\nu+1}\right)\] _if \(n\geq\max\{2\nu+2,12\}\)._ Proof.: Recall the recursion formula for Gegenbauer polynomials \[(1-x^{2})2\nu C_{n}^{\nu+1}=-(n+1)xC_{n+1}^{\nu}+(n+2\nu)C_{n}^{\nu},\] which, in view of (2.2), can be rewritten as \[(1-x^{2})(n+2\nu+1)F_{n}^{\nu+1}=-xF_{n+1}^{\nu}+F_{n}^{\nu}.\] Then by (3.24), \[|F_{n+1}^{\nu}(x)-F_{n}^{\nu}(x)| =|(1-x)F_{n+1}^{\nu}(x)-(1-x^{2})(n+2\nu+1)F_{n}^{\nu+1}(x)|\] \[\leq|(1-x^{2})F_{n+1}^{\nu}(x)|+(n+2\nu+1)|(1-x^{2})F_{n}^{\nu+1 }(x)|\] \[\leq\frac{\widetilde{C}_{\nu}}{(n+1)(n+2\nu+1)}+\frac{(n+2\nu+1) \widetilde{C}_{\nu+1}}{n(n+2\nu+2)}\] \[\leq\frac{1}{n}\left(\frac{\widetilde{C}_{\nu}}{n+2\nu+1}+ \widetilde{C}_{\nu+1}\right).\] Recall that \(\tilde{F}_{n}^{\prime}=F_{n-1}^{\frac{7}{2}}\), so we have **Corollary 3.9**.: _Let \(c_{n}=\max\limits_{0\leq x\leq 1}|\tilde{F}_{n+1}^{\prime}-\tilde{F}_{n}^{ \prime}|\), then \(c_{n}\leq 0.12\) if \(6\leq n\leq 29\) and \(c_{n}<0.026\) if \(n\geq 30\)._ Proof.: Direct computation by Matlab shows that the first assertion holds, and \(c_{n}<0.026\) for \(30\leq n\leq 428\) (the computational results are recorded in a supplemental data file). For \(n>428\), by (C.6), we have \(\widetilde{C}_{\frac{7}{2}}\leq 9.19\) and \(\widetilde{C}_{\frac{9}{2}}\leq 11.02\), so we can also deduce that \[c_{n}=c_{n-1}^{\frac{7}{2}}\leq\frac{11.1}{n-1}<0.026.\] ## 4. proof of main theorem for \(\mathbb{S}^{6}\) In this section, we will prove Theorem 1.1 for \(\mathbb{S}^{6}\) by induction argument, with the help of refined estimates on \(b_{k}\)'s. We claim that \(\beta=0\), which yields that \((1-x^{2})^{2}G\) is a linear function by (2.17). Since \(G\) is bounded on \((-1,1)\), we get \(G\equiv 0\) and we are done. So it suffices to show that \(\beta=0\). We will argue by contradiction. If \(\beta\neq 0\), then \(0<\beta<\frac{1}{\alpha}\) since \(a=\int_{-1}^{1}(1-x^{2})g=\frac{6}{7}(1-\alpha\beta)>0\). It then suffices to show \(a=0\). We will achieve this by proving \[a=\frac{6}{7}(1-\alpha\beta)\leq\frac{d_{0}}{\lambda_{n}},\ \forall n\geq 5\ \text{with}\ n\equiv 1\ (\text{mod}\ 4), \tag{4.1}\] where \(d_{0}=16\). As in [18] and [22], we will prove (4.1) by induction. To begin with, we introduce the quantity \[D=\sum_{k=3}^{\infty}\left[\lambda_{k}(\lambda_{k}+4)(\lambda_{k}+6)-(14-\frac {74}{9\alpha})(\lambda_{k}+4)(\lambda_{k}+6)-\frac{160}{\alpha}\lambda_{k}- \frac{720}{\alpha}\right]b_{k}^{2}. \tag{4.2}\] Then by (2.17) and (3.3), we get \[D= \lfloor G\rfloor^{2}-(14-\frac{74}{9\alpha})\int_{-1}^{1}|[(1-x^{ 2})^{2}G]^{\prime\prime}|^{2}-\frac{160}{\alpha}\int_{-1}^{1}(1-x^{2})^{3}(G^{ \prime})^{2}\] \[-\frac{720}{\alpha}\int_{-1}^{1}(1-x^{2})^{2}G^{2}+\frac{16}{105} (\frac{2080}{3\alpha}+960)\beta^{2}\] \[\leq (\frac{74}{9\alpha}-29)\int_{-1}^{1}|[(1-x^{2})^{2}G]^{\prime \prime}|^{2}+\frac{16}{105}(\frac{2080}{3\alpha}+960)\beta^{2}\] \[= \frac{256}{35}(\frac{74}{9\alpha}-29)(7-\frac{1}{\alpha})\beta+ \frac{512}{7}(\frac{13}{9\alpha}+2)\beta^{2}. \tag{4.3}\] Since \(D\geq 0\), \(\alpha\geq\frac{1}{2}\) and \(0<\beta<\frac{1}{\alpha}\), we obtain \[\beta\geq\frac{9}{440}(29-\frac{74}{9\alpha})(7-\frac{1}{\alpha})\geq\frac{11 3}{88}, \tag{4.4}\] and \[\frac{256}{35}(\frac{74}{9\alpha}-29)(7-\frac{1}{\alpha})+\frac{512}{7}(\frac {13}{9\alpha}+2)\frac{1}{\alpha}\geq 0, \tag{4.5}\] which implies that \[\alpha<0.578. 
\tag{4.6}\] On the other hand, fix any integer \(n\geq 3\), we have \[D= \sum_{k=3}^{\infty}\left[\lambda_{k}(\lambda_{k}+4)(\lambda_{k}+6)- (14-\frac{74}{9\alpha})(\lambda_{k}+4)(\lambda_{k}+6)-\frac{160}{\alpha}\lambda_ {k}-\frac{720}{\alpha}\right]b_{k}^{2}\] \[\geq \sum_{k=n+1}^{\infty}\left[\lambda_{n+1}-14+\frac{74}{9\alpha}- \frac{160\lambda_{n+1}+720}{(\lambda_{n+1}+4)(\lambda_{n+1}+6)\alpha}\right]( \lambda_{k}+4)(\lambda_{k}+6)b_{k}^{2}\] \[+\sum_{k=3}^{n}\left[\lambda_{k}-14+\frac{74}{9\alpha}-\frac{160 \lambda_{k}+720}{(\lambda_{k}+4)(\lambda_{k}+6)\alpha}\right](\lambda_{k}+4)( \lambda_{k}+6)b_{k}^{2}\] \[\geq (\lambda_{n+1}-14+\frac{275}{63\alpha})\sum_{k=n+1}^{\infty}( \lambda_{k}+4)(\lambda_{k}+6)b_{k}^{2}\] \[+\sum_{k=3}^{n}(\lambda_{k}-14+\frac{176}{63}\alpha)(\lambda_{k}+ 4)(\lambda_{k}+6)b_{k}^{2}\] \[= \sum_{k=3}^{n}(\lambda_{k}-\lambda_{n+1}-\frac{11}{7\alpha})( \lambda_{k}+4)(\lambda_{k}+6)b_{k}^{2}\] \[+(\lambda_{n+1}-14+\frac{275}{63\alpha})\left[\frac{256}{35}(7- \frac{1}{\alpha})\beta-\frac{128}{7}\beta^{2}-360b_{2}^{2}\right]. \tag{4.7}\] Combining (4.3) and (4.7), we get \[0\leq \frac{256}{35}(7-\frac{1}{\alpha})(\frac{27}{7\alpha}-15-\lambda_ {n+1})\beta+\frac{128}{7}(\lambda_{n+1}-6+\frac{71}{7\alpha})\beta^{2}\] \[+\frac{176}{63\alpha}(\lambda_{2}+4)(\lambda_{2}+6)b_{2}^{2}+\sum _{k=2}^{n}(\lambda_{n+1}-\lambda_{k}+\frac{11}{7\alpha})(\lambda_{k}+4)( \lambda_{k}+6)b_{k}^{2}. \tag{4.8}\] Then we can start the induction procedure to prove \(a\leq\frac{16}{\lambda_{n}}\), for all \(n\geq 5\) with \(n\equiv 1\ (\text{mod}\ 4)\). Note that from (4.4) and (4.6), we already have \(a\leq 0.221\leq\frac{16}{\lambda_{5}}\). By induction, now we assume \(a\leq\frac{16}{\lambda_{n}}\) for some \(n\geq 5\) with \(n\equiv 1\ (\text{mod}\ 4)\). Then we will show that \(a\leq\frac{16}{\lambda_{n+4}}\). We argue by contradiction and suppose \(a>\frac{16}{\lambda_{n+4}}\) on the contrary. Let \(B_{k}=\frac{9\alpha^{2}}{32}(\lambda_{n+1}-\lambda_{k}+\frac{11}{7\alpha})(2k+5)\), then for every even \(k\), we have \[\frac{9\alpha^{2}}{32}\left[(\lambda_{n+1}-\lambda_{k}+\frac{11}{ 7\alpha})(\lambda_{k}+4)(\lambda_{k}+6)b_{k}^{2}+(\lambda_{n+1}-\lambda_{k+1}+ \frac{11}{7\alpha})(\lambda_{k+1}+4)(\lambda_{k+1}+6)b_{k+1}^{2}\right]\] \[= B_{k}(\int_{-1}^{1}(1-x^{2})\tilde{F}_{k}^{\prime}g)^{2}+B_{k+1} (\int_{-1}^{1}(1-x^{2})\tilde{F}_{k+1}^{\prime}g)^{2}\] \[= B_{k}\left[(\int_{0}^{1}(1-x^{2})\tilde{F}_{k}^{\prime}g)^{2}+( \int_{-1}^{0}(1-x^{2})\tilde{F}_{k}^{\prime}g)^{2}\right]+B_{k+1}\left[(\int_{ 0}^{1}(1-x^{2})\tilde{F}_{k+1}^{\prime}g)^{2}+(\int_{-1}^{0}(1-x^{2})\tilde{ F}_{k+1}^{\prime}g)^{2}\right]\] \[+ 2B_{k}\int_{0}^{1}(1-x^{2})\tilde{F}_{k}^{\prime}g\int_{-1}^{0}( 1-x^{2})(\tilde{F}_{k}^{\prime}+\tilde{F}_{k+1}^{\prime})g+2B_{k+1}\int_{0}^{ 1}(1-x^{2})(\tilde{F}_{k+1}^{\prime}-\tilde{F}_{k}^{\prime})g\int_{-1}^{0}(1-x ^{2})\tilde{F}_{k+1}^{\prime}g\] \[+ 2(B_{k+1}-B_{k})\int_{0}^{1}(1-x^{2})\tilde{F}_{k}^{\prime}g\int_ {-1}^{0}(1-x^{2})\tilde{F}_{k+1}^{\prime}g\] \[:= R_{k,1}+R_{k,2}+R_{k,3}.\] Recall the definition of \(\mathcal{A}_{k}^{\pm}\) from Theorem 3.2. 
By Theorem 3.2, we have \[R_{k,1} =B_{k}\left[(\int_{0}^{1}(1-x^{2})\tilde{F}_{k}^{\prime}g)^{2}+( \int_{-1}^{0}(1-x^{2})\tilde{F}_{k}^{\prime}g)^{2}\right]\] \[+B_{k+1}\left[(\int_{0}^{1}(1-x^{2})\tilde{F}_{k+1}^{\prime}g)^{2 }+(\int_{-1}^{0}(1-x^{2})\tilde{F}_{k+1}^{\prime}g)^{2}\right]\] \[\leq B_{k}\left(|\mathcal{A}_{k}^{+}|^{2}+|\mathcal{A}_{k}^{-}|^ {2}\right)+B_{k+1}\left(|\mathcal{A}_{k+1}^{+}|^{2}+|\mathcal{A}_{k+1}^{-}|^{ 2}\right). \tag{4.9}\] Let \(c_{k}\) be defined as in Corollary 3.9, then we have \[|\int_{-1}^{0}(1-x^{2})(\tilde{F}_{k}^{\prime}+\tilde{F}_{k+1}^{ \prime})g|\leq c_{k}a_{-}=c_{k}(1-\lambda)a,\] \[|\int_{0}^{1}(1-x^{2})(\tilde{F}_{k+1}^{\prime}-\tilde{F}_{k}^{ \prime})g|\leq c_{k}a_{+}=c_{k}\lambda a.\] So \[R_{k,2} =2B_{k}\int_{0}^{1}(1-x^{2})\tilde{F}_{k}^{\prime}g\int_{-1}^{0}( 1-x^{2})(\tilde{F}_{k}^{\prime}+\tilde{F}_{k+1}^{\prime})g\] \[+2B_{k+1}\int_{0}^{1}(1-x^{2})(\tilde{F}_{k+1}^{\prime}-\tilde{F} _{k}^{\prime})g\int_{-1}^{0}(1-x^{2})\tilde{F}_{k+1}^{\prime}g\] \[\leq 2(B_{k}+B_{k+1})c_{k}\lambda(1-\lambda)a^{2}. \tag{4.10}\] Finally by Lemma 3.5, we have \[R_{k,3}\leq\begin{cases}2(B_{k+1}-B_{k})\lambda(1-\lambda)a^{2},&\text{ if }B_{k}\leq B_{k+1},\\ 2(B_{k}-B_{k+1})m_{0}(1-\lambda)a^{2},&\text{ if }B_{k+1}<B_{k}.\end{cases} \tag{4.11}\] Now from (4.9), (4.10) and (4.11), we can get the estimate of each term in the summation in (4.7) for each even \(k\). \[\frac{9\alpha^{2}}{32}[(\lambda_{n+1}-\lambda_{k}+\frac{11}{7 \alpha})(\lambda_{k}+4)(\lambda_{k}+6)b_{k}^{2}+(\lambda_{n+1}-\lambda_{k+1}+ \frac{11}{7\alpha})(\lambda_{k+1}+4)(\lambda_{k+1}+6)b_{k+1}^{2}]\] \[\leq B_{k}\left(|\mathcal{A}_{k}^{+}|^{2}+|\mathcal{A}_{k}^{-}|^{2} \right)+B_{k+1}\left(|\mathcal{A}_{k+1}^{+}|^{2}+|\mathcal{A}_{k+1}^{-}|^{2} \right)+2(B_{k}+B_{k+1})c_{k}\lambda(1-\lambda)a^{2}\] \[+\begin{cases}2(B_{k+1}-B_{k})\lambda(1-\lambda)a^{2},&\text{ if }B_{k}\leq B _{k+1},\\ 2(B_{k}-B_{k+1})m_{0}(1-\lambda)a^{2},&\text{ if }B_{k+1}<B_{k}.\end{cases} \tag{4.12}\] **Remark 4.1**.: _Note that this estimate is better than the one in \(\mathbb{S}^{4}\) case. Cancellation of consecutive Gegenbauer polynomials is used in the proof._ The right hand side above can be viewed as a function \(f_{k,a}(\lambda)\) of \(\lambda=\frac{a_{+}}{a}\). The following Proposition yields that the worst case is \(\lambda=1\). In particular, in this case, we can drop the small terms \(R_{k,2}\) and \(R_{k,3}\). The proof is left to Appendix D. **Proposition 4.1**.: _Suppose \(a\) satisfies \(\frac{d_{0}}{\lambda_{n+4}}\leq a\leq\frac{d_{0}}{\lambda_{n}}\) for some \(n\geq 5\) with \(n\equiv 1\) (mod \(4\)) where \(d_{0}=16\). Let \(f_{k,a}(\lambda)\) be defined as above. Then for any \(k\) even, we have for \(n\geq 41\), (1) If \(\lambda_{k}\leq\frac{1}{4}\lambda_{n}\), then_ \[f_{k,a}(\lambda)\leq f_{k,a}(1)=B_{k}(a-\frac{1-b}{d}\lambda_{k}a^{2})^{2}+B_{ k+1}(a-\frac{1-b}{d}\lambda_{k+1}a^{2})^{2}. \tag{4.13}\] _(2) If \(\frac{1}{4}\lambda_{n}<\lambda_{k}\leq\lambda_{n}\), then_ \[f_{k,a}(\lambda)\leq f_{k,a}(1)=B_{k}(ba+(1-b)\frac{d}{4\lambda_{k}})^{2}+B_{k+1 }(ba+(1-b)\frac{d}{4\lambda_{k+1}})^{2}. \tag{4.14}\] _For \(5\leq n\leq 65,\) we have_ _(1) If \(\lambda_{k}\leq\frac{1}{4}\lambda_{n}\), then_ \[f_{k,a}(\lambda)\leq B_{k}(a-\frac{1-b}{d}\lambda_{k}a^{2})^{2}+B_{k+1}(a- \frac{1-b}{d}\lambda_{k+1}a^{2})^{2}+\frac{1}{2}(B_{k}+B_{k+1})c_{k}a^{2}. 
\tag{4.15}\] _(2) If \(\frac{1}{4}\lambda_{n}<\lambda_{k}\leq\lambda_{n}\), then_ \[f_{k,a}(\lambda)\leq B_{k}(ba+(1-b)\frac{d}{4\lambda_{k}})^{2}+B_{k+1}(ba+(1-b )\frac{d}{4\lambda_{k+1}})^{2}+\frac{1}{2}(B_{k}+B_{k+1})c_{k}a^{2}. \tag{4.16}\] In the following, we will assume \(n>10000\). The case when \(n<10000\) is checked by Matlab and is left to Appendix E. With the help of Proposition 4.1 and by plugging it into (4.8), we obtain \[0\leq \frac{256}{35}(7-\frac{1}{\alpha})(\frac{27}{7\alpha}-15-\lambda_ {n+1})\frac{1}{\alpha}(1-\frac{7}{6}a)+\frac{128}{7}(\lambda_{n+1}-6+\frac{71} {7\alpha})\frac{1}{\alpha^{2}}(1-\frac{7}{6}a)^{2}\] \[+ \frac{176}{63}\alpha(\lambda_{2}+4)(\lambda_{2}+6)b_{2}^{2}\] \[+ \frac{32}{9\alpha^{2}}\sum_{k=2}^{\frac{n-3}{2}}(\lambda_{n+1}- \lambda_{k}+\frac{11}{7\alpha})(2k+5)(1-\frac{1-b}{d}\lambda_{k}\frac{16}{ \lambda_{n+4}})^{2}a^{2}\] \[+ \frac{32}{9\alpha^{2}}\sum_{k=\frac{n-1}{2}}^{n}(\lambda_{n+1}- \lambda_{k}+\frac{11}{7\alpha})(2k+5)(ba+(1-b)\frac{d}{4\lambda_{k}})^{2}.\] \[\leq -\frac{512}{7}(\lambda_{n+1}+\frac{51}{7})(1-\frac{7}{6}a)+\frac{ 512}{7}(\lambda_{n+1}+\frac{100}{7})(1-\frac{7}{6}a)^{2}+\frac{22528}{63 \alpha}a^{2}\] \[+ \frac{128}{9}\sum_{k=2}^{\frac{n-3}{2}}(\lambda_{n+1}-\lambda_{k} +\frac{22}{7})(2k+5)(1-\frac{1-b}{d}\lambda_{k}\frac{16}{\lambda_{n+4}})^{2}a^ {2}\] \[+ \frac{128}{9}\sum_{k=\frac{n-1}{2}}^{n}(\lambda_{n+1}-\lambda_{k} +\frac{22}{7})(2k+5)(ba+(1-b)\frac{d}{4\lambda_{k}})^{2}.\] \[=: g_{n,1}(a)+g_{n,2}(a)+g_{n,3}(a)=g_{n}(a) \tag{4.17}\] where \(g_{n,1},g_{n,2}\) and \(g_{n,3}\) are defined at the last equality. For \(g_{n,2}(a)\), we can decompose it into three summations \[g_{n,2}(a)=\frac{128}{9}\left[S_{1}-\frac{34(1-b)}{d\lambda_{n+4}}S_{2}+\frac{ 289(1-b)^{2}}{d^{2}\lambda_{n+4}^{2}}S_{3}\right]a^{2}, \tag{4.18}\] where \[S_{1}=\sum_{k=2}^{\frac{n-3}{2}}(\lambda_{n+1}-\lambda_{k}+\frac{11}{7\alpha}) (2k+5)=\frac{7}{32}n^{4}+\frac{23}{8}n^{3}-\frac{115}{112}n^{2}-\frac{4265}{5 6}n-\frac{20075}{224}, \tag{4.19}\] \[S_{2} =\sum_{k=2}^{\frac{n-3}{2}}(\lambda_{n+1}-\lambda_{k}+\frac{11}{7 \alpha})(2k+5)\lambda_{k}\] \[=\frac{5}{192}n^{6}+\frac{1}{2}n^{5}+\frac{3611}{1344}n^{4}-\frac{ 9}{28}n^{3}-\frac{100207}{5376}n^{2}-\frac{1393237}{896}n-\frac{1040985}{1024}, \tag{4.20}\] \[S_{3} =\sum_{k=2}^{\frac{n-3}{2}}(\lambda_{n+1}-\lambda_{k}+\frac{11}{7 \alpha})(2k+5)\lambda_{k}^{2}\] \[=\frac{13}{3072}n^{8}+\frac{41}{384}n^{7}+\frac{1525}{1792}n^{6}+ \frac{3011}{2688}n^{5}-\frac{48697}{3584}n^{4}-\frac{14917}{384}n^{3}\] \[+\frac{1000525}{5376}n^{2}-\frac{1393237}{896}n-\frac{1040985}{10 24} \tag{4.21}\] For \(g_{n,3}(a)\), direct calculation yields that \[\sum_{k=\frac{n-1}{2}}^{n}(\lambda_{n+1}-\lambda_{k}+\frac{22}{7 })(2k+5)(ba+(1-b)\frac{d}{4\lambda_{k}})^{2}\] \[= b^{2}S_{4}a^{2}+2b(1-b)(\lambda_{n+1}+\frac{22}{7})\frac{d}{4}S _{5}a-2b(1-b)\frac{d}{4}S_{6}a+(1-b)^{2}\frac{d^{2}}{16}(\lambda_{n+1}+\frac{ 22}{7})S_{7}-(1-b)^{2}\frac{d^{2}}{16}S_{5}, \tag{4.22}\] where \[S_{4}=\sum_{k=\frac{n-1}{2}}^{n}(\lambda_{n+1}-\lambda_{k}+\frac{22}{7})(2k+5 )=\frac{9}{32}n^{4}+\frac{33}{8}n^{3}+\frac{2763}{112}n^{2}+\frac{3753}{56}n+ \frac{15147}{224}, \tag{4.23}\] \[S_{5}=\sum_{k=\frac{n-1}{2}}^{n}\frac{2k+5}{\lambda_{k}}=\sum_{k=\frac{n-1}{2 }}^{n}(\frac{1}{k}+\frac{1}{k+5})\geq 1.3863, \tag{4.24}\] \[S_{6}=\sum_{k=\frac{n-1}{2}}^{n}(2k+5)=\frac{3}{4}n^{2}+\frac{9}{2}n+\frac{27 }{4}, \tag{4.25}\] \[S_{7} =\sum_{k=\frac{n-1}{2}}^{n}\frac{2k+5}{\lambda_{k}^{2}}=\frac{1}{5} 
\sum_{k=\frac{n-1}{2}}^{n}(\frac{1}{k^{2}}-\frac{1}{(k+5)^{2}})\] \[=\frac{1}{5}\left(\frac{3}{(n+1)^{2}}-\frac{1}{(n+2)^{2}}+\frac{3 }{(n+3)^{2}}-\frac{1}{(n+4)^{2}}+\frac{3}{(n+5)^{2}}+\frac{4}{(n+7)^{2}}+\frac {4}{(n-1)^{2}}\right)\] \[\leq\frac{3}{n^{2}}. \tag{4.26}\] To get a contradiction, we need to show that \(g_{n}(a)\) is negative for \(\frac{16}{\lambda_{n+4}}<a<\frac{16}{\lambda_{n}}\). Direct computation gives that for \(n>10000\) with \(n\equiv 1\;(\text{mod}\;4)\), we have the following three estimates \[g_{n,1}(a)= -\frac{512}{7}(\lambda_{n+1}+\frac{51}{7})(1-\frac{7}{6}a)+\frac{512 }{7}(\lambda_{n+1}+\frac{100}{7})(1-\frac{7}{6}a)^{2}+\frac{22528}{63\alpha}a^{2}\] \[= \frac{512}{7}\left[7-\frac{7}{6}a\lambda_{n+1}-\frac{149}{6}a+ \frac{49}{36}\lambda_{n+1}a^{2}+\frac{175}{9}a^{2}\right]+\frac{22528}{63 \alpha}a^{2}\] \[\leq \frac{512}{7}\left(7-\frac{56}{3}\frac{\lambda_{n+1}}{\lambda_{n+ 4}}-\frac{1192}{3\lambda_{n+4}}+\frac{3136}{9\lambda_{n}}+\frac{44800}{9 \lambda n^{2}}\right)+\frac{91543}{\lambda_{n}^{2}}\leq-853.33,\] \[g_{n,2}(a)= \frac{128}{9}\left[S_{1}-\frac{67}{25\lambda_{n+4}}S_{2}+\frac{4 489}{2500\lambda_{n+4}^{2}}S_{3}\right]a^{2}\] \[\leq \frac{128}{9}\left[\left(\frac{56n^{4}}{\lambda_{n}^{2}}+\frac{7 36n^{3}}{\lambda_{n}^{2}}+\frac{1840n^{2}}{\lambda_{n+4}^{2}}-\frac{136480n}{7 \lambda_{n+4}^{2}}-\frac{160600}{7\lambda_{n+4}^{2}}\right)\right.\] \[+\left(-\frac{268n^{6}}{15\lambda_{n+4}^{3}}-\frac{343n^{5}}{ \lambda_{n+4}^{3}}-\frac{1843n^{4}}{\lambda_{n+4}^{3}}+\frac{221n^{3}}{ \lambda_{n}^{3}}+\frac{51154n^{2}}{\lambda_{n}^{3}}+\frac{231234n}{\lambda_{n }^{3}}+\frac{40683}{\lambda_{n}^{3}}\right)\] \[+\left(\frac{1.94524n^{8}}{\lambda_{n}^{4}}+\frac{49.0797n^{7}}{ \lambda_{n}^{4}}+\frac{391.18n^{6}}{\lambda_{n}^{4}}+\frac{515n^{5}}{\lambda_ {n+4}^{4}}-\frac{6245n^{4}}{\lambda_{n+4}^{4}}-\frac{17856n^{3}}{\lambda_{n+4 }^{4}}\right.\] \[-\left.\frac{85550n^{2}}{\lambda_{n}^{4}}-\frac{714770n}{\lambda_ {n}^{4}}-\frac{467298}{\lambda_{n+4}^{4}}\right)\right]\] \[\leq 571.123,\] \[g_{n,3}(a)= \frac{128}{9}\left[0.1089S_{4}a^{2}+\frac{2211}{2500}(\lambda_{n+ 1}+\frac{22}{7})S_{5}a-\frac{2211}{2500}S_{6}a+\frac{4489}{2500}(\lambda_{n+1} +\frac{22}{7})S_{7}-\frac{4489}{2500}S_{5}\right]\] \[\leq \frac{128}{9}\left[\left(\frac{7.8408n^{4}}{\lambda_{n}^{2}}+ \frac{115n^{3}}{\lambda_{n}^{2}}+\frac{688n^{2}}{\lambda_{n}^{2}}+\frac{1869 n}{\lambda_{n}^{2}}+\frac{1886}{\lambda_{n}^{2}}\right)+19.6166\frac{\lambda_{n+1}+ \frac{22}{7}}{\lambda_{n}}-\frac{10.6128n^{2}}{\lambda_{n+4}}\right.\] \[+\left.\frac{13467}{2500}\frac{\lambda_{n+1}+\frac{22}{7}}{n^{2}} -2.48923\right]\] \[\leq 280.95.\] Combining three estimates above, we found \[0\leq g_{n}(a)\leq-853.33+571.123+280.95<-1.257<0,\] for all \(n>10000\) with \(n\equiv 1\ (\text{mod}\ 4)\) and \(\frac{16}{\lambda_{n+4}}<a\leq\frac{16}{\lambda_{n}}\), which is a contradiction. Consequently, we finish the proof of Theorem 1.1. ## Appendix A proof of Lemma 3.3 In this appendix, we prove Lemma 3.3. Proof of Lemma 3.3.: Define \(A_{m,n}^{+}=\int_{0}^{1}x^{m}(1-x^{2})^{n}g\), \(A_{m,n}^{-}=\int_{-1}^{0}|x|^{m}(1-x^{2})^{n}g\), and \(A_{m,n}=A_{m,n}^{+}+A_{m,n}^{-}\). We begin with the estimate of \(A_{2}\). 
By definition, \[|A_{2}|=|\int_{-1}^{1}x(1-x^{2})g|\leq\max\left\{A_{1,1}^{+},A_{1,1}^{-}\right\}.\] By Cauchy-Schwartz inequality and (3.13), \[a_{+}-A_{1,1}^{+}=\int_{0}^{1}(1-x^{2})(1-x)g\geq\frac{(\int_{0}^{1}(1-x^{2})g)^{ 2}}{\int_{0}^{1}(1+x)g}\geq a_{+}^{2},\] so \[A_{1,1}^{+}\leq a_{+}-a_{+}^{2}.\] Similarly, \[A_{1,1}^{-}\leq a_{-}-a_{-}^{2}.\] Since \(a<1\) and we have assumed \(\lambda\geq\frac{1}{2}\), we conclude that \[|A_{2}|\leq a_{+}-a_{+}^{2}.\] The estimate of \(|A_{4}|\) is similar to that of \(|A_{2}|\). By definition, \[A_{4}=\int_{-1}^{1}(1-x^{2})g\widetilde{F}_{4}^{\prime}=\frac{1}{8}\int_{-1}^{ 1}(1-x^{2})(11x^{2}-3)xg=A_{1,1}-\frac{11}{8}A_{1,2}.\] By Cauchy-Schwartz inequality and (3.12), \[A_{1,2}\geq\frac{(A_{1,1}^{+})^{2}}{\int_{0}^{1}xg}\geq 2(A_{1,1}^{+})^{2},\] so \[A_{4}^{+}\leq A_{1,1}^{+}-\frac{11}{4}(A_{1,1}^{+})^{2}\] On the other hand, \[A_{4}^{+}\geq\frac{1}{8}\min_{0\leq x\leq 1}\{(11x^{2}-3)x\}\int_{0}^{1}(1-x^{2 })g=-\frac{1}{4\sqrt{11}}a_{+}.\] In the same way, \[-(A_{1,1}^{-}-\frac{11}{4}(A_{1,1}^{-})^{2})\leq A_{4}^{-}\leq\frac{1}{4\sqrt{ 11}}a_{-}.\] Since \(\lambda\geq\frac{1}{2}\), we conclude that \[|A_{4}|\leq A_{1,1}^{+}-\frac{11}{4}(A_{1,1}^{+})^{2}+\frac{1}{4\sqrt{11}}a_{- }\leq(a_{+}-a_{+}^{2})-\frac{11}{4}(a_{+}-a_{+}^{2})^{2}+\frac{1}{4\sqrt{11}}a _{-}.\] The estimates of \(A_{3}\) and \(A_{5}\) are slightly different. For \(A_{3}\), we write \[A_{3}=\int_{-1}^{1}(1-x^{2})g\widetilde{F}_{3}^{\prime}=\frac{1}{8}\int_{-1}^ {1}(1-x^{2})(9x^{2}-1)g=\frac{1}{8}(9A_{2,1}-a).\] By Cauchy-Schwartz inequality and (3.11), \[(A_{2,1}^{+})^{2} \leq\int_{0}^{1}(1-x^{2})^{2}g\int_{0}^{1}x^{4}g\] \[\leq(a_{+}-A_{2,1}^{+})(\frac{a+1}{2}-a_{+}-A_{2,1}^{+}),\] so \[A_{2,1}^{+}\leq a_{+}-\frac{2a_{+}^{2}}{a+1}.\] (A.1) In the same way, \[A_{2,1}^{-}\leq a_{-}-\frac{2a_{-}^{2}}{a+1}.\] Hence, \[A_{2,1}\leq a-\frac{2a_{+}^{2}+2a_{-}^{2}}{a+1}=a-\frac{2a^{2}}{a+1}(2\lambda^{2} -2\lambda+1).\] Therefore \[A_{3}\leq a-\frac{9}{4}\frac{a^{2}}{a+1}(2\lambda^{2}-2\lambda+1),\] which, together with the definition of \(A_{3}\), implies \[|A_{3}|\leq\max\left\{a-\frac{9}{4}\frac{a^{2}}{a+1}(2\lambda^{2}-2\lambda+1), \frac{a}{8}\right\}=a-\frac{9}{4}\frac{a^{2}}{a+1}(2\lambda^{2}-2\lambda+1).\] Finally, for \(A_{5}\), we have \[A_{5}=\frac{1}{80}\int_{-1}^{1}(1-x^{2})(3-66x^{2}+143x^{4})g=\frac{1}{80}(80a -143A_{2,2}-77A_{2,0}).\] By Cauchy-Schwartz inequality and (3.11), \[A_{2,2}^{+}\geq\frac{(A_{2,1}^{+})^{2}}{\int_{0}^{1}x^{2}g}\geq\frac{(A_{2,1} ^{+})^{2}}{\frac{a+1}{2}-a_{+}},\] so by (A.1), \[A_{5}^{+} \leq\frac{1}{80}\big{(}80a_{+}-\frac{143(A_{2,1}^{+})^{2}}{\frac {a+1}{2}-a_{+}}-77(a_{+}-A_{2,1}^{+})\big{)}\] \[\leq\frac{1}{80}\Big{(}3a_{+}-11(a_{+}-\frac{2a_{+}^{2}}{a+1})( \frac{26a_{+}}{a+1}-7)\Big{)}\] \[=a_{+}-\frac{11a_{+}^{2}}{2(a+1)}+\frac{143a_{+}^{3}}{10(a+1)^{2}}.\] Therefore \[A_{5}\leq a-\frac{11(a_{+}^{2}+a_{-}^{2})}{2(a+1)}+\frac{143(a_{+}^{3}+a_{-}^ {3})}{10(a+1)^{2}}.\] On the other hand, \[A_{5}\geq\frac{1}{80}\min_{-1\leq x\leq 1}\{3-66x^{2}+143x^{4}\}\int_{-1}^{1}(1 -x^{2})g=-\frac{3}{52}a.\] From (4.4) and the estimates of \(|A_{2}|\) and \(|A_{3}|\), we can deduce that \(a<0.125\), so now it is not hard to see that \[|A_{5}|\leq a-\frac{11(a_{+}^{2}+a_{-}^{2})}{2(a+1)}+\frac{143(a_{+}^{3}+a_{- }^{3})}{10(a+1)^{2}}.\] Thus the proof of Lemma 3.3 is completed. ## Appendix B proof of Lemma 3.5 and 3.6 In this appendix we prove Lemma 3.5 and Lemma 3.6. 
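Before the detailed proofs, the Matlab verifications referred to below can be reproduced with a short numerical check such as the following sketch. It assumes that \(\widetilde{F}^{\prime}_{k}=F^{7/2}_{k-1}\) denotes the Gegenbauer polynomial \(C^{7/2}_{k-1}\) normalized by its value at \(1\) and that \(\lambda_{k}=k(k+5)\); both conventions are taken from the notation introduced earlier in the paper and should be checked against it.

```python
# Numerical spot-check (a sketch, not part of the proof) of the bounds established in this
# appendix, mirroring the Matlab verifications mentioned below.  Assumptions, to be checked
# against the notation of the paper: \tilde{F}'_k(x) = F^{7/2}_{k-1}(x) is the Gegenbauer
# polynomial C^{7/2}_{k-1}(x) normalized by its value at x = 1, and \lambda_k = k(k+5).
import numpy as np
from scipy.special import eval_gegenbauer

NU = 3.5  # Gegenbauer index 7/2

def f_tilde_prime(k, x):
    return eval_gegenbauer(k - 1, NU, x) / eval_gegenbauer(k - 1, NU, 1.0)

xs = np.linspace(0.0, 1.0, 20001)
violations = []

# Lemma 3.5: min_{0 <= x <= 1} \tilde{F}'_k(x) >= -0.04, checked here for 8 <= k <= 200.
for k in range(8, 201):
    if f_tilde_prime(k, xs).min() < -0.04:
        violations.append(("Lemma 3.5", k))

# (B.10): 0.3 <= \tilde{F}'_k(1 - 8/\lambda_k) <= 0.33, checked here for 6 <= k <= 100.
for k in range(6, 101):
    val = f_tilde_prime(k, 1.0 - 8.0 / (k * (k + 5)))
    if not (0.3 <= val <= 0.33):
        violations.append(("(B.10)", k))

print("violations found:", violations)  # expected to be empty if the bounds hold
```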
The proofs are technical and make use of many quantitative properties of Gegenbauer polynomials. Before we prove Lemma 3.5, we first state some general lemma about Gegenbauer polynomials. Denote by \(x_{nk}(\nu)\), \(k=1,\cdots,n\), the zeros of \(C_{n}^{\nu}(x)\) enumerated in decreasing order, that is, \(1>x_{n1}(\nu)>\cdots>x_{nn}(\nu)>-1\). **Lemma B.1** (Corollary 2.3 in Area et al.[1]).: _For any \(n\geq 2\) and for every \(\nu\geq 1\), the inequality_ \[x_{n1}(\nu)\leq\sqrt{\frac{(n-1)(n+2\nu-2)}{(n+\nu-2)(n+\nu-1)}}\cos(\frac{\pi}{ n+1})\] (B.1) _holds._ The next lemma is well-known and it is valid for many other orthogonal polynomials. **Lemma B.2** (Olver et al. [2]).: _Denote by \(y_{nk}(\nu)\), \(k=0,1,\cdots,n-1,n\), the local maxima of \(|C_{n}^{\nu}(x)|\) enumerated in decreasing order, then \(y_{n0}(\nu)=1,y_{nn}(\nu)=-1\), and we have_ \((a)\): \(y_{nk}(\nu)=x_{n-1,k}(\nu+1),\ k=1,\cdots,n-1\)_._ \((b)\): \(|C_{n}^{\nu}(y_{n0}(\nu))|>|C_{n}^{\nu}(y_{n1}(\nu))|>\cdots>|C_{n}^{\nu}(y_ {n,[\frac{n+1}{2}]}(\nu))|\)_._ \((c)\): \((C_{n}^{\nu})^{(k)}(x)>0\) _on_ \((x_{n1}(\nu),1)\) _for all_ \(k=0,1,\cdots,n\)_._ Proof of Lemma 3.5.: Direct computation by Matlab shows that the lemma holds for \(8\leq k\leq 200\), so in what follows we may assume \(k>200\).. By Lemma B.1 and (2.1), we know that the minimum of \(\widetilde{F}_{k}^{\prime}\) on \([0,1]\) is achieved at the point \[x_{k-2,1}(\frac{9}{2})\leq\sqrt{\frac{(k-3)(k+5)}{(k+\frac{3}{2})(k+\frac{1}{2 })}}\cos(\frac{\pi}{k-1})<1-\frac{12.5}{k^{2}}.\] (B.2) Taking \(N=4\) in Lemma 3.4, we obtain \[\widetilde{F}_{k}^{\prime}(\cos\zeta)=F_{k-1}^{\frac{7}{2}}(\cos\zeta)=48 \sqrt{\frac{2}{\pi}}\left(\sum_{m=0}^{3}t_{m}(3)\frac{\Gamma(k)}{\Gamma(k+m+ \frac{7}{2})}\frac{\cos\left(\delta_{k-1,m}\right)}{\sin^{m+\frac{7}{2}}\zeta }+\widetilde{R}\right),\] (B.3) where \(\widetilde{R}\) satisfies \[|\widetilde{R}|\leq t_{4}(3)\frac{\Gamma(k)}{\Gamma(k+\frac{15}{2})}(\sin \zeta)^{-\frac{15}{2}}\cdot\begin{cases}\sec\zeta&\text{ if }0<\zeta\leq\frac{\pi}{4},\\ 2\sin\zeta&\text{ if }\frac{\pi}{4}<\zeta<\frac{\pi}{2},\end{cases}\] (B.4) the value of \(t_{m}(3)\) for \(0\leq m\leq 3\) are listed below: \[t_{0}(3)=1,t_{1}(3)=\frac{35}{8},t_{2}(3)=\frac{945}{128},t_{3}(3)=\frac{346 5}{1024},t_{4}(3)=-\frac{45045}{32768}.\] Let \(\sin\zeta=\frac{l}{k}\). Then by (B.2) we can assume \(l\geq 5\). From (B.4) we know that if \(l\leq\frac{k}{\sqrt{2}}\), then \[|\widetilde{R}|\leq|t_{4}(3)|\frac{k^{\frac{15}{2}}\Gamma(k)}{l^{\frac{15}{2} }\Gamma(k+\frac{15}{2})}\frac{1}{\sqrt{1-\frac{l^{2}}{k^{2}}}}<\frac{1.5}{l^{ \frac{15}{2}}\sqrt{1-\frac{l^{2}}{k^{2}}}};\] (B.5) while if \(l>\frac{k}{\sqrt{2}}\), then \[|\widetilde{R}|\leq 2|t_{4}(3)|\frac{k^{\frac{15}{2}}\Gamma(k)}{l^{\frac{15}{2} }\Gamma(k+\frac{9}{2})}<\frac{3(\sqrt{2})^{\frac{15}{2}}}{k^{\frac{15}{2}}}.\] (B.6) To get the desired lower bound, we shall use the following simple estimates. 
\[\cos(x+\delta)=\cos x-\delta\sin(x+h\delta)\geq\cos x-|\delta|.\] (B.7) \[\zeta-\sin\zeta\leq(\frac{\pi}{2}-1)\sin^{3}\zeta\leq\sin^{3}\zeta,\ 0<\zeta<\frac{\pi}{2}.\] (B.8) With the help of (B.7) and (B.8), we have \[\cos\left(\delta_{k-1,m}\right) =\cos\left((k+\frac{5}{2}+m)\zeta-(\frac{7}{2}-m)\frac{\pi}{2}\right)\] \[=\cos\left((k+\frac{5}{2}+m)\frac{l}{k}+(k+\frac{5}{2}+m)(\zeta- \sin\zeta)-(\frac{7}{2}-m)\frac{\pi}{2}\right)\] \[\geq\cos\left(l-(\frac{7}{2}-m)\frac{\pi}{2}\right)-\left((k+ \frac{5}{2}+m)(\zeta-\sin\zeta)+(\frac{5}{2}+m)\frac{l}{k}\right)\] \[\geq\cos\left(l-(\frac{7}{2}-m)\frac{\pi}{2}\right)-(3+m)\frac{l} {k}.\] (B.9) Therefore we have \[\sum_{m=0}^{3}t_{m}(3)\frac{\Gamma(k)}{\Gamma(k+m+\frac{7}{2})} \frac{\cos\left(\delta_{k-1,m}\right)}{\sin^{m+\frac{7}{2}}\zeta}=\sum_{m=0}^{ 3}t_{m}(3)\frac{k^{m+\frac{7}{2}}\Gamma(k)}{\Gamma(k+m+\frac{7}{2})}\frac{ \cos\left(\delta_{k-1,m}\right)}{l^{m+\frac{7}{2}}}\] \[\geq\frac{k^{\frac{13}{2}}\Gamma(k)}{\Gamma(k+\frac{13}{2})}\sum _{m=0}^{3}t_{m}(3)\frac{(k+\frac{7}{2}+m)_{3-m}}{k^{3-m}l^{m+\frac{7}{2}}} \left(\cos\left(l-(\frac{7}{2}-m)\frac{\pi}{2}\right)-(3+m)\frac{l}{k}\right)\] \[\geq\min\left\{(1-\frac{16}{k})\sum_{m=0}^{3}t_{m}(3)\frac{(k+ \frac{7}{2}+m)_{3-m}}{k^{3-m}l^{m+\frac{7}{2}}}\left(\cos\left(l-(\frac{7}{2}- m)\frac{\pi}{2}\right)-(3+m)\frac{l}{k}\right),0\right\}.\] Write \[(1-\frac{16}{k})\sum_{m=0}^{3}t_{m}(3)\frac{(k+\frac{7}{2}+m)_{3-m}}{k^{3-m}l^ {m+\frac{7}{2}}}\left(\cos\left(l-(\frac{7}{2}-m)\frac{\pi}{2}\right)-(3+m) \frac{l}{k}\right)=\sum_{i=0}^{4}E_{i},\] where \[E_{0}=\frac{1024l^{3}\cos\left(l+\frac{\pi}{4}\right)-1920l^{2}\cos\left(l- \frac{\pi}{4}\right)-840l\cos\left(l+\frac{\pi}{4}\right)-315\cos\left(l- \frac{\pi}{4}\right)}{1024l^{13/2}},\] \[E_{1}=\frac{-3\left(512l^{3}+1280l^{2}-2304l^{2}\cos\left(l+\frac{\pi}{4} \right)+700l-1920l\cos\left(l-\frac{\pi}{4}\right)+770\cos\left(l+\frac{\pi}{4 }\right)-315\right)}{512kl^{11/2}},\] \[E_{2}=\frac{-10368l^{2}+11520l+15296l\cos\left(l+\frac{\pi}{4}\right)+64920 \cos\left(l-\frac{\pi}{4}\right)-5775}{256k^{2}l^{9/2}},\] \[E_{3}=\frac{-3\left(478l^{2}-2705l-231l\cos\left(l+\frac{\pi}{4}\right)-1980 \cos\left(l-\frac{\pi}{4}\right)\right)}{8k^{3}l^{9/2}}\] \[E_{4}=\frac{297(-7l+80)}{8k^{4}l^{7/2}}.\] If \(5\leq l\leq 6.5\), then \(E_{0}\geq-0.0002\), \(E_{1}\geq-0.00025\), \(E_{2}\geq-1.5\times 10^{-5}\), \(E_{3}\geq-10^{-7}\), \(E_{4}\geq 0\). By (B.5), \(|\bar{R}|\leq 8\times 10^{-6}\). Therefore from (B.3) we have \[\tilde{F}_{k}^{\prime}(\cos\zeta)\geq-48\sqrt{\frac{2}{\pi}}\times 0.005\geq-0.04.\] If \(l>6.5\), then \(E_{0}\geq-0.00077\), \(E_{1}\geq-0.0002\), \(E_{2}\geq-10^{-5}\), \(E_{3}\geq-10^{-7}\), \(E_{4}\geq-10^{-9}\). Either (B.5) or (B.6) implies \(|\tilde{R}|\leq 3\times 10^{-7}\), so we also have \[\tilde{F}^{\prime}_{k}(\cos\zeta)\geq-48\sqrt{\frac{2}{\pi}}\times 0.01\geq-0.04.\] Thus the lemma is proved. Proof of Lemma 3.6.: We first prove the following estimate at one point: \[0.3\leq\tilde{F}^{\prime}_{k}(1-\frac{8}{\lambda_{k}})\leq 0.33,\quad k\geq 6.\] (B.10) Direct computation by Matlab shows that (B.10) holds for \(6\leq k\leq 100\), so in what follows we may assume \(k>100\). The main tool we use is the hypergeometric expansion (2.5) and (2.6). We will prove (B.10) only for even \(k\), and the case for odd \(k\) is similar. 
Let \(k=2m+2\), then \(\tilde{F}^{\prime}_{k}=F^{\frac{7}{2}}_{k-1}\), so by (2.5), \[\tilde{F}^{\prime}_{k}(1-\frac{8}{\lambda_{k}})=(1-\frac{8}{\lambda_{k}})\,{}_{2}F_{1}(-m,m+\frac{9}{2};4;t),\] where \(t=1-(1-\frac{8}{\lambda_{k}})^{2}=\frac{8}{\lambda_{k}}(2-\frac{8}{\lambda_{k}})\). Now we write \[{}_{2}F_{1}(-m,m+\frac{9}{2};4;t)=\sum_{i=0}^{m}(-1)^{i}\gamma_{i}t^{i},\] where \(\gamma_{i}=\frac{(m-i+1)_{i}(m+\frac{7}{2})_{i}}{i!(4)_{i}}\). It is easy to see that \[\min_{1\leq i<m}\{\frac{\gamma_{i}}{\gamma_{i+1}}\}=\frac{\gamma_{1}}{\gamma_{2}}=\frac{10}{(m-1)(m+\frac{9}{2})}=\frac{40}{(k-3)(k+7)}>t.\] Therefore \[\sum_{i=0}^{j_{1}}(-1)^{i}\gamma_{i}t^{i}\leq{}_{2}F_{1}(-m,m+\frac{9}{2};4;t)\leq\sum_{i=0}^{j_{2}}(-1)^{i}\gamma_{i}t^{i},\quad j_{1}\text{ odd},\ j_{2}\text{ even}.\] Taking \(j_{1}=5,j_{2}=6\), direct computation shows that (B.10) holds since \(m\geq 50\). Now in view of Lemma 3.5, we see that \(\tilde{F}^{\prime}_{k}(1-\frac{8}{\lambda_{k}})>-\min_{0\leq x\leq 1}\tilde{F}^{\prime}_{k}(x)\). Then by Lemma B.2\((b)\), \(\tilde{F}^{\prime}_{k}(1-\frac{8}{\lambda_{k}})\geq\tilde{F}^{\prime}_{k}(x)\) for all \(0\leq x\leq 1-\frac{8}{\lambda_{k}}\). Moreover, the convexity of \(\tilde{F}^{\prime}_{k}(x)\) on \([1-\frac{8}{\lambda_{k}},1]\) is guaranteed by Lemma B.2\((c)\). This completes the proof of Lemma 3.6. ## Appendix C proof of Lemma 3.7 We first prove a simple lemma, which enables us to focus on the region near \(x=1\). By letting \(x=\cos\theta\), we introduce the function \(v(\theta)=(\sin\theta)^{2}F^{\nu}_{n}(\cos\theta)\) in this appendix. **Lemma C.1**.: _For \(n\geq 2\) and \(\nu>0\), let \(v(\theta)\) be defined as above. If \(\nu\geq 2\), then the successive relative maxima of \(|v(\theta)|\) form an increasing sequence as \(\theta\) decreases from \(\frac{\pi}{2}\) to \(0\)._ Proof of Lemma C.1.: By (2.3) it is straightforward to check that \(v\) satisfies the equation \[v^{\prime\prime}(\theta)+p(\theta)v^{\prime}(\theta)+q(\theta)v(\theta)=0,\] where \(p(\theta)=(2\nu-4)\cos\theta\), and \(q(\theta)=(n^{2}+2\nu n+4)-\frac{2}{\sin^{2}\theta}+(4\nu-8)(\sin\theta-\frac{1}{\sin\theta})\). Since \(\nu>2\), we know that \(p\geq 0\), \(q\) is increasing and \(q\) has a unique zero \(\widetilde{\theta}\) in \((0,\frac{\pi}{2})\). Since \(v(0)=0,v^{\prime}>0\) near \(0\), and \(q(\theta)<0\) in \((0,\widetilde{\theta})\), by the maximum principle, it is easy to see that \(|v(\theta)|\) has no local maxima in \((0,\widetilde{\theta}]\). Now we consider the case when \(\theta\in(\widetilde{\theta},\frac{\pi}{2}]\). Let \(\tilde{q}=q^{-1}\), then \(\tilde{q}>0\) is strictly decreasing in \((\widetilde{\theta},\frac{\pi}{2}]\). Introducing \[f(\theta)=v^{2}(\theta)+\tilde{q}(\theta)(v^{\prime})^{2}(\theta),\] we have \[f^{\prime}=\tilde{q}^{\prime}(v^{\prime})^{2}+2v^{\prime}(\tilde{q}v^{\prime\prime}+v)=(\tilde{q}^{\prime}-2p\tilde{q})(v^{\prime})^{2}<0.\] But \(f(\theta)=v^{2}(\theta)\) if \(v^{\prime}(\theta)=0\), so the lemma is proved. Proof of Lemma 3.7.: In view of Lemma C.1, we need to find a bound for \(\theta_{*}\), the smallest zero of \(v^{\prime}(\theta)\) in \((0,\frac{\pi}{2})\).
By definition of \(v\) and (2.4), \[v^{\prime}(\theta) =\sin\theta\left(2\cos\theta F_{n}^{\nu}(\cos\theta)-\sin^{2}\theta(F_{n}^{\nu})^{\prime}(\cos\theta)\right)\] \[=\sin\theta\left(2\cos\theta F_{n}^{\nu}(\cos\theta)-\frac{n(n+2\nu)}{2\nu+1}\sin^{2}\theta F_{n-1}^{\nu+1}(\cos\theta)\right).\] We claim that when \(\theta=\overline{\theta}=\arcsin\sqrt{\frac{4\nu+2}{n(n+2\nu)}}\), \[v^{\prime}(\overline{\theta})=2\sin\overline{\theta}\left(\cos\overline{\theta}F_{n}^{\nu}(\cos\overline{\theta})-F_{n-1}^{\nu+1}(\cos\overline{\theta})\right)<0.\] (C.1) We will use the hypergeometric function expansion for Gegenbauer polynomials (2.5) and (2.6) to prove (C.1). We only give the proof for odd \(n\), and the proof for even \(n\) is similar. Write \(n=2m+1\). By Lemma B.1, it is not difficult to show \(\cos\overline{\theta}>x_{2m+1,1}(\nu)\), hence \(F_{2m+1}^{\nu}(\cos\overline{\theta})>0\), so we have \[F_{2m}^{\nu+1}(\cos\overline{\theta})-\cos\overline{\theta}F_{2m+1}^{\nu}(\cos\overline{\theta}) \geq{}_{2}F_{1}(-m,m+\nu+1;\nu+\frac{3}{2};\sin^{2}\overline{\theta})-{}_{2}F_{1}(-m,m+\nu+1;\nu+\frac{1}{2};\sin^{2}\overline{\theta})\] \[=\sum_{k=1}^{m}(-1)^{k+1}\alpha_{k}(\sin^{2}\overline{\theta})^{k},\] where \(\alpha_{k}=\frac{(m-k+1)_{k}(m+\nu+1)_{k}}{(k-1)!(\nu+\frac{1}{2})_{k+1}}\). We compute \[\frac{\alpha_{k}}{\alpha_{k+1}}=\frac{k(k+\nu+\frac{3}{2})}{(m-k)(m+\nu+k+1)}.\] It is then easy to see that \[\min_{1\leq k<m}\{\frac{\alpha_{k}}{\alpha_{k+1}}\}=\frac{\alpha_{1}}{\alpha_{2}}=\frac{\nu+\frac{5}{2}}{(m-1)(m+\nu+2)}.\] Since \(\sin^{2}\overline{\theta}=\frac{4\nu+2}{n(n+2\nu)}=\frac{4\nu+2}{(2m+1)(2m+2\nu+1)}<\frac{\nu+\frac{5}{2}}{(m-1)(m+\nu+2)}\), no matter whether \(m\) is even or odd, we have \[F_{2m}^{\nu+1}(\cos\overline{\theta})-\cos\overline{\theta}F_{2m+1}^{\nu}(\cos\overline{\theta})\geq\sum_{\begin{subarray}{c}1\leq k\leq m\\ k\text{ is odd}\end{subarray}}(\sin^{2}\overline{\theta})^{k}(\alpha_{k}-\alpha_{k+1}\sin^{2}\overline{\theta})>0,\] where \(\alpha_{m+1}=0\) is understood, so (C.1) holds. Consequently, since \(v^{\prime}(\theta)>0\) when \(\theta\) is small, from (C.1) we know that \(\theta_{*}<\overline{\theta}.\) Now we look for a lower bound of \(\theta_{*}\). Let \(\underline{\theta}=\arcsin\sqrt{\frac{4\nu+2}{n(n+2\nu)}\delta},\) where \(0<\delta<1\) is to be determined.
We want to show that \[v^{\prime}(\theta)=2\sin\theta\left(\cos\theta F_{n}^{\nu}(\cos\theta)-\delta F _{n-1}^{\nu+1}(\cos\theta)\right)>0\] (C.2) for all \(0\leq\theta<\underline{\theta}.\) As before, we only consider the case \(n=2m+1\), then we can write \[\cos\theta F_{n}^{\nu}(\cos\theta)-\delta F_{n-1}^{\nu+1}(\cos\theta)=\sum_{k= 0}^{m}(-1)^{k}\beta_{k}(\sin^{2}\theta)^{k},\] where \[\beta_{k}=\frac{(m-k+1)_{k}(m+\nu+1)_{k}}{k!(\nu+\frac{1}{2})_{k+1}}\left(( \nu+\frac{1}{2}+k)\cos^{2}\theta)-\delta(\nu+\frac{1}{2})\right).\] We compute \[\frac{\beta_{k}}{\beta_{k+1}}=\frac{(k+1)(\nu+\frac{1}{2}+k+1)}{(m-k)(m+\nu+k+ 1)}\frac{(\nu+\frac{1}{2}+k)\cos^{2}\theta-\delta(\nu+\frac{1}{2})}{(\nu+ \frac{3}{2}+k)\cos^{2}\theta-\delta(\nu+\frac{1}{2})},\] so \[\min_{0\leq k<m}\{\frac{\beta_{k}}{\beta_{k+1}}\}=\frac{\beta_{0}}{\beta_{1}}= \frac{\nu+\frac{3}{2}}{m(m+\nu+1)}\frac{(\nu+\frac{1}{2})\cos^{2}\theta- \delta(\nu+\frac{1}{2})}{(\nu+\frac{3}{2})\cos^{2}\theta-\delta(\nu+\frac{1}{ 2})}.\] Therefore to prove (C.2), it is enough to show \(\frac{\beta_{0}}{\beta_{1}}>\sin^{2}\theta,\) or equivalently \[(\nu+\frac{3}{2})(\nu+\frac{1}{2})(\cos^{2}\theta-\delta)>m(m+\nu+1)\left(( \nu+\frac{3}{2})\cos^{2}\theta-\delta(\nu+\frac{1}{2})\right)\sin^{2}\theta.\] This is a quadratic inequality about \(\sin^{2}\theta\). If we choose \[\delta=\frac{\nu-\sqrt{\nu}+\frac{1}{2}}{\nu+\frac{1}{2}},\] (C.3) then since we have assumed that \(n\geq 2\nu+2\), direct computation shows that it is enough to prove the above inequality for \(\theta=\underline{\theta}\), which reduces to \[(\nu+\frac{3}{2})(\cos^{2}\underline{\theta}-\delta)>m(m+\nu+1)\left((\nu+ \frac{3}{2})\cos^{2}\underline{\theta}-\delta(\nu+\frac{1}{2})\right)\frac{4 \delta}{n(n+2\nu)}.\] Since \(\frac{4m(m+\nu+1)}{n(n+2\nu)}=\frac{(n-1)(n+2\nu+1)}{n(n+2\nu)}<1\), we only need to show \[\cos^{2}\underline{\theta}-\delta>\delta\left(\cos^{2}\underline{\theta}- \frac{\nu+\frac{1}{2}}{\nu+\frac{3}{2}}\delta\right),\] which is easy to verify, so we omit the details. From (C.1) and (C.2), we have \(\underline{\theta}<\theta_{*}<\overline{\theta}\), so \[|v(\theta_{*})|=|\sin^{2}\theta_{*}F_{n}^{\nu}(\theta_{*})|\leq|\sin^{2} \overline{\theta}F_{n}^{\nu}(\underline{\theta})|=\frac{4\nu+2}{n(n+2\nu)}F_{ n}^{\nu}(\underline{\theta}).\] (C.4) It remains to give an upper bound for \(F_{n}^{\nu}(\underline{\theta})\). Let \(n=2m+1\), then \[F_{n}^{\nu}(\underline{\theta}) =\cos\underline{\theta}\sum_{k=0}^{m}\frac{(-1)^{k}(m-k+1)_{k}(m+ \nu+1)_{k}}{k!(\nu+\frac{1}{2})_{k}}\sin^{2k}\underline{\theta}\] \[\leq\sum_{k=0}^{l\text{ is even}}\frac{(-1)^{k}(m-k+1)_{k}(m+\nu+ 1)_{k}}{k!(\nu+\frac{1}{2})_{k}}\sin^{2k}\underline{\theta}.\] For \(m\geq 5\), we can choose \(l=4\) to obtain \[F_{n}^{\nu}(\underline{\theta}) \leq\sum_{k=0}^{4}\frac{(-1)^{k}(m-k+1)_{k}(m+\nu+1)_{k}}{k!(\nu +\frac{1}{2})_{k}}\left(\frac{4\nu+2}{n(n+2\nu)}\right)^{k}\delta^{k}\] \[=\sum_{k=0}^{4}\frac{(-1)^{k}(m-k+1)_{k}(m+\nu+1)_{k}}{k!(\nu+ \frac{1}{2})_{k}}\left(\frac{\nu-\sqrt{\nu}+\frac{1}{2}}{(m+\frac{1}{2})(m+ \nu+\frac{1}{2})}\right)^{k}\] (C.5) Direct computation shows that for fixed \(\nu\), then above expression, viewed as a function of \(m>\nu\), is decreasing in \(m\). 
Therefore if \(n\geq\max\{2\nu+2,12\}\), (C.4) and (C.5) together imply that \[|v(\theta_{*})|\leq\frac{\widetilde{C}_{\nu}}{n(n+2\nu)},\] where \[\widetilde{C}_{\nu}=\begin{cases}(4\nu+2)\sum_{k=0}^{4}\frac{(-1)^{k}(6-k)_{k }(6+\nu)_{k}}{k!(\nu+\frac{1}{2})_{k}}\left(\frac{\nu-\sqrt{\nu}+\frac{1}{2}} {\frac{11}{2}(\nu+\frac{11}{2})}\right)^{k},&\text{if $\nu<5$},\\ (4\nu+2)\sum_{k=0}^{4}\frac{(-1)^{k}(\nu-k+1)_{k}(2\nu+1)_{k}}{k!(\nu+\frac{1 }{2})_{k}}\left(\frac{\nu-\sqrt{\nu}+\frac{1}{2}}{(\nu+\frac{1}{2})(2\nu+\frac {1}{2})}\right)^{k},&\text{if $\nu\geq 5$}.\end{cases}\] (C.6) We remark that same estimates holds for even \(n\). Finally, since \[|v(\frac{\pi}{2})|=F_{n}^{\nu}(0)=\begin{cases}0,&\text{if $n$ is odd},\\ \frac{\Gamma(\frac{n}{2}+\nu)}{\Gamma(\nu)(\frac{n}{2})!}\Big{/}\frac{\Gamma(n +2\nu)}{\Gamma(2\nu)n!}=\frac{2\Gamma(\nu+\frac{1}{2})\Gamma(\frac{n+3}{2})}{ (n+1)\sqrt{\pi}\Gamma(\frac{n+1}{2}+\nu)},&\text{if $n$ is even},\end{cases}\] we conclude that \[|v(\theta)|\leq\max\{|v(\theta_{*})|,|v(\frac{\pi}{2})|\}\leq\frac{\widetilde {C}_{\nu}}{n(n+2\nu)}.\] ## Appendix D proof of Proposition 4.1 Proof of Proposition 4.1.: If \(k=2\) or \(4\), then by Lemma 3.3, one can check the proposition holds true for all \(n\geq 6\) directly, so in what follows we may assume \(k\geq 6\). We first consider the case when \(n\geq 65\). Recall that \(d=8\), \(b=0.33\) are given in Theorem 3.2, \(d_{0}=17\),and \(B_{k}=\frac{9\alpha^{2}}{32}(\lambda_{n+1}-\lambda_{k}+\frac{11}{7\alpha})(2k +5)\), so we have \[\frac{B_{k+1}-B_{k}}{B_{k+1}+B_{k}}=\frac{\left(n^{2}+7n-3k^{2}-18k-15\right)+ \frac{11}{7\alpha}}{(k+3)\left((2n^{2}+14n-2k^{2}-12k+5)+\frac{22}{7\alpha} \right)}.\] (D.1) Case 1: \(\lambda_{7}\leq\lambda_{k+1}\leq\frac{d\lambda_{n}}{2d_{0}}\). In this case, \(6\leq k\leq\frac{n}{2}-1\), hence \(B_{k+1}>B_{k}\), and by (D.1), one can show that \(\frac{B_{k+1}-B_{k}}{B_{k+1}+B_{k}}\) is decreasing in \(k\), so we have \[\frac{B_{k+1}-B_{k}}{B_{k+1}+B_{k}}\leq\frac{\left(n^{2}+7n-231\right)+\frac{11 }{7\alpha}}{9\left(\left(2n^{2}+14n-139\right)+\frac{22}{7\alpha}\right)}<0.054.\] (D.2) Moreover, \(a\leq\frac{d_{0}}{\lambda_{n}}\leq\frac{d}{2\lambda_{k}}\). 
so (4.9) becomes \[R_{k,1} \leq B_{k}\Big{(}(a_{+}-\frac{\lambda_{k}}{d}(1-b)a_{+}^{2})^{2}+( a_{-}-\frac{\lambda_{k}}{d}(1-b)a_{-}^{2})^{2}\Big{)}^{2}\] \[+B_{k+1}\Big{(}(a_{+}-\frac{\lambda_{k+1}}{d}(1-b)a_{+}^{2})^{2} +(a_{-}-\frac{\lambda_{k+1}}{d}(1-b)a_{-}^{2})^{2}\Big{)}^{2}\] \[=B_{k}\Big{(}(2\lambda^{2}-2\lambda+1)a^{2}-\frac{2\lambda_{k}}{d }(1-b)(1-3\lambda+3\lambda^{2})a^{3}+(\frac{\lambda_{k}}{d}(1-b))^{2}(\lambda^ {4}+(1-\lambda)^{4})a^{4}\Big{)}\] \[+B_{k+1}\Big{(}(2\lambda^{2}-2\lambda+1)a^{2}-\frac{2\lambda_{k+1 }}{d}(1-b)(1-3\lambda+3\lambda^{2})a^{3}+(\frac{\lambda_{k+1}}{d}(1-b))^{2}( \lambda^{4}+(1-\lambda)^{4})a^{4}\Big{)},\] and (4.11) becomes \[R_{k,3}\leq 2(B_{k+1}-B_{k})\lambda(1-\lambda)a^{2}.\] Combined with (4.10), we can write \[f_{k,a}(\lambda) =B_{k}\Big{(}(2\lambda^{2}-2\lambda+1)-\frac{2\lambda_{k}}{d}(1- b)(1-3\lambda+3\lambda^{2})a+(\frac{\lambda_{k}}{d}(1-b))^{2}(\lambda^{4}+(1- \lambda)^{4})a^{2}\Big{)}\] \[+B_{k+1}\Big{(}(2\lambda^{2}-2\lambda+1)-\frac{2\lambda_{k+1}}{d }(1-b)(1-3\lambda+3\lambda^{2})a+(\frac{\lambda_{k+1}}{d}(1-b))^{2}(\lambda^{4 }+(1-\lambda)^{4})a^{2}\Big{)}\] \[+(2c_{k}(B_{k}+B_{k+1})+2(B_{k+1}-B_{k}))\lambda(1-\lambda).\] For \(\frac{1}{2}\leq\lambda<1\), direct computation yields \[\frac{f_{k,a}(1)-f_{k,a}(\lambda)}{2(\lambda-\lambda^{2})} =B_{k}\Big{(}1-\frac{3\lambda_{k}}{d}(1-b)a+(\frac{\lambda_{k}}{d} (1-b))^{2}a^{2}(\lambda^{2}-\lambda+2)\Big{)}\] \[+B_{k+1}\Big{(}1-\frac{3\lambda_{k+1}}{d}(1-b)a+(\frac{\lambda_{ k+1}}{d}(1-b))^{2}a^{2}(\lambda^{2}-\lambda+2)\Big{)}-c_{k}(B_{k+1}+B_{k})-(B_{k+1}-B _{k})\] \[\geq B_{k}(1-3w_{k}+\frac{7}{4}w_{k}^{2})+B_{k+1}(1-3w_{k+1}+ \frac{7}{4}w_{k+1}^{2})-c_{k}(B_{k+1}+B_{k})-(B_{k+1}-B_{k}),\] where \[w_{j}=\frac{\lambda_{j}}{d}(1-b)a\leq\frac{1-b}{2}<\frac{6}{7},\ j=k,k+1.\] So by Corollary 3.9 and (D.2) \[\frac{f_{k,a}(1)-f_{k,a}(\lambda)}{2(\lambda-\lambda^{2})} \geq(\frac{7b^{2}+10b-1}{16}-c_{k})(B_{k}+B_{k+1})-(B_{k+1}-B_{k})\] \[\geq(B_{k}+B_{k+1})(0.191-0.12-0.054)>0.\] Case 2: \(\frac{\lambda_{n}}{4}=\frac{d\lambda_{n}}{2d_{0}}<\lambda_{k+1}\leq\lambda_{n}\), but \(a_{-}\leq\frac{d}{2\lambda_{k+1}}\). In this case, \(\frac{n}{2}-2\leq k\leq n-1\), \(\lambda\geq 1-\frac{d}{2\lambda_{k}a}\), and we have \[R_{k,1} \leq B_{k}\left((ba_{+}+\frac{d}{4\lambda_{k}}(1-b))^{2}+(a_{-}- \frac{\lambda_{k}}{d}(1-b)a_{-}^{2})^{2}\right)\] \[+B_{k+1}\left((ba_{+}+\frac{d}{4\lambda_{k+1}}(1-b))^{2}+(a_{-}- \frac{\lambda_{k+1}}{d}(1-b)a_{-}^{2})^{2}\right).\] Since the sign of \(B_{k+1}-B_{k}\) is unknown, we need to discuss both cases separately. 
If \(B_{k+1}\leq B_{k}\), then by (D.1), \[\frac{B_{k}-B_{k+1}}{B_{k}+B_{k+1}}\leq\frac{7\alpha(2n+5)-11}{(n+2)(21\alpha(2n +5)+22)}<\frac{1}{3},\] (D.3) and we have \[R_{k,3}\leq 2(B_{k}-B_{k+1})m_{0}(1-\lambda)a^{2}.\] Combined with (4.10), for \(\frac{1}{2}\leq\lambda<1\), we have \[\phi(\lambda):=\frac{f_{k,a}(1)-f_{k,a}(\lambda)}{(1-\lambda)a^{ 2}} =B_{k}\left((1+\lambda)b^{2}+\frac{d}{2\lambda_{k}a}(1-b)b-(1- \lambda)(1-\frac{\lambda_{k}}{d}(1-b)(1-\lambda)a)^{2}\right)\] \[+B_{k+1}\left((1+\lambda)b^{2}+\frac{d}{2\lambda_{k+1}a}(1-b)b-( 1-\lambda)(1-\frac{\lambda_{k+1}}{d}(1-b)(1-\lambda)a)^{2}\right)\] \[-2c_{k}(B_{k}+B_{k+1})\lambda-2(B_{k}-B_{k+1})m_{0}.\] Then \[\phi^{\prime}(\lambda) =B_{k}\left(b^{2}+\left(1-\frac{\lambda_{k}}{d}(1-b)(1-\lambda)a \right)\left(1-\frac{3\lambda_{k}}{d}(1-b)(1-\lambda)a\right)\right)\] \[+B_{k+1}\left(b^{2}+\left(1-\frac{\lambda_{k+1}}{d}(1-b)(1- \lambda)a\right)\left(1-\frac{3\lambda_{k+1}}{d}(1-b)(1-\lambda)a\right) \right)-2c_{k}(B_{k}+B_{k+1}).\] By assumption \(\frac{\lambda_{k}}{d}(1-b)(1-\lambda)a\leq\frac{\lambda_{k+1}}{d}(1-b)(1- \lambda)a=\frac{\lambda_{k+1}}{d}(1-b)a_{-}\leq\frac{1-b}{2}\), so by Corollary 3.9, \[\phi^{\prime}(\lambda)\geq(B_{k}+B_{k+1})\left(b^{2}-2c_{k}+\frac{(b+1)(3b-1)} {4}\right)>(B_{k}+B_{k+1})\left(0.105-2c_{k}\right)>0.\] Since \(\lambda\geq 1-\frac{d}{2\lambda_{k+1}a}\), we need to discuss the following two cases: If \(\frac{d}{2\lambda_{k+1}a}\geq\frac{1}{2}\), then the lower bound of \(\lambda\) is \(\frac{1}{2}\). Moreover, \(\lambda_{k+1}\leq\frac{d}{a}\leq\frac{\lambda_{n+4}d}{d_{0}}=\frac{\lambda_{n +4}}{2}\), so \(k\leq\frac{n}{\sqrt{2}}+2\). Consequently, from (D.1) it's easy to check that \[\frac{B_{k}-B_{k+1}}{B_{k}+B_{k+1}}<0.008,\] Therefore by Lemma 3.5 and Corollary 3.9, we have \[\phi(\lambda)\geq\phi(\frac{1}{2}) =B_{k}\left(\frac{3}{2}b^{2}+\frac{d}{2\lambda_{k}a}(1-b)b-\frac{ 1}{2}\left(1-\frac{\lambda_{k}}{2d}(1-b)a\right)^{2}\right)\] \[+B_{k+1}\left(\frac{3}{2}b^{2}+\frac{d}{2\lambda_{k+1}a}(1-b)b- \frac{1}{2}\left(1-\frac{\lambda_{k+1}}{2d}(1-b)a\right)^{2}\right)\] \[-(B_{k}+B_{k+1})c_{k}-2m_{0}(B_{k}-B_{k+1})\] \[\geq(B_{k}+B_{k+1})(0.02746-c_{k}-0.016m_{0})\] \[\geq 0.\] If \(\frac{d}{2\lambda_{k+1}a}\leq\frac{1}{2}\), then the lower bound of \(\lambda\) is \(1-\frac{d}{2\lambda_{k+1}a}\), so by (D.3), Lemma 3.5 and Corollary 3.9, we have \[\phi(\lambda)\geq\phi(1-\frac{d}{2\lambda_{k+1}}a) =B_{k}\left((2-\frac{d}{2\lambda_{k+1}a})b^{2}+\frac{d}{2\lambda_ {k}a}(1-b)b-\frac{d}{2\lambda_{k+1}a}(1-\frac{\lambda_{k}}{\lambda_{k+1}}\frac {1-b}{2})^{2}\right)\] \[+B_{k+1}\left((2-\frac{d}{2\lambda_{k+1}a})b^{2}+\frac{d}{2 \lambda_{k+1}a}(1-b)b-\frac{d}{2\lambda_{k+1}a}(1-\frac{1-b}{2})^{2}\right)\] \[-(B_{k}+B_{k+1})c_{k}-2m_{0}(B_{k}-B_{k+1})\] \[\geq(B_{k}+B_{k+1})\left(\frac{3}{2}b^{2}+\frac{(1-b)b}{2}-\frac {1}{2}(1-\frac{\lambda_{k}}{\lambda_{k+1}}\frac{1-b}{2})^{2}-c_{k}-2m_{0} \frac{B_{k}-B_{k+1}}{B_{k}+B_{k+1}}\right)\] \[\geq(B_{k}+B_{k+1})(0.05-c_{k}-\frac{2}{3}m_{0})\] \[>0.\] If \(B_{k}<B_{k+1}\), then \(\frac{n}{2}-2\leq k\leq\frac{n}{\sqrt{3}}\), so \[\frac{B_{k+1}-B_{k}}{B_{k}+B_{k+1}}\leq\frac{7\alpha\left(n^{2}+16n+36\right)+ 44}{(n+2)\left(21\alpha\left(n^{2}+8n+14\right)+44\right)}\leq 0.004.\] (D.4) In this case, we have \[R_{k,3}\leq 2(B_{k+1}-B_{k})\lambda(1-\lambda)a^{2}.\] Then one can go through the same argument as before to prove that \(f_{k,a}(1)\geq f_{k,a}(\lambda)\) for \(\frac{1}{2}\leq\lambda\leq 1\). The details are omitted. 
Case 3: \(\frac{\lambda_{n}}{4}=\frac{d\lambda_{n}}{2d_{0}}<\lambda_{k+1}\leq\lambda_{n}\), and \(a_{-}>\frac{d}{2\lambda_{k+1}}\). In this case \(4(1-\lambda)\lambda_{k+1}>\lambda_{n}\), so \(\frac{1}{2}\leq\lambda<\frac{3}{4}\), and \(2\lambda_{k+1}>\lambda_{n}\). Hence \(k\geq\frac{n-2}{\sqrt{2}}\) and \(B_{k}\geq B_{k+1}\). Now (4.9) and (4.11) becomes \[R_{k,1} \leq B_{k}\left((ba_{+}+\frac{d(1-b)}{4\lambda_{k}})^{2}+(ba_{-}+ \frac{d(1-b)}{4\lambda_{k}})^{2}\right)+B_{k+1}\left((ba_{+}+\frac{d(1-b)}{4 \lambda_{k}})^{2}+(ba_{-}+\frac{d(1-b)}{4\lambda_{k}})^{2}\right)\] \[=B_{k}\left((2\lambda^{2}-2\lambda+1)b^{2}a^{2}+\frac{4ab(1-b)}{ \lambda_{k}}+8(\frac{1-b}{\lambda_{k}})^{2}\right)\] \[+B_{k+1}\left((2\lambda^{2}-2\lambda+1)b^{2}a^{2}+\frac{4ab(1-b) }{\lambda_{k+1}}+8(\frac{1-b}{\lambda_{k+1}})^{2}\right)\] and \[R_{k,3}\leq 2(B_{k}-B_{k+1})m_{0}(1-\lambda)a^{2}\] respectively. With the help of (4.10), after some computations, we deduce that \[\frac{f_{k,a}(1)-f_{k,a}(\lambda)}{a^{2}} =B_{k}\left((2\lambda-2\lambda^{2})b^{2}-4(\frac{1-b}{\lambda_{k}a })^{2}\right)+B_{k+1}\left((2\lambda-2\lambda^{2})b^{2}-4(\frac{1-b}{\lambda_ {k+1}a})^{2}\right)\] \[-2(B_{k}+B_{k+1})c_{k}\lambda(1-\lambda)-2(B_{k}-B_{k+1})m_{0}(1- \lambda).\] It's easy to see that for fixed \(k\), the above function is increasing in \(\lambda\), so \[\frac{f_{k,a}(1)-f_{k,a}(\lambda)}{a^{2}} \geq B_{k}\left(\frac{1}{2}b^{2}-4(\frac{1-b}{\lambda_{k}a})^{2} \right)+B_{k+1}\left(\frac{1}{2}b^{2}-4(\frac{1-b}{\lambda_{k+1}a})^{2}\right)- \frac{1}{2}(B_{k}+B_{k+1})c_{k}-(B_{k}-B_{k+1})m_{0}\] \[\geq(B_{k}+B_{k+1})\left(\frac{b^{2}-c_{k}}{2}-4(\frac{1-b}{ \lambda_{k}a})^{2}-m_{0}\frac{B_{k}-B_{k+1}}{B_{k}+B_{k+1}}\right)\] \[\geq(B_{k}+B_{k+1})\left(0.04-\frac{1.7956}{\lambda_{k}^{2}a^{2}} -0.04\frac{B_{k}-B_{k+1}}{B_{k}+B_{k+1}}\right)\] \[\geq(B_{k}+B_{k+1})\left(0.04-\left(0.0071(\frac{\lambda_{n+4}}{ \lambda_{k}})^{2}+0.04\frac{B_{k}-B_{k+1}}{B_{k}+B_{k+1}}\right)\right).\] Direct computation shows that \(0.0071\frac{\lambda_{n+4}}{\lambda_{k}}+0.04\frac{B_{k}-B_{k+1}}{B_{k}+B_{k+1}}\) is decreasing in \(k\) when \(\frac{n-1}{\sqrt{2}}\leq k\leq n\), therefore \[\frac{f_{k,a}(1)-f_{k,a}(\lambda)}{a^{2}}\geq(B_{k}+B_{k+1})(0.04-0.035)>0.\] To sum up, by now we have proved Proposition 4 when \(n\geq 65\). When \(n<65\), above arguments fail since \(c_{k}\) (hence \(R_{k,2}\)) is no longer small enough. In this case, we keep \(R_{k,2}\) aside and consider only \(R_{k,1}\) and \(R_{k,3}\). Then the same argument as above shows that \(R_{k,3}\) can be absorbed, which completes the proof. The details are omitted. ## Appendix E Proof for small \(n\) In the proof of Corollary 3.9 and Theorem 1.1, we argue for \(n\) sufficiently large. In this appendix, we give the numerical data to prove the corresponding cases when \(n\) is small. We first prove Corollary 3.9 for small \(n\) Proof of Corollary 3.9 for \(30\leq n\leq 428\).: We can use Matlab to calculate the values of \(c_{n}\)'s, which are listed as scatter diagrams as follows. Then we give the proof of Theorem 1.1 when \(n\) is small. Proof of Theorem 1.1 for \(n<10000\).: We follow the argument in Section 4. We only prove for \(n\geq 65\) (For the case when \(5\leq n\leq 61\), we can use similar methods to run the induction procedure). 
Applying Proposition 4.1 and plugging it into (4.8), we have \[0\leq \frac{256}{35}(7-\frac{1}{\alpha})(\frac{27}{7\alpha}-15-\lambda_{ n+1})\frac{1}{\alpha}(1-\frac{7}{6}a)+\frac{128}{7}(\lambda_{n+1}-6+\frac{71}{7 \alpha})\frac{1}{\alpha^{2}}(1-\frac{7}{6}a)^{2}\] \[+ \frac{176}{63}\alpha(\lambda_{2}+4)(\lambda_{2}+6)b_{2}^{2}\] \[+ \frac{32}{9\alpha^{2}}\sum_{k=2}^{\frac{n-3}{2}}(\lambda_{n+1}- \lambda_{k}+\frac{11}{7\alpha})(2k+5)(1-\frac{1-b}{d}\lambda_{k}a)^{2}a^{2}\] \[+ \frac{32}{9\alpha^{2}}\sum_{k=\frac{n-1}{2}}^{n}(\lambda_{n+1}- \lambda_{k}+\frac{11}{7\alpha})(2k+5)(ba+(1-b)\frac{d}{4\lambda_{k}})^{2}.\] \[\leq -\frac{512}{7}(\lambda_{n+1}+\frac{51}{7})(1-\frac{7}{6}a)+\frac {512}{7}(\lambda_{n+1}+\frac{100}{7})(1-\frac{7}{6}a)^{2}+\frac{22528}{63 \alpha}a^{2}\] \[+ \frac{128}{9}\sum_{k=2}^{\frac{n-3}{2}}(\lambda_{n+1}-\lambda_{k }+\frac{22}{7})(2k+5)[(1-\frac{1-b}{d}\lambda_{k}a)^{2}+\frac{1}{2}c_{k} \chi_{\{5\leq n\leq 61\}}]a^{2}\] \[+ \frac{128}{9}\sum_{k=\frac{n-1}{2}}^{n}(\lambda_{n+1}-\lambda_{k }+\frac{22}{7})(2k+5)[(ba+(1-b)\frac{d}{4\lambda_{k}})^{2}+\frac{1}{2}c_{k} \chi_{\{5\leq n\leq 61\}}]a^{2}\] \[= :\tilde{g}_{n}(a).\] (E.1) To obtain a contradiction, it suffices to show that \(\tilde{g}_{n}(a)\) is negative for \(\frac{16}{\lambda_{n+4}}<a\leq\frac{16}{\lambda_{n}}\), for any \(n<10000\) with \(n\equiv 1\ (\text{mod}\ 4)\). Note that \(\tilde{g}_{n}(a)\) is a parabola of \(a\) with positive constant term. It suffices to show \(\tilde{g}_{n}(\frac{16}{\lambda_{n+4}})\) and \(\tilde{g}_{n}(\frac{16}{\lambda_{n}})\) are negative. Using Matlab, we obtain the following scatter diagrams for the above two quantities and thus we are done. ## Acknowledgements The research of J. Wei was partially supported by NSERC of Canada. The research of C.Gui was partially supported by NSF award DMS-2155183 and a UMDF Professorial Fellowship of University of Macau.
2308.13124
Thermal effect on microwave pulse driven magnetization switching of Stoner particle
Recently it has been demonstrated that the cosine chirp microwave pulse (CCMP) is capable of achieving fast and energy-efficient magnetization reversal of a nanoparticle at zero temperature. Here we investigate the effect of a finite temperature, $T$, on the CCMP-driven magnetization reversal within the framework of the stochastic Landau-Lifshitz-Gilbert equation. At finite temperature we still obtain CCMP-driven fast and energy-efficient reversal, and we estimate the maximal temperature, $T_{max}$, at which the magnetization reversal remains valid. $T_{max}$ increases with increasing nanoparticle cross-sectional area/shape anisotropy up to a certain value, and afterward $T_{max}$ decreases with a further increase of the cross-sectional area/shape anisotropy. This is because the demagnetization/shape-anisotropy field opposes the magnetocrystalline anisotropy, i.e., it reduces the energy barrier which separates the two stable states. For smaller cross-sectional area/shape anisotropy, the controlling parameters of the CCMP show a decreasing trend with temperature. We also find that with increasing easy-plane shape anisotropy, the required initial frequency of the CCMP is significantly reduced. For larger nanoparticle volumes, the parameters of the CCMP remain constant over a wide range of temperature, which is desirable for device applications. These findings might therefore be useful for realizing CCMP-driven fast and energy-efficient magnetization reversal under realistic conditions.
S. Chowdhury, M. A. S. Akanda, M. A. J. Pikul, M. T. Islam, Tai Min
2023-08-25T00:30:32Z
http://arxiv.org/abs/2308.13124v1
# Thermal effect on microwave pulse driven magnetization switching of Stoner particle ###### Abstract Recently it has been demonstrated that the cosine chirp microwave pulse (CCMP) is capable of achieving fast and energy-efficient magnetization reversal of a nanoparticle at zero temperature. Here we investigate the effect of a finite temperature, \(T\), on the CCMP-driven magnetization reversal within the framework of the stochastic Landau-Lifshitz-Gilbert equation. At finite temperature we still obtain CCMP-driven fast and energy-efficient reversal, and we estimate the maximal temperature, \(T_{max}\), at which the magnetization reversal remains valid. \(T_{max}\) increases with increasing nanoparticle cross-sectional area/shape anisotropy up to a certain value, and afterward \(T_{max}\) decreases with a further increase of the cross-sectional area/shape anisotropy. This is because the demagnetization/shape-anisotropy field opposes the magnetocrystalline anisotropy, i.e., it reduces the energy barrier which separates the two stable states. For smaller cross-sectional area/shape anisotropy, the controlling parameters of the CCMP show a decreasing trend with temperature. We also find that with increasing easy-plane shape anisotropy, the required initial frequency of the CCMP is significantly reduced. For larger nanoparticle volumes, the parameters of the CCMP remain constant over a wide range of temperature, which is desirable for device applications. These findings might therefore be useful for realizing CCMP-driven fast and energy-efficient magnetization reversal under realistic conditions. ## I Introduction Achieving swift and efficient magnetization switching of a single nanoparticle has attracted much attention because of its non-volatility [1; 2; 3] and fast data-processing capability [4]. In the last decades, several controlling stimuli, for instance magnetic fields, microwave fields [5; 6], spin-polarized electric currents and spin-orbit torques [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21], have been employed to drive magnetization reversal. However, each of these methods faces specific challenges in memory-device applications. In particular, the magnetic-field approach requires a large field and a long switching time [5], and localizing the field to a single bit-cell is a bottleneck. On the other hand, a spin-polarized current can induce magnetization switching through spin-transfer torque (STT) and/or spin-orbit torque (SOT) [22; 23; 24; 25; 26; 27]. However, for STT-MRAM-based devices the threshold current density is large and generates Joule heat, which in turn causes device-lifetime and reliability issues [28; 29; 30; 31; 32; 33; 34]. For SOT-MRAM-based devices, the main hindrance is that two transistors are required for each bit-cell, which enlarges the bit-cell area [35]. Subsequently, microwave fields with constant or time-dependent frequency profiles have been employed to drive magnetization switching at zero temperature [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. In particular, without considering thermal effects, a recent study [50] reported that swift and energy-efficient magnetization switching is obtained by a cosine chirped microwave pulse (cosine CMP).
This is because the frequency variation of the cosine CMP closely matches the magnetization precession frequency, which leads to efficient stimulated energy absorption (emission) by the magnetization before (after) it crosses the energy barrier. In practice, however, temperature is present everywhere and devices operate at room temperature. From a practical point of view it is therefore important to verify whether the cosine CMP still drives magnetization switching efficiently at finite temperature. In addition, the study [50] reported that increasing the easy-plane shape anisotropy (i.e., increasing the demagnetization field) makes the magnetization switching faster, which is desirable for device applications. However, the increase of the demagnetization field (which opposes the magnetocrystalline anisotropy) reduces the height of the energy barrier originating from the anisotropy, which may cause thermal-instability issues at room/operating temperature. Thus, at operating temperature there is a possibility of spontaneous magnetization switching, which may increase the error rate and is clearly undesired. Therefore, in this study we include a finite/room temperature in the system and relax it; afterward we investigate the cosine CMP-driven magnetization switching to check whether the switching is robust at operating (room) temperature and how the parameters of the cosine CMP (i.e., the optimal initial frequency and field amplitude) change with temperature. Interestingly, this study finds that the cosine CMP-driven swift and energy-efficient switching remains robust even at finite/room temperature. For the considered nanoparticles/Stoner particles, we also estimate the maximal temperature, \(T_{\text{m}}\), at which the magnetization switching is valid. \(T_{\text{m}}\) increases with increasing nanoparticle volume (by increasing the cross-sectional area, \(A=xy\)) or shape-anisotropy coefficient up to a certain value, and then decreases significantly with a further increase of the cross-sectional area/shape-anisotropy coefficient. Here the demagnetization/shape anisotropy reduces the effective uniaxial anisotropy, i.e., it reduces the energy barrier which separates the two stable states. Still, the cosine CMP-driven magnetization switching remains valid over a wide range of operating temperatures above room temperature. For the smaller nanoparticle/Stoner-particle volumes, the controlling parameters of the cosine CMP, i.e., the minimal amplitude \(H_{\text{mw}}\) (T), the optimal rate \(R\) (GHz), and the minimal initial frequency \(f_{0}\) (GHz), show a decreasing trend with temperature. We also find that with increasing easy-plane shape anisotropy, the required initial frequency of the cosine CMP is significantly reduced. For the larger nanoparticle volumes, the parameters of the cosine CMP remain constant over a wide range of temperature, which is desirable for device applications. Note that this strategy might also be employed to switch the magnetization of other materials, for instance synthetic antiferromagnetic/ferrimagnetic nanoparticles, by an in-plane cosine chirped current pulse via spin-orbit torque. Therefore, the above findings might be useful for realizing cosine CMP-driven fast and energy-efficient magnetization switching in practical spintronic devices under realistic conditions.
## II Model and method We consider a single-domain magnetic nanoparticle/Stoner particle of volume \(V=Ah\), where \(A\) is the cross-sectional area and \(h\) is the thickness, whose uniaxial anisotropy is directed along the \(z\) axis, held at temperature \(T\), as shown in Figure 1(a). The nanoparticle size is chosen such that the magnetization can be treated as a macrospin, represented by the unit vector \(\mathbf{m}\) with saturation magnetization \(M_{s}\). The demagnetization field, which opposes the magnetocrystalline anisotropy field, is approximated by an easy-plane shape anisotropy. The shape anisotropy field is written as \(\mathbf{H}_{\text{shape}}=K_{\text{shape}}m_{z}\hat{\mathbf{z}}\), where \(K_{\text{shape}}=-\mu_{0}(N_{z}-N_{x})M_{s}\) is the shape anisotropy coefficient, \(N_{z}\) and \(N_{x}\) are demagnetization factors [51; 52], and \(\mu_{0}=4\pi\times 10^{-7}\) N/A\({}^{2}\) is the vacuum magnetic permeability. The total anisotropy is dominated by the magnetocrystalline anisotropy \(\mathbf{H}_{\text{ani}}=K_{\text{ani}}m_{z}\hat{\mathbf{z}}\), and hence the magnetic particle possesses two stable states (i.e., \(\mathbf{m}\) aligned parallel to \(\hat{\mathbf{z}}\) or \(-\hat{\mathbf{z}}\)). At finite temperature, the magnetization dynamics under the applied circularly polarized cosine CMP are governed by the stochastic Landau-Lifshitz-Gilbert (sLLG) equation [53] \[\frac{d\mathbf{m}}{dt}=-\gamma\mathbf{m}\times[\mathbf{H}_{\text{tot}}+\mathbf{H}_{\text{th}}]+\alpha\mathbf{m}\times\frac{d\mathbf{m}}{dt}, \tag{1}\] where \(\alpha\) and \(\gamma\) are the Gilbert damping constant and the gyromagnetic ratio, respectively. The total magnetic field \(\mathbf{H}_{\text{tot}}\) consists of the effective field, which comprises the exchange field \(\frac{2A_{\text{ex}}}{M_{s}}\nabla^{2}\mathbf{m}\) and the effective easy-axis anisotropy field along the \(z\) direction, and the external microwave field \(\mathbf{H}_{\text{mw}}\). The stochastic thermal field \(\mathbf{H}_{\text{th}}\) originates from the finite temperature and satisfies the following relations, described by a Gaussian process [54; 55], \[\begin{split}\langle H_{\text{th},ip}(t)\rangle=0,\\ \langle H_{\text{th},ip}(t)H_{\text{th},jq}(t+\Delta t)\rangle=\frac{2\alpha k_{\text{B}}T}{\gamma M_{s}\Delta V}\delta_{ij}\delta_{pq}\delta(\Delta t),\end{split} \tag{2}\] where \(k_{B}\) is the Boltzmann constant, \(p\) and \(q\) label the Cartesian components of the thermal field, \(\Delta V\) is the volume of a single micromagnetic cell, and \(i\) and \(j\) label the micromagnetic cells. Depending on the temperature, the thermal/random field is generated as \[\mathbf{H}_{\text{th},i}=\vec{\eta}\sqrt{\frac{2\alpha k_{\text{B}}T}{\gamma M_{s}\Delta V\Delta t}} \tag{3}\] where \(\Delta t\) is the time step and \(\vec{\eta}\) is a random vector that changes at every time step and is drawn from a normal distribution with zero mean. Figure 1: (a) Schematic of a single-domain nanoparticle/Stoner particle with in-plane orthogonal microwave field components at finite temperature, where \(\mathbf{m}\) denotes the direction of the magnetization. (b) Time-dependent frequency profile of the cosine CMP, sweeping from \(+f_{0}\) to \(-f_{0}\). Without a microwave field and at zero temperature, there are two ground states (or energy minima), \(\mathbf{m}\parallel\hat{z}\) and \(\mathbf{m}\parallel-\hat{z}\), in which the magnetization prefers to stay due to the uniaxial anisotropy.
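To make Eqs. (1)-(3) concrete, a minimal macrospin sketch is given below. It is illustrative only (the results in this work are obtained with MuMax3, not with this code), and the deterministic-field function `h_det(m, t)` (anisotropy, shape and microwave contributions) is a hypothetical user-supplied routine that is not defined in the text.

```python
# Minimal macrospin sketch of Eqs. (1)-(3) (illustrative only; the results in this work are
# obtained with MuMax3, not with this code).  h_det(m, t) is a hypothetical user-supplied
# function returning the deterministic field (anisotropy + shape + microwave) in Tesla.
import numpy as np

KB = 1.380649e-23               # J/K
GAMMA = 1.76e11                 # rad/(T s)
ALPHA = 0.01                    # Gilbert damping
MS = 1.0e6                      # A/m
CELL_V = 12e-9 * 12e-9 * 8e-9   # m^3: here the 12x12x8 nm^3 particle is treated as one cell
DT = 1e-14                      # s, fixed time step quoted in the text

def thermal_field(temperature, rng):
    """Eq. (3): zero-mean Gaussian field with variance 2*alpha*kB*T/(gamma*Ms*V*dt), in Tesla."""
    sigma = np.sqrt(2.0 * ALPHA * KB * temperature / (GAMMA * MS * CELL_V * DT))
    return sigma * rng.standard_normal(3)

def llg_rhs(m, h):
    """Explicit (Landau-Lifshitz) form equivalent to the Gilbert form of Eq. (1)."""
    mxh = np.cross(m, h)
    return -GAMMA / (1.0 + ALPHA**2) * (mxh + ALPHA * np.cross(m, mxh))

def heun_step(m, t, temperature, h_det, rng):
    """One Heun (predictor-corrector) step; the same thermal field is used in both stages."""
    h_th = thermal_field(temperature, rng)
    k1 = llg_rhs(m, h_det(m, t) + h_th)
    m_pred = m + DT * k1
    m_pred /= np.linalg.norm(m_pred)
    k2 = llg_rhs(m_pred, h_det(m_pred, t + DT) + h_th)
    m_new = m + 0.5 * DT * (k1 + k2)
    return m_new / np.linalg.norm(m_new)

# Independent finite-temperature realizations (see below) correspond to repeating the time
# loop with different seeds, e.g. rng = np.random.default_rng(seed).
```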
The main task is to switch the magnetization from one energy minimum to the other. For this purpose, the study [50] reported that, at zero temperature, the cosine CMP efficiently achieves fast switching. The cosine CMP is constructed as \(\mathbf{H}_{\rm{mw}}=H_{\rm{mw}}\left[\cos\phi(t)\hat{\mathbf{x}}+\sin\phi(t)\hat{\mathbf{y}}\right]\), where \(H_{\rm{mw}}\) is the microwave amplitude and \(\phi(t)\) is the phase, given by \(\phi(t)=2\pi f_{0}\cos\left(2\pi Rt\right)t\), with \(R\) (in GHz) the controlling parameter. The instantaneous frequency of the cosine CMP, \(f(t)=\frac{1}{2\pi}\frac{d\phi}{dt}=f_{0}\left[\cos\left(2\pi Rt\right)-\left(2\pi Rt\right)\sin\left(2\pi Rt\right)\right]\), sweeps from \(+f_{0}\) to \(-f_{0}\), as shown in Figure 1(b). The physical picture behind the fast and energy-efficient magnetization switching is that the cosine CMP induces stimulated microwave absorption by (emission from) the macrospin before (after) it crosses the energy barrier at zero temperature; the detailed formulation of the rate of energy change is given in the appendix of Ref. [50]. In this study, we use the material parameters reported in Ref. [56], \(M_{s}=10^{6}\) A/m, \(H_{\rm{k}}=0.75\) T or \(3.75\times 10^{5}\) J/m\({}^{3}\), \(\gamma=1.76\times 10^{11}\) rad/(T \(\cdot\) s), exchange constant \(A_{\rm{ex}}=13\times 10^{-12}\) J/m, and \(\alpha=0.01\), to mimic Permalloy. This study presents a strategy that should also work for other materials, since Permalloy does not possess such a high anisotropy [57; 58]. The MuMax3 package [59] has been used to solve the stochastic LLG equation with the adaptive Heun solver. We study nanoparticles of increasing cross-sectional area \(A=xy\) while the thickness \(h=8\) nm is kept constant. Specifically, the nanoparticles have areas \(A_{1}=8\times 8\) nm\({}^{2}\), \(A_{2}=12\times 12\) nm\({}^{2}\), \(A_{3}=16\times 16\) nm\({}^{2}\), and \(A_{4}=22\times 22\) nm\({}^{2}\), so that the demagnetization field is induced opposite to the uniaxial anisotropy. We discretize the sample with a unit-cell size of \(2\times 2\times 2\) nm\({}^{3}\). For efficient and stable calculations [60; 61], we employ a fixed time step of \(10^{-14}\) s. In line with practical requirements, we consider the switched state to be reached once the magnetization crosses \(m_{z}=-0.7\). ## III Numerical results First, we investigate the magnetization switching of the \(12\times 12\times 8\) nm\({}^{3}\) nanoparticle driven by a cosine CMP with initial frequency \(f_{0}=17.70\) GHz and microwave amplitude \(H_{\rm{mw}}=0.045\) T, i.e., parameters similar to those of the study [50] at \(T=0\) K. Keeping \(f_{0}=17.70\) GHz and \(H_{\rm{mw}}=0.045\) T fixed, we then determine the optimal \(R\) (at which the switching is fastest), which is found to be 0.42 GHz for \(T=0\). Next, we include the temperature and relax the system; afterward, we apply the cosine CMP and study the switching at finite temperature. In practice, we simulate 30 independent switching events by varying the random-number/thermal seeds for the same material and cosine CMP parameters (\(f_{0}=17.70\) GHz, \(H_{\rm{mw}}=0.045\) T and \(R=0.39\) GHz) and take the ensemble average, as shown in Figure 2(b). In this way, we obtain each observation/data point of this study.
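For concreteness, the following short sketch evaluates the phase \(\phi(t)\), the two in-plane components of \(\mathbf{H}_{\rm mw}\), and the instantaneous frequency \(f(t)\) for the parameter values quoted above (\(f_{0}=17.7\) GHz, \(H_{\rm mw}=0.045\) T, \(R=0.39\) GHz); it is only an inspection aid, not part of the micromagnetic simulation.

```python
# Sketch of the cosine chirped microwave pulse used in the text (inspection aid only):
# phi(t) = 2*pi*f0*cos(2*pi*R*t)*t,  f(t) = f0*[cos(2*pi*R*t) - 2*pi*R*t*sin(2*pi*R*t)].
import numpy as np

def cosine_cmp(t, f0=17.7e9, rate=0.39e9, h_mw=0.045):
    """Return (Hx, Hy, f_inst) of the circularly polarized cosine CMP at time(s) t in seconds."""
    x = 2.0 * np.pi * rate * t
    phase = 2.0 * np.pi * f0 * np.cos(x) * t
    hx = h_mw * np.cos(phase)
    hy = h_mw * np.sin(phase)
    f_inst = f0 * (np.cos(x) - x * np.sin(x))
    return hx, hy, f_inst

# The instantaneous frequency starts at +f0 and reaches -f0 at t = 1/(2*R).
t = np.linspace(0.0, 1.0 / (2 * 0.39e9), 2001)
_, _, f_inst = cosine_cmp(t)
print(f_inst[0] / 1e9, f_inst[-1] / 1e9)   # approximately +17.7 and -17.7 GHz
```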
Now, for \(T=300\) K, we study the \(12\times 12\times 8\) nm\({}^{3}\) system driven by the cosine CMP with the parameters (\(f_{0}=17.70\) GHz, \(H_{\rm{mw}}=0.045\) T and \(R=0.42\) GHz), but the expected switching is not obtained, as shown by the red line in Figure 2(a). Then \(R\) is re-optimized (for fixed \(f_{0}=17.70\) GHz and \(H_{\rm{mw}}=0.045\) T), and for the optimal \(R=0.39\) GHz we find the fastest magnetization switching, shown by the blue line in Figure 2(a). Therefore, the optimal \(R\) depends on temperature, which is a crucial issue that needs to be addressed for realistic applications. It is reported that, with enlarging sample/nanoparticle volume, the thermal stability (\(E_{a}/E_{t}\)) increases, since the anisotropy energy \(E_{a}=KV\) (\(K\) is the effective anisotropy coefficient) is proportional to the volume and thus raises the energy barrier [62; 63; 64; 65; 66], whereas the thermal energy \(E_{t}=k_{B}T\) does not depend on the volume. In this study, we focus on samples with increasing cross-sectional area \(A=xy\) while the thickness \(h=8\) nm is kept constant, so that the volume (\(V=Ah\)) is enlarged. Specifically, the considered samples have \(A_{1}=8\times 8\), \(A_{2}=12\times 12\), \(A_{3}=16\times 16\) and \(A_{4}=22\times 22\) nm\({}^{2}\), while the thickness \(z=h=8\) nm is fixed. In these sample shapes, a demagnetization field/shape anisotropy (opposing the crystalline anisotropy) is induced by the magnetization of the nanoparticle. For the different \(A\), the magnitude of the shape anisotropy coefficient, \(K_{\rm{shape}}=\mu_{0}(N_{z}-N_{x})M_{s}\), has been calculated by finding the demagnetization factors \(N_{z}\) and \(N_{x}\) analytically [67; 51]. The resulting shape anisotropies \(K_{\rm{shape}}\) are 0 T, 0.17718 T, 0.3064 T and 0.4459 T for \(A_{1}\), \(A_{2}\), \(A_{3}\) and \(A_{4}\), respectively. \(\mathbf{H}_{\rm{shape}}\) opposes the anisotropy field \(\mathbf{H}_{\rm{ani}}\) and reduces both the stability and the resonance frequency \(f_{0}=\frac{\gamma}{2\pi}\left[H_{\rm{ani}}-\mu_{0}(N_{z}-N_{x})M_{\rm{s}}\right]\). Therefore, it is of interest to study how the increase of the shape anisotropy, as well as of the sample volume, affects the thermal stability and the parameters of the cosine CMP. Figure 2: (a) Time evolution of \(m_{z}\) of the \(12\times 12\times 8\) nm\({}^{3}\) nanoparticle induced by the cosine CMP with \(f_{0}=17.7\) GHz, \(H_{\rm{mw}}=0.045\) T, and the optimal \(R\) for different temperatures \(T\) (0 and 300 K). (b) At \(T=300\) K, the average magnetization switching (bold line) of the \(12\times 12\times 8\) nm\({}^{3}\) nanoparticle obtained from 30 independent switching events induced by the cosine CMP with \(H_{\rm{mw}}=0.045\) T, \(f_{0}=17.7\) GHz, and \(R=0.39\) GHz. For this purpose, we investigate the cosine CMP-driven magnetization switching of the patterned samples \(A_{1}\), \(A_{2}\), \(A_{3}\), and \(A_{4}\) by varying \(R\), and we estimate the optimal \(R\) for different \(T\). Figure 3(a) shows the change of the optimal \(R\) as a function of temperature \(T\) for the different samples. The optimal \(R\) shows a decreasing trend for lower \(K_{\rm shape}\) (smaller cross-sectional area), albeit with some fluctuation. However, for higher \(K_{\rm shape}\), the optimal \(R\) is larger (the magnetization switching is faster) and remains constant with \(T\).
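The quantities quoted above can be reproduced with a few lines of arithmetic. The following sketch evaluates the resonance frequency \(f_{0}=\frac{\gamma}{2\pi}\left[H_{\rm ani}-\mu_{0}(N_{z}-N_{x})M_{s}\right]\) from the quoted \(K_{\rm shape}\) values and forms a rough zero-temperature barrier-to-thermal-energy ratio at 300 K; the identification of the shape-anisotropy energy density with \(\frac{1}{2}K_{\rm shape}M_{s}\) is an assumption introduced here for illustration only and is not stated in the text.

```python
# Quick cross-check of the numbers quoted above (illustration only, not from the paper's code).
# Assumption made here: the shape-anisotropy energy density is estimated as (1/2)*K_shape*Ms,
# with K_shape in Tesla (K_shape = mu0*(Nz - Nx)*Ms), so the zero-temperature barrier is
# roughly (K_u - 0.5*K_shape*Ms)*V; this identification is only indicative.
import math

KB = 1.380649e-23      # J/K
GAMMA = 1.76e11        # rad/(T s)
MS = 1.0e6             # A/m
KU = 3.75e5            # J/m^3 (corresponds to H_k = 0.75 T)
H_K = 0.75             # T
THICKNESS = 8e-9       # m
T_ROOM = 300.0         # K

samples = {            # side length (m) and quoted K_shape (T)
    "A1 (8x8 nm^2)": (8e-9, 0.0),
    "A2 (12x12 nm^2)": (12e-9, 0.17718),
    "A3 (16x16 nm^2)": (16e-9, 0.3064),
    "A4 (22x22 nm^2)": (22e-9, 0.4459),
}

for name, (side, k_shape) in samples.items():
    volume = side * side * THICKNESS
    f0_res = GAMMA / (2.0 * math.pi) * (H_K - k_shape)    # resonance frequency f0, Hz
    barrier = (KU - 0.5 * k_shape * MS) * volume          # rough effective barrier, J
    print(f"{name}: f0 = {f0_res / 1e9:5.2f} GHz, "
          f"barrier/k_BT(300 K) = {barrier / (KB * T_ROOM):6.1f}")
```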
For each sample, there is a maximal temperature \(T_{\rm m}\) up to which the magnetization switching remains valid, indicated by the vertical dashed line in Figure 3(a). The obtained \(T_{\rm m}\) values are higher than room temperature, which is useful for device applications. In Figure 3(b), we explicitly plot \(T_{\rm m}\) and the optimal \(R\) at \(T_{\rm m}\) as functions of \(K_{\rm shape}\); it is found that \(R\) decreases up to a certain value of \(K_{\rm shape}\) and, for a further increase of \(K_{\rm shape}\), \(R\) increases significantly, i.e., the switching time becomes smallest. The reason can be attributed to the fact that the effective saturation magnetization \(M_{s}\) decreases with temperature because of spin-wave generation [68]. For \(A_{1}\), \(A_{2}\) and \(A_{3}\), i.e., smaller volumes, \(M_{s}\) decreases faster with \(T\) (see the study [69]), so the optimal \(R\) decreases as a function of \(T\) with some fluctuations; these fluctuations would likely be absent, and the change of the optimal \(R\) with \(T\) more consistent, if a larger ensemble average were taken. However, for the larger \(A_{4}\), i.e., larger volume, the decrease of \(M_{s}\) with \(T\) is not significant, so the optimal \(R\) remains constant. For \(A_{1}\), \(A_{2}\) and \(A_{3}\), \(T_{\rm m}\) increases because of the increase of the sample volume, which raises the thermal stability, while \(K_{\rm shape}\) is not yet dominant. But for the larger \(A_{4}\), \(K_{\rm shape}\) becomes dominant and reduces the uniaxial anisotropy as well as the energy barrier that separates the two stable states. This is why the thermal stability decreases, i.e., \(T_{\rm m}\) decreases significantly. Subsequently, by investigating the magnetization switching, we determine the optimal \(f_{0}\) of the cosine CMP as a function of \(T\), keeping the amplitude \(H_{\rm mw}=0.045\) T and the optimal \(R\) for the corresponding \(T\) fixed. Figure 4(a) shows the magnetization switching (black line) induced by the cosine CMP with \(H_{\rm mw}=0.045\) T, \(f_{0}=13.5\) GHz, and \(R=0.37\) GHz for \(T=0\) K, but at \(T=300\) K the magnetization switching (red line) is no longer obtained with these parameters. We then tune \(f_{0}\), keeping \(H_{\rm mw}=0.045\) T and the optimal \(R\) (at \(T=300\) K) fixed, and obtain magnetization switching (blue line) with \(f_{0}=13.10\) GHz. For the different samples (\(K_{\rm shape}\)), we then investigate the variation of \(f_{0}\) with \(T\), demonstrated in Figure 4(b). With increasing \(K_{\rm shape}\), the initial frequency \(f_{0}\) decreases significantly, which is expected, as \(\mathbf{H}_{\rm shape}\) acts in the direction opposite to \(\mathbf{H}_{\rm ani}\) and thus \(f_{0}\) decreases, as shown in Figure 4(a). For the samples with \(K_{\rm shape}=0\) T, 0.17718 T and 0.3064 T, the minimal \(f_{0}\) remains almost constant up to a maximal temperature \(T_{\rm m}\), which is useful for practical device realization. But for the sample with \(K_{\rm shape}=0.4459\) T, the optimal \(f_{0}\) shows a decreasing trend. This is because of the reduction of the effective \(M_{s}\) (since the demagnetization field is strong), which leads to a decrease of the intrinsic frequency \(\gamma M_{s}\) as well. As a result, all dynamics become slow, as though the time scale of the magnetization dynamics has been expanded.
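The \(T_{\rm m}\) and optimal-parameter values quoted in this section are extracted from ensemble-averaged trajectories using the procedure described earlier (30 thermal-seed realizations, switching declared once \(m_z=-0.7\)). A minimal post-processing sketch is shown below; the trajectory data here are synthetic placeholders, since the real \(m_z(t)\) traces would come from the MuMax3 output.

```python
import numpy as np

# Minimal post-processing sketch (not the authors' code). mz_runs stands in for
# m_z(t) from 30 stochastic runs on a common time axis t (synthetic data here).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0e-9, 2001)
mz_runs = np.tanh(-(t - 0.9e-9) / 0.2e-9) + 0.02 * rng.standard_normal((30, t.size))

mz_avg = mz_runs.mean(axis=0)            # ensemble average, as in Fig. 2(b)

def switching_time(mz, t, threshold=-0.7):
    """First time m_z crosses the switching criterion m_z = -0.7 (None if never)."""
    idx = np.argmax(mz <= threshold)
    return t[idx] if mz[idx] <= threshold else None

print("switching time of the averaged trajectory:", switching_time(mz_avg, t))
```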
Explicitly, \(T_{\rm m}\) and the optimal \(f_{0}\) at \(T_{\rm m}\) as functions of \(K_{\rm shape}\) are plotted in Figure 4(c); it is observed that \(T_{\rm m}\) initially increases up to a certain value and then decreases rapidly for a further increase of \(K_{\rm shape}\). This is because \(K_{\rm shape}\) opposes and reduces the effective anisotropy \(H_{k}\) and thus lowers the energy barrier. Lastly, we study how the minimally required \(H_{\rm mw}\) of the cosine CMP varies with temperature \(T\), keeping the optimal \(f_{0}\) and \(R\) at the corresponding \(T\) fixed. For \(K_{\rm shape}=0\), i.e., \(A_{1}=8\times 8\) nm\({}^{2}\), the variation of \(H_{\rm mw}\) (black line) is presented in Figure 5(a), and it is found that the required \(H_{\rm mw}\) remains almost constant up to a maximal temperature \(T_{\rm m}\), indicated by the vertical dashed line. Above \(T_{\rm m}\), the magnetization switching is not obtained. Similarly, for the other samples (\(K_{\rm shape}\)), we study the magnetization switching to find how the minimal \(H_{\rm mw}\) of the cosine CMP varies with temperature \(T\), keeping the optimal \(f_{0}\) and \(R\) at the corresponding \(T\) fixed; the results are also presented in Figure 5(a). It is noted that for higher \(T\) and lower \(K_{\rm shape}\), the required \(H_{\rm mw}\) is smaller. To be more explicit, \(T_{\rm m}\) and the minimal \(H_{\rm mw}\) at \(T_{\rm m}\) [\(H_{\rm mw}(T_{m})\)] are plotted in Figure 5(b), which shows that the thermal stability \(T_{\rm m}\) increases with \(K_{\rm shape}\) up to a certain value and then decreases with a further increase of \(K_{\rm shape}\). The reason is similar to before: it is the increase of the sample volume, rather than \(K_{\rm shape}\) (which is not yet dominant), that increases the thermal stability; but for the larger \(A_{4}\), \(K_{\rm shape}\) becomes dominant and reduces the uniaxial anisotropy, so the thermal stability decreases, i.e., \(T_{\rm m}\) decreases significantly. From the above study, we estimate the minimal \(H_{\rm mw}\), the minimal \(f_{0}\), and the optimal \(R\) at different \(T\) for the different \(K_{\rm shape}\), i.e., cross-sectional areas; their values at three different \(T\) (= 0 K, 300 K and \(T_{\rm m}\)) are summarized in Table 1.

## IV Discussions and Conclusions

The recent study [50] demonstrated that the cosine CMP is capable of driving fast and energy-efficient magnetization switching of a nanoparticle at \(T=0\). In contrast, the present study investigates the cosine CMP-driven magnetization switching of the nanoparticle at finite \(T\), since temperature is ubiquitous in nature. We found that cosine CMP-driven fast and energy-efficient switching remains valid at finite \(T\), even at room temperature, which is important in practice. For the smaller sample volumes, the required minimal amplitude, the optimal \(R\), and the minimal frequency decrease with temperature because of the decrease of the magnetization with \(T\). We also find that with increasing easy-plane shape anisotropy, the required initial frequency of the cosine CMP is significantly reduced. For the larger nanoparticle volumes, the parameters of the cosine CMP remain constant over a wide range of temperature, which is useful in practice. We note that a very recent study demonstrates how to generate such a cosine CMP in practice [70], and furthermore, several technologies [71; 72] are available to generate such a cosine CMP.
We suggest first simulating the magnetization switching of a real nanoparticle to obtain the optimal parameters; these parameters can then be employed for all identical nanoparticles of the device. In addition, this strategy might also be applicable to switching the magnetization of synthetic antiferromagnetic/ferrimagnetic nanoparticles. Therefore, the above findings may pave the way to realizing future-generation, highly dense and fast processing memory devices.

Figure 4: (a) The switching of \(m_{z}\) of the nanoparticle with \(A_{3}=16\times 16\) nm\({}^{2}\) driven by the cosine CMP (using \(H_{\rm mw}=0.045\) T and \(R=0.37\) GHz) with the optimal frequency \(f_{0}\) in GHz for different finite temperatures \(T\). (b) Temperature \(T\) dependence of \(f_{0}\) while \(H_{\rm mw}=0.045\) T and the optimal \(R\) at the corresponding finite temperature \(T\) are fixed. (c) The maximal temperature \(T_{\rm m}\) (red line) and \(f_{0}\) (black line) at \(T_{\rm m}\) as functions of the shape anisotropy \(K_{\rm shape}\).

Figure 5: (a) Temperature \(T\) dependence of \(H_{\rm mw}\) while the minimal \(f_{0}\) and \(R\) at the corresponding \(T\) are fixed. (b) \(T_{\rm m}\) (red line) and the minimal \(H_{\rm mw}\) (black line) at \(T_{\rm m}\) as functions of the shape anisotropy \(K_{\rm shape}\).

###### Acknowledgements.

M T Islam acknowledges the National Key R&D Program of China (Grant No. 2021YFA1202200), the Khulna University Research Cell (Grant No. KU/RC-04/2000-158), Khulna, Bangladesh, and the Ministry of Education (BANBEIS, Grant No. SD2019972).

## Appendix A Calculation of the rate of change of energy

The energy is a function of the magnetization and the microwave field, i.e., \(E({\bf m},\,{\bf H}_{\rm tot})\). The rate of change of energy is then

\[\frac{dE}{dt}=\frac{\partial E}{\partial{\bf m}}\cdot\frac{d{\bf m}}{dt}+\frac{\partial E}{\partial{\bf H}_{\rm tot}}\cdot\frac{d{\bf H}_{\rm tot}}{dt}\]
where \(\mathbf{H}_{\mathrm{tot}}=\mathbf{H}_{\mathrm{eff}}+\mathbf{H}_{\mathrm{mw}}\). Hamilton's equations of motion for the system give

\[\frac{\partial E}{\partial\mathbf{H}_{\mathrm{tot}}}=-\mathbf{m},\qquad\frac{\partial E}{\partial\mathbf{m}}=-\mathbf{H}_{\mathrm{tot}}.\]

Now, after substituting the expressions for \(\frac{d\mathbf{m}}{dt}\), \(\frac{\partial E}{\partial\mathbf{H}_{\mathrm{tot}}}\) and \(\frac{\partial E}{\partial\mathbf{m}}\) into the equation, we get

\[\frac{dE}{dt}=-\mathbf{H}_{\mathrm{tot}}\cdot\left(-\gamma\mathbf{m}\times\mathbf{H}_{\mathrm{tot}}-\alpha\gamma\,\mathbf{m}\times(\mathbf{m}\times\mathbf{H}_{\mathrm{tot}})\right)-\mathbf{m}\cdot\frac{d\mathbf{H}_{\mathrm{mw}}}{dt}\]
\[\Rightarrow\frac{dE}{dt}=\gamma\,\mathbf{m}\cdot(\mathbf{H}_{\mathrm{tot}}\times\mathbf{H}_{\mathrm{tot}})+\alpha\gamma\,\mathbf{H}_{\mathrm{tot}}\cdot\left(\mathbf{m}\times(\mathbf{m}\times\mathbf{H}_{\mathrm{tot}})\right)-\mathbf{m}\cdot\frac{d\mathbf{H}_{\mathrm{mw}}}{dt}\]
\[\Rightarrow\frac{dE}{dt}=\alpha\gamma\,\mathbf{H}_{\mathrm{tot}}\cdot\left(\mathbf{m}\times(\mathbf{m}\times\mathbf{H}_{\mathrm{tot}})\right)-\mathbf{m}\cdot\frac{d\mathbf{H}_{\mathrm{mw}}}{dt}\]
\[\Rightarrow\frac{dE}{dt}=\alpha\gamma\,(\mathbf{m}\times\mathbf{H}_{\mathrm{tot}})\cdot(\mathbf{H}_{\mathrm{tot}}\times\mathbf{m})-\mathbf{m}\cdot\frac{d\mathbf{H}_{\mathrm{mw}}}{dt}\]
\[\Rightarrow\frac{dE}{dt}=-\alpha\gamma\,(\mathbf{m}\times\mathbf{H}_{\mathrm{tot}})\cdot(\mathbf{m}\times\mathbf{H}_{\mathrm{tot}})-\mathbf{m}\cdot\frac{d\mathbf{H}_{\mathrm{mw}}}{dt}\]
\[\Rightarrow\frac{dE}{dt}=-\alpha\gamma\left|\mathbf{m}\times\mathbf{H}_{\mathrm{tot}}\right|^{2}-\mathbf{m}\cdot\frac{d\mathbf{H}_{\mathrm{mw}}}{dt}\]

We denote the second term on the right-hand side of the above equation by \(\hat{\mathscr{E}}\) and write it in explicit form as follows. The rate of change of the microwave field \(\mathbf{H}_{\mathrm{mw}}\) is

\[\dot{\mathbf{H}}_{\mathrm{mw}}=\frac{d\mathbf{H}_{\mathrm{mw}}}{dt}=\frac{d}{dt}\left(H_{\mathrm{mw}}\left[\cos\phi(t)\hat{\mathbf{x}}+\sin\phi(t)\hat{\mathbf{y}}\right]\right)=H_{\mathrm{mw}}\left[-\sin\phi(t)\hat{\mathbf{x}}+\cos\phi(t)\hat{\mathbf{y}}\right]\frac{d\phi}{dt}=H_{\mathrm{mw}}\left[-\sin\phi(t)\hat{\mathbf{x}}+\cos\phi(t)\hat{\mathbf{y}}\right]\left[\frac{\phi(t)}{t}+t\,\frac{d}{dt}\left(\frac{\phi(t)}{t}\right)\right]\]

and the in-plane components of the magnetization \(\mathbf{m}\) are recast as

\[m_{x}\hat{\mathbf{x}}+m_{y}\hat{\mathbf{y}}=\sin\theta(t)\cos\phi_{m}(t)\hat{\mathbf{x}}+\sin\theta(t)\sin\phi_{m}(t)\hat{\mathbf{y}},\]

where \(\theta(t)\) and \(\phi_{m}(t)\) are the polar and azimuthal angles of the magnetization \(\mathbf{m}\), respectively.
Now, substituting the expressions for \(\mathbf{m}\) and \(\dot{\mathbf{H}}_{\mathrm{mw}}\) into the expression for \(\hat{\mathscr{E}}\), we get

\[\hat{\mathscr{E}}=-\mathbf{m}\cdot\dot{\mathbf{H}}_{\mathrm{mw}}=-H_{\mathrm{mw}}\sin\theta(t)\left[-\sin\phi(t)\cos\phi_{m}(t)+\cos\phi(t)\sin\phi_{m}(t)\right]\left[\frac{\phi(t)}{t}+t\,\frac{d}{dt}\left(\frac{\phi(t)}{t}\right)\right]=-H_{\mathrm{mw}}\sin\theta(t)\sin\left(\phi_{m}(t)-\phi(t)\right)\left[\frac{\phi(t)}{t}+t\,\frac{d}{dt}\left(\frac{\phi(t)}{t}\right)\right]\]

Defining \(\Phi(t)=\phi_{m}(t)-\phi(t)\), we obtain

\[\hat{\mathscr{E}}=-H_{\mathrm{mw}}\sin\theta(t)\sin\Phi(t)\,\omega(t),\]

where \(\omega(t)=\frac{d\phi}{dt}=\frac{\phi(t)}{t}+t\,\frac{d}{dt}\left(\frac{\phi(t)}{t}\right)\) is the instantaneous angular frequency of the cosine CMP.
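As a quick sanity check on the frequency expressions used throughout, the sketch below (not from the paper) compares the closed-form instantaneous frequency \(f(t)=\frac{1}{2\pi}\frac{d\phi}{dt}\) quoted in Sec. II with a finite-difference derivative of the phase \(\phi(t)=2\pi f_0\cos(2\pi Rt)\,t\).

```python
import numpy as np

# Minimal numerical check (not from the paper): verifies that the closed-form
# instantaneous frequency f(t) = (1/(2*pi)) dphi/dt matches a finite-difference
# derivative of the phase phi(t) = 2*pi*f0*cos(2*pi*R*t)*t.
f0, R = 17.70e9, 0.39e9                      # Hz
t = np.linspace(0.0, 1.0e-9, 20001)
phi = 2*np.pi*f0*np.cos(2*np.pi*R*t)*t

f_closed = f0*(np.cos(2*np.pi*R*t) - (2*np.pi*R*t)*np.sin(2*np.pi*R*t))
f_numeric = np.gradient(phi, t) / (2*np.pi)

print(np.max(np.abs(f_closed - f_numeric)) / f0)   # tiny relative discrepancy
```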
2304.05493
Optimizing Data-driven Causal Discovery Using Knowledge-guided Search
Learning causal relationships solely from observational data often fails to reveal the underlying causal mechanisms due to the vast search space of possible causal graphs, which can grow exponentially, especially for greedy algorithms using score-based approaches. Leveraging prior causal information, such as the presence or absence of causal edges, can help restrict and guide the score-based discovery process, leading to a more accurate search. In the healthcare domain, prior knowledge is abundant from sources like medical journals, electronic health records (EHRs), and clinical intervention outcomes. This study introduces a knowledge-guided causal structure search (KGS) approach that utilizes observational data and structural priors (such as causal edges) as constraints to learn the causal graph. KGS leverages prior edge information between variables, including the presence of a directed edge, the absence of an edge, and the presence of an undirected edge. We extensively evaluate KGS in multiple settings using synthetic and benchmark real-world datasets, as well as in a real-life healthcare application related to oxygen therapy treatment. To obtain causal priors, we use GPT-4 to retrieve relevant literature information. Our results show that structural priors of any type and amount enhance the search process, improving performance and optimizing causal discovery. This guided strategy ensures that the discovered edges align with established causal knowledge, enhancing the trustworthiness of findings while expediting the search process. It also enables a more focused exploration of causal mechanisms, potentially leading to more effective and personalized healthcare solutions.
Uzma Hasan, Md Osman Gani
2023-04-11T20:56:33Z
http://arxiv.org/abs/2304.05493v2
# KGS: Causal Discovery Using Knowledge-guided Greedy Equivalence Search

###### Abstract

Learning causal relationships solely from observational data provides insufficient information about the underlying causal mechanism and the search space of possible causal graphs. As a result, often the search space can grow exponentially for approaches such as Greedy Equivalence Search (GES) that uses a score-based approach to search the space of equivalence classes of graphs. Prior causal information such as the presence or absence of a causal edge can be leveraged to guide the discovery process towards a more restricted and accurate search space. In this study, we present KGS, a knowledge-guided greedy score-based causal discovery approach that uses observational data and structural priors (causal edges) as constraints to learn the causal graph. KGS is a novel application of knowledge constraints that can leverage any of the following prior edge information between any two variables: the presence of a directed edge, the absence of an edge, and the presence of an undirected edge. We extensively evaluate KGS across multiple settings in both synthetic and benchmark real-world datasets. Our experimental results demonstrate that structural priors of any type and amount are helpful and guide the search process towards an improved performance and early convergence.

## 1 Introduction

Causal discovery (CD) deals with unfolding the causal relationships among entities in a system. The causes and their corresponding effects are inferred from data and represented using a causal graph that consists of nodes representing domain variables and arrows indicating the direction of the relationship between those variables. There exist different approaches to discover the causal structure from data, among which the constraint-based (Spirtes et al. (2000), Colombo et al. (2012)) and score-based approaches (Chickering (2002), Chickering and Meek (2015)) are the most prominent ones. Constraint-based approaches perform a series of conditional independence tests to find the causal relationships from data. In contrast, score-based approaches search over the space of possible graphs to find the graph that best describes the data. Often, these approaches use a search process such as greedy, A*, or another heuristic search (Kleinegesse et al. (2022)) combined with a score function such as BIC, AIC, MDL, BDeu, etc. (Meek (2013)) to score all the candidate graphs. The final outcome is one or more causal graphs with the highest score. Among the score-based approaches, a commonly used one is the Greedy Equivalence Search (GES) (Chickering (2002)), which searches over the space of equivalence classes of causal graphs. It is an iterative process that starts with an empty network and greedily adds edges one at a time until it reaches a local maximum. Then it continues with a series of greedy edge deletions as long as the score keeps improving. Although it is widely used, there are some major disadvantages of this search technique. First, the search space becomes exponential, since the number of possible candidate states increases rapidly with the number of variables (Chickering and Meek (2015), Chickering (2020)). This is because it considers all possible combinations of candidate search states. Hence, a large number of combinations must be considered as the number of variables keeps growing, resulting in an exponential growth of the search space (Chickering and Meek (2015)).
Even for sparse graphs, the search space is large enough to negatively impact the efficiency and performance of the algorithm. Second, the score needs to be computed for each possible graph, which makes it computationally expensive (Chickering and Meek (2015), Chickering (2020)). The situation worsens while traversing the search space of densely connected models, because it drastically increases the number of times the algorithm needs to compute the score, as it considers almost every node in the respective graph subspace (Chickering [2020]). Furthermore, the search process is typically repeated multiple times, which adds to the overall cost. To address these challenges in GES, existing causal information can be efficiently used during the discovery process. Often, in many domains, there exists some prior information about the causal relationships between some of the variables. This information may be obtained from multiple sources such as domain experts' opinions, research articles, randomized control trials (RCTs), systematic reviews, and other domain sources. Currently, most of the existing causal discovery approaches are data-driven, rely heavily on the data samples, and do not always consider the available causal knowledge (Kleinegesse et al. [2022]). However, researchers are now getting interested in augmenting structure learning with known information about causal edges and studying its effectiveness to mitigate various practical challenges of causal discovery solely from observational data (Chowdhury et al. [2023]). GES may also benefit when prior knowledge about some of the causal relationships is used in the form of additional constraints (Chickering [2002]). Knowledge constraints may restrict the search space by shifting the focus to a smaller set of potential solutions, thereby providing a better understanding of the context in which the search is being performed. This may lead towards early convergence, resulting in a lower computational cost by reducing the search time as well as the number of search states that need to be explored. This eventually lowers the number of score calculations too. To the best of our knowledge, there is no comprehensive study focusing on the application and impact of leveraging existing causal knowledge in a greedy score-based causal discovery approach. Therefore, in this work, we present **K**nowledge-guided **G**reedy Equivalence **S**earch (KGS), which leverages knowledge constraints in GES in a systematic way, and we study how these additional constraints help to guide the search process. We consider three types of causal-edge constraints: (i) Directed edges (\(\rightarrow\)), (ii) Forbidden edges (\(\not\rightarrow\)), and (iii) Undecided edges (\(-\)). These types of prior information about causal relationships are often available. Here, a _directed_ edge and a _forbidden_ edge refer to the _existence_ or _absence_ of a causal relationship, respectively, and an _undecided_ edge refers to the presence of a causal relationship whose _direction is unknown_. We evaluate the performance of KGS with the three types of edges as well as with a combination of all the edges on three synthetic and three benchmark real datasets. We also compare the performance of KGS with GES and present a comparative analysis of how variation in structural priors influences the search process in terms of both graph discovery and computational efficiency. We further study how varying the amount of leveraged knowledge affects the performance of the graph discovery.
To summarize, we aim to answer the following questions in this study: 1) How do knowledge constraints affect the learned graph's accuracy in greedy equivalence search? 2) Which type of knowledge constraint is the most effective? 3) How does varying the amount of knowledge influence the performance? and 4) Do knowledge constraints help to achieve an early convergence to the optimal causal graph and improve computational efficiency? Our **main contributions** are summarized below:

* We present KGS, a novel application of causal constraints that leverages available information about different types of edges (directed, forbidden, and undecided) in a greedy score-based causal discovery approach.
* We demonstrate how the search space, as well as the number of candidate graphs to score, can be reduced when different edge constraints are leveraged during a search over equivalence classes of causal networks.
* Through an extensive set of experiments in both synthetic and benchmark real-world settings, we show how different types of prior knowledge can impact the discovery process of graphs of varied densities (small, medium, and large networks).
* We also show the influence of changing the proportion of knowledge constraints on the structure recovery through a set of experiments.

## 2 Related Works

Conventional CD approaches include _constraint-based_ approaches that test for conditional independencies among variables (Spirtes et al. [2000], Spirtes [2001]) and _score-based_ approaches that score candidate causal graphs to find the one that best fits the data (Chickering [2002], Chickering and Meek [2015]). _Hybrid_ methods leverage conditional independence tests to learn the skeleton graph and prune a large portion of the search space, combined with a score-based search to find the causal DAG (Tsamardinos et al. [2006], Li et al. [2022]). Other common approaches include _function-based_ methods that represent each variable as a function of its parents and an independent noise term (Shimizu et al. [2006], Hoyer et al. [2008]). Some recent popular _gradient-based_ approaches use neural networks and a modified definition of the acyclicity constraint (Zheng et al. [2018], Yu et al. [2019], Lachapelle et al. [2019], etc.) that transforms the combinatorial search into a continuous optimization problem. These approaches do not consider leveraging any knowledge constraints in the search process. However, multiple studies (Fenton and Neil [2018], Chowdhury et al. [2023]) mention the importance of considering existing causal information when learning causal representation models. Meek [2013] was one of the earliest to suggest orientation rules for incorporating prior knowledge in constraint-based causal discovery approaches. In recent years, there has been an increasing amount of interest in the incorporation of knowledge into causal structure learning (Perkovic et al. (2017)). Amirkhani et al. (2016) investigated the influence of the same kinds of prior probabilities on edge inclusion or exclusion from numerous experts, applied to two variants of the Hill Climbing algorithm. Andrews et al. (2020) incorporates tiered background knowledge in the constraint-based FCI algorithm and mentions that prior knowledge allows for the identification of additional causal links. Fang and He (2020) explores the task of estimating all potential causal effects from observational data using direct and non-ancestral causal information.
Recently, Hasan and Gani (2022) proposed a generalized framework that uses prior knowledge as constraints to penalize an RL-based search algorithm, which outputs the best-rewarded causal graph. Kleinegesse et al. (2022) shows that even small amounts of prior knowledge can speed up and improve the performance of A*-based causal discovery. Chowdhury et al. (2023) studies the impact of expert causal knowledge in a continuous optimization-based causal algorithm and suggests that practitioners should consider utilizing prior knowledge whenever available. Although different approaches have explored the incorporation of prior knowledge in multiple ways, to the best of our knowledge there is no approach that studies the impact and application of different edge constraints in a greedy score-based causal discovery approach.

## 3 Background

### Causal Graphical Model (CGM)

A directed acyclic graph (DAG) is a type of graph \(G\) in which the edges \(e\) are directed (\(\rightarrow\)) and there are no cycles. A Causal Graphical Model (CGM) consists of a DAG \(G\) and a joint distribution \(P\) over a set of random variables \(X=(X_{1},X_{2},\ldots,X_{d})\), where \(P\) is Markovian with respect to \(G\) (Fang and He (2020)). In a CGM, the nodes represent the variables \(X\), and the arrows represent causal relationships between them. The joint distribution \(P\) can be factorized as follows, where \(pa(x_{i},G)\) denotes the parents of \(x_{i}\) in \(G\):

\[P(x_{1},\ldots,x_{d})=\prod_{i=1}^{d}P(x_{i}\mid pa(x_{i},G)) \tag{1}\]

A set of DAGs having the same conditional independencies belongs to the same equivalence class. Graphs can come in a variety of forms based on the kinds of edges they contain. A Partially Directed Graph (PDAG) contains both directed and undirected edges. A Completed PDAG (CPDAG) consists of the directed edges that exist in every DAG \(G\) belonging to the same equivalence class and the undirected edges that are reversible in \(G\).

### Score-based Causal Discovery

A score-based causal discovery approach typically searches over the equivalence classes of DAGs to learn the causal graph \(G\) that best fits the observed data \(D\) according to a score function \(S(G,D)\), which returns the score \(S\) of \(G\) given data \(D\) (Chickering (2002); Chowdhury et al. (2023)). Here, the optimization problem for structure learning is as follows:

\[\min_{G}\;S(G,X)\quad\text{subject to}\quad G\in\mathrm{DAGs} \tag{2}\]

Typically, any score-based approach has two main components: _(i) a search strategy_ - to traverse the search space of candidate graphs \(G\), and _(ii) a score function_ - to evaluate the candidate causal graphs. **Score-function.** A scoring function \(S(G,D)\) maps causal DAGs \(G\) to a numerical score, based on how well \(G\) fits a given dataset \(D\). A commonly used scoring function for selecting causal models is the Bayesian Information Criterion (BIC) (Schwarz (1978)), defined below:

\[S_{BIC}=-2\cdot\mathrm{loglikelihood}+k\cdot\log(n), \tag{3}\]

where \(n\) is the sample size used for training and \(k\) is the total number of parameters.
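A minimal sketch of how such a BIC score can be computed for linear-Gaussian data is given below. The per-node variant shown here (regress each variable on its parents and penalise the number of parameters, then sum over nodes as in Eq. (4) further below) is one standard instantiation and not necessarily the exact implementation used in causal-learn; the function names are illustrative.

```python
import numpy as np

def node_bic(X, i, parents):
    """BIC contribution of node i given its parent set, for linear-Gaussian data.

    X: (n, d) data matrix; parents: list of column indices.
    A common linear-Gaussian instantiation of S_BIC = -2*loglik + k*log(n).
    """
    n = X.shape[0]
    y = X[:, i]
    A = np.column_stack([X[:, parents], np.ones(n)]) if parents else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    k = A.shape[1] + 1                       # regression coefficients + noise variance
    return -2.0 * loglik + k * np.log(n)

def graph_bic(X, parent_sets):
    """Graph-level score: sum of node-wise BIC terms over all variables."""
    return sum(node_bic(X, i, p) for i, p in enumerate(parent_sets))
```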
### Greedy Equivalence Search (GES)

GES (Chickering (2002)) is one of the oldest score-based causal discovery methods that employs a greedy search over the space of equivalence classes of DAGs. Each search state is represented by a CPDAG, where edge modification operators such as the insert and delete operators allow for single-edge additions and deletions, respectively. Primarily, GES operates in two phases: (i) Forward Equivalence Search (FES) and (ii) Backward Equivalence Search (BES). The first phase, FES, starts with an empty (i.e., no-edge) CPDAG and greedily adds edges by considering every single-edge addition that could be performed to every DAG \(G\) in the current equivalence class. This phase continues until it reaches a local maximum. After that, the second phase, BES, starts, where at each step it considers all possible single-edge deletions. This continues as long as the score keeps improving. Finally, GES terminates once the second phase reaches a local maximum. GES assumes a decomposable score function \(S(G,D)\), which is expressed as a sum of the scores of individual nodes and their parents.

\[S(G,D)=\sum_{i=1}^{d}s(x_{i},pa(x_{i},G)) \tag{4}\]

A problem with GES is that the number of search states that it needs to evaluate scales exponentially with the number of nodes \(d\) in the graph (Chickering and Meek (2015)). This results in a vast search space and requires scoring a large number of graphs, which adds to the overall cost, as score computation is an expensive step.

## 4 Knowledge-guided greedy equivalence search

In this section, we introduce our approach, Knowledge-guided GES, abbreviated as KGS, which uses a set of user-defined knowledge constraints to search for a causal graph that best fits the data and the knowledge constraints. The constraints allow KGS to complete the search process using a reduced set of modification operators. KGS primarily works in three steps: _(i) knowledge organization, (ii) forward search and (iii) backward search_, as shown in Algorithm 1 and described in Subsection 4.2.

### Types of Knowledge Constraints

We consider the following types of knowledge constraints (causal edges) between the nodes of a causal graph \(G\): (i) _Directed edge (d-edge)_: The existence of a directed edge (\(\rightarrow\)) from node \(X_{i}\) (cause) to node \(X_{j}\) (effect). This signifies that these nodes are causally related and \(X_{i}\) is the cause of the effect \(X_{j}\). (ii) _Forbidden edge (f-edge)_: The absence of an edge or causal link (\(\not\rightarrow\)) between two nodes \(X_{i}\) and \(X_{j}\). It signifies the non-existence of a causal relationship between the nodes. (iii) _Undecided edge (u-edge)_: These are edges whose existence is known but whose direction is unknown. It signifies the presence of an undirected edge (\(-\)) between two nodes \(X_{i}\) and \(X_{j}\) without any information about the direction of causality. **Assumptions.** In this study, we make the following assumptions. _First_, we assume that there are no hidden variables or selection bias. _Second_, we assume that the knowledge constraints are 100% true, without any bias or error. _Third_, there cannot be any conflict among the knowledge constraints; that is, the same constraint cannot fall into multiple categories: an edge cannot be directed and undecided at the same time.

### Knowledge Incorporation Strategy

In this subsection, we discuss the steps of KGS in detail. **Step-1**: _Knowledge Organization_. In this step, a knowledge set \(K\) is formulated using the available prior causal edges \(e_{k}\). The knowledge set is basically a \(d\times d\) matrix with entries denoting the prior edges. As per our assumption of bias-free knowledge, only edges that are \(100\%\) reliable must be used in this process.
Depending on the type of knowledge (directed, forbidden or undecided edge), a \(d\times d\) matrix is formed, where an entry of '1' indicates a directed edge, '2' a forbidden edge and '3' an undecided edge. The remaining entries, with no information about the edge, are set to '0'. KGS can be used with any one of the three types of knowledge or even with a combination of all of them.

```
Input:  Data D, prior causal edges E_k
Output: Causal CPDAG C

Initialize vertex set V = {1, ..., d} and edge set E = {}
Construct knowledge set K <- E_k
E <- E U K
C(V, E) <- a CPDAG with v in V and e in E
S <- S_BIC(C)                                 // initialize score

// forward search
for each i in V do
    for each j in V do
        I <- K-consistent insert operators
        if (i -/-> j) is not in K then        // i -> j is not forbidden
            C' <- C with the edge i -> j inserted
            compute S' <- S_BIC(C')
            if S' > S then
                C <- C';  S <- S'
            end if
        end if
    end for
end for

// backward search
for each i in V do
    for each j in V do
        D <- K-consistent delete operators
        if (i -> j) is not in K and (i - j) is not in K then   // edge not required by K
            C' <- C with the edge i -> j deleted
            compute S' <- S_BIC(C')
            if S' > S then
                C <- C';  S <- S'
            end if
        end if
    end for
end for

return C
```

**Algorithm 1** Knowledge-guided GES (KGS)

**Step-2**: _Forward Search_. To improve the forward search, where edges that provide the current best gain are greedily added, instead of starting with a no-edge model we start with an initial graph: a CPDAG that consists of the edges \(e_{k}\) present in the knowledge set \(K\). KGS restricts the initially vast set of insert operators based on the _directed_ or _undecided_ edges available in \(K\). This helps reduce the initially large number of candidate states that would need to be examined when there is no knowledge constraint. Furthermore, the insert operators that conflict with the _forbidden_ edges in \(K\) are completely ignored during the forward search, and only \(K\)-consistent insert operators are used. This ensures that for any two nodes (\(X_{i}\), \(X_{j}\)) with a \(\not\rightarrow\) edge in \(K\), no unnecessary search is conducted to add an edge between them. This reduces the many search states that might otherwise have been explored if no knowledge constraints were leveraged; precisely, it rules out all candidate graphs that violate these edges. Going forward, in each iteration we choose the candidate graph that gives the highest increase in score, i.e., the graph whose score \(S^{{}^{\prime}}\) is better than the current best score \(S\), and update the model with the changes made. When there is no further improvement of the score (\(S^{{}^{\prime}}<S\)), the forward stage stops and the search continues to the next step. **Step-3**: _Backward Search._ To further refine the graph obtained in the previous phase, we iteratively delete edges and remove the conflicting delete operators to restrict the unnecessary search states. In particular, the operators that contradict the _directed_ or _undecided_ edges present in \(K\) are ruled out, and only \(K\)-consistent delete operators \(D\) are used. Similar to the earlier step, the removal of an edge is allowed only when it improves the score. This process terminates when there is no further improvement in score. Finally, the output is the current graph as the estimated causal DAG.
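As a concrete illustration of the knowledge organization and operator filtering described above, the sketch below encodes prior edges in a \(d\times d\) matrix (1 = directed, 2 = forbidden, 3 = undecided) and checks whether a candidate insert or delete operator is \(K\)-consistent. This is an illustrative reading of the procedure, not the authors' implementation; all function names are hypothetical.

```python
import numpy as np

DIRECTED, FORBIDDEN, UNDECIDED = 1, 2, 3

def build_knowledge_matrix(d, directed=(), forbidden=(), undecided=()):
    """Encode prior edges (i, j) in a d x d matrix as described in Step-1."""
    K = np.zeros((d, d), dtype=int)
    for i, j in directed:
        K[i, j] = DIRECTED
    for i, j in forbidden:
        K[i, j] = FORBIDDEN
    for i, j in undecided:
        K[i, j] = K[j, i] = UNDECIDED
    return K

def insert_allowed(K, i, j):
    """Forward phase: adding i -> j is K-consistent unless it is forbidden."""
    return K[i, j] != FORBIDDEN

def delete_allowed(K, i, j):
    """Backward phase: removing i -> j is K-consistent unless K requires the edge."""
    return K[i, j] not in (DIRECTED, UNDECIDED)

# Tiny usage example on 4 variables with one edge constraint of each type
K = build_knowledge_matrix(4, directed=[(0, 1)], forbidden=[(2, 3)], undecided=[(1, 2)])
print(insert_allowed(K, 2, 3))   # False: a forbidden edge is never inserted
print(delete_allowed(K, 0, 1))   # False: a known directed edge is never deleted
```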
The **time complexity** of Algorithm 1 is \(O(n^{2})\) in the worst case, since each phase consists of two nested loops over the vertices.

### Computational Benefits

A potential practical problem of GES is the vast search space, which grows exponentially with the number of variables. However, with the use of any available knowledge constraints, this growth of the search space can be controlled to a good degree. The forward and backward phases in KGS take into account the user-defined knowledge constraints, which helps shift the overall search strategy onto a more accurate trajectory than GES. Let us consider the case of a 3-node graph in Figure 1. If GES starts with zero knowledge constraints (an empty CPDAG), then the space of candidate DAGs consists of a total of 25 possible DAGs to choose from. Now suppose we have the prior knowledge that a directed edge \(Y\to Z\) exists for the 3-node graph of Figure 1. With this prior information, KGS starts with a single-edge model instead of the zero-edge model, and the space of candidate DAGs is reduced to only 8 possible states. Since we are assuming constraints that are fully accurate, we can prune away the contradictory DAGs, leaving only DAGs that are consistent with the prior knowledge. Thus, KGS needs to score a much smaller number of graphs than before, thanks to the guidance of a single edge. Also, it need not perform expensive score-function evaluations for the DAGs that have been pruned away, which may result in significant computational gains. Since, in the worst-case scenario, the number of search states that GES needs to evaluate can be exponential in the number of nodes (Chickering and Meek (2015)), it is computationally beneficial even if only a small amount of knowledge is used to guide or restrict the vast space. Chickering (2002) mentions that enormous savings can be gained by not generating candidates that are known to be invalid, and also suggests that neighbors can be pruned heuristically to make the search practical. Leveraging existing knowledge about causal edges can work as such a heuristic, because it restricts the search to equivalence classes whose member DAGs are knowledge-consistent.
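The 25-to-8 reduction claimed for the 3-node example can be checked by brute force; the short enumeration sketch below is illustrative and not part of the paper's code.

```python
import itertools
import numpy as np

def is_dag(adj):
    """Check acyclicity of a directed graph given as a 0/1 adjacency matrix."""
    m = np.array(adj)
    p = m.copy()
    for _ in range(len(adj)):
        # A cycle of length k shows up as a nonzero diagonal in the k-th power
        if np.trace(p) > 0:
            return False
        p = p @ m
    return True

pairs = [(i, j) for i in range(3) for j in range(3) if i != j]   # 6 possible arcs on {X, Y, Z}

dags, with_Y_to_Z = 0, 0
for bits in itertools.product([0, 1], repeat=len(pairs)):
    adj = [[0] * 3 for _ in range(3)]
    for (i, j), b in zip(pairs, bits):
        adj[i][j] = b
    if is_dag(adj):
        dags += 1
        if adj[1][2] == 1:          # contains the prior edge Y -> Z (Y = index 1, Z = index 2)
            with_Y_to_Z += 1

print(dags, with_Y_to_Z)            # 25 DAGs in total, 8 consistent with Y -> Z
```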
## 5 Experiments

We conduct a set of comprehensive experimental evaluations to demonstrate the effectiveness of our proposed approach. We study the impact of directed, forbidden and undecided edges separately, as well as the impact of a combination of the three types of edges. We report the performance by comparing the estimated graphs with their ground truths \(G_{T}\). We distinguish the experiments with directed, forbidden and undecided edges by naming them KGS-d, KGS-f and KGS-u, respectively. Furthermore, the experiment that uses a combination of all of these edges is named KGS-c. We compare the performance of KGS-d, KGS-f, KGS-u and KGS-c with the baseline approach GES on both synthetic and benchmark real-world datasets. For each experiment, we randomly sample a fixed number (precisely 25%) of edge constraints from the ground-truth network \(G_{T}\). As the score function we use the BIC score, as it is standard and the most commonly used score for causal discovery.

Figure 1: Reduction of the initial search space of a 3-node graph due to leveraging a single piece of causal knowledge (a directed edge). Here, the dotted graphs can be ignored based on the prior knowledge \(Y\to Z\), and the bolded graphs are the ones to be considered for initiating the search.

**Metrics.** For performance evaluation, we use three common causal discovery metrics: _(i) Structural Hamming Distance (SHD)_, which denotes the total number of edge additions, deletions and reversals required to transform the estimated graph \(G\) into the ground-truth DAG \(G_{T}\); _(ii) True Positive Rate (TPR)_, which denotes the ratio of correctly discovered true edges with respect to the total number of true edges in the ground-truth graph; and _(iii) False Discovery Rate (FDR)_, which denotes the proportion of estimated false edges. Lower values of SHD and FDR, and a higher TPR, indicate a better causal graph. We also report the run time (in seconds) of each experiment to see whether there is any improvement in terms of computational efficiency. The reported metric values are mean values (averaged over 5 seeds/runs).

**Setup.** The experiments are conducted on a 4-core Intel Core i5 1.60 GHz CPU cluster, with each process having access to 4 GB RAM. We implemented KGS by extending the publicly available Python implementation of GES in the causal-learn package. The source code and experimental datasets are provided as supplementary materials.

### Synthetic Data

For synthetic datasets, we employ a similar experimental setup as Zheng et al. (2018) and generate random graphs with \(d=10,40,100\) nodes using the Erdos-Renyi (ER) model. The numbers of nodes are chosen to ensure that our approach is tested against networks of all sizes: _small, medium and large_. Each graph has an edge density of \(e=2d\), where uniform random weights \(W\) are assigned to the edges. Data \(X\) are generated by taking \(n=1000\) i.i.d. samples from the linear structural equation model (SEM) \(X=W^{T}X+z\), where \(z\) is a Gaussian noise term.
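A minimal sketch of this data-generating process is shown below, assuming an upper-triangular weighted adjacency matrix for the ER DAG; details such as the weight range and noise scale are illustrative choices, not necessarily identical to the paper's setup.

```python
import numpy as np

def simulate_er_sem(d=10, edges_per_node=2, n=1000, seed=0):
    """Sample an Erdos-Renyi DAG and n observations from the linear SEM X = W^T X + z.

    Illustrative sketch: weights are drawn uniformly from [-2, -0.5] U [0.5, 2] and
    the noise is standard Gaussian; these choices mirror common practice only.
    """
    rng = np.random.default_rng(seed)
    p = 2.0 * edges_per_node / (d - 1)            # gives ~ e = 2d expected edges
    upper = np.triu(rng.random((d, d)) < p, k=1)  # acyclicity via an upper-triangular mask
    signs = rng.choice([-1.0, 1.0], size=(d, d))
    W = upper * signs * rng.uniform(0.5, 2.0, size=(d, d))
    # With samples as rows: X = X W + Z  =>  X = Z (I - W)^{-1}
    z = rng.standard_normal((n, d))
    X = z @ np.linalg.inv(np.eye(d) - W)
    return W, X

W, X = simulate_er_sem()
print(W.shape, X.shape, int((W != 0).sum()), "edges")
```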
Table 1 presents the performance on the synthetic datasets (d-10, d-40 and d-100). From Table 1, we can clearly see that knowledge constraints have a positive impact on causal discovery, as reflected by the promising metric values. Although the extent of the impact varies for each type of knowledge, it is quite evident from the empirical results that any type of knowledge is beneficial in improving the search quality of GES. In terms of SHD and TPR, KGS-d seems to do better than the others for networks of all sizes. This implies that directed edges improve accuracy the most, as they provide more concise or constrained information about the causal relationship than the other constraints. Also, in terms of FDR, KGS-d has the best value for two out of three datasets. Overall, it seems that directed edges are the most beneficial for producing better graphs with fewer false positives. KGS-u performs well in terms of all the metrics for the small network only. For networks of larger sizes, it does not perform significantly well. This can be due to the fact that undecided edges provide only partial information, as they allow for either of the two edge possibilities between two random variables \(X_{i}\) and \(X_{j}\): the edge direction can be either from \(X_{i}\) to \(X_{j}\) or from \(X_{j}\) to \(X_{i}\). The performance of KGS-f suggests that forbidden edges are less effective than constraints that provide direct information about the causal edges. This is reasonable, as there are relatively more forbidden edges than directed or undirected edges in a causal graph. The performance of KGS-c is moderately good. In terms of SHD, TPR, and FDR, KGS-c mostly seems to have the second-best results, which is quite understandable as it combines the effects of all types of edges. In terms of run time, KGS-d has the fastest speed for the datasets d-10 and d-100. For d-40, KGS-c has the best run time. It seems that as the network density increases, there is a significant difference between the run time of GES and the different versions of KGS. For denser graphs (d-40 and d-100), knowledge constraints seem to help a lot towards faster convergence, with a good margin.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{SHD} & \multicolumn{3}{c}{TPR} & \multicolumn{3}{c}{FDR} & \multicolumn{3}{c}{Run Time (s)} \\ \cline{2-13} & _d-10_ & _d-40_ & _d-100_ & _d-10_ & _d-40_ & _d-100_ & _d-10_ & _d-40_ & _d-100_ & _d-10_ & _d-40_ & _d-100_ \\ \hline GES & 6 & 60 & 107 & 0.67 & 0.81 & 0.84 & 0.71 & 0.5 & 0.41 & 6.7 & 2380.7 & 176552.4 \\ KGS-d & **2** & **30** & **43** & **1** & **0.89** & **0.95** & 0.52 & **0.3** & **0.2** & **5.2** & 1535.9 & **34417.7** \\ KGS-f & 4 & 52 & 95 & 0.8 & 0.82 & 0.91 & 0.6 & 0.44 & 0.36 & 6.1 & 1312.3 & 44541.6 \\ KGS-u & **2** & 54 & 105 & **1** & 0.82 & 0.86 & **0.5** & 0.45 & 0.39 & 6.3 & 1833.3 & 106053.5 \\ KGS-c & **2** & 39 & 70 & **1** & 0.87 & 0.93 & 0.51 & 0.36 & 0.3 & 5.3 & **992.8** & 39203.8 \\ \hline \hline \end{tabular} \end{table}

Table 1: Synthetic dataset results (metric values) averaged over 5 runs, with the best results for each dataset boldfaced.

Figure 2: SHD (_lower better_), TPR (_higher better_), FDR (_lower better_) and Run Time (log scale) plots of all the approaches on the synthetic datasets with \(d\) = 10, 40, and 100.

### Real Data

For experimentation with real-world datasets, we evaluate our approach on benchmark causal graphs from the Bayesian Network Repository (BnLearn) (Scutari [2009]). It includes causal graphs inspired by real-world applications that are used as standards in the literature. We evaluate GES and all versions of KGS on 3 different networks, namely _Child_, _Alarm_ and _Hepar2_ from BnLearn. These networks have available ground truths, and they vary in node and edge densities (small, medium & large networks). The corresponding datasets are available in the causal-learn package. We briefly introduce the networks below: **(i) CHILD** (Spiegelhalter et al. [1993]) is a medical Bayesian network for diagnosing congenital heart disease in a newborn "blue baby". It is a small network with \(d=20\) nodes and \(e=25\) edges. The dataset includes patient demographics, physiological features and lab test reports. **(ii) ALARM** (Beinlich et al. [1989]) is a healthcare application that sends cautionary alarm messages for patient monitoring. It is used to study probabilistic reasoning techniques in belief networks. The ground-truth graph is a medium-sized network with \(d=37\) nodes and \(e=46\) edges. **(iii) HEPAR2** (Onisko [2003]) is a probabilistic causal model for liver disorder diagnosis. It is a Bayesian network that tries to capture the causal relationships among different risk factors, diseases, symptoms, and test results. It is a large network with \(d=70\) nodes and \(e=123\) edges. Table 2 presents the results on the real datasets. From Table 2, we see that w.r.t. SHD, all versions of KGS perform considerably better than GES.
KGS-u, KGS-c and KGS-d have the best SHD values for the Child, Alarm and Hepar2 datasets, respectively. Compared to GES, the improvement margin in SHD for all of them is at least 10, which is quite impressive. Overall, none of the SHDs of the graphs estimated by GES is better than those of KGS. In the case of true positives, KGS-d seems to perform really well, as it has the best TPR for two out of the three datasets (Alarm and Hepar2). For the Child dataset, KGS-f and KGS-c have the highest TPR of 0.62, and the TPRs of all versions of KGS are larger than that of GES by a good margin. Overall, all versions of KGS have better TPRs than GES for all datasets. For false discoveries, KGS-f and KGS-c perform better than the others, as both have the best FDRs for two out of three real datasets. KGS-c has a 15% improvement in FDR compared to GES for the Child dataset, and KGS-f has a 7% lower FDR than GES for the Alarm dataset. For Hepar2, the improvement margin is lower (4%), which is still significant. In terms of run time, KGS-c is mostly faster than the others, with the lowest run time for two datasets (Child and Hepar2). For the other dataset, Alarm, KGS-f is the fastest. One thing is common w.r.t. run time for all three real datasets: the run times of KGS-f and KGS-c are quite close on all of them. This signifies that forbidden edges may allow the search process to converge faster. Although forbidden edge constraints are not the best among all the edge constraints in terms of the accuracy metrics, they seem to perform well w.r.t. faster convergence.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{SHD} & \multicolumn{3}{c}{TPR} & \multicolumn{3}{c}{FDR} & \multicolumn{3}{c}{Run Time (s)} \\ \cline{2-13} & _Child_ & _Alarm_ & _Hepar2_ & _Child_ & _Alarm_ & _Hepar2_ & _Child_ & _Alarm_ & _Hepar2_ & _Child_ & _Alarm_ & _Hepar2_ \\ \hline GES & 34 & 56 & 70 & 0.38 & 0.74 & 0.5 & 0.89 & 0.61 & 0.23 & 93.3 & 762.1 & 3326.1 \\ KGS-d & 25 & 51 & **58** & 0.6 & **0.84** & **0.6** & 0.79 & 0.57 & 0.21 & 40.2 & 734.3 & 3225.1 \\ KGS-f & 26 & 48 & 67 & **0.62** & 0.82 & 0.51 & 0.79 & **0.54** & **0.19** & 35.1 & **428.4** & 1785.6 \\ KGS-u & **22** & 52 & 59 & 0.6 & 0.81 & 0.57 & 0.78 & 0.58 & 0.2 & 55.8 & 747.5 & 3260.5 \\ KGS-c & 23 & **47** & 61 & **0.62** & 0.79 & 0.55 & **0.74** & 0.55 & **0.19** & **33.5** & 452.4 & **1774.1** \\ \hline \hline \end{tabular} \end{table}

Table 2: Real dataset results averaged over 5 seeds, with the best performances for each dataset boldfaced.

Figure 3: SHD (_lower better_), TPR (_higher better_), FDR (_lower better_) and Run Time (in sec.) plots of all the approaches on the real datasets: Child (\(d=20\)), Alarm (\(d=37\)), and Hepar2 (\(d=70\)).

### Variation of Knowledge Proportion

We perform an experiment leveraging different amounts of prior edges to investigate two aspects: (i) does raising the amount of knowledge improve the discovery? and (ii) how does varying the amount of knowledge affect the structure search? This experiment is done on the real datasets by varying the amount of _directed_ edges from 0 to 25 percent, each time raising the amount of prior knowledge by 5%. From the plots in Figure 4, we see that using any amount of prior knowledge is always better than using no knowledge at all (0 percent). This is because all of the metrics SHD, TPR and FDR have better values for the experiments where at least some knowledge is used.
Although in some cases the improvement is very small or the same (e.g., the TPR of the 0% and 5% experiments for Alarm), in no case is there a negative impact of using knowledge. Another finding from the plots is that increasing the amount of knowledge is not always proportional to better metric values: the drop in SHD and FDR, as well as the rise in TPR, due to the changing percentage of knowledge is quite unsteady. However, leveraging any amount of causal knowledge still appears to be better than leveraging no knowledge at all.

### Manual Correction of Edges

We perform a sanity check to verify whether the graphs discovered by KGS are better than those of GES even after manual correction of the estimated edges belonging to the knowledge set \(K\). As part of this sanity check, we investigate the number of edges that need to be corrected in the graphs produced by GES and KGS. A lower number of corrections and a higher number of true positives signify a better performance. We perform this check for the _directed edges_ experiment (KGS-d) on one synthetic (d-10) and one real dataset (Child). From Table 3, we see that for d-10 and Child, the set of prior directed edges for KGS-d contains \(K\) = 5 and \(K\) = 6 edges, respectively, and GES uses no prior knowledge constraints. With this experimental setting, GES discovers fewer true positive edges (column TP) than KGS-d. The column TP \(\in K\) denotes the number of discovered TP edges that are also elements of \(K\). Thus, the number of manual corrections is \(C\) = \(K\) - (TP \(\in K\)). Here, we see that GES requires more edge corrections (\(C\)) than KGS-d. As a result, the total TPs after manual corrections (TP\({}_{C}\)) are higher for KGS-d than for GES. This is the case for small networks; for medium to larger networks this difference in TPs can be higher. In terms of false positives (FP), KGS-d performs considerably better than GES, as there is a significant reduction in the number of estimated FP edges. Also, the false negatives (FN) are lower for KGS-d than for GES on both datasets. To summarize, guidance from prior knowledge is significant for conducting an efficient search w.r.t. true and false discoveries, as well as for estimating a more accurate graph than the one produced by GES without any knowledge guidance.

## 6 Conclusion

In this study, we introduce KGS, which presents a novel application of prior edge constraints in a score-based causal search process. We evaluate KGS's performance on a variety of synthetic and real-world datasets, including networks of different sizes and edge densities. The encouraging results across multiple settings demonstrate the robustness and flexibility of our approach. In particular, KGS-d, which leverages directed edges, performs the best in most cases, both in improving graphical accuracy and in achieving faster convergence. This is quite understandable, as directed edges provide complete information about the causal relation between variables. Overall, any type of edge information improves the accuracy of graph discovery as well as the run time compared to GES, which uses no knowledge. Hence, it is evident that prior knowledge is helpful for guiding the search onto a better trajectory. There are some limitations of this study. First, we consider only causal knowledge that is completely true; any biased knowledge is out of the scope of this work. Second, this approach has limited applicability when there is no prior knowledge available.
However, it is often the case that at least some prior knowledge about a domain is available. In the future, we want to extend this study to address biases in prior knowledge and also to study the effects of localizing the knowledge to a sub-area of the network and see how it impacts the discovery in other areas.

Figure 4: Variation in SHD (_lower better_), TPR (_higher better_), and FDR (_lower better_) due to changing the proportion of knowledge constraints.

\begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline Data & Method & \(K\) & TP & TP \(\in K\) & \(C\) & TP after \(C\) (TP\({}_{C}\)) & FP & FN \\ \hline \multirow{2}{*}{d-10} & GES & \multirow{2}{*}{5} & 4 & 2 & 3 & 4+3 = 7 & 10 & 2 \\ & KGS-d & & **6** & 3 & 2 & 6+2 = **8** & **5** & **0** \\ \hline \multirow{2}{*}{Child} & GES & \multirow{2}{*}{6} & 5 & 2 & 4 & 5+4 = 9 & 39 & 8 \\ & KGS-d & & **8** & 4 & 2 & 8+2 = **10** & **28** & **5** \\ \hline \hline \end{tabular} \end{table}

Table 3: Correction (\(C\)) of edges after graph estimation.

## Appendix A Additional Simulation Results

We present the total number of estimated models by the different versions of KGS and GES for both the synthetic and real datasets in Table 4. The total number of estimated models signifies the number of models or graphs that the algorithm needed to estimate before reaching convergence (the final best-scored causal graph). A lower value of this metric signifies better performance by the approach, as well as a lower number of calls to the scoring function.

### Experimental Results of Varying the Knowledge Proportion

We present the details of all the metric values for the experiment with varying amounts of prior knowledge in Table 5. This experiment is done by varying the amount of constraints (directed edges) from 0 to 25 percent, each time raising the amount of knowledge by 5%. The results show that any amount of knowledge is helpful for improving the search accuracy and hence should be leveraged during the search process. Although it is surprising that the increase in knowledge is not directly proportional to the increase in discovery accuracy, leveraging any percentage of knowledge is still better than using no knowledge at all.

### Performance of Baseline Causal Discovery Approaches

We report the performance of different baseline causal discovery approaches, namely PC (constraint-based), LiNGAM (FCM-based) and NOTEARS (continuous optimization-based), on the synthetic and real datasets to see their comparative performance with respect to GES and KGS. We briefly discuss the methods below: **(i) PC algorithm:** The Peter-Clark (PC) algorithm (Spirtes et al. [2000]) is a very common constraint-based causal discovery approach that largely depends on conditional independence (CI) tests to find the underlying causal graph. Primarily, it works in three steps: (i) skeleton construction, (ii) v-structure determination, and (iii) edge orientation. **(ii) LiNGAM:** The Linear Non-Gaussian Acyclic Model (LiNGAM) uses a statistical method known as independent component analysis (ICA) to discover the causal structure from observational data. It makes some strong assumptions, such as that the data-generating process is linear, that there are no unobserved confounders, and that the noises have non-Gaussian distributions with non-zero variances (Shimizu et al. [2006]).
**DirectLiNGAM** (dLiNGAM) is an efficient variant of the LiNGAM approach that uses a direct method for learning a linear non-Gaussian structural equation model (Shimizu et al. [2011]). The direct method estimates the causal ordering and connection strengths based on non-Gaussianity. **(iii) NOTEARS:** DAGs with NO TEARS (Zheng et al. [2018]) is a recently developed causal discovery algorithm that reformulates the structure learning problem as a purely continuous constrained optimization task. It uses an algebraic characterization of DAGs and defines a novel characterization of acyclicity that allows for a smooth global search instead of a combinatorial one. For experimental purposes, the implementations of the algorithms have been adopted from the gCastle (Zhang et al. [2021]) repository. From the results reported in Table 6, we see that in a few cases NOTEARS performs slightly better than KGS, particularly for the SHD and FDR metrics. However, it performs poorly in terms of discovering the true edges (TPR). Also, it could not estimate even a single graph for the Hepar2 dataset. The dLiNGAM approach performs better than KGS only twice, once w.r.t. SHD and once w.r.t. FDR; otherwise its performance is mostly moderate. PC does not perform well on any of the metrics at all. The boldfaced results in the table are the ones that are better than those of KGS for the corresponding datasets.

## Appendix B Performance Metrics Details

\(\bullet\) **Structural Hamming Distance (SHD):** SHD is the sum of the edge additions (A), deletions (D) or reversals (R) that are required to convert the estimated graph into the true causal graph (Zheng et al. [2018], Cheng et al. [2022]). To estimate the SHD, it is required to determine the missing edges, extra edges and wrongly directed edges in the estimated graph compared to the true graph. The lower the SHD, the closer the graph is to the true graph, and vice versa. The formula to calculate SHD is given below:

\[SHD=A+D+R \tag{5}\]

\(\bullet\) **True Positive Rate (TPR):** TPR denotes the proportion of the true edges in the actual graph that are correctly identified as true in the estimated graph. A higher value of the TPR metric means a better causal discovery.

\[TPR=\frac{TP}{TP+FN} \tag{6}\]

Here, TP means the true positives, or the number of correctly identified edges, and FN, or false negatives, denotes the number of unidentified causal edges.

\(\bullet\) **False Discovery Rate (FDR):** FDR is the ratio of false discoveries among all discoveries (Zheng et al. [2018]). FDR represents the fraction of false edges over the sum of the true and false edges. The lower the FDR, the better the outcome of causal discovery.

\[FDR=\frac{FP}{TP+FP} \tag{7}\]

Here, FP, or false positives, represents the number of wrongly identified directed edges.

## Appendix C Code and Data

The code for reproducing KGS and the datasets used in this study are publicly available at the following GitHub link: [https://github.com/UzmaHasan/KGS-Causal-Discovery-Using-Constraints](https://github.com/UzmaHasan/KGS-Causal-Discovery-Using-Constraints).
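For reference, a minimal sketch (not taken from the KGS repository) of how the three metrics defined in Appendix B can be computed from 0/1 adjacency matrices of the estimated and ground-truth DAGs:

```python
import numpy as np

def edge_metrics(est, true):
    """Compute SHD, TPR and FDR from 0/1 adjacency matrices of directed graphs.

    SHD = additions (A) + deletions (D) + reversals (R), counting a reversed
    edge once; TPR = TP/(TP+FN); FDR = FP/(TP+FP). Illustrative only.
    """
    est, true = np.asarray(est), np.asarray(true)
    tp = int(np.sum((est == 1) & (true == 1)))
    fp = int(np.sum((est == 1) & (true == 0)))
    fn = int(np.sum((est == 0) & (true == 1)))
    # Reversed edges: present in both graphs but with opposite orientation
    reversed_ = int(np.sum((est == 1) & (est.T != 1) & (true.T == 1) & (true == 0)))
    to_delete = fp - reversed_          # extra edges (D)
    to_add = fn - reversed_             # missing edges (A)
    shd = to_add + to_delete + reversed_
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    return shd, tpr, fdr

# Tiny example: true graph 0->1->2, estimate has 1->0 (reversed) and 1->2
true = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
est  = np.array([[0, 0, 0], [1, 0, 1], [0, 0, 0]])
print(edge_metrics(est, true))      # (1, 0.5, 0.5)
```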
2308.03337
Solving Falkner-Skan type equations via Legendre and Chebyshev Neural Blocks
In this paper, a new deep-learning architecture for solving the non-linear Falkner-Skan equation is proposed. Using Legendre and Chebyshev neural blocks, this approach shows how orthogonal polynomials can be used in neural networks to increase their approximation capability. In addition, by exploiting the mathematical properties of these functions, we reduce the computational cost of the backpropagation algorithm through the operational matrices of the derivative. The efficiency of the proposed method is demonstrated by simulating various configurations of the Falkner-Skan equation.
Alireza Afzal Aghaei, Kourosh Parand, Ali Nikkhah, Shakila Jaberi
2023-08-07T06:38:59Z
http://arxiv.org/abs/2308.03337v1
# Solving Falkner-Skan type equations via Legendre and Chebyshev Neural Blocks ###### Abstract In this paper, a new deep-learning architecture for solving the non-linear Falkner-Skan equation is proposed. Using Legendre and Chebyshev neural blocks, this approach shows how orthogonal polynomials can be used in neural networks to increase the approximation capability of artificial neural networks. In addition, utilizing the mathematical properties of these functions, we overcome the computational complexity of the backpropagation algorithm by using the operational matrices of the derivative. The efficiency of the proposed method is carried out by simulating various configurations of the Falkner-Skan equation. _Keywords--_ Falkner-Skan Model, Non-linear Ordinary Differential Equations, Orthogonal Polynomials, Deep Learning ## 1 Introduction Differential equations play an essential role in modeling natural and scientific phenomena, such as mechanics, biology, and engineering physics. Therefore, it has usually been an interesting topic for researchers. Neural networks and deep learning techniques have proven to be effective approaches for solving differential equation models [1]. Essentially, a neural network comprises multiple processing units that work in parallel and are arranged in a sequential manner. The first layer acquires the raw input data, similar to how the optic nerves function in human vision, while the final layer produces the output after processing the input data. These types of networks usually have a fixed architecture and variable weights. Scientists have successfully integrated these popular methods for solving differential equations with neural networks to achieve superior outcomes [2, 3]. Among the various numerical methods employed for solving functional equations, the method of weighted residuals stands out as a prominent technique within this field. This method involves approximating the solution as a linear combination of spectral basis functions, such as Jacobi polynomials, trigonometric functions, etc. Compared to finite element and finite difference methods, spectral methods possess exceptional properties of high spatial accuracy for well-behaved problems. This makes them highly suitable for numerical simulations aimed at predicting flows with a broad range of dynamically significant scales of motion. Jacobi polynomials and their special cases, Legendre and Chebyshev polynomials, have been used by various researchers to accelerate the numerical solution. These functions have the orthogonality property along with boundedness and symmetry which makes them powerful to handle various physical problems. In addition, the derivative of these polynomials is defined based on themselves. This property resulted in the operational matrices of derivatives[4, 5, 6]. These matrices can be utilized to simplify solving mathematical problems or accelerate the learning process of neural networks with orthogonal layers. In this paper, we focus on the second one and develop the Legendre and Chebyshev neural blocks (LegendreBlock and ChebyshevBlock) for the simulation of the non-linear differential equations. We will emphasize the validity of the proposed network by using several settings of the Falkner-Skan non-linear differential equation. 
The main contribution of our work can be summarized as follows: * Introducing the neural Legendre and Chebyshev Blocks * Designing deep neural networks based on them * Using operational matrices of the derivative * Solving various Falkner-Skan problems The subsequent sections of the paper are arranged as follows: In Section 1, an explanation of the Blasius equation is provided, followed by the definition of the General Falkner-Skan model. Section 2 elaborates on the formulation and operational matrix of Legendre and Chebyshev functions, which are prerequisites for the presented architecture. In Section 3, we provide a comprehensive explanation of our deep neural network architecture and discuss its application in solving the General Falkner-Skan equations using Legendre and Chebyshev blocks. The results and comparisons of our method are presented in Section 4. Finally, Section 5 is devoted to some conclusions. ### Blasius Equation This equation is related to the boundary layer problem, a fundamental concept in transfer phenomena. It refers to a fluid layer near a surface with high viscosity effects. The measure of a fluid's resistance to flow, known as viscosity, is an important parameter to consider for both liquids and gaseous fluids. The boundary layer is a fluid that is influenced by Heat-Motion size or mass transfer from a standard surface that can be stationary or moving. For example, we can refer to a layer of fluid that is located near a heat source, such as a radiator. The layer near the heat source creates a temperature profile by absorbing heat, which changes fluid density and initiates convection flow. Altogether, we have three types of boundary layers; Speed, mass transfer, and heat transfer. If all three phenomena coincide, then it imposes computational complexity. Nondimensional numbers can be used to express mathematical relationships between three boundary layers. We present some mathematical models of this problem. [7, 8, 9, 10]. The Blasius equation is given by: \[\frac{d^{3}f}{d\eta^{3}}+\frac{1}{2}f(\eta)\frac{df}{d\eta}=0, \tag{1}\] where \(f(\eta)\) represents the stream function, and \(\eta\) is the similarity variable defined as \(\eta=\frac{y}{\sqrt{2}}\), with \(x\) and \(y\) denoting the streamwise and wall-normal coordinates, respectively. The boundary conditions associated with the Blasius equation are given by: \[f(0)=f^{\prime}(0)=0,\quad f^{\prime}(\infty)=1, \tag{2}\] where \(f^{\prime}(x)\) denotes the derivative of \(f(\eta)\) with respect to \(\eta\). ### Falkner-Skan Equation The Falkner-Skan equation is a non-linear boundary value problem that arises in the study of fluid mechanics. It is a second-order ordinary differential equation that describes the boundary layer flow over a semi-infinite flat plate. The equation is named after the researchers Falkner and Skan, who introduced it in the early 1940s. It has various applications in aerodynamics and heat transfer, particularly in the analysis of laminar and turbulent boundary layers. It provides valuable insights into the flow characteristics and the determination of important quantities such as the velocity and temperature profiles near the solid surface. The solution of the Falkner-Skan equation involves determining the characteristics of the laminar boundary layer on an unbounded wedge with a vertex angle of \(\pi\beta\) for \(0\leq\beta\leq 2\), as shown in Figure 1. 
Simplifying the mass continuity and momentum equations in this case, results in the following ODE named Falkner-Skan model: \[g^{\prime\prime\prime}(x)+\alpha g(x)g^{\prime\prime}(x)+\beta(1-(g^{\prime}(x ))^{2})=0, \tag{3}\] with the boundary conditions \[g(0)=g^{\prime}(0)=0,\quad and\quad g^{\prime}(\infty)=1. \tag{4}\] ## 2 Orthogonal Polynomials Orthogonal polynomials are a unique class of polynomial functions that are characterized by a specific inner product defined on a finite or infinite interval [11]. The most widely recognized types of orthogonal polynomials comprise the Chebyshev polynomials, the Hermite polynomials, and the Legendre polynomials. This section will present a brief summary of the Legendre and Chebyshev polynomials, along with their corresponding operational derivative matrix. ### Legendre Polynomials Legendre polynomials are a sequence of orthogonal functions that can be obtained by applying the Gram-Schmidt orthogonalization process to the Taylor basis functions \(1,x,x^{2},\dots\)[11]. These polynomials are widely employed in scientific research and have recently been employed to enhance the capability of neural networks [12]. Legendre polynomials can be seen as the eigenfunctions of the Sturm-Liouville problem: \[\frac{\mathrm{d}}{\mathrm{d}x}((1-x^{2})\frac{\mathrm{d}}{\mathrm{d}x}L_{n}(x ))+\lambda_{n}L_{n}(x)=0, \tag{5}\] here \(\lambda_{n}=n(n+1)\) is the corresponding eigenvalue. Different intuitions can calculate these polynomials, for example, a simple one is the recurrence formula: \[\begin{split}& L_{0}(x)=1,\quad L_{1}(x)=x,\\ & L_{n+1}(x)=\left(\frac{2n+1}{n+1}\right)xL_{n}(x)-\left(\frac {n}{n+1}\right)xL_{n-1}(x),\quad n\geqslant 1.\end{split} \tag{6}\] One key property of the Legendre polynomials is their orthogonality with respect to the \(\mathcal{L}^{2}\)-inner product, which can be expressed as: \[\int_{-1}^{1}L_{m}(x)L_{n}(x)dx=\frac{2}{2n+1}\delta_{mn}, \tag{7}\] where \(\delta_{mn}\) denotes the Kronecker delta. This orthogonality property helps us to compute its derivatives based on previous terms in the sequence. In the following, we explain this feature. #### 2.1.1 Operational Matrix The operational matrix based on the Legendre polynomial method is a recent development that has proved useful in solving mathematical problems, especially differential equations. Mathematically, this matrix is defined as: \[\frac{d}{dx}L(x)=M^{(1)}L(x), \tag{8}\] where \(L(x)=[L_{0}(x),L_{1}(x),\dots,L_{n}(x)]^{T}\) is a vector containing a sequence of Legendre polynomials and \(M^{(1)}\) is an \((N+1)\times(N+1)\). The entries of this matrix can be computed using the orthogonality property of these functions. This matrix can be precomputed using the following formula: Figure 1: A boundary layer flow over a wedge \[M^{(1)}=\begin{cases}2j+1,&i>j,\,i+j\text{ is odd},\\ 0,&otherwise.\end{cases}\] For instance, for \(N=6\), the matrix \(M^{(1)}\) can be represented as: \[M^{(1)}=\begin{bmatrix}0&0&0&0&0&0\\ 1&0&0&0&0&0\\ 0&3&0&0&0&0\\ 1&0&5&0&0&0\\ 0&3&0&7&0&0\\ 1&0&5&0&9&0\end{bmatrix}. \tag{9}\] The higher derivatives of \(L(x)\) can be simply expressed as: \[\frac{d^{n}}{dx^{n}}L(x)=(M^{(1)})^{n}L(x)=M^{(n)}L(x)\quad where\quad n=1,2,...\;. \tag{10}\] ### Chebyshev Polynomials Chebyshev polynomials are another family of orthogonal polynomials that were introduced by the Russian mathematician Pafnuty Chebyshev in 1854. 
The first kind of Chebyshev polynomials can be computed recursively using the following recurrence relation [13]: \[C_{0}(x)=1,\quad C_{1}(x)=x, \tag{11}\] \[C_{n+1}(x)=2xC_{n}(x)-C_{n-1}(x),\quad n\geq 1.\] The orthogonality property of these polynomials can be seen with respect to weight function \(w(x)=1/\sqrt{1-x^{2}}\): \[\int_{-1}^{1}C_{n}(x)C_{m}(x)\frac{dx}{\sqrt{1-x^{2}}}=\left\{\begin{array}[] {ll}0&:n\neq m\\ \pi&:n=m=0\\ \frac{\pi}{2}&:n=m\neq 0\end{array}\right. \tag{12}\] The Chebyshev polynomials also satisfy the following second-order linear Sturm-Liouville differential equation: \[(1-x^{2})\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}L_{n}(x)-x\frac{\mathrm{d}}{ \mathrm{d}x}L_{n}(x)+n^{2}L_{n}(x)=0, \tag{13}\] #### 2.2.1 Operational Matrix The operational matrix of the derivative for the Chebyshev polynomials \(C(x)\) will be defined as: \[\frac{d}{dx}C(x)=M^{(1)}C(x). \tag{14}\] where \(C(x)=[C_{0}(x),C_{1}(x),\ldots,C_{n}(x)]^{T}\) and \(M^{(1)}\) is an \((N+1)\times(N+1)\) matrix defined by: \[M^{(1)}=\begin{cases}\frac{2i}{c_{j}},&i>j,\,i+j\text{ is odd},\\ 0,&otherwise.\end{cases}\] where \[c_{k}=\begin{cases}1,&l=1,\ldots,N,\\ 2,&l=0.\end{cases}\] For instance, if \(N=6\), the matrix \(M^{(1)}\) can be represented as: \[M^{(1)}=\begin{bmatrix}0&0&0&0&0&0\\ 1&0&0&0&0&0\\ 0&4&0&0&0&0\\ 3&0&6&0&0&0\\ 0&8&0&8&0&0\\ 5&0&10&0&10&0\end{bmatrix}. \tag{15}\] Also the higher derivatives of \(C(x)\) can be expressed as: \[\frac{d^{n}}{dx^{n}}C(x)=(M^{(1)})^{n}C(x)=M^{(n)}C(x)\quad where\quad n=1,2,...\;. \tag{16}\] ## 3 Methodology The previous sections explained the prerequisites for the paper. In the first section, we introduced the Blasius and Falkner-Skan equations, then discussed the Legendre and Chebyshev polynomials. Now we explain our proposed method and finally present an architecture to solve the equation. Here, we suggest using orthogonal functions in the network. Recently orthogonal functions have been used in machine learning for approximating differential equations. Support vector machines and neural networks are some of the well-known efforts [14, 15, 16, 17]. The applications of orthogonal functions in neural networks achieved more accurate results. In most articles, these functions have been employed as activation functions of neurons. However, they may face some difficulties. For example, the domain of these functions is bounded. This issue has been handled by using them at the first layer of the network or after the normalization technique. We present a new architecture that addresses these problems. This method presents a neural block that includes orthogonal functions and other non-linear neurons. The input and the output of the block are vectors. The input vector is encoded into a scalar using a neuron with an appropriate activation function. Then the \(n-th\) order polynomials are generated by utilizing the output of this neuron. In our case, we have used Legendre and Chebyshev, whose domain is \([-1,1]\). So the activation function can be considered as the hyperbolic tangent. By doing this, we will esure that the domain of the function is satisfied. A graphical design of these blocks with Legendre and Chebyshev polynomials can be seen in figure 2. It will not have a computational overhead. That is because the forward phase evaluates the polynomials with a fixed small order. Moreover, the backward phase uses the operational matrices of the derivative instead of the backpropagation algorithm. These neural blocks can be used anywhere in a deep neural network. 
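To make the block construction concrete, the following is a minimal PyTorch sketch in the spirit of the LegendreBlock: the input vector is encoded to a scalar by a single neuron with a hyperbolic tangent activation (so the polynomial argument stays in \([-1,1]\)), the first \(N+1\) Legendre polynomials of that scalar are generated with the standard three-term recurrence \(L_{n+1}(s)=\big((2n+1)\,s\,L_{n}(s)-n\,L_{n-1}(s)\big)/(n+1)\) (i.e. without the extra factor of \(x\) that appears in the second term of (6)), and a linear layer maps the polynomial vector back to the desired output width. The class name, the trailing linear layer and the reliance on automatic differentiation (rather than the operational matrices) are our simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LegendreBlock(nn.Module):
    """Minimal sketch of a Legendre neural block (names and wiring assumed).

    x (in_features) --tanh neuron--> s in [-1, 1]
                    --recurrence---> [L_0(s), ..., L_N(s)]
                    --linear-------> output (out_features)
    """

    def __init__(self, in_features: int, out_features: int, degree: int = 5):
        super().__init__()
        self.degree = degree
        self.encode = nn.Linear(in_features, 1)          # vector -> scalar
        self.mix = nn.Linear(degree + 1, out_features)   # polynomial values -> vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = torch.tanh(self.encode(x))                   # keep the argument in [-1, 1]
        polys = [torch.ones_like(s), s]                  # L_0(s) = 1, L_1(s) = s
        for n in range(1, self.degree):
            # L_{n+1}(s) = ((2n + 1) s L_n(s) - n L_{n-1}(s)) / (n + 1)
            polys.append(((2 * n + 1) * s * polys[-1] - n * polys[-2]) / (n + 1))
        basis = torch.cat(polys, dim=-1)                 # shape: (batch, degree + 1)
        return self.mix(basis)

# Example: stack two blocks to map a 1-D collocation point to a scalar g(x).
net = nn.Sequential(LegendreBlock(1, 16, degree=5), LegendreBlock(16, 1, degree=5))
x = torch.linspace(0.0, 6.0, 8).reshape(-1, 1).requires_grad_(True)
print(net(x).shape)   # torch.Size([8, 1])
```

A ChebyshevBlock of the same shape is obtained by replacing the recurrence line with \(C_{n+1}(s)=2sC_{n}(s)-C_{n-1}(s)\) from (11).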
Figure 3 shows an example network that employs these blocks in this architecture. We utilized this model in the rest of the work. This architecture can be used to learn classification or regression tasks. In addition, it can learn the hidden dynamics of a differential equation. This is done by defining the equation's residual as the network's loss function. This paper focuses on solving the Falkner-Skan type differential equation using this network. In the following, we will describe the work in detail and present the proposed Algorithm 1. **Step 1.**_To fit the network, we need a training data set. This data set for a specific interval, possibly an unbounded domain, can be produced in different ways, such as equidistant points and random nodes. In spectral analysis, these are known as collocation points._ **Step 2.**_In our case, the dimensionality of the input and output is one. However, the proposed model can be easily adjusted for solving multi-dimensional problems such as partial differential equations. Figure 3 shows the network we have used for our simulation. This model is obtained by running a hyperparameter optimization algorithm._ Figure 2: Legendre and Chebyshev Blocks **Step 3**.: _Now we can define the residual function of the (3) according to the boundary conditions (4) and the loss function:_ \[Residual(x)=g^{\prime\prime\prime}(x)+\alpha g(x)g^{\prime\prime}(x)+\beta(1-(g^{ \prime}(x))^{2}). \tag{17}\] \[Loss(x)=\frac{1}{n}\sum_{i}Residual(x_{i})^{2}+[g(0)^{2}+g^{\prime}(0)^{2}+g^{ \prime}(\infty)^{2}]. \tag{18}\] **Step 4**.: _In this step, the network should be trained. These can be done using a first or second-order gradient-based optimization technique. In this paper, we recommend a two-stage approach. To do these, we first use the Adam optimizer to find suboptimal weights. In the second stage, the LBFGS algorithm with the initial weights obtained in the previous stage._ ## 4 Results and Discussion This section presents the application of two proposed models, namely Legendre Deep Neural Network (LDNN) that only employs the Legendre Block, and Legendre-Chebyshev Deep Neural Network (LCDNN), to obtain solutions of the General Falkner-Skan equation for different values of \(\alpha\) and \(\beta\). Notably, the flows mentioned in Table 1 are widely examined and correspond to the \((\alpha,\beta)\) pairs in equation (3). \begin{table} \begin{tabular}{c l} \hline **Input** & **:**\(\alpha\) and \(\beta\) \\ **Input** & **:**\(D\) **As** domain \\ **Input** & **:**\(n\) **As** number of discretization \\ **Input** & **:** Optimizers parameters \(epoch_{a},lr_{a},epoch_{l},lr_{l},eps\) \\ **Initialize** & \(X\gets GenerateTrainingSet(D,n)\) \\ & \(M\gets LegendreChebyshevModel()\) & // Based on Figure (3) \\ **Function** GetLoss(\(X\)): \\ **compute**: Derivatives \(g^{\prime}(x),g^{\prime\prime}(x),g^{\prime\prime\prime}(x)\) \\ **compute**: Boundaries & // Based on Eq. (4) \\ **compute**: Residual & // Based on Eq. (17) \\ **compute**: Loss & // Based on Eq. 
(18) \\ **return** Loss \\ **for**\(i=0\)**to**\(epoch_{a}\)**do** \\ **set** & **:**\(loss\gets GetLoss(X)\) **As** initial solution \\ **solve** & **:**\(AdamOptimizer(lr_{a})\) \\ **set** & **:** obtained solutions to \(X\) \\ **end** \\ **\(previous\gets 0\)**for**\(i=0\)**to**\(epoch_{l}\)**do** \\ **set** & **:**\(loss\gets GetLoss(X)\) \\ **solve** & **:**\(LBFGSOptimizer(lr_{l})\) \\ **set** & **:** obtained solutions to \(X\) \\ **if**\(abs(previous-loss)<eps\)**then** \\ **// Converged** \\ **break** \\ **end** \\ **end** \\ **end** \\ **end** \\ **end** \\ **Output** & **:** solutions \(X\) \\ \hline \end{tabular} \end{table} Table 1: Some famous particular types of Falkner-Skan flows. We consider domain \([0,6]\) with \(18000\) discrete points as the training set. We used the settings shown in Table 2 as optimizers parameters. The obtained values of \(g^{\prime\prime}(0)\) for listed flows and corresponding errors obtained by the present methods and the comparison with the results of previous methods at Blasius flow where \(\alpha=0.5\) and \(\beta=0\) are shown in Table 3. Also, the graphs of the loss and residual functions and the comparison between exact values with our predicted values for this case are shown in Figure 4. Furthermore, we conducted various experiments to confirm the effectiveness of LCDNN in solving the Falkner-Skan problem in the Blasius Flow. To determine the accuracy of the numerical results, we computed the error, for \(n\) test data, using the following criteria: \[MSE=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-y_{i}^{pred})^{2},\] \[MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-y_{i}^{pred}|.\] For Pohlhausen-Flow where \(\alpha=0\) and \(\beta=1\), Ames [27] shows the solution as \[g^{\prime\prime}(0)=\frac{2}{\sqrt{3}}\approx 1.154700538.\] Table 4 compares the benchmark solutions for various flows listed in Table 1 using both the LDNN and LCDNN methods. The results demonstrate that the LCDNN method exhibits greater accuracy than the LDNN method, and the solutions obtained by LCDNN are consistent with the previously computed solutions of the Falkner-Skan problem. Tables 5 and 6 present the obtained values of \(g^{\prime\prime}(0)\) using our proposed methods along with a comparison with the QLM method in [1], Taylor method in [28], and Higher-order method in [29]. The results are presented for various values of \(\alpha\) and \(\beta\). \begin{table} \begin{tabular}{l c c c} \hline \hline Optimizer & Epoch & Learning Rate & \(eps\) \\ \hline Adam & 100 & 0.015 & 1e-10 \\ LBFGS & 10000 & 0.015035 & 1e-10 \\ \hline \hline \end{tabular} \end{table} Table 2: Optimizers parameters Figure 3: Legendre-Chebyshev Deep Neural Network Architecture \begin{table} \begin{tabular}{l c c c c} \hline \hline \(\beta\) & LCDNN & LDNN & Taylor[28] & Higher-order[29] \\ \hline -0.1800 & 0.128635 & 0.125828 & 0.128637 & 0.128638 \\ -0.1000 & 0.319270 & 0.319447 & 0.319270 & 0.319270 \\ 0.0000 & 0.469600 & 0.469872 & 0.469600 & 0.469600 \\ 0.5000 & 0.927680 & 0.927891 & 0.927680 & 0.927680 \\ 1.0000 & 1.232590 & 1.232883 & 1.232589 & 1.232588 \\ 2.0000 & 1.687218 & 1.687618 & 1.687218 & 1.687218 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of \(g^{\prime\prime}(0)\) values obtained by the present methods with previous results where \(\alpha=1\) and \(\beta\in[10,40]\). 
\begin{table} \begin{tabular}{l c} \hline \hline \(\beta\) & LCDNN & LDNN \\ \hline 10 & 3.675234 & 3.675234 & 3.675234 \\ 15 & 4.491488 & 4.492465 & 4.491487 \\ 20 & 5.180718 & 5.180718 & 5.180718 \\ 30 & 6.338208 & 6.339053 & 6.338209 & 6.338208 \\ 40 & 7.314785 & 7.315768 & 7.314785 & 7.314785 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of \(g^{\prime\prime}(0)\) values obtained by the present methods with previous results where \(\alpha=1\) and \(\beta\in[10,40]\). \begin{table} \begin{tabular}{l c} \hline \hline \(\beta\) & LCDNN & LDNN \\ \hline 10 & 3.675234 & 3.675234 & 3.675234 \\ 15 & 4.491488 & 4.492465 & 4.491487 \\ 20 & 5.180718 & 5.180718 & 5.180718 \\ 30 & 6.338208 & 6.339053 & 6.338209 & 6.338208 \\ 40 & 7.314785 & 7.315768 & 7.314785 & 7.314785 \\ \hline \hline \end{tabular} \end{table} Table 3: (a) Solutions and (b) Errors for Falkner-Skan problem at Blasius-Flow \begin{table} \begin{tabular}{l c} \hline \hline Error Name & Value \\ \hline \(MSE\) & 8.942052e-08 \\ \(l^{1}\)-norm & 8.348959e-02 \\ \(l^{2}\)-norm & 5.179397e-03 \\ \(l^{\infty}\)-norm & 3.911157e-04 \\ \(MAE\) & 2.782986e-04 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of \(g^{\prime\prime}(0)\) values obtained by the present methods with standard solutions for each flow. \begin{table} \begin{tabular}{l c c c c} \hline \hline Type of Flow & LCDNN & LDNN & \(g^{\prime\prime}(0)\) \\ \hline Pohlhausen [19] & 1.154709 & 1.154779 & 1.154700 \\ Homann [20] & 1.311937 & 1.312304 & 1.311938 \\ Hiemenz [21] & 1.232548 & 1.230178 & 1.232589 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of \(g^{\prime\prime}(0)\) values obtained by the present methods with previous results where \(\alpha=1\) and \(\beta\in[-0.18,2]\). ## 5 Conclusion In this study, we have proposed an orthogonal neural block for efficiently learning the solution to non-linear differential equations. The presented architecture leverages the Chebyshev and the Legendre orthogonal polynomials as an activation function of the neurons. These polynomials benefit from various mathematical properties, including operational matrices of the derivative. This property reduces the time complexity of the backpropagation by prescribing the backward derivative operations. The proposed block was applied in a deep-learning architecture to solve some non-linear boundary value problems of the Falkner-Skan type. To validate the effectiveness and accuracy of this approach, various settings of this equation, including Blasius, Hastings, and Craven flow were simulated. Then several comparisons have been made with other numerical and analytical methods, and the results showed a demonstrating agreement between our results with previous works. The developed method was implemented using the PyTorch framework on Python 3.10 with Graphic Processing Unit (GPU) to accelerate the learning process. As a recommended avenue for future work, we suggest developing the method to tune the network's hyperparameters and layers to further improve the learning speed and achieve even more accurate results. In addition, we suggest validating this technique to high dimensional differential equations with initial and boundary conditions. Figure 4: (a) Graph of loss function vs. epoch. (b) Graph of residual vs. x. (c) Comparison of exact values and predicted values. Acknowledgements The authors affirm that there are no competing interests to disclose in relation to the publication of this paper.
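Independently of the neural solver, the tabulated values of \(g^{\prime\prime}(0)\) can be cross-checked with a classical shooting method applied directly to equation (3) with the boundary conditions (4). The SciPy sketch below is our own code, not part of the paper: it truncates the far boundary at a finite \(x_{\max}\) and solves for the missing initial curvature, and the root bracket and blow-up handling are heuristic choices made for the flows in Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def falkner_skan_gpp0(alpha, beta, x_max=10.0):
    """Shooting estimate of g''(0) for g''' + alpha*g*g'' + beta*(1 - g'^2) = 0."""
    def rhs(x, y):
        g, gp, gpp = y
        return [gp, gpp, -alpha * g * gpp - beta * (1.0 - gp ** 2)]

    def miss(s):
        # Trial value g''(0) = s; measure how far g'(x_max) is from the target 1.
        sol = solve_ivp(rhs, (0.0, x_max), [0.0, 0.0, s], rtol=1e-9, atol=1e-9)
        gp_end = sol.y[1, -1]
        if not (sol.success and np.isfinite(gp_end)):
            return 10.0   # treat a blow-up as overshooting the far boundary condition
        return gp_end - 1.0

    return brentq(miss, 0.05, 3.0)   # heuristic bracket covering the flows in Table 1

print(round(falkner_skan_gpp0(1.0, 0.0), 5))   # ~0.4696,  cf. Table 5 (alpha = 1, beta = 0)
print(round(falkner_skan_gpp0(0.0, 1.0), 5))   # ~1.1547,  cf. 2/sqrt(3) for Pohlhausen flow
```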
2302.00665
Necessary and sufficient conditions for posterior propriety for generalized linear mixed models
Generalized linear mixed models (GLMMs) are commonly used to analyze correlated discrete or continuous response data. In Bayesian GLMMs, the often-used improper priors may yield undesirable improper posterior distributions. Thus, verifying posterior propriety is crucial for valid applications of Bayesian GLMMs with improper priors. Here, we consider the popular improper uniform prior on the regression coefficients and several proper or improper priors, including the widely used gamma and power priors on the variance components of the random effects. We also construct an approximate Jeffreys' prior for objective Bayesian analysis of GLMMs. We derive necessary and sufficient conditions for posterior propriety for Bayesian GLMMs where the response variables have distributions from the exponential family. For the two most widely used GLMMs, namely, the binomial and Poisson GLMMs, we further refine our results by providing easily verifiable conditions compared to the currently available results. Finally, we use examples involving one-way and two-way random effects models to demonstrate the theoretical results derived here.
Yalin Rao, Vivekananda Roy
2023-02-01T18:50:50Z
http://arxiv.org/abs/2302.00665v1
# Necessary and sufficient conditions for posterior propriety for generalized linear mixed models ###### Abstract Generalized linear mixed models (GLMMs) are commonly used to analyze correlated discrete or continuous response data. In Bayesian GLMMs, the often-used improper priors may yield undesirable improper posterior distributions. Thus, verifying posterior propriety is crucial for valid applications of Bayesian GLMMs with improper priors. Here, we consider the popular improper uniform prior on the regression coefficients and several proper or improper priors, including the widely used gamma and power priors on the variance components of the random effects. We also construct an approximate Jeffreys' prior for objective Bayesian analysis of GLMMs. We derive necessary and sufficient conditions for posterior propriety for Bayesian GLMMs where the response variables have distributions from the exponential family. For the two most widely used GLMMs, namely, the binomial and Poisson GLMMs, we further refine our results by providing easily verifiable conditions compared to the currently available results. Finally, we use examples involving one-way and two-way random effects models to demonstrate the theoretical results derived here. _Key words and phrases:_ Bayesian GLMM, Diffuse prior, Improper prior, Jeffreys' prior, Objective, Bayes, Reference prior, Variance component ## 1 Introduction Generalized linear mixed models (GLMMs) are widely used statistical models. Suppose that the \(i\)th response variable \(Y_{i}\) has the density \[p(y_{i}\mid\theta_{i},\rho)=\exp[\rho(y_{i}\theta_{i}-b(\theta_{i}))+d(y_{i}, \rho)],\quad i=1,\ldots,n, \tag{1}\] where \(\rho\) is the scalar dispersion parameter, \(\theta_{i}\) is the canonical parameter and the functions \(b(\theta_{i})\), \(d(y_{i},\rho)\) are known. Conditional on \(\theta_{i}\) and \(\rho\), assume that \(Y_{i}\)'s are independent. We denote the observed data by \(y=(y_{1},\ldots,y_{n})\). Binomial, Poisson, and several other popular distributions can be represented in the form of the exponential family (1) (McCullagh and Nelder, 2019). Here, we assume that \(\rho\) is known and is one, which is the case for binomial and Poisson families. Let \(X\) and \(Z\) be the \(n\times p\) matrix of the \(p\) predictor variables and the \(n\times q\) random effects design matrix, respectively. Suppose \(x_{i}^{\top}\) and \(z_{i}^{\top}\) indicate the \(i\)th row of \(X\) and \(Z\), respectively, \(i=1,...,n\). Let \(\beta\in\mathbb{R}^{p}\) be the regression coefficients vector and \(u\in\mathbb{R}^{q}\) be the random effects vector. In GLMMs, the canonical parameter \(\theta_{i}\) is related to \(X\), \(Z\), \(\beta\) and \(u\) by \(\theta_{i}=\theta(\eta_{i})\), where \(\eta_{i}=x_{i}^{\top}\beta+z_{i}^{\top}u\) and \(\theta\) is a monotonic differentiable function, referred to as the \(\theta\)-link. In general, GLMMs can be built with a link function that connects the expectation of \(Y=(Y_{1},\ldots,Y_{n})\) with \(X\) and \(Z\). Note that, \(\mathrm{E}(Y_{i})=b^{\prime}(\theta_{i})\). When \(\theta_{i}=\eta_{i}\), the link is said to be canonical. 
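As a reminder of how a familiar distribution fits the form (1) (a standard identity, included here only for concreteness), the Bernoulli case with success probability \(p_{i}\) can be written as
\[
p(y_{i}\mid p_{i})=p_{i}^{y_{i}}(1-p_{i})^{1-y_{i}}
=\exp\Big[y_{i}\underbrace{\log\tfrac{p_{i}}{1-p_{i}}}_{\theta_{i}}
-\underbrace{\log\big(1+e^{\theta_{i}}\big)}_{b(\theta_{i})}\Big],
\qquad d(y_{i},\rho)=0,\ \rho=1,
\]
so the canonical link is the logit, \(\theta_{i}=\eta_{i}=x_{i}^{\top}\beta+z_{i}^{\top}u\), and \(b^{\prime}(\theta_{i})=e^{\theta_{i}}/(1+e^{\theta_{i}})=\mathrm{E}(Y_{i})\).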
Assuming \(u\sim N(0,\Psi)\) where \(\Psi\) is a positive definite matrix, the likelihood function of \((\beta,\Psi)\) is \[L(\beta,\Psi\mid y)=\int_{\mathbb{R}^{q}}\left[\prod_{i=1}^{n}\exp[(y_{i} \theta_{i}-b(\theta_{i}))+d(y_{i})]\right]\phi_{q}(u;0,\Psi)du, \tag{2}\] where \(\phi_{q}(u;0,\Psi)\) denotes the probability density function of the \(q\)-dimensional normal distribution with mean vector \(0\), covariance matrix \(\Psi\) evaluated at \(u\) and \(d(y_{i})=d(y_{i},\rho)\). Thus, the likelihood function of a GLMM is available only as an intractable integral. In the Bayesian framework, one specifies priors on \(\beta\) and \(\Psi\). In situations where there is little prior information, statisticians often use improper priors to express ignorance. We consider an improper prior on \(\beta\), \(\pi(\beta)\propto 1\) and the prior on \(\Psi\) is denoted by \(\pi(\Psi)\). We assume \(\beta\) and \(\Psi\) are apriori independent. The corresponding posterior density of \((\beta,\Psi)\) is \[\pi(\beta,\Psi\mid y)=\frac{1}{c(y)}L(\beta,\Psi\mid y)\pi(\beta)\pi(\Psi), \tag{3}\] where \(c(y)=\int_{\mathfrak{A}}\int_{\mathbb{R}^{p}}L(\beta,\Psi\mid y)\pi(\beta)\pi (\Psi)d\beta d\Psi\) is the marginal density of \(y\) and \(\mathfrak{A}\) is an appropriate space of positive definite matrices where \(\Psi\) lies. If \(c(y)\) is finite, that is, if \[c(y)=\int_{\mathfrak{A}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\left[\prod_ {i=1}^{n}\exp[(y_{i}\theta_{i}-b(\theta_{i}))+d(y_{i})]\right]\phi_{q}(u;0, \Psi)du\pi(\Psi)d\beta d\Psi<\infty, \tag{4}\] then the joint posterior density of \((\beta,\Psi)\) is proper. It is important to establish that \(c(y)\) is finite before making inference based on the posterior density (3). Also, since (3) is intractable in the sense that means with respect to this density are not available in closed-form, generally Markov chain Monte Carlo (MCMC) algorithms are used to explore the posterior density (3) (Roy, 2022). However, it is known that the standard MCMC estimator converges to zero with probability one if the MCMC chain corresponds to an improper posterior distribution (Athreya and Roy, 2014). Further, in general, MCMC samplers may be incapable of providing a red flag when the posterior distribution is improper (Hobert and Casella, 1996). Thus, one has to undertake theoretical analysis to establish posterior propriety, that is, \(c(y)<\infty\). In this article, we study conditions guaranteeing (4). A few articles in the literature address the posterior propriety of Bayesian GLMMs. Natarajan and McCulloch (1995) derive necessary and sufficient conditions for the posterior propriety for a single variance component Bernoulli GLMM under the assumption that \(\beta\) is known. In Natarajan and Kass (2000), sufficient and necessary conditions for posterior propriety for the Bernoulli GLMM with a general covariance matrix \(\Psi\) and the improper uniform prior on \(\beta\) have been studied. Natarajan and Kass (2000) also provide sufficient conditions for posterior propriety for other GLMMs. In Chen, Shao and Xu (2002), sufficient conditions for the posterior propriety include that for at least \(p\) observations, \(y_{i}>0\) for the Poisson family and \(0<y_{i}<m_{i}\) for the binomial family where \(m_{i}\) is the maximum possible value for the \(i\)th binomial random variable. Chen et al. (2002) also assume that the sub-matrix of \(X\) corresponding to the observations satisfying these conditions has full rank. 
In Natarajan and Kass (2000) also, sufficient conditions for the posterior propriety for Poisson GLMMs include these conditions. In general, Natarajan and Kass's (2000) conditions require some complex integrals to be finite verifying which seems as difficult as directly establishing (4). On the other hand, in Sections 3 and 4 of this article, we present easily verifiable necessary and sufficient conditions for posterior propriety for the binomial and Poisson GLMMs. The prior on \(\Psi\) used in Natarajan and Kass (2000) is not the same as that in Chen et al. (2002), but the techniques employed in the proofs of their sufficient conditions for propriety are similar. Sufficient conditions for posterior propriety for a binary GLMM with a particular prior on \(\Psi\) have been derived in Chen et al.'s (2002) Section 4.2. One of their conditions is that \(Z\) is of full rank. Michalak and Morris (2016) study posterior propriety for GLMMs when an 'exponentiated norm bound' condition holds for the likelihood function. Michalak and Morris (2016) mention that if (1) is a log-concave function of \(\eta\), and the maximum likelihood estimator (MLE) of \(\eta\) exists and is unique, then the exponentiated norm bound condition holds. But, as proved in (Chen and Shao, 2001, Theorem 3.1) for binary models, the necessary and sufficient conditions of posterior propriety and that of the existence of MLE overlap. Thus, the exponentiated norm bound condition assumes as much as the conclusion. Further, unless the prior on \(\Psi\) is proper, Michalak and Morris's (2016) conditions require that \(Z\) is of full rank. Geometric ergodicity of the Gibbs samplers for binary logistic mixed models, binary probit mixed models, and normal linear mixed models with the improper uniform prior on the regression coefficients and proper or improper priors on the variance components, has been explored in Rao and Roy (2021), Wang and Roy (2018) and Roman and Hobert (2012), respectively. Note that the geometric ergodicity of the Markov chain implies that the invariant posterior density is proper. In all of these papers, the main results on geometric ergodicity include different conditions on the random effect matrix \(Z\). Furthermore, in Wang and Roy (2018), the sufficient conditions for posterior propriety for binary GLMMs require a matrix closely related with \(Z\) is of full rank. Here, some sets of our sufficient conditions on posterior propriety of binomial and Poisson GLMMs do _not_ put any restriction on the \(Z\) matrix. When \(Z\) is a matrix consisting of only zeros and ones, which is often the case in practice, the assumption of the full rank of \(Z\) may not hold. Indeed, in Section 6 of this article, we provide an example where \(Z\) is not of full rank, but the results of this paper can be used to establish the posterior propriety. The rest of the manuscript is organized as follows. In Section 2, we provide sufficient and necessary conditions for posterior propriety for the general exponential family GLMMs. In Section 3, we provide sufficient conditions for posterior propriety for the binomial and Poisson GLMMs. Section 4 presents the necessary conditions for posterior propriety for the binomial data, including the special case of binary data under both cases when \(Z\) is full rank or not full rank. In Section 5, we derive an approximate Jeffreys' prior for the parameters in GLMMs. 
In Section 6, we consider examples of one-way and two-way random effects models and check posterior propriety by employing the conditions derived in this article. Finally, some concluding remarks appear in Section 7. Proofs of most of the theoretical results are given in the Appendix. ## 2 Posterior propriety for GLMMs ### Sufficient conditions for posterior propriety In this section, we consider the propriety of the posterior density (3). **Theorem 1**.: _Assume the following conditions are satisfied_ 1. \(X\) _is of full rank, that is, rank_ \((X)=p\)_;_ 2. _The prior of_ \(\Psi\)_,_ \(\pi(\Psi)\) _is proper;_ 3. _For_ \(i=1,\ldots,n\)_,_ \(\exp[(y_{i}\theta(\eta_{i})-b(\theta(\eta_{i})))]\leq M\) _for some constant_ \(M\)_. Let_ \(X_{s}=(x_{s_{1}},\ldots,x_{s_{p}})^{\top}\) _be a_ \(p\times p\) _full rank sub-matrix of_ \(X\) _with_ \(\int_{\mathbb{R}}\exp[(y_{s_{i}}\theta(\eta_{s_{i}})-b(\theta(\eta_{s_{i}})))]d \eta_{s_{i}}<\infty\) _for_ \(i=1,\ldots,p\)_._ _Then (4) holds, that is, the resulting posterior density is proper._ **Remark 1**.: _In Chen et al. (2002), sufficient conditions for posterior propriety for binomial and Poisson data with the Wishart prior on \(\Psi^{-1}\) have been derived. Here, we consider a general proper prior on \(\Psi\), although the proof here is in the same line as that in Chen et al. (2002). For completeness, we include the proof here. For the integration in condition 3 to be finite, we need the restrictions of \(y_{s_{i}}>0\) for the Poisson GLMMs and \(0<y_{s_{i}}<m_{s_{i}}\) for the binomial GLMMs, where \(m_{s_{i}}\) is the maximum possible value for the \(s_{i}\)th binomial response, \(i=1,\ldots,p\). On the other hand, in the sufficient conditions derived in Section 3, we do not have these constraints on the observed responses._ **Remark 2**.: _In Sun, Tsutakawa and He's (2001) Theorem 4, sufficient conditions for posterior propriety for exponential family GLMMs have been studied. They allow the scalar dispersion parameter \(\rho\) to be unknown while we assume it to be known here. As in Theorem 1 here, Sun et al. (2001) and Michalak and Morris (2016) assume that \(X\) is of full rank. Sun et al. (2001) and Michalak and Morris (2016) allow both proper and improper priors on \(\Psi\) whereas Theorem 1 here considers only proper priors on \(\Psi\). Under improper priors, Michalak and Morris (2016) requires \(Z\) to be full rank and complicated integrals to be finite, which seems hard to verify. Furthermore, as mentioned in the introduction, the method considered in Michalak and Morris (2016) to verify their exponentiated norm bound condition is tantamount to checking posterior propriety, at least in some models. Sun et al. (2001) use normal and gamma distribution to demonstrate that the finite integration condition in their Theorem 4 can be satisfied. The integration in condition 3 here holds for several distributions including normal, gamma, binomial, and Poisson models. Also, Theorem 1 here does not include any condition on the rank of \(Z\) whereas the conditions in Sun et al.'s (2001) Theorem 4 put some constraints on it._ ### Necessary conditions for posterior propriety We have the following necessary conditions for the propriety of (3). 
**Theorem 2**.: _Assume \(\pi(\Psi)\) is a proper prior for \(\Psi\) and \(b(\theta)\) in (1) is a monotone function, then rank \((X)=p\) is a necessary condition for the propriety of the posterior density of \((\beta,\Psi)\), that is (4), in exponential family GLMMs._ **Remark 3**.: _The function \(b(\theta)\) is monotone for binomial, Poisson, gamma, and inverse Gaussian families. On the other hand, this condition does not hold for the normal distribution._ **Remark 4**.: _The necessary condition of rank \((X)=p\) with an extra assumption on the link function is proved for generalized linear models (GLMs) for binary data in Chen and Shao's (2001) Theorem 2.2. In Sun et al.'s (2001) Theorem 2, the assumption of \(X\) full rank is included for establishing the necessary conditions for posterior propriety for linear mixed models._ ## 3 Sufficient conditions for posterior propriety for some popular GLMMs In this section, we consider posterior propriety for the two most common non-Gaussian GLMMs: binomial and Poisson GLMMs. In the case when \(Y_{1},Y_{2},...,Y_{n}\) are binomial random variables, we assume \(\mathrm{E}(Y_{i})=m_{i}F(\eta_{i})\), where \(F\) is a cumulative distribution function (cdf). The two most frequently used binomial GLMMs are logistic and probit GLMMs, where \(F(\cdot)\) is the cdf of the standard logistic and the standard normal random variables, respectively. If \((Y_{1},Y_{2},...,Y_{n})\) is the vector of Poisson response variables, we assume \(\mathrm{E}(Y_{i})=\exp(\eta_{i})\). Thus, for the binomial GLMMs, we have \[Y_{i}\mid\beta,u,\tau\stackrel{{ ind}}{{\sim}}Binomial\left(m_{i},F(x_{i}^{\top}\beta+z_{i}^{\top}u)\right) \text{ for }i=1,\ldots,n, \tag{5}\] and for the Poisson GLMMs, we have \[Y_{i}\mid\beta,u,\tau\stackrel{{ ind}}{{\sim}}Poisson(\exp[x_{i }^{\top}\beta+z_{i}^{\top}u])\text{ for }i=1,\ldots,n. \tag{6}\] Without loss of generality, we assume there are \(r\) random effects \(u_{1}^{\top},u_{2}^{\top},\ldots,u_{r}^{\top}\), where \(u_{j}\) is a \(q_{j}\times 1\) vector with \(q_{j}>0\), and \(q_{1}+q_{2}+\cdots+q_{r}=q\). Let \(u_{j}\stackrel{{ ind}}{{\sim}}N(0,(1/\tau_{j})\mathrm{I}_{q_{j}})\), where \(\tau_{j}>0\) for \(j=1,\ldots,r\). Thus, \(u=(u_{1}^{\top},\ldots,u_{r}^{\top})^{\top}\) and let \(\tau=(\tau_{1},\ldots,\tau_{r})\). The likelihood function for \((\beta,\tau)\) is \[L(\beta,\tau\mid y)=\int_{\mathbb{R}^{q}}\prod_{i=1}^{n}p(y_{i}\mid\beta,u) \phi_{q}(u;0,D(\tau)^{-1})du, \tag{7}\] where \(p(y_{i}\mid\beta,u)\) is the pmf of \(y_{i}\) and \(D(\tau)^{-1}=\oplus_{j=1}^{r}(1/\tau_{j})\mathrm{I}_{q_{j}}\) with \(\oplus\) denoting the direct sum. For binomial GLMMs, \(p(y_{i}\mid\beta,u)\) follows from (5) and for Poisson GLMMs, \(p(y_{i}\mid\beta,u)\) follows from (6). As before, we consider an improper prior on \(\beta\), \(\pi(\beta)\propto 1\). The prior for \(\tau_{j}\) is \[\pi(\tau_{j})\propto\tau_{j}^{a_{j}-1}e^{-\tau/b_{j}},\ j=1,...,r, \tag{8}\] which may be proper or improper depending on the values of \(a_{j}\) and \(b_{j}\). Also, we assume that \(\beta\) and \(\tau\) are apriori independent and all the \(\tau_{j}\)s are also apriori independent. Here, we will derive the conditions on the values of \(a_{j}\) and \(b_{j}\) for guaranteeing posterior propriety. 
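To fix ideas about the notation in (5)-(8), the following NumPy sketch (our own illustration, with arbitrary dimensions and hyperparameters) simulates a logistic GLMM with \(r=2\) random-effect blocks, \(u_{j}\sim N_{q_{j}}(0,\mathrm{I}_{q_{j}}/\tau_{j})\), and evaluates the prior (8) up to a normalizing constant; the values of \(a_{j}\) and \(b_{j}\) are chosen only so that \(a_{j}>p/2\) and \(b_{j}>0\), as required later in Theorem 3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions: n observations, p fixed effects, r = 2 random-effect blocks.
n, p = 60, 2
q = [3, 4]                      # q_1, q_2, so q = q_1 + q_2 = 7
tau = np.array([2.0, 0.5])      # precisions: u_j ~ N(0, I_{q_j} / tau_j)

X = np.column_stack([np.ones(n), rng.normal(size=n)])     # n x p fixed-effect design
# Z: 0/1 incidence matrix, one column block per random effect (often not full rank).
Z1 = np.eye(q[0])[rng.integers(0, q[0], size=n)]          # n x q_1
Z2 = np.eye(q[1])[rng.integers(0, q[1], size=n)]          # n x q_2
Z = np.hstack([Z1, Z2])                                   # n x q

beta = np.array([-0.5, 1.0])
u = np.concatenate([rng.normal(scale=1.0 / np.sqrt(t), size=k) for t, k in zip(tau, q)])

eta = X @ beta + Z @ u                                    # linear predictor
prob = 1.0 / (1.0 + np.exp(-eta))                         # logistic cdf F
m = rng.integers(1, 6, size=n)                            # binomial sizes m_i
y = rng.binomial(m, prob)                                 # responses as in (5)

# Gamma-type prior (8) on tau_j, up to a constant; a = 1.5 > p/2 and b > 0.
a, b = 1.5, 2.0
log_prior_tau = (a - 1) * np.log(tau) - tau / b
print(y[:10], log_prior_tau)
```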
### Sufficient conditions for posterior propriety for binomial data For binomial data, the likelihood function for \((\beta,\tau)\) in (7) becomes \[L(\beta,\tau\mid y)=\int_{\mathbb{R}^{q}}\left[\prod_{i=1}^{n}\binom{m_{i}}{y_ {i}}\Big{[}F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}^{y_{i}}\Big{[}1-F(x_{i}^ {\top}\beta+z_{i}^{\top}u)\Big{]}^{m_{i}-y_{i}}\right]\phi_{q}(u;0,D(\tau)^{- 1})du.\] Thus the posterior density \(\pi(\beta,\tau\mid y)=L(\beta,\tau\mid y)\pi(\beta)\pi(\tau)/c(y)\) is proper if and only if \[c(y)=\int_{\mathbb{R}^{r}_{+}}\int_{\mathbb{R}^{p}}\int_{\mathbb{ R}^{q}}\left[\prod_{i=1}^{n}\binom{m_{i}}{y_{i}}\Big{[}F(x_{i}^{\top}\beta+ z_{i}^{\top}u)\Big{]}^{y_{i}}\Big{[}1-F(x_{i}^{\top}\beta+z_{i}^{\top}u) \Big{]}^{m_{i}-y_{i}}\right]\] \[\times\phi_{q}(u;0,D(\tau)^{-1})du\pi(\beta)\pi(\tau)d\beta d\tau<\infty. \tag{9}\] Before we present the results for binomial data, we introduce some notations. Let \(N=\{1,2,...,n\}\). As in Roy and Kaiser (2013), we partition the index set as \(N=I_{1}\uplus J_{2}\uplus J_{3}\), where we define \(I_{1}=\{i\in N:y_{i}=0\}\), \(I_{2}=\{i\in N:y_{i}=m_{i}\}\), \(I_{3}=\{i\in N:1\leq y_{i}\leq m_{i}-1\}\) and \(k\) is the cardinality of \(I_{3}\). Recall that \(X\) is the \(n\times p\) design matrix with the \(i\)th row \(x_{i}^{\top}\). Let \(\breve{X}\) be the \(k\times p\) matrix with rows \(x_{i}^{\top}\) where \(i\in I_{3}\) and the \((n+k)\times p\) matrix \(X_{\triangle}=(X^{\top}\breve{X}^{\top})^{\top}\) with the \(i\)th row \(x_{\triangle i}^{\top}\). Define \(X_{\triangle}^{*}\) be the \((n+k)\times p\) matrix with the \(i\)th row as \(t_{i}x_{\triangle i}^{\top}\) where \(t_{i}=1\) if \(i\in I_{1}\cup I_{3}\), \(t_{i}=-1\) if \(i\in I_{2}\) and \(t_{n+j}=-1\) for \(j=1,\ldots,k\). Also recall \(Z\) is the \(n\times q\) random effect matrix with the \(i\)th row \(z_{i}^{\top}\). Define \(\breve{Z}\) be the \(k\times q\) matrix with rows \(z_{i}^{\top}\) where \(i\in I_{3}\) and \(Z_{\triangle}=(Z^{\top}\breve{Z}^{\top})^{\top}\) with the \(i\)th row \(z_{\triangle i}^{\top}\). Let \(Z_{\triangle}^{*}\) be a \((n+k)\times q\) matrix with the \(i\)th row as \(t_{i}z_{\triangle i}^{\top}\). The following Theorem states the conditions for posterior propriety for binomial data when the prior for \(\beta\) is \(\pi(\beta)\propto 1\) and the prior for \(\tau_{j}\) is as in (8). **Theorem 3**.: _Assume the following conditions are satisfied_ 1. \(X\) _is of full rank, and there exists a positive vector_ \(e>0\) _such that_ \(e^{\top}X_{\triangle}^{*}=0\)_;_ 2. \(a_{j}>p/2\)_,_ \(b_{j}>0\) _for_ \(j=1,...,r\)_;_ 3. \(\mathrm{E}|\delta_{\circ}|^{p}<\infty\)_, where_ \(\delta_{\circ}\sim F\)_._ _Then (9) holds._ **Remark 5**.: _The condition 1 in Theorem 3 can be easily verified by a simple optimization method (Roy and Hobert, 2007) using publicly available software packages (see Section 6 for details). Also, for the probit or logistic GLMMs, the condition 3 is satisfied as moments of all orders for the normal and logistic random variables are finite._ **Remark 6**.: _Posterior propriety for nonlinear mixed models is studied in Chen et al. (2002). For binomial data, the sufficient conditions in that paper include \(0<y_{i}<m_{i}\) for at least \(p\) observations with the corresponding sub-matrix of \(X\) being full rank. These conditions do not hold for the examples in Section 6, whereas the conditions of Theorem 3 are satisfied._ **Remark 7**.: _Sun et al. 
(2001) investigate the necessary and sufficient conditions for posterior propriety for linear mixed models. In their Theorem 2, the sufficient conditions include rank conditions for \(Z\) and some conditions on \(q_{j}\) while we do not have any conditions on \(Z\) or \(q_{j}\) in Theorem 3._ **Remark 8**.: _Sufficient conditions for geometric ergodicity of certain Gibbs samplers have been established for linear mixed models (Roman and Hobert, 2012) and GLMMs (Rao and Roy, 2021; Wang and Roy, 2018). These conditions also imply posterior propriety, and the conditions involve \(Z\) and \(q_{j}\). On the other hand, Theorem 3 here does not put any constraints on \(Z\) or \(q_{j}\)._ **Remark 9**.: _Under the improper uniform prior on \(\beta\) and proper priors on variance components, Michalak and Morris (2016) provide conditions for posterior propriety that does not put restrictions on the rank of \(Z\). But, Michalak and Morris (2016) assume the exponentiated norm bound condition._ #### 3.1.1 Sufficient conditions for posterior propriety for binary data Although the Bernoulli distribution is a special case of the binomial distribution, the need for analyzing binary data is ubiquitous. Thus, in this section, we separately state the sufficient conditions for posterior propriety for the important special case when \(y_{i}=0\) or \(1\), \(i=1,\ldots,n\), the prior for \(\beta\) is \(\pi(\beta)\propto 1\) and the prior for \(\tau_{j}\) is as in (8). **Corollary 1**.: _Assume the following conditions are satisfied_ 1. \(X\) _is of full rank, and there exists a positive vector_ \(e>0\) _such that_ \(e^{\top}X^{*}=0\) _where_ \(X^{*}\) _is an_ \(n\times p\) _matrix with the ith row_ \(c_{i}x_{i}^{\top}\) _and_ \(c_{i}=1-2y_{i}\)_,_ \(i=1,\ldots,n\)_;_ 2. \(a_{j}>p/2\)_,_ \(b_{j}>0\) _for_ \(j=1,...,r\)_;_ 3. \(\mathrm{E}|\delta_{\circ}|^{p}<\infty\)_, where_ \(\delta_{\circ}\sim F\)_._ _Then (9) holds for binary responses._ **Remark 10**.: _In Chen et al.'s (2002) Theorem 4.2, the sufficient conditions for posterior propriety for binary data include that \((X,Z)\) is of full rank and there exists a positive vector \(e\) such that \(e^{\top}(X^{*},Z^{*})=0\). Here, \((X^{*},Z^{*})\) is defined in the same way as our \(X^{*}\). Since in practice, \(Z\) is often a matrix consisting of \(0\)'s and \(1\)'s, the full rank assumption of \(Z\) may not hold. Furthermore, in Wang and Roy (2018), the sufficient conditions for posterior propriety for binary data include \(W\), which is closely related to \((X,Z)\), is of full rank, and there exists a positive vector \(e\) such that \(e^{\top}W^{*}=0\). Note that \(W^{*}\) is defined in the same way as our \(X^{*}\). It also includes some conditions on \(a_{j}\)'s and \(q_{j}\)'s. In the above Corollary, we do not assume any conditions on the \(Z\) matrix or \(q_{j}\)'s._ The proof of the Corollary can be derived from the proof of Theorem 3 since \(X^{*}_{\triangle}\) becomes \(X^{*}\) and \(k=0\) in the case of binary responses. 
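Remark 5 notes that condition 1 can be verified with a simple optimization. One direct way to do so (our own sketch, in the spirit of the linear-programming check of Roy and Hobert (2007)) is to test feasibility of the set \(\{e:X^{*\top}e=0,\ e\geq 1\}\); this is equivalent to the existence of a strictly positive \(e\) with \(e^{\top}X^{*}=0\), since any such \(e\) can be rescaled so that all of its entries are at least one. The same function applies to \(X_{\triangle}^{*}\) or \((X_{\triangle}^{*},Z_{\triangle}^{*})\) in place of \(X^{*}\).

```python
import numpy as np
from scipy.optimize import linprog

def positive_null_vector_exists(X_star: np.ndarray) -> bool:
    """Check whether a strictly positive e with e^T X* = 0 exists.

    Feasibility of {e : X*^T e = 0, e >= 1} is tested with a zero-objective LP;
    this is equivalent because any strictly positive solution can be rescaled.
    """
    n_rows, n_cols = X_star.shape
    res = linprog(c=np.zeros(n_rows),
                  A_eq=X_star.T, b_eq=np.zeros(n_cols),
                  bounds=[(1.0, None)] * n_rows,
                  method="highs")
    return res.status == 0   # 0: an optimal (hence feasible) point was found

# Toy binary example: X* has i-th row (1 - 2 y_i) x_i^T, as in Corollary 1.
X = np.column_stack([np.ones(6), [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]])

y_overlap = np.array([0, 1, 0, 1, 0, 1])         # responses interleaved in the covariate
print(positive_null_vector_exists((1 - 2 * y_overlap)[:, None] * X))   # True

y_separated = np.array([0, 0, 0, 1, 1, 1])       # perfectly separated data
print(positive_null_vector_exists((1 - 2 * y_separated)[:, None] * X)) # False: condition 1 fails
```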
### Sufficient conditions for posterior propriety for Poisson GLMMs For Poisson GLMMs, if the log link function is used, the likelihood function for \((\beta,\tau)\) is \[L(\beta,\tau\mid y)=\int_{\mathbb{R}^{q}}\prod_{i=1}^{n}\frac{\exp[-\exp(x_{i} ^{\top}\beta+z_{i}^{\top}u)]\exp[(x_{i}^{\top}\beta+z_{i}^{\top}u)y_{i}]}{y_{ i}!}\phi_{q}(u;0,D(\tau)^{-1})du.\] Thus, if \[c(y)=\int_{\mathbb{R}^{\tau}_{+}}\int_{\mathbb{R}^{\tau}}\int_{\mathbb{R}^{ \tau}}\prod_{i=1}^{n}\frac{\exp[(x_{i}^{\top}\beta+z_{i}^{\top}u)y_{i}]}{\exp[ \exp(x_{i}^{\top}\beta+z_{i}^{\top}u)]y_{i}!}\phi_{q}(u;0,D(\tau)^{-1})du\pi( \beta)\pi(\tau)d\beta d\tau<\infty, \tag{10}\] the posterior density of \((\beta,\tau)\) is proper. In Corollary 2, we provide sufficient conditions for posterior propriety for Poisson data when the prior for \(\beta\) is \(\pi(\beta)\propto 1\) and the prior for \(\tau_{j}\) is as in (8). Let \(y_{(n)}=\max(y_{1},y_{2},\ldots,y_{n})\). Letting \(y_{(n)}=m_{i},i=1,\ldots,n\), we define \(X^{*}_{\triangle}\) as that in Section 3.1. **Corollary 2**.: _Assume the following conditions are satisfied_ 1. \(X\) _is of full rank, and there exists a positive vector_ \(e>0\) _such that_ \(e^{\top}X^{*}_{\triangle}=0\)_;_ 2. \(a_{j}>p/2,\,b_{j}>0\) _for_ \(j=1,...,r\)_._ _Then (10) holds for Poisson GLMMs with the log link._ **Remark 11**.: _In their sufficient conditions for posterior propriety for Poisson GLMMs, both Chen et al.'s (2002) Theorem 3.1 and Natarajan and Kass's (2000) Corollary 1 include \(y_{i}>0\) for at least \(p\) observations. They also require the sub-matrix of \(X\) corresponding to these observations to have full rank. These conditions do not hold for the Poisson GLMMs examples in Section 6, whereas the conditions of Corollary 2 are satisfied._ ### Sufficient conditions for posterior propriety for binomial and Poisson GLMMs with power priors In this Section, we assume \(b_{j}=0\) in (8), that is, \(\tau_{j}\)'s have the so-called power priors, \(\pi(\tau_{j})\propto\tau_{j}^{a_{j}-1},j=1,\ldots,r\). As mentioned in Roman and Hobert (2012), for the two-level normal model, the standard diffuse prior on \(\tau_{j}\) is \(\pi(\tau_{j})\propto\tau_{j}^{-1/2-1}\), which is among the priors recommended by Gelman (2006). For the prior on \(\beta\), we assume \(\pi(\beta)\propto 1\). Under these priors, we provide the following sufficient conditions for posterior propriety for binomial responses with the notations as in Section 3.1. **Theorem 4**.: _Assume the following conditions are satisfied_ 1. \((X,Z)\) _is of full rank, and there exists a positive vector_ \(e>0\) _such that_ \(e^{\top}(X_{\triangle}^{*},Z_{\triangle}^{*})=0\)_;_ 2. \(-q_{j}/2<a_{j}<0\) _for_ \(j=1,...,r\)_;_ 3. \(\mathrm{E}|\delta_{\circ}|^{p-2\Sigma_{j-1}^{\prime}a_{j}}<\infty\)_, where_ \(\delta_{\circ}\sim F\)_._ _Then (9) holds._ **Remark 12**.: _For the power prior on the variance components of the random effects in GLMMs, Chen et al. (2002) examine the sufficient conditions for posterior propriety for binary data in Theorem 4.2. Here, we derive sufficient conditions for posterior propriety for binomial data in GLMMs. The conditions in Theorem 4 are similar to those in Chen et al.'s (2002) Theorem 4.2 for binary data._ **Remark 13**.: _In Wang and Roy's (2018) Theorem 1, the sufficient conditions for posterior propriety for binary data in GLMMs with the power prior on the variance components for the random effects have also been studied. 
Their conditions on \((a_{j},q_{j})\) are \(a_{j}<0\), \(q_{j}\geq 2\) and \(a_{j}+q_{j}/2>1/2\) for \(j=1,\ldots,r\). On the other hand, the condition 2 of Theorem 4 matches the hyperparameter condition in Natarajan and McCulloch (1995) who consider single variance component Bernoulli GLMM with known \(\beta\)._ For binary data, we thus have the following Corollary. **Corollary 3**.: _Assume the following conditions are satisfied_ 1. \((X,Z)\) _is of full rank, and there exists a positive vector_ \(e>0\) _such that_ \(e^{\top}(X^{*},Z^{*})=0\) 2. \(-q_{j}/2<a_{j}<0\) _for_ \(j=1,...,r\)_;_ 3. \(\mathrm{E}|\delta_{\circ}|^{p-2\sum_{j=1}^{r}a_{j}}<\infty\)_, where_ \(\delta_{\circ}\sim F\)_._ _Then (9) holds for binary responses._ For Poisson GLMMs with the log link, as in Section 3.2, we let \(y_{(n)}=m_{i}\) for \(i=1,\ldots,n\) and define \((X_{\triangle}^{*},Z_{\triangle}^{*})\) following the notations in Section 3.1. Also, using the relationship between the Poisson and binomial likelihoods as in the proof of Corollary 2, we have the following result. **Corollary 4**.: _Assume the following conditions are satisfied_ 1. \((X,Z)\) _is of full rank, and there exists a positive vector_ \(e>0\) _such that_ \(e^{\top}(X_{\triangle}^{*},Z_{\triangle}^{*})=0\)_;_ 2. \(-q_{j}/2<a_{j}<0\) _for_ \(j=1,...,r\)_._ _Then (10) holds for Poisson GLMMs with the log link._ ## 4 Necessary conditions for posterior propriety for some popular GLMMs ### Necessary conditions for posterior propriety for binomial data In this section, we discuss necessary conditions for posterior propriety for binomial data when the prior for \(\beta\) is \(\pi(\beta)\propto 1\) and the prior for \(\tau_{j}\) is as in (8) with \(b_{j}\geq 0\) and the values of \(a_{j}\) specified in the results of this section. **Theorem 5**.: _For binomial data, the following two conditions are necessary for the posterior propriety, that is, for (9) to hold:_ 1. \(X\) _is of full rank;_ 2. \(a_{j}+q_{j}/2>0\) _for_ \(j=1,...,r\)_._ If \(Z\) is of full rank, we have another necessary condition for posterior propriety in the case of binomial data when \(\pi(\beta)\propto 1\) and the prior for \(\tau_{j}\) is as in (8). **Theorem 6**.: _Suppose \(Z\) has full rank. The following are necessary conditions for posterior propriety, that is, (9):_ 1. \(X\) _is of full rank;_ 2. _There exists a positive vector_ \(e>0\) _such that_ \(e^{\top}X_{\triangle}^{*}=0\)_;_ 3. \(a_{j}+q_{j}/2>0\) _for_ \(j=1,...,r\)_._ **Remark 14**.: _Sun et al. (2001) study necessary conditions for posterior propriety for linear mixed models. In their Theorem 2, the necessary conditions on the hyperparameters of the prior of the variance for the random effects include the condition 2 of Theorem 5 (condition 3 of Theorem 6). In addition, we prove that \(X\) full rank is a necessary condition while Sun et al. (2001) derive their necessary conditions under the assumption that \(X\) has full rank. Chen et al.'s (2002) Theorem 3.2 also provides necessary conditions for posterior propriety for GLMMs. Their conditions are on the hyperparameter of the priors based on the spectral decomposition for the covariance matrix of the random effects._ #### 4.1.1 Necessary conditions for posterior propriety for binary data The necessary conditions for posterior propriety for binary data follow from Theorems 5 and 6. **Corollary 5**.: _For binary data, the following are necessary conditions for posterior propriety, that is, (9):_ 1. \(X\) _is of full rank;_ 2. 
\(a_{j}+q_{j}/2>0\) _for_ \(j=1,...,r\)_._ **Corollary 6**.: _If \(Z\) has full rank, for binary data, the following are necessary conditions for posterior propriety, that is, (9):_ 1. \(X\) _is of full rank;_ 2. _There exists a positive vector_ \(e>0\) _such that_ \(e^{\top}X^{*}=0\) _where_ \(X^{*}\) _is an_ \(n\times p\) _matrix with the with row_ \(c_{i}x_{i}^{\top}\) _and_ \(c_{i}=1-2y_{i}\) _for_ \(i=1,\ldots,n\)_;_ 3. \(a_{j}+q_{j}/2>0\) _for_ \(j=1,...,r\)_._ Proof.: Since \(X_{\triangle}^{*}\) becomes \(X^{*}\) for binary responses, the proof can be derived from that for Theorem 6. **Remark 15**.: _The necessary conditions for posterior propriety for GLMs for binary responses have been explored in Chen and Shao (2001). The common necessary condition between their Theorem 2.2 and our result is that \(X\) is of full rank. Further, under \(Z\) full rank, we derive the same necessary condition as their Theorem 2.2. Note that their necessary conditions have been derived under an assumption on the link function. Furthermore, we establish another necessary condition on \(a_{j}\) and \(q_{j}\), which does not arise for GLMs._ ## 5 An approximate Jeffreys' Prior for GLMMs with canonical link Jeffreys' prior (Jeffreys, 1946) is a reference prior widely used for objective Bayesian inference. In this section, we construct an approximate 'independence Jeffreys' prior' (Berger, De Oliveira and Sanso, 2001) for GLMMs. For ease of presentation, we assume that \(\Psi=D(\tau)^{-1}=\oplus_{j=1}^{r}(1/\tau_{j})\mathrm{I}_{q_{j}}\), as in Section 3. Recall that the likelihood function for \((\beta,\tau)\), \(L(\beta,\tau\mid y)\) follows from (2). Define the complete likelihood function \[L(\beta,\tau\mid y,u) =\prod_{i=1}^{n}p(y_{i}\mid\beta,u)\phi_{q}(u;0,D(\tau)^{-1})\] \[=\prod_{i=1}^{n}p(y_{i}\mid\beta,u)(2\pi)^{-q/2}\prod_{j=1}^{r} \tau_{j}^{q_{j}/2}\exp(-\tau_{j}u_{j}^{\top}u_{j}/2). \tag{11}\] From Casella (2001) we have \[\frac{d^{2}}{d\tau_{j}^{2}}\log L(\beta,\tau\mid y)=\mathrm{E}\bigg{[}\frac{d ^{2}}{d\tau_{j}^{2}}\log L(\beta,\tau\mid y,u)\Big{|}\tau_{j},y\bigg{]}+\mathrm{ Var}\bigg{[}\frac{d}{d\tau_{j}}\log L(\beta,\tau\mid y,u)\Big{|}\tau_{j},y \bigg{]}, \tag{12}\] where the expectation and variance are with respect to \(\pi(u_{j}\mid\tau_{j},y)\). Next, using (11), we obtain \[\frac{d}{d\tau_{j}}\log L(\beta,\tau\mid y,u)=q_{j}\tau_{j}^{-1}/2-u_{j}^{\top }u_{j}/2,\text{ and }\frac{d^{2}}{d\tau_{j}^{2}}\log L(\beta,\tau\mid y,u)=-q_{j}\tau_{j}^{-2}/2. \tag{13}\] Then, by using (13), we get the expectation in (12) as \[\mathrm{E}\bigg{[}\frac{d^{2}}{d\tau_{j}^{2}}\log L(\beta,\tau\mid y,u)\Big{|} \tau_{j},y\bigg{]}=\mathrm{E}(-q_{j}\tau_{j}^{-2}/2\mid\tau_{j},y)=-q_{j}\tau_ {j}^{-2}/2. \tag{14}\] Also, by using (13), we have the variance in (12) as \[\mathrm{Var}\bigg{[}\frac{d}{d\tau_{j}}\log L(\beta,\tau\mid y,u)\Big{|}\tau_ {j},y\bigg{]}=\mathrm{Var}(u_{j}^{\top}u_{j}\mid\tau_{j},y)/4\approx\sum_{m=1} ^{q_{j}}\mathrm{Var}(u_{jm}^{2}\mid\tau_{j},y)/4, \tag{15}\] Here, we have ignored the covariance terms. Let \(y_{1.}=(y_{11},\ldots,y_{1n_{1}})\) denote the observations for the level 1 of the random effect \(u_{j}\). 
The conditional distribution of \(u_{j1}\mid y_{1.},\beta,\tau_{j}\) is \[f(u_{j1}\mid y_{1.},\beta,\tau_{j})=\frac{\prod_{k=1}^{n_{1}}p(y_{1k}\mid\beta,u_{j1})\phi(u_{j1};0,1/\tau_{j})}{\int_{\mathbb{R}}\prod_{k=1}^{n_{1}}p(y_{1k} \mid\beta,u_{j1})\phi(u_{j1};0,1/\tau_{j})du_{j1}}, \tag{16}\] where \(p(y_{1k}\mid\beta,u_{j1})\) is the exponential density (1) with \(\rho=1\), and \(\phi(u_{j1};0,1/\tau_{j})\) is the normal density for \(u_{j1}\) with mean \(0\) and variance \(1/\tau_{j}\). Booth and Hobert (1998) derive some approximations to the conditional mean and variance of the random effects. Following Booth and Hobert (1998), let \[l=\log\prod_{k=1}^{n_{1}}p(y_{1k}\mid\beta,u_{j1})\phi(u_{j1};0,1/\tau_{j})= \sum_{k=1}^{n_{1}}\left(y_{1k}\theta_{1k}-b(\theta_{1k})+d(y_{1k})\right)-u_{j1 }^{2}\tau_{j}/2-\log(2\pi)/2+\log\tau_{j}/2.\] Since \(\theta_{1k}=\eta_{1k}\), we have \[l^{(1)}=\frac{d}{du_{j1}}l=\sum_{k=1}^{n_{1}}(y_{1k}-b^{\prime}(\theta_{1k}))z _{j1k}-\tau_{j}u_{j1},\] and \[l^{(2)}=\frac{d^{2}}{du_{j1}^{2}}l=-\tau_{j}-\sum_{k=1}^{n_{1}}z_{j1k}^{2}\;t ^{\prime}(x_{1k}^{\top}\beta+z_{1k}^{\top}u).\] Here \(z_{j1k}\) is the corresponding value from the \(Z\) matrix. Note that \(b^{\prime}(\theta_{1k})=\mathrm{E}(y_{1k})=t(x_{1k}^{\top}\beta+z_{1k}^{\top}u)\), where \(t(\eta_{ik})\) is a function of \(\eta_{ik}\), and \(t^{\prime}(x_{1k}^{\top}\beta+z_{1k}^{\top}u)=t^{\prime}(\eta_{ik})\) is the derivative with respect to \(\eta_{ik}\). Letting \(l^{(1)}=0\), we get \[\tilde{u}_{j1}=\mathrm{E}(u_{j1}\mid\tau_{j},y_{1.})=\frac{\sum_{k=1}^{n_{1}} z_{j1k}(y_{1k}-\mathrm{E}(y_{1k}))}{\tau_{j}}.\] Furthermore, as in Booth and Hobert (1998), we assume \(\mathrm{Var}(u_{j1}\mid\tau_{j},y_{1.})\approx-(l^{(2)})^{-1}\). Also, in the above expression for the variance, we replace \(t^{\prime}(x_{1k}^{\top}\beta+z_{1k}^{\top}u)\) with \(t^{\prime}(x_{1k}^{\top}\hat{\beta})\), where \(\hat{\beta}\) is the MLE of the regression coefficients from the GLM without the random effects. Then, it follows that \[\tilde{v}_{j1}=\mathrm{Var}(u_{j1}\mid\tau_{j},y_{1.})\approx\left(\tau_{j}+ \sum_{k=1}^{n_{1}}z_{j1k}^{2}\;t^{\prime}(x_{1k}^{\top}\hat{\beta})\right)^{-1}\] Next, we assume the conditional distribution of \(u_{j1}\mid\tau_{j},y_{1.}\) is the normal distribution with mean \(\tilde{u}_{j1}\) and variance \(\tilde{v}_{j1}\). Then we have \(\mathrm{Var}(u_{j1}^{2}\mid\tau_{j},y_{1.})=2\tilde{v}_{j1}^{2}+4\tilde{v}_{j1 }\tilde{u}_{j1}^{2}\). Repeating the above method for \(u_{jm}\), \(m=2,\ldots,q_{j}\), (15) becomes \[\mathrm{Var}\bigg{[}\frac{d}{d\tau_{j}}\log L(\beta,\tau\mid y,u )\Big{|}\tau_{j}\bigg{]} \approx\sum_{m=1}^{q_{j}}[\tilde{v}_{jm}^{2}/2+\tilde{v}_{jm} \tilde{u}_{jm}^{2}]\] \[\approx\sum_{m=1}^{q_{j}}\tilde{v}_{jm}^{2}/2=\sum_{m=1}^{q_{j}} \Big{(}\tau_{j}+\sum_{k=1}^{n_{n}}z_{jmk}^{2}\;t^{\prime}(x_{mk}^{\top}\hat{ \beta})\Big{)}^{-2}/2, \tag{17}\] where the second approximation is using \(\tilde{u}_{jm}\approx 0\). Finally, by using (14), (17) in (12), we obtain the expected Fisher information for \(\tau_{j}\) as \(q_{j}\tau_{j}^{-2}/2-\sum_{m=1}^{q_{j}}\left(\tau_{j}+\sum_{k=1}^{n_{m}}z_{jmk}^{ 2}\;t^{\prime}(x_{mk}^{\top}\hat{\beta})\right)^{-2}/2\). Thus, the approximate Jeffreys' prior for \(\tau_{i}\) is \[\pi_{J}(\tau_{i})=\left[q_{i}\tau_{i}^{-2}/2-\sum_{m=1}^{q_{i}}\left(\tau_{i}+ \sum_{k=1}^{n_{m}}z_{imk}^{2}\;t^{\prime}(x_{mk}^{\top}\hat{\beta})\right)^{-2} /2\right]^{1/2},\;\text{for}\;i=1,\ldots,r. 
\tag{18}\] Finally, the 'independence Jeffreys' prior' (Berger et al., 2001; Kass and Wasserman, 1996) for \((\beta,\tau)\) is \(\pi_{J}(\beta,\tau)=\prod_{i=1}^{r}\pi_{J}(\tau_{i})\). ### Posterior Propriety for binomial and Poisson GLMMs For binomial and Poisson families, \(t^{\prime}(x_{mk}^{\top}\hat{\beta})\) in (18) is positive. Denoting \(\sum_{k=1}^{n_{m}}z_{imk}^{2}\;t^{\prime}(x_{mk}^{\top}\hat{\beta})\) by \(c_{im}\geq 0\), from (18), we have \[\pi_{J}(\tau_{i})=\left[q_{i}\tau_{i}^{-2}/2-\sum_{m=1}^{q_{i}}(\tau_{i}+c_{im })^{-2}/2\right]^{1/2}=\left[\sum_{m=1}^{q_{i}}\left\{\frac{1}{2\tau_{i}^{2}} -\frac{1}{2(\tau_{i}+c_{im})^{2}}\right\}\right]^{1/2}. \tag{19}\] Now, \[\frac{1}{2\tau_{i}^{2}}-\frac{1}{2(\tau_{i}+c_{im})^{2}} =\frac{c_{im}^{2}+\tau_{i}c_{im}+\tau_{i}c_{im}}{2\tau_{i}^{2}( \tau_{i}+c_{im})^{2}}=\frac{c_{im}(\tau_{i}+c_{im})+\tau_{i}c_{im}}{2\tau_{i}^ {2}(\tau_{i}+c_{im})^{2}}\] \[=\frac{c_{im}}{2\tau_{i}^{2}(\tau_{i}+c_{im})}+\frac{c_{im}}{2 \tau_{i}(\tau_{i}+c_{im})^{2}}\leq\frac{\sqrt{c_{im}}}{4\tau_{i}^{5/2}}+\frac {\sqrt{c_{im}}}{4\tau_{i}^{5/2}}=\frac{\sqrt{c_{im}}}{2\tau_{i}^{5/2}}, \tag{20}\] where we use \(\tau_{i}+c_{im}\geq 2\sqrt{\tau_{i}c_{im}}\) and \((\tau_{i}+c_{im})^{2}\geq 2\tau_{i}\sqrt{\tau_{i}c_{im}}\) for the inequality. Thus, from (19) and (20), we have \[\pi_{J}(\tau_{i})\leq\bigg{(}\sum_{m=1}^{q_{i}}\sqrt{c_{im}}/2\bigg{)}^{1/2} \tau_{i}^{-5/4}. \tag{21}\] Since the upper bound in (21) is a power prior with \(a_{i}=-1/4\), sufficient conditions for posterior propriety with the proposed Jeffreys' prior \(\pi_{J}(\beta,\tau)\) for binomial and Poisson GLMMs follows from Theorem 4 and Corollary 4, respectively. Thus, if the condition 1 of Theorem 4 holds then the posterior densities of binomial and Poisson GLMMs with canonical links and the prior \(\pi_{J}(\beta,\tau)\) are proper. **Remark 16**.: _Natarajan and Kass (2000) derive conditions for posterior propriety for GLMMs with an approximate Jeffreys' prior (see Section 5.2 for some details on this prior). In general, Natarajan and Kass's (2000) conditions require a complex, multi-dimensional integral to be finite. For Poisson GLMMs with the log link, their conditions require \(y_{i}>0\) for at least \(p\) observations, the corresponding sub-matrix of \(X\) is full rank, and for the binary GLMM, Natarajan and Kass (2000) assume some intractable integrals to be finite, along with other conditions on \((X,y)\), whereas the sufficient condition given here for posterior propriety with the proposed Jeffreys' prior can be easily verified (Roy and Hobert, 2007)._ ### Comparison with Natarajan and Kass's (2000) prior Natarajan and Kass (2000) provide an approximate Jeffreys' prior for \(\Psi\) for GLMMs. The general form of Natarajan and Kass's (2000) prior is complex involving sums of traces of inverse of certain matrices and derivatives of some matrices, although closed-form expressions can be derived for binary and Poisson one-way random intercept GLMMs. For example, if \(p=q=q_{1}=r=1\), Natarajan and Kass's (2000) prior for binary data is \[\pi_{NK}(\tau)\propto\left(1+ne^{\hat{\beta}}\tau/(1+e^{\hat{\beta}})^{2} \right)^{-1},\] and for Poisson data, is \[\pi_{NK}(\tau)\propto\left(1+ne^{\hat{\beta}}\tau\right)^{-1},\] where \(n\) is the number of observations. 
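The closed forms above, together with (18)–(19), are easy to evaluate numerically. As a rough illustration (not part of the original analysis), the following minimal Python sketch assumes the binary one-way intercept case \(p=q=q_{1}=r=1\) with \(n=30\) and a given \(\hat{\beta}\), sets both proportionality constants to one, and locates the point where the two unnormalized densities cross by root-finding; this anticipates the crossover values \(\tau_{0}\) reported in Table 1 below.

```python
import numpy as np
from scipy.optimize import brentq

def c_binary(n, beta_hat):
    # c = n * t'(beta_hat) for the logit link, t'(eta) = e^eta / (1 + e^eta)^2,
    # i.e. the quantity n e^{beta_hat} / (1 + e^{beta_hat})^2 appearing in the text
    return n * np.exp(beta_hat) / (1.0 + np.exp(beta_hat)) ** 2

def pi_J(tau, c):
    # approximate Jeffreys' prior (19), specialized to a single random intercept
    return np.sqrt(0.5 / tau ** 2 - 0.5 / (tau + c) ** 2)

def pi_NK(tau, c):
    # Natarajan-Kass prior for binary data, proportionality constant set to one
    return 1.0 / (1.0 + c * tau)

n, beta_hat = 30, 0.0
c = c_binary(n, beta_hat)
f = lambda tau: np.log(pi_J(tau, c)) - np.log(pi_NK(tau, c))
tau0 = brentq(f, 0.1, 1e5)    # crossover point of the two unnormalized priors
print(round(tau0, 1))          # roughly 411 for beta_hat = 0, in line with Table 1
```

The Poisson case is handled identically with \(c=ne^{\hat{\beta}}\).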
Note that for binary data, \(d(1+e^{-\eta})^{-1}/d\eta=(1+e^{-\eta})^{-2}e^{-\eta}\), thus, \(t^{\prime}(x_{mk}^{\top}\hat{\beta})=(1+e^{-\hat{\beta}})^{-2}e^{-\hat{\beta}}\), and for the Poisson GLMMs, \(de^{\eta}/d\eta=e^{\eta}\), thus, \(t^{\prime}(x_{mk}^{\top}\hat{\beta})=e^{\hat{\beta}}\). Hence, from (19), if \(p=q=q_{1}=r=1\), for binary responses we have \[\pi_{J}(\tau)\propto\left[\tau^{-2}/2-\left(\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2}\right)^{-2}/2\right]^{1/2},\] and for the Poisson GLMMs \[\pi_{J}(\tau)\propto\left[\tau^{-2}/2-\left(\tau+ne^{\hat{\beta}}\right)^{-2}/2\right]^{1/2}.\] To compare the behavior of the tails of \(\pi_{J}(\tau)\) and \(\pi_{NK}(\tau)\), let \(f(\tau)=\log\pi_{J}(\tau)-\log\pi_{NK}(\tau)\). For the binary GLMMs, the derivative of \(f(\tau)\) is \[f^{\prime}(\tau) =\frac{-\tau^{-3}+(\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2})^{-3}}{\tau^{-2}-(\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2})^{-2}}+\frac{ne^{\hat{\beta}}}{(1+e^{\hat{\beta}})^{2}+ne^{\hat{\beta}}\tau}\] \[=\frac{\tau^{3}-(\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2})^{3}}{\tau(\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2})(2\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2})ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2}}+\frac{ne^{\hat{\beta}}}{(1+e^{\hat{\beta}})^{2}+ne^{\hat{\beta}}\tau}\] \[=\frac{-\frac{3\tau^{2}ne^{\hat{\beta}}}{(1+e^{\hat{\beta}})^{2}}-\frac{3\tau n^{2}e^{2\hat{\beta}}}{(1+e^{\hat{\beta}})^{4}}-\frac{n^{3}e^{3\hat{\beta}}}{(1+e^{\hat{\beta}})^{6}}-\frac{n^{2}e^{2\hat{\beta}}\tau^{3}}{(1+e^{\hat{\beta}})^{4}}}{(\tau^{-2}-(\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2})^{-2})(1+ne^{\hat{\beta}}\tau/(1+e^{\hat{\beta}})^{2})(\tau+ne^{\hat{\beta}}/(1+e^{\hat{\beta}})^{2})^{3}\tau^{3}}<0.\] Similarly, for the Poisson GLMMs, \[f^{\prime}(\tau) =\frac{-\tau^{-3}+(\tau+ne^{\hat{\beta}})^{-3}}{\tau^{-2}-(\tau+ne^{\hat{\beta}})^{-2}}+\frac{ne^{\hat{\beta}}}{1+ne^{\hat{\beta}}\tau}\] \[=\frac{-3\tau^{2}ne^{\hat{\beta}}-3\tau n^{2}e^{2\hat{\beta}}-n^{3}e^{3\hat{\beta}}-n^{2}e^{2\hat{\beta}}\tau^{3}}{(\tau^{-2}-(\tau+ne^{\hat{\beta}})^{-2})(1+ne^{\hat{\beta}}\tau)(\tau+ne^{\hat{\beta}})^{3}\tau^{3}}<0.\] Thus, \(f(\tau)\) is decreasing in both cases. Also, simple calculations show that \(f(\tau)\rightarrow\infty\) as \(\tau\to 0\) and \(f(\tau)\rightarrow-\infty\) as \(\tau\rightarrow\infty\). Thus, there is \(\tau_{0}>0\) with \(f(\tau_{0})=0\) such that \(\pi_{J}(\tau)\gtrless\pi_{NK}(\tau)\) according as \(\tau\lessgtr\tau_{0}\). In Table 1, we provide the values of \(\tau_{0}\) such that \(f(\tau_{0})\approx 0\) for \(n=30\) and various values of \(\hat{\beta}\). We observe that for the Poisson GLMM, \(\tau_{0}\) can be quite large as \(\hat{\beta}\) increases, implying that practically \(\pi_{J}\) is more diffuse than \(\pi_{NK}\).

Table 1: Values of \(\tau_{0}\) for different \(\hat{\beta}\) values.

\begin{tabular}{c|c c c c c c c c} \hline \hline & \(\hat{\beta}\) & -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 \\ \hline Binary GLMM & \(\tau_{0}\) & 26 & 81 & 196 & 341 & 411 & 341 & 196 \\ \hline Poisson GLMM & \(\tau_{0}\) & 61 & 291 & 1326 & 5996 & 26956 & 120931 & 542186 \\ \hline \end{tabular}

## 6 Examples

In this section, we use two popular examples, namely, the one-way random effects model and the two-way random effects model to demonstrate how our theoretical results can be easily applied to verify posterior propriety for GLMMs.

### An example involving one-way random effects models

The data model is \(Y_{i}\mid\beta,u,\tau\overset{ind}{\sim}Binomial(m_{i},F(x_{i}^{\top}\beta+z_{i}^{\top}u))\) for \(i=1,\ldots,n\) with \(u\mid\tau\sim N(0,[1/\tau]I)\). Here, we consider \(p=2\) and \(r=1\) random effect with \(q=q_{1}=2\). Also, we consider \(n=6\), and \(y=(0,4,2,4,3,5)\) as the observed binomial responses. Let \(m_{1}=3,\,m_{2}=4,\,m_{3}=5,\,m_{4}=4,\,m_{5}=3\) and \(m_{6}=5\).
The design matrix \(X\) and the random effect matrix \(Z\) are given by \[X^{\top}=\begin{pmatrix}1&1&1&1&1&1\\ 2.9&1.7&2.6&3.1&3.8&4.2\end{pmatrix},\,\text{and}\,\,Z^{\top}=\begin{pmatrix}1&1&1&0&0&0\\ 0&0&0&1&1&1\end{pmatrix}\] Then, based on the notations in Section 3.1, we obtain \[X_{\triangle}^{\top}=\begin{pmatrix}1&1&1&1&1&1&1\\ 2.9&1.7&2.6&3.1&3.8&4.2&2.6\end{pmatrix},\text{ and}\] \[X_{\triangle}^{*\top}=\begin{pmatrix}1&-1&1&-1&-1&-1&-1\\ 2.9&-1.7&2.6&-3.1&-3.8&-4.2&-2.6\end{pmatrix}\] Note that \(X\) is of full rank. To apply Theorem 3, we need to check if there exists a positive vector \(e>0\) such that \(e^{\top}X_{\triangle}^{*}=0\). By using the function 'simplex' in the R package 'boot', we easily find the above condition is satisfied (see Roy and Hobert's (2007) Appendix A for details and the sketch below). That is, the condition 1 in Theorem 3 holds. We can choose the hyperparameters for the prior of \(\tau\) satisfying the condition 2 in Theorem 3. If we consider the probit or the logit link, the condition 3 in Theorem 3 is satisfied. Therefore, the resulting posterior densities of \((\beta,\tau)\) are proper. On the other hand, since most of the observed \(y_{i}\)'s are either zero or equal to \(m_{i}\), results in Chen et al. (2002) cannot be used to establish posterior propriety here. In particular, as \(0<y_{i}<m_{i}\) only for \(i=3\), there does not exist a full rank sub-matrix \(X_{s}\) of \(X\) with \(0<y_{s_{i}}<m_{s_{i}},i=1,\ldots,p\). Thus, Chen et al.'s (2002) results are not applicable. We also consider the Poisson GLMM where \(Y_{i}\mid\beta,u,\tau\stackrel{ind}{\sim}Poisson(\exp[x_{i}^{\top}\beta+z_{i}^{\top}u])\) for \(i=1,\ldots,n\) with \(u\mid\tau\sim N(0,[1/\tau]I).\) We consider \(p=2\) and \(r=1\) random effect with \(q=q_{1}=2\). Also, we have \(n=6\) and \(y=(0,\,0,\,0,\,2,\,0,\,0)\) as the observed Poisson responses. The design matrix \(X\) and the random effect matrix \(Z\) are given by \[X^{\top}=\begin{pmatrix}1&1&1&1&1&1\\ 9.4&8.7&10.2&9.1&8.9&9.5\end{pmatrix},\text{ and }Z^{\top}=\begin{pmatrix}1&1&1&0&0&0\\ 0&0&0&1&1&1\end{pmatrix}\] Then, based on the notations in Sections 3.1 and 3.2, we obtain \[X_{\triangle}^{\top}=\begin{pmatrix}1&1&1&1&1&1\\ 9.4&8.7&10.2&9.1&8.9&9.5\end{pmatrix},\text{ and}\] \[X_{\triangle}^{*\top}=\begin{pmatrix}1&1&1&-1&1&1\\ 9.4&8.7&10.2&-9.1&8.9&9.5\end{pmatrix}\] Here, \(X\) has full rank. Using similar methods as in the last example, the conditions in Corollary 2 hold. Hence, the propriety of the posterior of \((\beta,\tau)\) is obtained. Since many observed \(y_{i}\)'s are zero, existing results in Chen et al. (2002) and Natarajan and Kass (2000) cannot be used to establish posterior propriety for this example. In particular, as \(0<y_{i}\) only for \(i=4\), there does not exist a full rank sub-matrix \(X_{s}\) of \(X\) with \(0<y_{s_{i}},i=1,\ldots,p\). Thus, Chen et al.'s (2002) and Natarajan and Kass's (2000) results are not applicable.

### An example involving two-way random effects models

Consider the binomial GLMM \(Y_{i}\mid\beta,u,\tau\stackrel{ind}{\sim}Binomial(m_{i},F(x_{i}^{\top}\beta+z_{i}^{\top}u))\) for \(i=1,\ldots,n\), with \(u=(u_{1},u_{2})^{\top}\) and \(u_{j}\mid\tau_{j}\stackrel{ind}{\sim}N(0,[1/\tau_{j}]{\rm I}_{q_{j}}),j=1,2\). Suppose we have \(r=2\) random effects, \(p=2\), \(q_{1}=3\), \(q_{2}=2\) and \(q=5\). Also, we consider \(y=(0,\,1,\,2,\,0,\,2,\,2)\) as the observed binomial responses. Let \(m_{1}=m_{2}=m_{3}=m_{4}=m_{5}=m_{6}=2\).
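The linear-programming check invoked for the one-way examples in Section 6.1 above is easy to reproduce outside of R. The sketch below is only an illustration: it uses scipy.optimize.linprog in place of the 'simplex' function from the R package 'boot', and the matrix is \(X_{\triangle}^{*}\) transcribed from the binomial one-way example above; the same check applies verbatim to the Poisson example's \(X_{\triangle}^{*}\).

```python
import numpy as np
from scipy.optimize import linprog

# X*_triangle (7 x 2) for the binomial one-way example, copied from the display above
X_star = np.array([[ 1.0,  2.9],
                   [-1.0, -1.7],
                   [ 1.0,  2.6],
                   [-1.0, -3.1],
                   [-1.0, -3.8],
                   [-1.0, -4.2],
                   [-1.0, -2.6]])

n_rows, p = X_star.shape
# Feasibility LP: look for e with e >= 1 componentwise (hence e > 0, up to rescaling)
# satisfying X_star^T e = 0; feasibility is exactly condition 1 of Theorem 3.
res = linprog(c=np.zeros(n_rows), A_eq=X_star.T, b_eq=np.zeros(p),
              bounds=[(1.0, None)] * n_rows, method="highs")
print("positive vector e exists:", res.success)   # expected: True
```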
The design matrix \(X\) and the random effect matrix \(Z\) are given by \[X^{\top}=\left(\begin{array}{cccccc}1&1&1&1&1&1\\ 1.8&2.1&3.2&4.9&5.3&6.1\end{array}\right)\text{ and }Z^{\top}=\left(\begin{array}{cccccc}1&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&0&1&1\\ 1&0&1&0&1&0\\ 0&1&0&1&0&1\end{array}\right)\] Then, based on the notations in Section 3.1, we obtain \[X_{\triangle}^{\top}=\left(\begin{array}{cccccc}1&1&1&1&1&1&1\\ 1.8&2.1&3.2&4.9&5.3&6.1&2.1\end{array}\right),\text{ and }\] \[X_{\triangle}^{*\top}=\left(\begin{array}{cccccc}1&1&-1&1&-1&-1&-1\\ 1.8&2.1&-3.2&4.9&-5.3&-6.1&-2.1\end{array}\right)\] In this example, \(X\) is a full rank matrix while \(Z\) is not. By employing the same method as that in Section 6.1, we find that there exists a positive vector \(e>0\) such that \(e^{\top}X_{\triangle}^{*}=0\). That is, the condition 1 of Theorem 3 holds. We can choose the hyperparameters for the priors of \(\tau_{1}\) and \(\tau_{2}\) satisfying the condition 2 in Theorem 3. For probit or logistic GLMMs, the condition 3 in Theorem 3 is satisfied. Hence, the resulting posterior densities of \((\beta,\tau)\) are proper according to Theorem 3. This example demonstrates that \(Z\) full rank is not a necessary condition for posterior propriety for GLMMs with the improper uniform prior on \(\beta\). Also, since \(0<y_{i}<m_{i}\) only for \(i=2\), there does not exist a full rank sub-matrix \(X_{s}\) of \(X\) with \(0<y_{s_{i}}<m_{s_{i}},i=1,\ldots,p\). Thus, Chen et al.'s (2002) results do not cover this two-way random effects example and, in any case, their conditions on \((y,X)\) for binomial GLMMs do not hold here. One can analyze datasets arising in diverse disciplines by fitting a two-way random effects GLMM. For example, from the popular R package 'lme4' (Bates, Machler, Bolker and Walker, 2015), consider the 'grouseticks' dataset, which can be analyzed by fitting a Poisson GLMM for the response variable 'ticks' with 'year' and 'height' as the fixed effects, and 'brood' and 'location' as the random effects. Then, from Corollary 2, propriety of the posterior of \((\beta,\tau)\) follows when the improper uniform prior is used for \(\beta\) and proper gamma priors satisfying the condition 2 in Corollary 2 are placed on \(\tau\). Here, \(\beta=(\beta_{0},\beta_{1},\beta_{2},\beta_{3})\) with the intercept term \(\beta_{0}\), the fixed effects parameters \(\beta_{1},\beta_{2}\) of the levels 96 and 97, respectively, of the variable year and the regression coefficient \(\beta_{3}\) of the continuous variable height. Also, \(\tau=(\tau_{1},\tau_{2})\) with \(1/\tau_{1}(1/\tau_{2})\) being the variance of the random effect brood (location). Finally, since this example involves two random effects, existing results in Chen et al. (2002) and Natarajan and Kass (2000), who consider models with one-way random effects, are not readily applicable.

## 7 Discussion

We have derived the necessary and sufficient conditions for posterior propriety for GLMMs under various widely used reference priors, including the Jeffreys' prior. Unlike the available results in the literature, the conditions presented here for binomial and Poisson GLMMs can be easily verified. For example, our results do not assume the strong exponentiated norm bound condition of Michalak and Morris (2016), nor do they require an intractable, multi-dimensional integral, as in Natarajan and Kass (2000), to be finite. Also, some existing results on posterior propriety for GLMMs put some constraints on the random effects design matrix \(Z\) that may not hold in practice.
On the other hand, some of our sufficient conditions for posterior propriety do not put any constraint on \(Z\), although they assume proper priors on the variance components of the random effects. We also provide sufficient conditions for posterior propriety when the improper power prior or an approximate Jeffreys' prior is used on the variance parameters of the random effects. Exploiting a relationship between the likelihoods of the Poisson GLMMs with the log link and the binomial GLMMs with the logit link, this article derives easily verifiable conditions for posterior propriety for Poisson GLMMs with the log link. In Section 3 for priors on the \(r\) random effects, we have considered the practical settings where either \(b_{j}=0\) for all \(j=1,\ldots,r\) or \(b_{j}>0\) for all \(j=1,\ldots,r\), although it might be of theoretical interest to study posterior propriety in the situations where \(b_{j}=0\) for some \(j\) while it is strictly positive for the other random effects. Finally, in many modern datasets, \(p\) is often larger than \(n\). Thus, a potential future work is to develop posterior propriety conditions for GLMMs in the case of \(p>n\). ## Appendix: Proofs of theoretical results Proof of Theorem 1.: Since \(\exp[(y_{i}\theta_{i}-b(\theta_{i}))]\leq M\) for \(i=1,\ldots,n\), we have \[c(y) =\int_{\mathfrak{A}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}} \left[\prod_{i=1}^{n}\exp[(y_{i}\theta_{i}-b(\theta_{i}))+d(y_{i})]\right] \phi_{q}(u;0,\Psi)du\pi(\Psi)d\beta d\Psi\] \[\propto\int_{\mathfrak{A}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{ q}}\left[\prod_{i=1}^{n}\exp[(y_{i}\theta_{i}-b(\theta_{i}))]\right]\phi_{q}(u;0, \Psi)du\pi(\Psi)d\beta d\Psi\] \[\leq M^{n-p}\int_{\mathfrak{A}}\int_{\mathbb{R}^{p}}\int_{ \mathbb{R}^{q}}\left[\prod_{i=1}^{p}\exp[(y_{i}\theta(\eta_{i})-b(\theta(\eta_ {i})))]\right]\phi_{q}(u;0,\Psi)du\pi(\Psi)d\beta d\Psi\] \[=M^{n-p}\mathrm{E}_{u}\left[\int_{\mathbb{R}^{p}}\prod_{i=1}^{p} \exp[(y_{i}\theta(\eta_{i})-b(\theta(\eta_{i})))]d\beta\right], \tag{22}\] where \(\eta_{i}=x_{i}^{\top}\beta+z_{i}^{\top}u\) and \(\mathrm{E}_{u}\) indicates expectation with respect to the \(u\)- marginal distribution of \((u,\Psi)\) where \(u|\Psi\sim N_{q}(0,\Psi)\) and \(\Psi\sim\pi(\Psi)\). By changing the variables \(\eta=(\eta_{1},\ldots,\eta_{p})=X_{s}\beta+Z_{p\times q}u\) for fixed \(u\) as in Chen et al. (2002), since \(X_{s}\) is of full rank, we have \[\mathrm{E}_{u}\Bigg{[}\int_{\mathbb{R}^{p}}\prod_{i=1}^{p}\exp[( y_{i}\theta(\eta_{i})-b(\theta(\eta_{i})))]d\beta\Bigg{]}=\left|\det(X_{s}^{-1}) \right|\mathrm{E}_{u}\Bigg{[}\int_{\mathbb{R}^{p}}\prod_{i=1}^{p}\exp[(y_{i} \theta(\eta_{i})-b(\theta(\eta_{i})))]d\eta\Bigg{]}\] \[=\left|\det(X_{s}^{-1})\right|\mathrm{E}_{u}\Bigg{[}\prod_{i=1}^ {p}\int_{\mathbb{R}}\exp[(y_{i}\theta(\eta_{i})-b(\theta(\eta_{i})))]d\eta_{ i}\Bigg{]}<\infty, \tag{23}\] where \(\det(X_{s}^{-1})\) is the Jacobian of the transformation. The expectation in the last step is finite since the condition 3 holds and \(\prod_{i=1}^{p}\int_{\mathbb{R}}\exp[(y_{i}\theta(\eta_{i})-b(\theta(\eta_{i}) ))]d\eta_{i}\) is free of \(u\). Therefore, \(c(y)<\infty\) holds, and the posterior propriety is proved. Proof of Theorem 2.: We need to show that \(c(y)\) in (4) diverges to infinity if \(X\) is not of full rank. Now, if \(X\) is not a full rank matrix, then there exists \(\beta^{*}\neq 0\) such that \(X\beta^{*}=0\), that is \(x_{i}^{\top}\beta^{*}=0\) for \(i=1,\ldots,n\). 
For \(\epsilon^{\prime}>0\), define \(D_{\epsilon^{\prime}}=\left\{\tilde{\beta}\in\mathbb{R}^{p}:\left|x_{i}^{\top} \tilde{\beta}\right|<\epsilon^{\prime},\,i=1,\ldots,n\right\}\). Recall that \(\theta\) is a monotone function, \(\eta_{i}=x_{i}^{\top}\beta+z_{i}^{\top}u\) and \(\mathrm{E}_{u}\) indicates the expectation with respect to the marginal distribution of \(u\). Since \(b(\theta)\) is a monotone function, we have \[c(y)=\int_{\mathfrak{A}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\left[\prod_{ i=1}^{n}\exp[(y_{i}\theta_{i}-b(\theta_{i}))+d(y_{i})]\right]\phi_{q}(u;0, \Psi)du\pi(\Psi)d\beta d\Psi\] \[\propto\int_{\mathbb{M}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}} \left[\prod_{i=1}^{n}\exp[(y_{i}\theta_{i}-b(\theta_{i}))]\right]\phi_{q}(u;0, \Psi)du\pi(\Psi)d\beta d\Psi\] \[=\mathrm{E}_{u}\Bigg{[}\int_{\mathbb{R}^{p}}\prod_{i=1}^{n}\exp[( y_{i}\theta(\eta_{i})-b(\theta(\eta_{i})))]d\beta\Bigg{]}\] \[\geq\mathrm{E}_{u}\left[\prod_{i=1}^{n}\exp[(y_{i}\theta(\pm \epsilon^{\prime}+z_{i}^{\top}u)-b(\theta(\pm\epsilon^{\prime}+z_{i}^{\top}u)) )]\right]\int_{\beta\in D_{\epsilon^{\prime}}}d\beta. \tag{24}\] Here, \(b(\theta(\pm\epsilon^{\prime}+z_{i}^{\top}u))\) is \(b(\theta(\epsilon^{\prime}+z_{i}^{\top}u))\) if \(b(\cdot)\) is increasing and is \(b(\theta(-\epsilon^{\prime}+z_{i}^{\top}u))\) if \(b(\cdot)\) is decreasing. Similarly, \(y_{i}\theta(\pm\epsilon^{\prime}+z_{i}^{\top}u)\) is \(y_{i}\theta(\epsilon^{\prime}+z_{i}^{\top}u)\) if \(y_{i}\leq 0\) and is \(y_{i}\theta(-\epsilon^{\prime}+z_{i}^{\top}u)\) if \(y_{i}>0\). Note that \(\sum_{i=1}^{n}(x_{i}^{\top}\bar{\beta})^{2}<\epsilon(=\epsilon^{\prime 2}) \Rightarrow\bar{\beta}\in D_{\epsilon^{\prime}}.\) Now \(\sum_{i=1}^{n}(x_{i}^{\top}\bar{\beta})^{2}<\epsilon\Leftrightarrow\bar{\beta }^{\top}X^{\top}X\bar{\beta}<\epsilon\). By spectral decomposition for \(X^{\top}X\), we have \(X^{\top}X=P\Lambda P^{\top}\) where \(\Lambda\) is the \(p\times p\) diagonal matrix with eigenvalues \(\lambda_{1},\ldots,\lambda_{p}\) and the \(i\)th eigenvector of \(X^{\top}X\) is \(p_{i}^{\top}\), the \(i\)th row of \(P^{\top}\). Thus, \[\bar{\beta}^{\top}P\Lambda P^{\top}\bar{\beta}<\epsilon\Leftrightarrow\lambda _{1}g_{1}^{2}+\cdots+\lambda_{p}g_{p}^{2}<\epsilon, \tag{25}\] where \(g_{i}=p_{i}^{\top}\bar{\beta}\) for \(i=1,\ldots,p\). Since \(X\) is not a full rank matrix, \(\mathrm{rank}(X)\leq p-1\). Suppose \(\mathrm{rank}(X)=p-1\), then without loss of generality, let \(\lambda_{p}=0\) and \(\lambda_{1},\ldots,\lambda_{p-1}\) are all positive. Hence, (25) becomes \(\lambda_{1}g_{1}^{2}+\cdots+\lambda_{p-1}g_{p-1}^{2}<\epsilon\). Let \(g=(g_{1},\ldots,g_{p})^{\top}\). Let \(B_{1}=\left\{g\in\mathbb{R}^{p}:|g_{i}|<\sqrt{\epsilon/\mathrm{tr}(X^{\top}X)}, 1\leq i\leq p-1,\,g_{p}\in\mathbb{R}\right\}\). By change of variables \(\beta\to g\), from (24) we have \[c(y)\geq\mathrm{E}_{u}\left[\prod_{i=1}^{n}\exp[(y_{i}\theta(\pm\epsilon+z_{i}^ {\top}u)-b(\theta(\pm\epsilon+z_{i}^{\top}u)))]\right]\int_{g\in B_{1}}dg=\infty.\] Note that the Jacobian of the transformation \(\beta\to g\) is \(\det(P)\) and \(\left|\det(P)\right|=1\). Therefore, the proof is complete. 
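The mechanism in the proof of Theorem 2 — a rank-deficient \(X\) admits \(v\neq 0\) with \(Xv=0\), so the whole line \(\{sv:s\in\mathbb{R}\}\) lies inside \(D_{\epsilon^{\prime}}\) and the \(\beta\)-integral diverges — can be illustrated with a small numerical sketch (the matrix below is hypothetical and used only for illustration):

```python
import numpy as np

# A design matrix whose third column duplicates the second, so rank(X) = 2 < p = 3.
X = np.array([[1.0, 2.0, 2.0],
              [1.0, 3.0, 3.0],
              [1.0, 5.0, 5.0],
              [1.0, 7.0, 7.0]])

eigval, eigvec = np.linalg.eigh(X.T @ X)   # spectral decomposition of X^T X
v = eigvec[:, np.argmin(eigval)]           # eigenvector of the (numerically) zero eigenvalue
print(np.round(eigval, 8))                 # smallest eigenvalue is 0
print(np.allclose(X @ v, 0.0))             # X v = 0, so |x_i^T (s v)| = 0 < eps' for every s
```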
Proof of Theorem 3.: As in the proof of Theorem 1 of Roy and Kaiser (2013), we have \[\binom{m_{i}}{y_{i}}\left[F(x_{i}^{\top}\beta+z_{i}^{\top}u)\right]^{y_{i}}\left[1-F(x_{i}^{\top}\beta+z_{i}^{\top}u)\right]^{m_{i}-y_{i}}\leq\left\{\begin{array}{ll}1-F(x_{i}^{\top}\beta+z_{i}^{\top}u)&\text{if}\,i\in I_{1}\\ F(x_{i}^{\top}\beta+z_{i}^{\top}u)&\text{if}\,i\in I_{2}\\ \binom{m_{i}}{y_{i}}F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{[}1-F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}&\text{if}\,i\in I_{3}.\end{array}\right.\] Thus from (9) we have \[c(y)\leq\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\left\{\prod_{i\in I_{1}}\Big{[}1-F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}\right\}\left\{\prod_{i\in I_{2}}F(x_{i}^{\top}\beta+z_{i}^{\top}u)\right\}\left\{\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{[}1-F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}\right\}\phi_{q}(u;0,D(\tau)^{-1})du\pi(\tau)d\beta d\tau. \tag{26}\] If the random variable \(\xi\sim F(\cdot)\), then \(1-F(x)=\mathrm{E}\mathrm{I}(\xi>x)\) and \(F(x)=\mathrm{E}\mathrm{I}(-\xi\geq-x)\). Let \(\delta_{1},\delta_{2},\ldots,\delta_{n+k}\stackrel{iid}{\sim}F(\cdot)\). Let \(\delta=(\delta_{1},\delta_{2},\ldots,\delta_{n+k})^{\top}\) and \(\delta^{*}=(t_{1}\delta_{1},t_{2}\delta_{2},\ldots,t_{n+k}\delta_{n+k})^{\top}\), where \(t_{i}\) is as defined before. Thus, using (8), the inequality (26) becomes \[c(y) \leq(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{q}}\Bigg{[}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\Bigg{]}\int_{\mathbb{R}^{p}}\mathrm{E}\Big{[}\mathrm{I}\{t_{i}(x_{i}^{\top}\beta+z_{i}^{\top}u)\leq t_{i}\delta_{i};1\leq i\leq n+k\}\Big{]}d\beta\] \[\quad\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)]dud\tau \tag{27}\] \[=(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}\mathrm{E}\Big{[}\int_{\mathbb{R}^{p}}\mathrm{I}\{t_{i}(x_{i}^{\top}\beta+z_{i}^{\top}u)\leq t_{i}\delta_{i};1\leq i\leq n+k\}d\beta\Big{]}du\] \[=(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}\mathrm{E}\Big{[}\int_{\mathbb{R}^{p}}\mathrm{I}\{t_{i}x_{i}^{\top}\beta\leq t_{i}\delta_{i}-t_{i}z_{i}^{\top}u;1\leq i\leq n+k\}d\beta\Big{]}du\] \[=(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}\mathrm{E}\Big{[}\int_{\mathbb{R}^{p}}\mathrm{I}\{X_{\triangle}^{*}\beta\leq\delta^{*}-Z_{\triangle}^{*}u\}d\beta\Big{]}du\] \[\leq(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}\mathrm{E}\Big{[}\int_{\mathbb{R}^{p}}\mathrm{I}\Big{\{}\|\beta\|\leq l\Big{\|}\delta^{*}-Z_{\triangle}^{*}u\Big{\|}\Big{\}}d\beta\Big{]}du, \tag{28}\] where the first equality follows from Tonelli's Theorem and the condition 2 in Theorem 3. Note that \(X\) is of full rank by the condition 1. Then \(X_{\triangle}\) is also of full rank. Also, there exists a positive vector \(e>0\) such that \(e^{\top}X_{\triangle}^{*}=0\). Thus Lemma 4.1 in Chen and Shao (2001) can be used to get the last inequality, where \(l\) is a constant depending on \(X_{\triangle}^{*}\).
Since \(\left\|\delta^{*}-Z_{\triangle}^{*}u\right\|\leq\left\|\delta^{*}\right\|+ \left\|Z_{\triangle}^{*}u\right\|\), from (28), we have \[c(y) \leq(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}} \int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+\frac {1}{2}u_{j}^{\top}u_{j})^{a_{j}+q_{j}/2}}\mathrm{E}\Big{[}\int_{\mathbb{R}^{p} }\mathrm{I}\Big{\{}\|\beta\|\leq l\Big{(}\left\|\delta^{*}\right\|+\left\|Z_{ \triangle}^{*}u\right\|\Big{)}\Big{\}}d\beta\Big{]}du\] \[=(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}} \int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^ {\top}u_{j}/2)^{a_{j}+q_{j}/2}}\mathrm{E}\Big{[}2^{p}l^{p}\Big{(}\left\| \delta^{*}\right\|+\left\|Z_{\triangle}^{*}u\right\|\Big{)}^{p}\Big{]}du\] \[=(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}} 2^{2p-1}l^{p}\Bigg{[}\int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{ j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}\mathrm{E}\big{\|}\delta^{*} \big{\|}^{p}du\] \[\qquad\qquad\qquad+\int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{ \Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}\Big{\|}Z_{ \triangle}^{*}u\Big{\|}^{p}du\Bigg{]}\] \[\leq(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}} 2^{2p-1}l^{p}\Bigg{[}\mathrm{E}\|\delta\|^{p}\int_{\mathbb{R}^{q}}\prod_{j=1}^{r} \frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}du\] \[\qquad\qquad\qquad+\lambda^{p/2}\int_{\mathbb{R}^{q}}\prod_{j=1}^{r }\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}( \sum_{j=1}^{r}u_{j}^{\top}u_{j})^{\frac{p}{2}}du\Bigg{]}, \tag{29}\] where the second inequality follows from the fact that \((a+b)^{c}\leq 2^{c-1}(a^{c}+b^{c})\) for \(c\geq 1\), \(a\geq 0\) and \(b\geq 0\). Since \(\left\|Z_{\triangle}^{*}u\right\|^{p}=(u^{\top}Z_{\triangle}^{*}{}^{\top}Z_{ \triangle}^{*}u)^{p/2}\leq\lambda^{p/2}\|u\|^{p}=\lambda^{p/2}(\sum_{j=1}^{r}u_ {j}^{\top}u_{j})^{\frac{p}{2}}\), where \(\lambda\) is the largest eigenvalue of \(Z_{\triangle}^{*}{}^{\top}Z_{\triangle}^{*}\), the last inequality follows. Next, we will work on the integration in the first term on the right-hand side of (29). Because \[\int_{\mathbb{R}^{q}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^ {\top}u_{j}/2)^{a_{j}+q_{j}/2}}du\leq\max_{1\leq j\leq r}\Gamma(a_{j}+q_{j}/2) \prod_{j=1}^{r}\int_{\mathbb{R}^{q_{j}}}\frac{1}{(b_{j}+u_{j}^{\top}u_{j}/2)^ {a_{j}+q_{j}/2}}du_{j},\] we focus on the following integration \[\int_{\mathbb{R}^{q_{j}}}\frac{1}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2} }du_{j}. \tag{30}\] For \(q_{j}=1\), (30) becomes \(2^{a_{j}+1/2}\int_{0}^{\infty}(2b_{j}+u_{j}^{2})^{-a_{j}-1/2}du_{j}\)which is finite by condition 2. For any integer \(q_{j}\geq 2\), we consider the polar transformation as \(u_{j1}=r\cos\theta_{1}\), \(u_{j2}=r\sin\theta_{1}\cos\theta_{2},\ldots,u_{jq_{j}-1}=r\sin\theta_{1}\ldots \sin\theta_{q_{j}-2}\cos\theta_{q_{j}-1}\), \(u_{jq_{j}}=r\sin\theta_{1}\ldots\sin\theta_{q_{j}-2}\sin\theta_{q_{j}-1}\). Here, \(r>0\), \(0<\theta_{q_{j}-1}<2\pi\), \(0<\theta_{i}<\pi\), \(1\leq i\leq q_{j}-2\) and the Jacobian is \(r^{q_{j}-1}\prod_{i=1}^{q_{j}-2}(\sin\theta_{i})^{q_{j}-1-i}\). Note that, when \(q_{j}=2\), the Jacobian is \(r\). 
Therefore, (30) becomes \[\int_{\mathbb{R}^{q_{j}}}\frac{1}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_ {j}+q_{j}/2}}du_{j}\] \[= \int_{0}^{2\pi}\int_{0}^{\pi}\cdots\int_{0}^{\pi}\int_{0}^{\infty }\frac{r^{q_{j}-1}[\mathrm{I}(q_{j}\geq 3)\prod_{i=1}^{q_{j}-2}(\sin\theta_{i})^{q _{j}-1-i}+\mathrm{I}(q_{j}=2)]}{(b_{j}+r^{2}/2)^{a_{j}+q_{j}/2}}drd\theta_{1} \,d\theta_{2}\cdots d\theta_{q_{j}-1}\] \[\leq \int_{0}^{2\pi}\int_{0}^{\pi}\cdots\int_{0}^{\pi}\int_{0}^{\infty }\frac{r^{q_{j}-1}}{(b_{j}+r^{2}/2)^{a_{j}+q_{j}/2}}drd\theta_{1}\,d\theta_{2 }\cdots d\theta_{q_{j}-1}, \tag{31}\] where the inequality is due to the fact \(0<\prod_{i=1}^{q_{j}-2}(\sin\theta_{i})^{q_{j}-1-i}\leq 1\) for \(0<\theta_{i}<\pi\), \(1\leq i\leq q_{j}-2\). Now, we work on the integration for \(r\) in (31), considering \(a_{j}>0,b_{j}>0\) and using \(r=\sqrt{2b_{j}}\tan\alpha\), \[\int_{0}^{\infty}\frac{r^{q_{j}-1}}{(b_{j}+r^{2}/2)^{a_{j}+q_{j}/2 }}dr =\int_{0}^{\frac{\pi}{2}}2^{\frac{q_{j}}{2}}b_{j}^{-a_{j}}(\tan \alpha)^{q_{j}-1}(\sec\alpha)^{2-q_{j}-2a_{j}}\,d\alpha\] \[=2^{\frac{q_{j}}{2}}b_{j}^{-a_{j}}\int_{0}^{\frac{\pi}{2}}\tan \alpha\Big{(}\frac{\sin\alpha}{\cos\alpha}\Big{)}^{q_{j}-2}(\sec\alpha)^{2-q _{j}-2a_{j}}\,d\alpha\] \[\leq 2^{\frac{q_{j}}{2}}b_{j}^{-a_{j}}\int_{0}^{\frac{\pi}{2}}\tan \alpha(\sec\alpha)^{-2a_{j}}\,d\alpha=\frac{2^{\frac{q_{j}}{2}-1}b_{j}^{-a_{j}} }{a_{j}}\] where the inequality is due to the fact that \((\sin\alpha)^{q_{j}-2}\leq 1\) for \(q_{j}\geq 2\) and \(\alpha\in[0,\pi/2]\). Hence, by combining the results for \(q_{j}=1\) and \(q_{j}\geq 2\), we obtain: if \(a_{j}>1/2,b_{j}>0\), for \(j=1,\ldots,r\), and \(\mathrm{E}\|\delta\|^{p}<\infty\), the first term from the right hand side of (29) is finite. Note that for \(p\geq 1\), the integration in the second term on the right-hand side of (29) is \[\int_{\mathbb{R}^{q}}\Bigg{(}\sum_{j=1}^{r}u_{j}^{\top}u_{j}\Bigg{)} ^{\frac{p}{2}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_ {j}/2)^{a_{j}+q_{j}/2}}du\] \[\leq\int_{\mathbb{R}^{q}}\max\Big{(}2^{(r-1)(\frac{p}{2}-1)},1 \Big{)}\Bigg{[}\sum_{j=1}^{r}(u_{j}^{\top}u_{j})^{\frac{p}{2}}\Bigg{]}\prod_{j =1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2 }}du, \tag{32}\] where the inequality is due to the fact that for \(a\geq 0\) and \(b\geq 0\), \((a+b)^{c}\leq 2^{c-1}(a^{c}+b^{c})\) for \(c\geq 1\) and \((a+b)^{c}\leq a^{c}+b^{c}\) for \(0<c<1\). Then we focus on one term in the expression on the right-hand side of (32), \[\int_{\mathbb{R}^{q}}\max\Big{(}2^{(r-1)(\frac{p}{2}-1)},1\Big{)} (u_{j}^{\top}u_{j})^{\frac{p}{2}}\prod_{j=1}^{r}\frac{\Gamma(a_{j}+q_{j}/2)}{ (b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}du\] \[=\max\Big{(}2^{(r-1)(\frac{p}{2}-1)},1\Big{)}\int_{\mathbb{R}^{q -q_{j}}}\int_{\mathbb{R}^{q_{j}}}(u_{j}^{\top}u_{j})^{\frac{p}{2}}\frac{ \Gamma(a_{j}+q_{j}/2)}{(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}du_{j}\] \[\prod_{j^{\prime}\neq j}\frac{\Gamma(q_{j^{\prime}}/2+a_{j^{ \prime}})}{(b_{j^{\prime}}+u_{j^{\prime}}^{\top}u_{j^{\prime}}/2)^{q_{j^{ \prime}}/2+a_{j^{\prime}}}}du\setminus u_{j}. \tag{33}\] Next, for the inner integration with respect to \(u_{j}\), \(\int_{\mathbb{R}^{q_{j}}}(u_{j}^{\top}u_{j})^{\frac{p}{2}}/[(b_{j}+u_{j}^{\top }u_{j}/2)^{a_{j}+q_{j}/2}]du_{j}\), again we consider \(q_{j}=1\) and \(q_{j}\geq 2\) separately. 
When \(q_{j}=1\), by changing the variable \(\sqrt{2b_{j}}\tan\theta=u_{j}\), we obtain \[\int_{\mathbb{R}^{q_{j}}}\frac{(u_{j}^{\top}u_{j})^{\frac{p}{2}}} {(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}du_{j} =2^{a_{j}+1/2}\int_{0}^{\infty}\frac{u_{j}^{p}}{(2b_{j}+u_{j}^{2} )^{a_{j}+1/2}}du_{j}=2^{(p+1)/2}b_{j}^{p/2-a_{j}}\int_{0}^{\frac{\pi}{2}}(\sin \theta)^{p}(\sec\theta)^{p+1-2a_{j}}d\theta\] \[=2^{(p+1)/2}b_{j}^{p/2-a_{j}}\int_{0}^{\frac{\pi}{2}}\tan\theta( \sin\theta)^{p-1}(\sec\theta)^{p-2a_{j}}d\theta\] \[\leq 2^{(p+1)/2}b_{j}^{p/2-a_{j}}\int_{0}^{\frac{\pi}{2}}\tan\theta( \sec\theta)^{p-2a_{j}}d\theta=2^{(p+1)/2}b_{j}^{p/2-a_{j}}/(2a_{j}-p),\] where the inequality is due to the fact that \((\sin\theta)^{p-1}\leq 1\) for \(p\geq 1\) and \(\theta\in[0,\pi/2]\), and the integration in the last inequality is finite if \(a_{j}>p/2\). For \(q_{j}\geq 2\), we apply the polar transformation again and have \[\int_{\mathbb{R}^{q_{j}}}\frac{(u_{j}^{\top}u_{j})^{\frac{p}{2}}} {(b_{j}+u_{j}^{\top}u_{j}/2)^{a_{j}+q_{j}/2}}du_{j}\] \[=\int_{0}^{2\pi}\int_{0}^{\pi}\cdots\int_{0}^{\pi}\int_{0}^{\infty }\frac{r^{p+q_{j}-1}[\mathrm{I}(q_{j}\geq 3)\prod_{i=1}^{q_{j}-2}(\sin\theta_{i})^{q_{j}-1-i}+ \mathrm{I}(q_{j}=2)]}{(b_{j}+r^{2}/2)^{a_{j}+q_{j}/2}}drd\theta_{1}\,d\theta_{2 }\cdots d\theta_{q_{j}-1}\] \[\leq\int_{0}^{2\pi}\int_{0}^{\pi}\cdots\int_{0}^{\pi}\int_{0}^{ \infty}\frac{r^{p+q_{j}-1}}{(b_{j}+r^{2}/2)^{a_{j}+q_{j}/2}}drd\theta_{1}\,d \theta_{2}\cdots d\theta_{q_{j}-1}.\] As before, using \(r=\sqrt{2b_{j}}\tan\alpha\), we have \[\int_{0}^{\infty}\frac{r^{p+q_{j}-1}}{(b_{j}+r^{2}/2)^{a_{j}+q_{j}/ 2}}dr =\int_{0}^{\frac{\pi}{2}}2^{(p+q_{j})/2}b_{j}^{p/2-a_{j}}(\tan\alpha)^{p+q_{j }-1}(\sec\alpha)^{2-q_{j}-2a_{j}}\,d\alpha\] \[=2^{(p+q_{j})/2}b_{j}^{p/2-a_{j}}\int_{0}^{\frac{\pi}{2}}\tan \alpha\frac{(\sin\alpha)^{p+q_{j}-2}}{(\cos\alpha)^{p+q_{j}-2}}(\sec\alpha)^{2- q_{j}-2a_{j}}\,d\alpha\] \[\leq 2^{(p+q_{j})/2}b_{j}^{p/2-a_{j}}\int_{0}^{\frac{\pi}{2}}\tan \alpha(\sec\alpha)^{p-2a_{j}}\,d\alpha=2^{(p+q_{j})/2}b_{j}^{p/2-a_{j}}/(2a_{j} -p).\] The integration for \(\alpha\) in the last inequality is finite if \(a_{j}>p/2\). Thus, when \(a_{j}>p/2,\,b_{j}>0\) for \(j=1,\ldots,r\), the inner integration respect to \(u_{j}\) in (33) is finite. For the outer integration respect to \(u\setminus u_{j}\) in (33), we can use the conditions for the integration in the first term from (29). Hence if \(a_{j}>1/2\), \(b_{j}>0\) for \(j=1,\ldots,r\), the outer integration respect to \(u_{j^{\prime}}\) in (33) is finite. Thus, Theorem 3 is proved. 
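As a numerical sanity check (not part of the proof) of the radial bound \(\int_{0}^{\infty}r^{p+q_{j}-1}(b_{j}+r^{2}/2)^{-(a_{j}+q_{j}/2)}\,dr\leq 2^{(p+q_{j})/2}b_{j}^{p/2-a_{j}}/(2a_{j}-p)\) derived above for \(a_{j}>p/2\), the following sketch compares the two sides by quadrature for a few illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad

def radial_integral(p, q, a, b):
    # left-hand side: integral_0^inf r^{p+q-1} (b + r^2/2)^{-(a+q/2)} dr
    val, _ = quad(lambda r: r ** (p + q - 1) / (b + r ** 2 / 2.0) ** (a + q / 2.0),
                  0.0, np.inf)
    return val

def bound(p, q, a, b):
    # right-hand side: 2^{(p+q)/2} b^{p/2-a} / (2a - p), valid for a > p/2
    return 2.0 ** ((p + q) / 2.0) * b ** (p / 2.0 - a) / (2.0 * a - p)

for (p, q, a, b) in [(1, 2, 1.0, 1.0), (2, 3, 2.0, 0.5), (3, 4, 2.5, 2.0)]:
    print(radial_integral(p, q, a, b) <= bound(p, q, a, b))   # expected: True in each case
```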
Proof of Corollary 2.: If \(\pi(\beta)\propto 1\), and the prior for \(\tau_{j}\) is as in (8), \(c(y)\) in (10) becomes \[c(y) =(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\prod_{i=1}^{n}\frac{\exp[(x_{i}^{\top}\beta+z_{i}^{\top}u)y_{i}]}{\exp[\exp(x_{i}^{\top}\beta+z_{i}^{\top}u)]y_{i}!}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)]dud\beta d\tau\] \[=(2\pi)^{-\frac{q}{2}}B\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\prod_{i=1}^{n}\binom{y_{(n)}}{y_{i}}\frac{\exp[(x_{i}^{\top}\beta+z_{i}^{\top}u)y_{i}]}{\exp[\exp(x_{i}^{\top}\beta+z_{i}^{\top}u)]}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)]dud\beta d\tau\] \[\leq(2\pi)^{-\frac{q}{2}}B\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\prod_{i=1}^{n}\binom{y_{(n)}}{y_{i}}\frac{d\exp[(x_{i}^{\top}\beta+z_{i}^{\top}u)y_{i}]}{[1+\exp(x_{i}^{\top}\beta+z_{i}^{\top}u)]^{y_{(n)}}}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)]dud\beta d\tau, \tag{34}\] where \(B=\prod_{i=1}^{n}(y_{(n)}-y_{i})!/y_{(n)}!\) and the last inequality is because there exists a constant \(d\) such that \[(1+\exp(w))^{y_{(n)}}\leq d\exp[\exp(w)] \tag{35}\] for \(w\in\mathbb{R}\). We now prove (35). When \(y_{(n)}=0\), it is straightforward to see that (35) is satisfied with \(d=1\). When \(y_{(n)}=1\), note that \(\exp[\exp(w)]\geq 1+\exp(w)\) since \(\exp[\exp(w)]-\exp(w)-1\) is an increasing function and \(\lim_{w\rightarrow-\infty}\exp[\exp(w)]-\exp(w)-1=0\). If \(y_{(n)}\geq 2\), let \(g(w)=\exp(w)-y_{(n)}\log(1+\exp(w))\), then \(g^{\prime}(w)=[\exp(2w)-\exp(w)(y_{(n)}-1)]/[1+\exp(w)]\). Note that \(g^{\prime}(w)\lessgtr 0\) if and only if \(\exp(w)\lessgtr y_{(n)}-1\), that is, \(w\lessgtr\log(y_{(n)}-1)\). Hence, \(g(w)\geq g(\log(y_{(n)}-1))\), that is \(\exp(w)\geq y_{(n)}\log(1+\exp(w))+g(\log(y_{(n)}-1))\). Thus, we have (35), where \(d=\exp(-g(\log(y_{(n)}-1)))=\exp(1-y_{(n)})y_{(n)}^{y_{(n)}}\). Also, from (34), we obtain \[c(y)\leq Bd(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\prod_{i=1}^{n}\binom{y_{(n)}}{y_{i}}\Big{[}F_{L}(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}^{y_{i}}\Big{[}1-F_{L}(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}^{y_{(n)}-y_{i}}\] \[\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)]dud\beta d\tau, \tag{36}\] where \(F_{L}(t)=e^{t}/(1+e^{t})\) is the cdf of the standard logistic random variable. Now, we can observe that the integrand in (36) is the same as that in (9) when the prior on \(\beta\) is \(\pi(\beta)\propto 1\), the prior on \(\tau\) is as in (8) and \(F\equiv F_{L}\). Since the standard logistic random variable has all finite moments, the Corollary is proved based on Theorem 3.
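Inequality (35), with \(d=\exp(1-y_{(n)})y_{(n)}^{y_{(n)}}\) as derived above for \(y_{(n)}\geq 2\), can also be checked numerically on the log scale; a minimal sketch over an illustrative grid of \(w\) values is:

```python
import numpy as np

def check_35(y_max, w_grid):
    # Inequality (35): (1 + e^w)^{y_(n)} <= d * exp(e^w) with d = exp(1 - y_(n)) * y_(n)^{y_(n)},
    # verified on the log scale to avoid overflow.
    log_d = (1.0 - y_max) + y_max * np.log(y_max)
    lhs = y_max * np.logaddexp(0.0, w_grid)   # y_(n) * log(1 + e^w)
    rhs = log_d + np.exp(w_grid)              # log d + e^w
    return np.max(lhs - rhs)                  # analytically <= 0, with equality at w = log(y_(n)-1)

w = np.linspace(-20.0, 20.0, 200001)
for y_max in (2, 5, 17):
    print(y_max, check_35(y_max, w) <= 1e-9)  # expected: True for each y_(n) >= 2
```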
Proof of Theorem 4.: Using (27) and \(b_{j}=0\), we have \[c(y) \leq(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{q}}\left[\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\right]\int_{\mathbb{R}^{p}}\mathrm{E}\left[\mathrm{I}\{t_{i}(x_{i}^{\top}\beta+z_{i}^{\top}u)\leq t_{i}\delta_{i};1\leq i\leq n+k\}\right]d\beta\] \[\quad\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp(-\tau_{j}u_{j}^{\top}u_{j}/2)dud\tau\] \[=(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{q}}\Big{[}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\Big{]}\mathrm{E}\Big{[}\int_{\mathbb{R}^{p}}\mathrm{I}\{(X_{\triangle}^{*},Z_{\triangle}^{*})(\beta^{\top},u^{\top})^{\top}\leq\delta^{*}\}d\beta\Big{]}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp(-\tau_{j}u_{j}^{\top}u_{j}/2)dud\tau\] \[\leq(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{q}}\Big{[}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\Big{]}\mathrm{E}\Big{[}2^{p}l^{\prime p}\|\delta^{*}\|^{p}\mathrm{I}\{\|u\|\leq l^{\prime}\|\delta^{*}\|\}\Big{]}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp(-\tau_{j}u_{j}^{\top}u_{j}/2)dud\tau\] \[\leq\kappa\mathrm{E}\Bigg{[}\|\delta^{*}\|^{p}\int_{\mathbb{R}_{+}^{r}}\prod_{j=1}^{r}\int_{\mathbb{R}^{q_{j}}}\mathrm{I}\{\|u_{j}\|\leq l^{\prime}\|\delta^{*}\|\}\tau_{j}^{a_{j}+q_{j}/2-1}\exp(-\tau_{j}u_{j}^{\top}u_{j}/2)du_{j}d\tau\Bigg{]}, \tag{37}\] where \(\kappa\) is a constant. Here, we have used the condition 1 and Chen and Shao's (2001) Lemma 4.1 to obtain the second inequality, where \(l^{\prime}\) depends on \((X_{\triangle}^{*},Z_{\triangle}^{*})\). Note that \(\delta=(\delta_{1},\delta_{2},...,\delta_{n+k})\), \(\delta^{*}=(t_{1}\delta_{1},t_{2}\delta_{2},\ldots,t_{n+k}\delta_{n+k})^{\top}\), where \(t_{i}\) is defined as before, and \(\|\delta^{*}\|=\|\delta\|\). Then from (37), applying similar techniques as in the proof of Theorem 4.2 in Chen et al. (2002), we have \[c(y) \leq\kappa\mathrm{E}\Big{[}\|\delta\|^{p}\int_{\mathbb{R}_{+}^{r}}\prod_{j=1}^{r}\tau_{j}^{a_{j}-1}\int_{\mathbb{R}^{q_{j}}}\mathrm{I}\{\big{\|}u_{j}\big{\|}\leq l^{\prime}\|\delta\|\}\tau_{j}^{q_{j}/2}\exp(-\tau_{j}u_{j}^{\top}u_{j}/2)du_{j}d\tau\Big{]}\] \[\leq\kappa_{1}\,\mathrm{E}\Big{[}\|\delta\|^{p}\int_{\mathbb{R}_{+}^{r}}\prod_{j=1}^{r}\tau_{j}^{a_{j}-1}\min\Big{(}1,2^{q_{j}/2}\pi^{-q_{j}/2}l^{\prime q_{j}}\tau_{j}^{q_{j}/2}\|\delta\|^{q_{j}}\Big{)}d\tau\Big{]}\] \[=\kappa_{1}\,\mathrm{E}\Big{\{}\|\delta\|^{p}\prod_{j=1}^{r}\Big{[}\int_{l_{1}}^{\infty}\tau_{j}^{a_{j}-1}d\tau_{j}+2^{q_{j}/2}\pi^{-q_{j}/2}l^{\prime q_{j}}\|\delta\|^{q_{j}}\int_{0}^{l_{1}}\tau_{j}^{a_{j}+q_{j}/2-1}d\tau_{j}\Big{]}\Big{\}}<\infty,\] where \(\kappa_{1}\) is a constant, \(l_{1}=\pi/(2l^{\prime 2}\|\delta\|^{2})\) and the integrations in the last line are finite as the conditions 2 and 3 hold. Therefore, the Theorem is proved. Proof of Theorem 5.: When we have binomial responses, from (9) we have \[c(y) =(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\Bigg{[}\prod_{i=1}^{n}\binom{m_{i}}{y_{i}}\Big{[}F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}^{y_{i}}\Big{[}1-F(x_{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}^{m_{i}-y_{i}}\Bigg{]}\] \[\Bigg{[}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)\right]\Bigg{]}dud\beta d\tau.
\tag{38}\] If \(X\) is not a full rank matrix, as in the proof of Theorem 2, we can show that \[c(y) \geq(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{\prime}}\int_{ \mathbb{R}^{q}}\prod_{i=1}^{n}\binom{m_{i}}{y_{i}}\Big{[}F(-\epsilon^{\prime} +z_{i}^{\top}u)\Big{]}^{y_{i}}\Big{[}1-F(\epsilon^{\prime}+z_{i}^{\top}u) \Big{]}^{m_{i}-y_{i}}\] \[\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{j }+u_{j}^{\top}u_{j}/2)\right]dud\tau\int_{\beta\in D_{\epsilon^{\prime}}}d\beta. \tag{39}\] and \(\int_{\beta\in D_{\epsilon^{\prime}}}d\beta=\infty\). Since the integrand in the multiple of \(\int_{\beta\in D_{\epsilon^{\prime}}}d\beta\) on the right-hand side of (39) is nonnegative and is not zero (a.e.), the integral is strictly positive. Therefore, \(c(y)\) diverges to infinity. As for the second necessary condition, we focus on the following part in (38): \[\int_{\mathbb{R}_{+}^{\prime}}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp \left[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)\right]d\tau=\prod_{j=1}^{r}\int_{ \mathbb{R}_{+}}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{j}+u_{j}^{\top }u_{j}/2)\right]d\tau_{j}.\] For fixed \(u_{j}\), if \(b_{j}+u_{j}^{\top}u_{j}/2\leq 0\), then we obtain \(\int_{0}^{\infty}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{j}+u_{j}^{ \top}u_{j}/2)\right]d\tau_{j}\geq\int_{0}^{\infty}\tau_{j}^{a_{j}+q_{j}/2-1}d \tau_{j}=\infty\). Also, when \(b_{j}+u_{j}^{\top}u_{j}/2>0\) and \(a_{j}+q_{j}/2\leq 0\), we have \(\int_{0}^{\infty}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{j}+u_{j}^{ \top}u_{j}/2)\right]d\tau_{j}\geq\exp\left[-(b_{j}+u_{j}^{\top}u_{j}/2)\right] \int_{0}^{1}\tau_{j}^{a_{j}+q_{j}/2-1}d\tau_{j}=\infty\). Consequently, the second necessary condition for posterior propriety is proved. Proof of Theorem 6.: Recall \(X_{\triangle}^{*}\) as defined in Section 3.1. As in the proof of Theorem 2 of Roy and Kaiser (2013) (see also Chen and Shao, 2001), if the condition 2 is not satisfied, we have: \[t_{i}x_{\triangle i}^{\top}h\leq 0,\,i=1,2,\ldots,n+k, \tag{40}\] where \(h=(h_{1},\ldots,h_{p})^{\top}\in\mathbb{R}^{p}\) is a non-zero vector, \(t_{i}\) and \(x_{\triangle i}^{\top}\) are defined in Section 3.1. Then we have \[x_{\triangle i}^{\top}h\leq 0\text{ for }i\in I_{1};\quad x_{ \triangle i}^{\top}h\geq 0\text{ for }i\in I_{2};\quad x_{\triangle i}^{\top}h=0\text{ for }i\in I_{3}. \tag{41}\] Without loss of generality, assume that \(h_{1}\neq 0\), \(\beta=s_{1}h+(0,s_{2},\ldots,s_{p})^{\top}\) and \(s=(s_{1},\ldots,s_{p})^{\top}\). For fixed \(a>0\), let us define \(B_{2}=\Big{\{}u\in\mathbb{R}^{q}:-a\leq z_{i}^{\top}u\leq a,\,i=1,\ldots,n \Big{\}}\). 
Since \(Z\) has full rank, from (38), we have \[c(y) =(2\pi)^{-\frac{q}{2}}\int_{\mathbb{R}_{+}^{\prime}}\int_{\mathbb{ R}^{p}}\int_{\mathbb{R}^{q}}\Bigg{[}\prod_{i=1}^{n}\binom{m_{i}}{y_{i}}\Big{[}F(x _{i}^{\top}\beta+z_{i}^{\top}u)\Big{]}^{y_{i}}\Big{[}1-F(x_{i}^{\top}\beta+z_{ i}^{\top}u)\Big{]}^{m_{i}-y_{i}}\Bigg{]}\] \[\left[\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{ j}+u_{j}^{\top}u_{j}/2)\right]\right]dud\beta d\tau\] \[\geq(2\pi)^{-\frac{\alpha}{2}}\int_{\mathbb{R}_{+}^{r}}\int_{ \mathbb{R}^{p}}\int_{u\in B_{2}}\left[\prod_{i=1}^{n}\binom{m_{i}}{y_{i}} \left[F(x_{i}^{\top}\beta-a)\right]^{y_{i}}\left[1-F(x_{i}^{\top}\beta+a) \right]^{m_{i}-y_{i}}\right]\] \[\left[\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j }(b_{j}+u_{j}^{\top}u_{j}/2)\right]\right]dud\beta d\tau\] \[=\left|h_{1}\right|(2\pi)^{-\frac{\alpha}{2}}\int_{\mathbb{R}_{+}^ {r}}\int_{\mathbb{R}^{p}}\int_{u\in B_{2}}\prod_{i\in I_{1}}\left[1-F(s_{1}x_ {i}^{\top}h+x_{i}^{\top}(0,s_{2},\ldots,s_{p})^{\top}+a)\right]^{m_{i}}\prod_ {i\in I_{2}}\left[F(s_{1}x_{i}^{\top}h+x_{i}^{\top}(0,s_{2},\ldots,s_{p})^{ \top}-a)\right]^{m_{i}-y_{i}}\] \[\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\left[F(x_{i}^{\top}(0,s_{2 },\ldots,s_{p})^{\top}-a)\right]^{y_{i}}\left[1-F(x_{i}^{\top}(0,s_{2},\ldots, s_{p})^{\top}+a)\right]^{m_{i}-y_{i}}\] \[\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{j }+u_{j}^{\top}u_{j}/2)\right]dudsd\tau, \tag{42}\] where the above inequality is based on the definition of the set \(B_{2}\) and the last equality follows from a change of variables \(\beta\to s\) with the Jacobian of the transformation being \(h_{1}\). For fixed \(r_{1}>0\), define \(B_{3}=\left\{s\in\mathbb{R}^{p}:s_{1}\geq 0,|s_{k}|\leq r_{1},\,2\leq k\leq p\right\}\). By Cauchy-Schwarz inequality, for \(s\in B_{3}\), we have \[\left|x_{i}^{\top}(0,s_{2},\ldots,s_{p})^{\top}\right|\leq\left\|x_{i}\right\| \sqrt{(p-1)r_{1}^{2}}\leq\left\|x_{i}\right\|pr_{1} \tag{43}\] From (41), when \(i\in I_{2}\cup I_{3}\), \(x_{i}^{\top}h\geq 0\). If \(s_{1}\geq 0\), then we have \(s_{1}x_{i}^{\top}h\geq 0\) and thus, for \(s\in B_{3}\), \(s_{1}x_{i}^{\top}h+x_{i}^{\top}(0,s_{2},\ldots,s_{p})^{\top}\geq-\left\|x_{i} \right\|pr_{1}\). Then for \(i\in I_{2}\cup I_{3}\), we have \[F\left(s_{1}x_{i}^{\top}h+x_{i}^{\top}(0,s_{2},\ldots,s_{p})^{\top}-a\right) \geq F\Big{(}-\left\|x_{i}\right\|pr_{1}-a\Big{)}. \tag{44}\] Similarly from (41), when \(i\in I_{1}\cup I_{3},x_{i}^{\top}h\leq 0\). If \(s_{1}\geq 0\), then we have \(s_{1}x_{i}^{\top}h\leq 0\). Since (43) holds, we have \(s_{1}x_{i}^{\top}h+x_{i}^{\top}(0,s_{2},\ldots,s_{p})^{\top}\leq\left\|x_{i} \right\|pr_{1}\). Thus, for \(i\in I_{1}\cup I_{3}\), we obtain \[1-F\left(s_{1}x_{i}^{\top}h+x_{i}^{\top}(0,s_{2},\ldots,s_{p})^{\top}+a\right) \geq 1-F\Big{(}\left\|x_{i}\right\|pr_{1}+a\Big{)}. 
\tag{45}\] Applying (44), (45), from (42), we have \[c(y)\geq \left|h_{1}\right|(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}} \binom{m_{i}}{y_{i}}\int_{\mathbb{R}_{+}^{r}}\int_{s\in B_{3}}\int_{u\in B_{2}}\prod_{i\in I_{2}\cup I_{3}}\left[F\Big{(}-\left\|x_{i}\right\|pr_{1}-a\Big{)}\right]^{y_{i}}\] \[\prod_{i\in I_{1}\cup I_{3}}\left[1-F\Big{(}\left\|x_{i}\right\|pr_{1}+a\Big{)}\right]^{m_{i}-y_{i}}\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp\left[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)\right]dudsd\tau\] \[\geq \left|h_{1}\right|(2\pi)^{-\frac{q}{2}}\prod_{i\in I_{3}}\binom{m_{i}}{y_{i}}\int_{\mathbb{R}_{+}^{r}}\int_{u\in B_{2}}\prod_{i\in I_{2}\cup I_{3}}\left[F(-r_{2}-a)\right]^{y_{i}}\prod_{i\in I_{1}\cup I_{3}}\left[1-F(r_{2}+a)\right]^{m_{i}-y_{i}}\] \[\prod_{j=1}^{r}\tau_{j}^{a_{j}+q_{j}/2-1}\exp{[-\tau_{j}(b_{j}+u_{j}^{\top}u_{j}/2)]}dud\tau\int_{s\in B_{4}}ds=\infty,\] where fixed \(r_{1}\) and \(r_{2}\) are such that \(B_{4}\equiv\left\{s\in\mathbb{R}^{p}:s_{1}\geq 0,|s_{k}|\leq r_{1},\,2\leq k\leq p,\,\max_{1\leq i\leq n}\|x_{i}\|\,pr_{1}\leq r_{2}\right\}\) is nonempty.